
Machine learning for Java developers

Gregor Roth | Sept. 18, 2017
Set up a machine learning algorithm and develop your first prediction function in Java.

Labeled data sets are required for training and testing purposes only. After this phase is over, the machine learning algorithm works on unlabeled data instances. For instance, you could feed the prediction algorithm a new, unlabeled house record and it would automatically predict the expected house price based on training data.

 

How machines learn to predict

The challenge of supervised machine learning is to find the proper prediction function for a specific question. Mathematically, the challenge is to find the input-output function that takes the input variables x and returns the prediction value y. This hypothesis function (hθ) is the output of the training process. Often the hypothesis function is also called the target function or prediction function.

y = hθ(x)

In most cases, x represents a multidimensional data point. In our example, this could be a two-dimensional data point for an individual house, defined by the house-size value and the number-of-rooms value. The array of these values is referred to as the feature vector. Given a concrete target function, the function can be used to make a prediction for each feature vector x. To predict the price of an individual house, you could call the target function with the feature vector { 101.0, 3.0 }, containing the house size and the number of rooms:


Listing 1. Calling the target function

import java.util.function.Function;

// target function h (the output of the learning process)
Function<Double[], Double> h = ...;

// set the feature vector with house size=101 and number-of-rooms=3
Double[] x = new Double[] { 101.0, 3.0 };

// predict the house price (label)
double y = h.apply(x);

In Listing 1, the array variable x holds the feature vector of the house. The y value returned by the target function is the predicted house price.

The challenge of machine learning is to define a target function that will work as accurately as possible for unknown, unseen data instances. In machine learning, the target function (hθ) is sometimes called a model. This model is the result of the learning process.

Figure 1. The machine learning process

Based on labeled training examples, the learning algorithm looks for structures or patterns in the training data. From these, it produces a model that generalizes well from that data.

Typically, the learning process is exploratory. In most cases, it will be performed multiple times using different learning algorithm variants and configurations.

Eventually, all the models will be evaluated based on performance metrics, and the best one will be selected. That model will then be used to compute predictions for future unlabeled data instances.
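
As a rough illustration of this selection step, the sketch below scores candidate models of the Function<Double[], Double> shape from Listing 1 against a labeled test set using mean squared error and keeps the one with the lowest error. The metric choice, class name, and method names are assumptions for illustration only, not code from this article.

import java.util.List;
import java.util.function.Function;

public class ModelSelection {

    // mean squared error of a model over labeled test records
    // (featureVectors.get(i) is labeled with prices.get(i))
    static double meanSquaredError(Function<Double[], Double> model,
                                   List<Double[]> featureVectors,
                                   List<Double> prices) {
        double sum = 0;
        for (int i = 0; i < featureVectors.size(); i++) {
            double error = model.apply(featureVectors.get(i)) - prices.get(i);
            sum += error * error;
        }
        return sum / featureVectors.size();
    }

    // pick the candidate model with the lowest error on the test set
    static Function<Double[], Double> selectBest(List<Function<Double[], Double>> candidates,
                                                 List<Double[]> testFeatures,
                                                 List<Double> testPrices) {
        Function<Double[], Double> best = null;
        double bestError = Double.MAX_VALUE;
        for (Function<Double[], Double> candidate : candidates) {
            double error = meanSquaredError(candidate, testFeatures, testPrices);
            if (error < bestError) {
                bestError = error;
                best = candidate;
            }
        }
        return best;
    }
}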

 

Linear regression

To train a machine to think, the first step is to choose the learning algorithm you'll use. Linear regression is one of the simplest and most popular supervised learning algorithms. This algorithm assumes that the relationship between the input features and the output label is linear. The generic linear regression function below returns the predicted value by summing each element of the feature vector multiplied by a theta parameter (θ). The theta parameters are used within the training process to adapt or "tune" the regression function based on the training data.
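
The full regression code appears later in the article; as a placeholder, here is a minimal sketch of what such a function could look like, reusing the Function<Double[], Double> shape from Listing 1. The class name, constructor, and the intercept convention noted in the comments are assumptions for illustration.

import java.util.function.Function;

// sketch of a linear regression hypothesis function
public class LinearRegressionFunction implements Function<Double[], Double> {

    private final double[] thetaVector;

    public LinearRegressionFunction(double[] thetaVector) {
        this.thetaVector = thetaVector.clone();
    }

    @Override
    public Double apply(Double[] featureVector) {
        // sum each feature multiplied by its theta parameter:
        // h(x) = theta0 * x0 + theta1 * x1 + ... + thetaN * xN
        // (by convention, x0 is often a constant 1.0 so that theta0 acts as an intercept)
        double prediction = 0;
        for (int j = 0; j < thetaVector.length; j++) {
            prediction += thetaVector[j] * featureVector[j];
        }
        return prediction;
    }
}

A trained instance of such a class could then be used exactly like the target function h in Listing 1.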

 
