This is a visualization of how different machine learning algorithms learn decision boundaries between output categories on a number of different datasets. You can also experiment with how hyperparameter settings affect the learning.
The visualizer contains six generated two-dimensional datasets that can be used to demonstrate how different machine learning
algorithms learn on different problems. Different algorithms generate decision boundaries (the boundaries separating each
class from the others) with different properties. For example, Linear Regression can only learn linear decision boundaries,
while decision trees produce axis-aligned rectangular regions.
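A linear model's decision boundary is the line where its score crosses zero, which is why it handles linearly separable data well. The following is a minimal numpy sketch of that idea (not the demonstrator's actual code); the two Gaussian clusters are a hypothetical stand-in for a linearly separable dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated 2-D clusters, one per class (hypothetical data).
X0 = rng.normal([-2, -2], 0.5, size=(50, 2))
X1 = rng.normal([2, 2], 0.5, size=(50, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

# Least-squares linear classifier: fit w.x + b to labels -1/+1 and
# threshold at 0. The decision boundary is the straight line
# w.x + b = 0 -- it can never curve.
A = np.hstack([X, np.ones((100, 1))])            # add bias column
w, *_ = np.linalg.lstsq(A, 2 * y - 1, rcond=None)
pred = (A @ w > 0).astype(int)
acc = (pred == y).mean()
print("accuracy on linearly separable data:", acc)
```

Because the clusters are far apart, a single straight line separates them perfectly; the interesting failures appear on the datasets where no such line exists.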
The visualizer can be used to demonstrate the concepts of underfitting and overfitting. Using Linear Regression, Naïve Bayes, or
a Neural Network with few hidden units (try 4) on the first dataset (the spiral arms) results in low accuracy: the models are
not complex enough to learn the mapping between the input patterns and the output patterns. Linear Regression can only learn the
last dataset properly, since the other datasets have classes that are not linearly separable.
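Underfitting on the spiral data can be sketched in a few lines of numpy. The generated two-arm spiral below is a rough stand-in for the demonstrator's first dataset, and the least-squares linear classifier is a stand-in for the Linear Regression setting; neither is the demonstrator's actual code:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two interleaved spiral arms, one per class (hypothetical data).
n = 200
t = np.linspace(0.5, 3 * np.pi, n)
arm0 = np.column_stack([t * np.cos(t), t * np.sin(t)])
arm1 = -arm0                                   # second arm, rotated 180 degrees
X = np.vstack([arm0, arm1]) + rng.normal(0, 0.2, (2 * n, 2))
y = np.array([0] * n + [1] * n)

# Least-squares linear classifier: a single straight boundary line,
# far too simple a model for interleaved spirals.
A = np.hstack([X, np.ones((2 * n, 1))])
w, *_ = np.linalg.lstsq(A, 2 * y - 1, rcond=None)
acc = ((A @ w > 0).astype(int) == y).mean()
print(f"linear model on spirals: {acc:.2f}")   # far below what the spiral demands
```

No placement of a single line can follow the winding class regions, so the accuracy stays well below what a sufficiently complex model (e.g. a neural network with enough hidden units) reaches on the same data.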
Overfitting is easily demonstrated using the SVM (Support Vector Machine) algorithm. Try any dataset with a very high Gamma value
(>5000): every training example is matched perfectly, but the model is unlikely to generalize well to new data.
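The effect of a very high Gamma can be illustrated with a small kernel-based classifier in numpy. A kernel ridge classifier is used here as a simplified stand-in for the demonstrator's SVM (it fits in a few lines and uses the same RBF kernel with the same Gamma parameter); the data and labels are random, so any model reaching 100% training accuracy is purely memorizing:

```python
import numpy as np

rng = np.random.default_rng(2)

# 60 random 2-D points with *random* labels: there is no pattern to learn.
X = rng.uniform(-1.0, 1.0, (60, 2))
y = rng.integers(0, 2, 60)

def rbf(A, B, gamma):
    """Gaussian (RBF) kernel matrix between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def train_accuracy(gamma, lam=1.0):
    # Kernel ridge classifier: solve (K + lam*I) alpha = y_pm and
    # predict by thresholding K @ alpha at zero.
    K = rbf(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), 2 * y - 1)
    pred = (K @ alpha > 0).astype(int)
    return (pred == y).mean()

acc_low = train_accuracy(gamma=1.0)      # smooth boundary, treats labels as noise
acc_high = train_accuracy(gamma=5000.0)  # memorizes every single training point
print(f"gamma=1: {acc_low:.2f}   gamma=5000: {acc_high:.2f}")
```

With gamma=5000 the kernel is so narrow that each training point only "sees" itself, so even random labels are fitted perfectly; that memorization is exactly why the model cannot generalize.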
The visualization shows the decision boundaries the algorithm has learned on the dataset. For Linear Regression and Neural
Networks, the progress after each training iteration is also shown. The small circles show the examples in the dataset, and
the colored areas show which class each point is predicted as. If the accuracy is low, some examples are incorrectly
classified, and you can easily see that the decision boundaries don't separate the classes well enough.
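Colored class regions like these are typically rendered by classifying every point of a dense grid over the plot area. A minimal numpy sketch, using a hypothetical already-trained classifier (class 1 inside the unit circle) in place of a fitted model:

```python
import numpy as np

# Hypothetical trained classifier: class 1 inside the unit circle,
# class 0 outside (stand-in for any fitted model's predict function).
def predict(points):
    return (points[:, 0] ** 2 + points[:, 1] ** 2 < 1.0).astype(int)

# Classify every point of a dense grid over the plot area. Coloring each
# grid cell by its predicted class paints the colored regions, and the
# color change between cells traces the decision boundary.
xs = np.linspace(-2.0, 2.0, 200)
xx, yy = np.meshgrid(xs, xs)
grid = np.column_stack([xx.ravel(), yy.ravel()])
classes = predict(grid).reshape(xx.shape)

frac_inside = classes.mean()
print(f"fraction of area predicted as class 1: {frac_inside:.3f}")  # ~ pi/16 ~ 0.196
```

Overlaying the dataset's example points on top of these regions then makes misclassified examples (circles sitting in the wrong-colored area) immediately visible.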
Click ► to expand the Set Hyperparameters section.
The hyperparameters (the configuration settings of an algorithm) are preset so that each algorithm can learn the dataset as well as possible.
You can change the hyperparameters to see how different configurations affect how the algorithm learns on the dataset.
About Web ML Demonstrator
Web ML Demonstrator is a machine learning demonstrator that runs entirely in the client browser. The algorithms
have the same functionality as state-of-the-art implementations. The main purpose of this demonstrator is to serve as a
tool for teaching and explaining machine learning and related concepts.
Web ML Demonstrator is developed by Johan Hagelbäck, senior lecturer at Linnaeus University in Kalmar, Sweden. Contact details
for the developer are available here.