
Likelihood


Probability is the measure of the likelihood that an event will occur. It is quantified as a number between 0 and 1, where 0 indicates impossibility and 1 indicates certainty. A simple example is the toss of a coin with two sides, heads and tails. We can describe the probability of each outcome in terms of either the observed outcomes or the expected results.

Flip the Coin
Flip 100 times

For a "fair" coin, the probability of heads equals the probability of tails. For an "unfair" or "weighted" coin, however, the two outcomes are not equally likely. Change the weight of the coin by dragging the expected probability, and see how this affects the observed outcomes.
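The coin-flip simulation can be sketched in a few lines of Python (the function name `flip_coin` and the fixed seed are illustrative, not the page's actual code):

```python
import random

def flip_coin(p_heads, n_flips, seed=0):
    """Flip a weighted coin n_flips times; return the observed fraction of heads."""
    rng = random.Random(seed)
    heads = sum(rng.random() < p_heads for _ in range(n_flips))
    return heads / n_flips

# A fair coin: the observed frequency approaches the expected probability 0.5.
print(flip_coin(0.5, 10_000))

# A weighted coin with P(heads) = 0.7: the observed frequency approaches 0.7.
print(flip_coin(0.7, 10_000))
```

With only a handful of flips the observed frequency can stray far from the expected probability; with many flips the two agree closely, which previews the law of large numbers below.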

Expectation


The expected value of an experiment is the probability-weighted average of all possible values. It is defined mathematically as the following:

$$E[X] = \sum_{x \in X}xP(x)$$
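The sum above can be carried out directly for a fair six-sided die, where each face \( x \in \{1, \dots, 6\} \) has probability \( 1/6 \). A short Python check (using exact fractions to avoid rounding):

```python
from fractions import Fraction

# E[X] = sum over x of x * P(x), for a fair six-sided die with P(x) = 1/6.
outcomes = range(1, 7)
expected_value = sum(Fraction(x, 6) for x in outcomes)

print(expected_value)         # 7/2
print(float(expected_value))  # 3.5
```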

The law of large numbers states that the average result of a series of trials converges to the expected value as the number of trials grows. Roll the die to see its average converge to the expected value.

Roll the Die
Roll 100 times

Change the theoretical probability of the die to see how that changes the average and expected value.
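Both behaviors, convergence of the sample mean and the effect of changing the die's probabilities, can be sketched in Python (the helper `roll_average` and its seed are illustrative assumptions):

```python
import random

def roll_average(probabilities, n_rolls, seed=1):
    """Roll a (possibly weighted) six-sided die n_rolls times; return the sample mean."""
    faces = [1, 2, 3, 4, 5, 6]
    rng = random.Random(seed)
    rolls = rng.choices(faces, weights=probabilities, k=n_rolls)
    return sum(rolls) / n_rolls

fair = [1 / 6] * 6
print(roll_average(fair, 100))      # noisy with few trials
print(roll_average(fair, 100_000))  # close to the expected value 3.5

# A weighted die that favors 6 has a higher expected value:
# E[X] = 0.1*(1 + 2 + 3 + 4 + 5) + 0.5*6 = 4.5
weighted = [0.1, 0.1, 0.1, 0.1, 0.1, 0.5]
print(roll_average(weighted, 100_000))  # close to 4.5
```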

Estimation


One of the main goals of statistics is to estimate unknown parameters. An estimator uses measurements and properties of expectation to approximate these parameters. To illustrate this idea, we will estimate the value of \( \pi \) by randomly dropping samples on a square that has a circle inscribed in it. Because the circle covers a fraction \( \pi/4 \) of the square's area, the fraction of samples that land inside the circle approximates \( \pi/4 \). We will define the following estimator \( \hat{\pi} \), where \( m \) is the number of samples within our circle and \( n \) is the total number of samples dropped.

$$\hat{\pi} = 4\dfrac{m}{n}$$
Drop 100 Samples
Drop 1000 Samples
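The sample-dropping experiment is a classic Monte Carlo estimate of \( \pi \), and can be sketched as follows (the function name `estimate_pi` and the seed are illustrative, not the page's actual code):

```python
import random

def estimate_pi(n_samples, seed=42):
    """Drop samples uniformly on the unit square; apply the estimator 4*m/n."""
    rng = random.Random(seed)
    m = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        # The inscribed circle has center (0.5, 0.5) and radius 0.5.
        if (x - 0.5) ** 2 + (y - 0.5) ** 2 <= 0.25:
            m += 1
    return 4 * m / n_samples

print(estimate_pi(1_000))      # rough estimate
print(estimate_pi(1_000_000))  # much closer to 3.14159...
```

As with the die rolls, more samples mean a more reliable estimate; the next paragraphs make that intuition precise via bias, variance, and mean squared error.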

An estimator's accuracy and precision are quantified by the following properties:

The bias of an estimator is the difference between the estimator's expected value and the true value of the parameter being estimated. It informally measures how accurate an estimator is. For an estimator \( \hat{\theta} \) of a parameter \( \theta \), the bias is defined mathematically as:

$$B(\hat{\theta}) = E(\hat{\theta}) - \theta$$

In our example, \( \hat{\pi} \) is unbiased, which means its bias is 0.

Variance is the expectation of the squared deviation of an estimator from its expected value. It informally measures how precise an estimator is. For an estimator \( \hat{\theta} \), the variance is defined mathematically as:

$$var(\hat{\theta}) = E[(\hat{\theta} - E(\hat{\theta}))^2]$$

In our example, \( m \) follows a Binomial\( (n, \pi/4) \) distribution, so the variance of our estimator \( \hat{\pi} = 4m/n \) is \( \pi(4 - \pi)/n \), which shrinks as the number of samples \( n \) grows.

The mean squared error (MSE) of an estimator is the sum of its variance and its squared bias. For an estimator \( \hat{\theta} \), it is defined mathematically as:

$$MSE(\hat{\theta}) = var(\hat{\theta}) + B(\hat{\theta})^2$$

In our example, the mean squared error of our estimator \( \hat{\pi} \) equals its variance, \( \pi(4 - \pi)/n \), since the bias is 0.
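All three properties can be checked empirically by repeating the \( \pi \) experiment many times and comparing against the true value (the replication count, sample size, and seed below are illustrative choices):

```python
import math
import random

def estimate_pi(n_samples, rng):
    """One run of the experiment: drop n_samples points, return 4*m/n."""
    m = sum((rng.random() - 0.5) ** 2 + (rng.random() - 0.5) ** 2 <= 0.25
            for _ in range(n_samples))
    return 4 * m / n_samples

rng = random.Random(7)
n = 10_000
estimates = [estimate_pi(n, rng) for _ in range(200)]

mean = sum(estimates) / len(estimates)
bias = mean - math.pi
variance = sum((e - mean) ** 2 for e in estimates) / len(estimates)
mse = variance + bias ** 2

print(f"bias     = {bias:.5f}")      # near 0: the estimator is unbiased
print(f"variance = {variance:.6f}")  # theory predicts pi*(4 - pi)/n, about 0.00027 here
print(f"mse      = {mse:.6f}")       # approximately equal to the variance, since bias is near 0
```

Because \( \hat{\pi} \) is unbiased, the empirical bias hovers near zero and the MSE is dominated by the variance term, matching the closed-form \( \pi(4 - \pi)/n \).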