Gaussian Processes (GPs) are powerful supervised learning methods designed to solve classification and regression problems. One of the major advantages of Gaussian processes is that they can estimate the uncertainty of their predictions by describing a probability distribution over the potentially infinite set of functions that fit the data. A GP can be defined as a stochastic process whose random variables follow a joint Gaussian distribution, specified by a mean function and a covariance function.
Görtler et al. provide an excellent visual exploration of Gaussian Processes, with mathematical intuition as well as a deeper understanding of how they work. In this article, we will cover a minimal set of basic concepts that set up a foundation for understanding Gaussian processes, and then extend it to assess their performance on regression problems.
A kernel (also called the covariance function) describes the covariance, i.e., the joint variability of the Gaussian process random variables, and is essential for encoding prior information into the GP distribution. These covariance functions form the core of GP models. The radial basis function (RBF) kernel (also known as the Gaussian kernel) is a popular covariance function in GP modelling.
\begin{align} K_{rbf}(\mathbf{x}_i, \mathbf{x}_j) &= \sigma^2 \exp\left(-\frac{||\mathbf{x}_i - \mathbf{x}_j||_2^2}{2l^2}\right) \end{align}There are a variety of kernels that can be used to model different desired shapes of the fitting functions. We also discuss two broad categories of kernels, stationary and non-stationary, in Section ??, and compare their performance on standard datasets. Two parameters of the kernel function play a significant role in modelling a GP: the variance $\sigma^2$ and the lengthscale $l$.
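As a concrete reference, here is a minimal NumPy sketch of the RBF kernel above; the function name `rbf_kernel` and the parameter names `variance` and `lengthscale` are our own choices for illustration.

```python
import numpy as np

def rbf_kernel(X1, X2, variance=1.0, lengthscale=1.0):
    """RBF (Gaussian) covariance: variance * exp(-||x_i - x_j||^2 / (2 * lengthscale^2))."""
    # Pairwise squared Euclidean distances between the rows of X1 (n, d) and X2 (m, d)
    sq_dists = np.sum(X1**2, axis=1)[:, None] + np.sum(X2**2, axis=1)[None, :] - 2 * X1 @ X2.T
    sq_dists = np.maximum(sq_dists, 0.0)  # guard against tiny negative values from round-off
    return variance * np.exp(-sq_dists / (2 * lengthscale**2))
```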
In the case of regression problems, these parameters are learnt from the training data by minimizing the following negative log marginal likelihood (nlml) function.
\begin{align} -\log p(\mathbf{y}|X) &= \frac{1}{2}\mathbf{y}^T(K+\sigma^2_n I)^{-1}\mathbf{y} + \frac{1}{2}\log|K+\sigma^2_n I| + \frac{n}{2}\log2\pi\\ K &= \text{covariance\_function}(X, X)\\ \sigma_n^2 &= \text{likelihood noise variance}\\ n &= \text{cardinality of } X \text{ or } \mathbf{y} \end{align}Now, let us visualize standard (stationary) GPs applied to some standard datasets.
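Before looking at the plots, here is a minimal sketch of how such a fit can be obtained by minimizing the nlml above with SciPy, reusing the `rbf_kernel` helper sketched earlier; the synthetic noisy-sine data and the initial parameter values are assumptions made purely for illustration, not the exact setup behind the figures below.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic noisy sine data (purely illustrative)
rng = np.random.default_rng(0)
X = np.linspace(0, 10, 50)[:, None]
y = np.sin(X).ravel() + 0.2 * rng.standard_normal(50)

def nlml(log_params):
    """Negative log marginal likelihood of a GP with the RBF kernel defined above."""
    variance, lengthscale, noise_var = np.exp(log_params)   # log-parametrisation keeps values positive
    K = rbf_kernel(X, X, variance, lengthscale) + noise_var * np.eye(len(X))
    L = np.linalg.cholesky(K)                                # K + sigma_n^2 I = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))      # (K + sigma_n^2 I)^{-1} y
    # 0.5 * y^T alpha  +  0.5 * log|K + sigma_n^2 I|  +  (n/2) * log(2 pi)
    return 0.5 * y @ alpha + np.sum(np.log(np.diag(L))) + 0.5 * len(X) * np.log(2 * np.pi)

result = minimize(nlml, x0=np.log([1.0, 1.0, 0.1]), method="L-BFGS-B")
print("learnt (variance, lengthscale, noise variance):", np.exp(result.x))
```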
Notice that the noisy sine data has uniform noise over the entire input region. We can also see that the smoothness of the sine function remains similar for all values of the input $X$.
Now, we show the same model fit on slightly more complex data.
There are two similarities between the noisy sine dataset and the noisy complex dataset: i) the noise in the data points is uniform across $X$; ii) the underlying function that generates the dataset appears equally smooth (stationary) across $X$.
In the real world, it is entirely possible that a dataset does not satisfy one or both of the above properties. Now, we will show the performance of stationary GPs on a real-world dataset.
The Olympic Marathon dataset includes the gold medal times for the Olympic Marathon from 1896 to 2020. One noticeable point about this dataset is that, in 1904, the Marathon was badly organised, leading to very slow times.
Let us see how a standard GP performs on this dataset.
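As a sketch of what such a fit might look like in code: here we assume the `pods` package is available to load the dataset (an assumption about tooling on our part) and use scikit-learn as one possible GP implementation, not necessarily the exact one behind the plots below.

```python
import numpy as np
import pods  # one possible loader for this dataset; other loaders work equally well
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

data = pods.datasets.olympic_marathon_men()
X, y = data['X'], data['Y'].ravel()  # Olympic years and winning pace

# RBF kernel with a learnable signal variance, plus a white-noise term for sigma_n^2
kernel = ConstantKernel(1.0) * RBF(length_scale=10.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X, y)  # hyperparameters are tuned by maximising the log marginal likelihood

X_test = np.linspace(1890, 2030, 200)[:, None]
mean, std = gp.predict(X_test, return_std=True)  # predictive mean and uncertainty
```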
From the above fit, we can see that the data is more irregular, or has higher noise, up to 1950, and that after that the trend in the data becomes clearer and narrower. In other words, the noise in the data decreases from the left side to the right side of the plot. The predictive variance in the first fit is overestimated due to the anomaly present in the year 1904. Once we adjust the observation at 1904 to another value, the fit produces a reasonable predictive variance.
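The adjustment can be as simple as the following sketch, continuing from the code above; the replacement value (the mean of the neighbouring observations) is just one reasonable choice, not necessarily the one used for the plot.

```python
# Replace the anomalous 1904 time with the mean of its neighbouring observations
# (an illustrative choice of replacement value)
idx = int(np.where(X.ravel() == 1904)[0][0])
y_adjusted = y.copy()
y_adjusted[idx] = 0.5 * (y[idx - 1] + y[idx + 1])
gp.fit(X, y_adjusted)  # refit on the adjusted data
```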
If we think of an ideal fit for the original dataset, it should have decreasing predictive variance and increasing smoothness in the fitted function as the year increases. Standard or stationary GPs are not internally well-equipped to deal with such datasets. Such datasets are known as non-stationary, and we now formally discuss stationarity and non-stationarity.
A definition of a stationary process from Wikipedia is as follows:
The definition above also applies to space or to any other input space. Now, let us see what this means in the context of Gaussian processes.
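As a brief preview of that discussion (our own paraphrase of the standard definition, not part of the quoted source): a covariance function is called stationary when it depends on its inputs only through their difference,
\begin{align} k(\mathbf{x}_i, \mathbf{x}_j) &= k(\mathbf{x}_i - \mathbf{x}_j), \end{align}
so that shifting all inputs by the same amount leaves the GP prior unchanged. The RBF kernel above is stationary, since it depends on $\mathbf{x}_i$ and $\mathbf{x}_j$ only through $||\mathbf{x}_i - \mathbf{x}_j||_2$.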