# Day 93 (DL) — Kalman Filter (one of the fundamentals of object tracking)

So far we’ve discussed diverse object detection architectures, and the next step is to track the detected objects. Object tracking is one of the evolving fields in computer vision, finding applications in varied domains including self-driving cars, traffic-congestion monitoring and bird-migration tracking. But before jumping into the tracking process, it is essential to gain an in-depth understanding of the maths behind the Kalman filter and the Hungarian algorithm. The focus of this post will be on the Kalman filter along with its derivation.

Basic Intuition of the Kalman Filter: Imagine a wildlife explorer on an adventure to study the behaviour of cheetahs. Based on the current dynamics (velocity, position), the next movement of the animal can be predicted. Note: it’s all based on probabilities, as we cannot tell the next move exactly from just the current one; various external factors (prey appearing nearby, for example) might influence the movement. This is the core idea behind the Kalman filter. One noteworthy property of the filter is its ability to forecast using only the previous time step, which makes it well suited for real-time applications.

Gaussian Distribution & Covariance Matrix: Since velocity and position are continuous quantities, the filter assumes both of these variables (the future state) are Gaussian distributed. Please check out the highlighted links for Gaussian distribution. The future state could be any of ’n’ possibilities (a range of values), characterised by a mean and a standard deviation (std). As we know, the mean represents the central value of the randomly distributed data, and the std denotes how far the values typically deviate from the (unknown) mean.
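As a quick illustrative sketch (not from the original post), we can draw samples from a Gaussian with a chosen mean and std and confirm that the sample statistics recover them; the numbers here (mean 5, std 2) are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw 100k samples from a Gaussian with mean 5.0 and std 2.0
samples = rng.normal(loc=5.0, scale=2.0, size=100_000)

# The sample mean/std closely match the parameters we chose
sample_mean = samples.mean()
sample_std = samples.std()
```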

Some of the future points (combinations of velocity and position) are more probable than others, but only one combination is the actual destination. If we observe carefully, we can usually find some relation between the variables considered. In our case (velocity & position), if the animal is running at its maximum capacity, the displacement will be large; otherwise it will be small. The statistical measure that helps determine this relationship between variables is the Covariance Matrix (link highlighted).
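A minimal sketch of this idea, with made-up numbers: if we simulate velocity samples and let displacement scale with velocity (plus some noise), the covariance matrix of the pair picks up a positive off-diagonal term, reflecting their positive correlation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical cheetah velocities (m/s) and positions that scale with velocity
velocity = rng.normal(20.0, 5.0, size=1000)
position = 0.5 * velocity + rng.normal(0.0, 1.0, size=1000)

# 2x2 covariance matrix of [position, velocity];
# the off-diagonal entries capture how the two variables co-vary
cov = np.cov(np.vstack([position, velocity]))
```

A positive `cov[0, 1]` says faster runs go with larger displacements, which is exactly the dependence the Kalman filter exploits.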

Kinematic formula & Prediction matrix: we know that velocity = distance travelled / time taken; rearranging, distance travelled = velocity × time taken

new position = old position + distance travelled

Rewriting the above equation using the kinematic formula,

new_position (Pk) = old_position (Pk-1) + time_taken (Δt) × old_velocity (Vk-1)

new_velocity (Vk) = old_velocity (Vk-1)

Representing this using matrices, the new state Xk is expressed in terms of the old state Xk-1:

Xk = Fk * Xk-1, where Xk = [Pk, Vk] and Fk is the prediction matrix holding the coefficients above, Fk = [[1, Δt], [0, 1]].
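The prediction step above can be sketched in a few lines of NumPy (the state values 10 m and 4 m/s are illustrative, not from the post):

```python
import numpy as np

dt = 1.0  # time step Δt

# Prediction (state-transition) matrix from the kinematic equations:
# new_position = old_position + dt * old_velocity
# new_velocity = old_velocity
F = np.array([[1.0, dt],
              [0.0, 1.0]])

x_prev = np.array([10.0, 4.0])  # previous state [position, velocity]
x_pred = F @ x_prev             # predicted state Xk = Fk * Xk-1
```

With these numbers the predicted position is 10 + 1.0 × 4 = 14 and the velocity is unchanged.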

Matrix multiplication and transformation: Suppose we have ’n’ points in a distribution and every point undergoes the same linear transformation (a change of basis — refer to the video below); the relationship of the points to one another is then altered in a predictable way. If two variables have a positive correlation, the correlation is preserved after the transformation. Here, we refer to transformation as matrix multiplication. Since the spread and correlation of the two variables (position & velocity) are captured by the covariance matrix Ck, we can write how it transforms under the matrix multiplication:

Ck = Fk * Ck-1 * Trans(Fk), which follows from the identity Cov(F * x) = F * Cov(x) * Trans(F).

This holds because each point in the previous Gaussian distribution (with covariance Ck-1) undergoes the same transformation Fk to give the new Gaussian distribution (with covariance Ck).
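The covariance propagation can be checked numerically; the starting covariance below is a made-up example, not a value from the post:

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])  # same prediction matrix as before

# Hypothetical previous covariance of [position, velocity]
C_prev = np.array([[2.0, 0.5],
                   [0.5, 1.0]])

# Propagate the covariance: Ck = Fk * Ck-1 * Trans(Fk)
C_pred = F @ C_prev @ F.T
```

Note that the result stays symmetric (as a covariance matrix must), and the position variance grows because uncertain velocity feeds into the position prediction.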

The external influencing factors will be continued in the subsequent post.

My learnings are from the reference link below; I strongly recommend readers go through it for visual insights.
