We can take advantage of the static background of the scene. Assuming the ball is the only moving object, we can compute a mean background image and threshold the absolute difference between it and the current frame. The difference is thresholded using two thresholds, and every pixel is classified according to the rules
$difference \leq threshold_{low} \dots class = -1$
$threshold_{low} < difference \leq threshold_{high} \dots class = 0$
$difference > threshold_{high} \dots class = 1$
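The two-threshold classification above might be sketched in NumPy as follows; the concrete threshold values are placeholders, not values from the original method, and would have to be tuned to the camera's noise level:

```python
import numpy as np

# Hypothetical thresholds; suitable values depend on camera noise.
THRESHOLD_LOW = 10
THRESHOLD_HIGH = 30

def classify_pixels(frame, mean_background,
                    t_low=THRESHOLD_LOW, t_high=THRESHOLD_HIGH):
    """Classify each pixel as -1 (background), 0 (uncertain) or 1 (ball)
    based on the absolute difference from the mean background."""
    # Cast to a signed type so the subtraction cannot wrap around.
    diff = np.abs(frame.astype(np.int32) - mean_background.astype(np.int32))
    classes = np.zeros(diff.shape, dtype=np.int8)
    classes[diff <= t_low] = -1       # difference <= threshold_low
    classes[diff > t_high] = 1        # difference > threshold_high
    return classes                    # everything in between stays 0
```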
In the next step, each row and each column of the image of classified pixels is evaluated as the sum of the classes of its elements. These sums are filtered with a convolution kernel generated as the orthogonal projection of a circle onto a line; the projected circle has the same size as the ball in the image. Finally, the horizontal and vertical coordinates of the ball are determined as the points of maximal value of the filtered column and row sums, respectively.
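The localization step described above could be sketched like this. The kernel is taken as the chord-length profile of a disk, which is one plausible reading of "orthogonal projection of a circle onto a line"; the mean subtraction is a practical safeguard of ours, not part of the original description:

```python
import numpy as np

def circle_projection_kernel(radius):
    """Orthogonal projection of a disk of the given radius onto a line:
    the value at offset x is the chord length 2 * sqrt(radius^2 - x^2)."""
    x = np.arange(-radius, radius + 1)
    return 2.0 * np.sqrt(np.maximum(radius**2 - x**2, 0.0))

def locate_ball(classes, radius):
    """Estimate the ball center from a map of classified pixels (-1/0/1).

    Returns (x, y): the argmax positions of the filtered column sums
    and row sums, respectively.
    """
    col_sums = classes.sum(axis=0).astype(float)
    row_sums = classes.sum(axis=1).astype(float)
    # Subtract the mean so the constant offset contributed by the
    # background class does not bias the 'same'-mode convolution near
    # the image borders (an added safeguard, not in the original text).
    col_sums -= col_sums.mean()
    row_sums -= row_sums.mean()
    kernel = circle_projection_kernel(radius)
    col_filtered = np.convolve(col_sums, kernel, mode='same')
    row_filtered = np.convolve(row_sums, kernel, mode='same')
    return int(np.argmax(col_filtered)), int(np.argmax(row_filtered))
```

On a synthetic frame with a disk of the right radius, the argmax of the filtered sums lands on the disk center, which is the behavior the method relies on.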
*Pixel classifications together with the sums of columns and rows (classification color map: red = 1, green = 0, blue = -1)*
When we compare the estimated position of the center of the ball with the real position, it is clear that the estimate is quite biased. The bias is caused by strong reflections on the surface of the ball as well as by the shadows cast by the ball onto the surface of the magnetic platform.
*Comparison of estimated and real position of the center of the ball*
The tests were done with subsampled image frames acquired with a monochrome Basler acA2000-340km Camera Link camera.