Shadow detection and removal, as well as vehicle occlusion, are recurrent difficulties. Motion detection is a very important task, as it is a major feature of traffic monitoring applications: cars are moving. There are several techniques to segment vehicles from the background of a scene picture or of a video frame.

The most widespread techniques involve the calculation of the optical flow. In optical flow, we are interested in finding vector fields that describe how the image is changing; in other words, we try to detect where certain parts of the image have moved to in the next frame. Such techniques usually have two steps: i) detect feature points, and then ii) track them over the frames. Feature points are special points that are distinguishable from their neighbors, and each algorithm may use different strategies to detect and track them. Several ways of achieving this are described in [14] and [15].

One alternative is calculating the difference between two consecutive frames. Pixels for which the difference is bigger than a threshold are considered to be moving. However, this technique does not detect the direction of movement and can only be used for simpler applications. Additionally, it suffers from the aperture problem, which occurs when a uniformly colored surface moves: because the color is similar across the surface, movement may not be detected in areas of the captured image where the surface was already present, since the difference there is not large enough to be considered motion and is instead interpreted as image noise or small variations in illumination. This problem is demonstrated in Fig. 1, which shows two consecutive frames with the same region highlighted. In Fig. 1c, the same region in the two frames is compared and magnified.
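The frame-differencing alternative described above can be sketched as a minimal illustration in NumPy (the threshold value is an arbitrary assumption, and the toy frames are synthetic):

```python
import numpy as np

def frame_difference_mask(prev_frame, curr_frame, threshold=30):
    """Mark as 'moving' every pixel whose grayscale intensity changed
    by more than `threshold` between two consecutive frames."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

# Toy example: a bright 2x2 "object" shifts one pixel to the right.
prev = np.zeros((5, 5), dtype=np.uint8)
prev[2:4, 1:3] = 200
curr = np.zeros((5, 5), dtype=np.uint8)
curr[2:4, 2:4] = 200

mask = frame_difference_mask(prev, curr)
# Only the leading and trailing edges of the object register as motion;
# the overlapping interior, where intensity did not change, does not --
# a simple analogue of the aperture problem discussed above.
```

Note that the mask says nothing about the direction of the movement, which is exactly the limitation pointed out in the text.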
Although they display different parts of a moving vehicle, focusing only on small portions is not enough to perceive any difference.

A different approach is used by background subtraction methods, which are also able to detect moving objects. One important premise of these methods is that the background changes little over time, while foreground objects such as cars are moving and therefore contrast with the background. Hence, if foreground objects are detected, it is safe to consider them moving objects. Ideally, for traffic monitoring, this background image would be the same frame without moving objects, showing the scene as if there were no cars or other moving objects in it. There are several ways to obtain such an image.

The simplest method is choosing a fixed image as the background. After subtracting one image from another, pixels whose difference is bigger than a threshold are considered moving pixels. This is usually too simplistic, as changing illumination conditions, weather, noise, or small movements of the camera may cause big differences even when there are no moving objects in those regions.

Better options involve building a background estimation image based on the frames captured by the camera. There are several methods to estimate the background and foreground of a sequence of frames, of which [16] presents an interesting overview. The main difficulty for these algorithms occurs in situations with very slow traffic or temporarily stopped vehicles, as these start blending into the estimated background. A solution to this problem is suggested in [5], and an alternative is proposed in [3], which uses time-varying background and foreground intrinsic images. Both studies achieved very good results in successfully segmenting background and foreground. Fig. 2 shows the results of a background estimation algorithm after running for around 30 seconds.
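As an illustration of the background-estimation idea, the following sketch maintains a running average of the incoming frames, one of the simplest estimators (the learning rate and foreground threshold are assumptions for the example, not values from the cited works):

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Blend the new frame into the background estimate. Stopped or
    very slow objects gradually leak into the background -- the
    difficulty mentioned in the text."""
    return (1 - alpha) * background + alpha * frame

def foreground_mask(background, frame, threshold=25):
    """Pixels far from the background estimate are foreground."""
    return np.abs(frame - background) > threshold

# Simulate a static, noisy scene, then a frame with a "vehicle".
rng = np.random.default_rng(0)
scene = np.full((8, 8), 100.0)
background = scene.copy()
for _ in range(50):                      # learn the empty scene
    frame = scene + rng.normal(0, 2, scene.shape)
    background = update_background(background, frame)

vehicle_frame = scene.copy()
vehicle_frame[3:5, 3:5] = 220.0          # bright moving object
mask = foreground_mask(background, vehicle_frame)
# The 2x2 vehicle region stands out against the learned background.
```

The averaging makes the estimate robust to per-frame noise, but it is precisely this blending that lets a long-stopped vehicle disappear into the background over time.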
It shows a background estimate that is very close to the real background and was able to successfully identify moving objects without any extra processing or calculation.

Vehicle segmentation is probably the most frequent type of information extracted from traffic surveillance videos. Its objective is to distinguish vehicles from the background and also vehicles from each other. One common method to detect vehicles in a video sequence uses one of the previously analyzed motion detection or background estimation techniques to detect the cars. A solution that compares the current frame with the background estimation, using a quad-tree decomposition to find the pixels where cars are located, is presented in [12]. As stated in its experimental results,
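The quad-tree idea can be sketched roughly as follows: a region whose difference from the background is negligible is discarded whole, and otherwise it is split into four quadrants and examined recursively. This is a simplified illustration; the stopping criteria and thresholds here are assumptions, not those of [12]:

```python
import numpy as np

def quadtree_blocks(diff, threshold=20, min_size=2, x=0, y=0, size=None):
    """Recursively subdivide `diff` (|frame - background|) and return
    the leaf blocks (x, y, size) that still contain large differences."""
    if size is None:
        size = diff.shape[0]            # assume a square, power-of-two image
    block = diff[y:y + size, x:x + size]
    if block.max() <= threshold:        # homogeneous block: no vehicle here
        return []
    if size <= min_size:                # smallest block that still differs
        return [(x, y, size)]
    half = size // 2
    blocks = []
    for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
        blocks += quadtree_blocks(diff, threshold, min_size,
                                  x + dx, y + dy, half)
    return blocks

# Background is flat; the current frame has one bright 2x2 "vehicle".
background = np.full((8, 8), 100.0)
frame = background.copy()
frame[2:4, 2:4] = 200.0
diff = np.abs(frame - background)
blocks = quadtree_blocks(diff)
# Only the block overlapping the vehicle survives the decomposition.
```

The appeal of the decomposition is that large empty regions are rejected with a single comparison instead of a per-pixel scan.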