Appendix: Translation I
Vehicle Speed Detection System
Chomtip Pornpanomchai Kaweepap Kongkittisan
Abstract
This research develops a vehicle speed detection system using image processing techniques. The main work is the software development of a system that requires a video scene consisting of a moving vehicle, a starting reference point and an ending reference point. The system detects the position of the moving vehicle and of the reference points in the scene, and calculates the speed of each static image frame from the detected positions. The vehicle speed detection from a video frame system consists of six major components: 1) Image Acquisition, to collect a series of single images from the video scene and store them in temporary storage; 2) Image Enhancement, to improve some characteristics of the single images in order to provide more accuracy and better performance; 3) Image Segmentation, to detect the vehicle position using image differentiation; 4) Image Analysis, to analyze the positions of the starting and ending reference points using a threshold technique; 5) Speed Detection, to calculate the speed of the vehicle in each single image frame using the detected vehicle position and the reference point positions; and 6) Report, to convey the information to the end user as readable information.
The experiments assess three qualities: 1) usability, to prove that the system can detect vehicle speed under the specified conditions; 2) performance; and 3) effectiveness. The results show that the system achieves its highest performance at a resolution of 320x240. It takes around 70 seconds to detect the speed of a moving vehicle in a video scene.
Keywords: vehicle speed detection, video frame differentiation
I. Introduction
The idea of using a video camera to detect vehicle speed greatly improves on the current approach, which relies mainly on radar equipment. Radar-based speed detection has spread into many different industries, but the equipment itself has some disadvantages that cannot be fixed no matter how far the technology is improved, as long as the equipment is still based on the radar approach.
The way radar operates is known as the Doppler shift phenomenon, which we probably experience daily. A Doppler shift occurs when sound is generated by, or reflected off, a moving vehicle. In the extreme, the Doppler shift creates sonic booms, and when those sound waves bounce back to the wave generator, the frequency of the sound is changed; scientists use that variation to calculate the speed of the vehicle. However, this technique still has some disadvantages, such as the cost of the equipment, which is the most important reason to look for alternative equipment that can reduce the cost of investment. Image processing technology can serve this requirement.
Image processing is a software-based technology that does not require special hardware. With a typical video recording device and an ordinary computer, we can build a speed detection device. By using the basic scientific theory of velocity, we can calculate the speed of a vehicle in the video scene from the distance it has traveled and the time it took.
Several key image processing techniques are applied in this project: image differentiation for vehicle detection, thresholding for the segmentation process, and region filling to find the vehicle boundaries. However, the project is still at the prototype stage, and more research and development are needed to overcome the system limitations and improve the software performance so that it can become a practical application.
II. Literature Review
Many researchers have applied a variety of techniques to detecting and measuring vehicle speed. All the techniques are based on hardware equipment and computer software, as in the following examples:
2.1 Electronic Hardware and Computer Software
In Korea, Yong-Kul Ki et al. used double-loop detectors and Visual C++ software to measure vehicle speed. Joel L. et al., J. Pelegri et al. and Ryusuke Koide et al. proposed magnetic sensors combined with computer software to detect vehicle speed. Harry H. Cheng et al. used a laser-based non-intrusive technique to measure vehicle speed. Z. Osman et al. used microwave signals to detect vehicle speed. Jianxin Fang et al. used continuous-wave radar to detect, classify and measure the speed of vehicles.
2.2 Image Processing and Computer Software
Huei-Yung Lin and Kun-Jhih Li used blurred images to measure vehicle speed. Shisong Zhu et al. proposed measuring car speed from the traffic video signal. Bram Alefs and David Schreiber applied the AdaBoost algorithm and moving-object tracking to detect vehicle speed. S. Pumrin and D.J. Dailey presented a method for automated speed measurement.
Our system uses a video recorder to record traffic in a video scene; after that, we use a distance measurement to calculate the vehicle speed.
III. Methodology
This part introduces our approach to detecting vehicle speed from a video scene. We describe the overall framework of the system, the structure of each component, and the techniques we are using.
3.1 Vehicle Speed Detection System Framework
車輛速度檢測系統(tǒng)的硬件要求如圖2- 1(a)所示,該系統(tǒng)由標準的IBM / PC連接到非標定攝像機。系統(tǒng)的輸入必須是場景中行駛的車輛。我們必須知道場景中的距離測量結構,它包括起點和終點以及行駛的車輛,如圖2- 1(b)所示。該系統(tǒng)的基本思想是從車輛行駛的距離和車輛經過起止點時的時間,來計算車輛速度的。
圖2- 1 (a)輸入部分硬件組成 (b)視頻場景結構
3.2 Vehicle Speed Detection System Structure Chart
To provide the details of each operation of the vehicle speed detection system, we first present the structure of the system as shown in Figure 2-2, and then elaborate on how each working module is constructed.
Based on the structure chart in Figure 2-2, our system consists of six major components: (1) image acquisition, (2) image enhancement, (3) image segmentation, (4) image analysis, (5) speed calculation, and (6) report. Each component has the following details.
[Structure chart of the system: detect vehicle speed from a video scene; image acquisition; image enhancement; image segmentation; image analysis; speed detection; report; sub-modules: media retrieval, image buffering, scale adjustment, gray-scale adjustment, frame differencing, vehicle boundary identification, image thresholding, vehicle identification, vehicle tracking, vehicle speed search, display of segmentation results, chart plotting.]
Figure 2-2. Structure chart of the vehicle speed detection system
3.2.1. Image Acquisition
We use the Microsoft DirectShow library as the tool to receive input into the system. Microsoft DirectShow provides a technology called the Filter Graph Manager, which accepts unformatted video streams as input. Using the Filter Graph Manager, we do not need to worry about the format of the video. The filter graph works at the device-driver level in order to stream multimedia data through the media system. It also provides a structure of multimedia filters used specifically by the automotive platform. The filter graph is constructed of three filter types: the source filter, the decoder filter and the render filter. These three filters act as low-level media drivers to receive, process and provide the same data format to the output level. Our image acquisition component is in charge of calling the filter graph, grabbing single frames from the video stream, and buffering each single image to memory storage.
3.2.2. Image Enhancement
To improve the image quality for the next stages, we first tried a couple of algorithms such as noise reduction and image smoothing. But the experimental results were not very good, because all of these methods are time-consuming. So we cut out the operations that contribute little to our analysis; the two remaining operations are image scaling and gray scaling.
Image scaling is used so that input formats of various sizes can be handled. Knowing the format of the images helps us to determine the time that will be needed to process each single image and display it on the output device.
Regarding the variety of input formats, color is a key factor with a great impact on the system. The image color in an input format can be up to 36 million colors, which makes the analyzing process very difficult. To reduce this difficulty, gray scaling is brought into the process. Converting the colored image into a gray-level image means that we discard millions of color levels: an image with 36 million color levels can be transformed into an image with 24 gray levels without losing the abstraction.
3.2.3. Image Segmentation
對于該操作,我們正在討論對行駛車輛的圖像分割,為了從圖像序列中分出行駛的車輛,我們決定使用圖像微分的方法。對于圖像增強的來說,圖像序列中的所有圖片都必須經過它,這就意味著所有的這些圖片都是灰階圖。我們將灰階圖像中的第一幅圖作為參考幀。下一步是去掉圖像序列中我們選擇的參考點。取掉后的結果就是運動的二進制圖。我們確定車輛位置的方法是在垂直空間內找到最大 的區(qū)域。我們將垂直空間內最大的區(qū)域作為車輛的入口點。從已知的最新點開始,我們將使用區(qū)域增長法。區(qū)域增長法可以讓我們知道車輛的真實區(qū)域。車輛所在區(qū)域將作為車輛坐標存在記憶卡中。
3.2.4. Image Analysis
Image analysis is used to find the positions of the mark points in the reference frame. The gray-scaled reference frame obtained from image enhancement is used as the input of this process. As shown in Figures 2-1(a) and 2-1(b), the mark points must lie on the dark shaded line, so that the image thresholding method can distinguish the mark points from the background. After thresholding, we obtain a binary image containing only the two black mark points on a white background. In this step the binary image is inverted and sent to the image segmentation process to find the boundary of the vehicle. The segmentation process yields the first mark point, because it determines the biggest area in the vertical space as the vehicle coordinates. The next step is therefore to regenerate the image without the first mark point; the new image is sent to the segmentation process again to find the second mark point. When both mark points have been obtained from the segmentation process, the process decides which is the starting point and which is the ending point. The result of the whole analysis is the pair of starting and ending points that will be used in speed detection.
3.2.5. Speed Detection
The previous processes have already given us the position of the vehicle in each image frame and the positions of the mark points in the reference frame. The speed of the vehicle in each image is calculated from the position of the vehicle together with the positions of the reference points and the time difference. After these calculations, the final step is to obtain the average speed of the vehicle while it moves between the two mark points. Figure 2-3 gives a more visual explanation of the method.
Figure 2-3. Diagram of all the variables used in vehicle speed detection
根據(jù)圖3計算車輛速度,我們列出了如下所示的方程:
車輛和起始點間的距離(千米):
距離 = D? * (D / Dx) * (Pn– P0)
車輛行駛時間(小時):
時間 = T? * (tn– t0)
車輛速度:
速度 = 距離 /時間 (千米每小時)
D 起止點間的實際距離(m)
Dx 起止點在圖像中的距離
X 圖片中場景的寬度
Y 圖像中場景的高度
P0 t=0時,車輛在圖片中的位置
Pn t=n時,車輛在圖片中的位置
t0 t=0時,記錄的時間點(ms)
tn t=n時,記錄的時間點(ms)
Df 將米轉換為千米的值
Tf 將毫秒轉換為小時后的值
3.2.6. Report
Report is the last process; it provides the end user with a readable result of the calculation. The output format is either text or a chart showing the speed of the vehicle as it passes the reference points.
IV. Experimental Results
在本節(jié)中,我們將用實驗結果證明在視頻場景中的速度檢測系統(tǒng)是可行的。首先,我們先提供一個實驗結果,它將演示如何運用我們的系統(tǒng)來獲取視屏場景中車輛的速度。第二部分是演示我們系統(tǒng)的精度。最后我們要做的是特性試驗,來說明這個速度檢測系統(tǒng)的是可行。
4.1 Usability Proof
The experiment starts from the analysis window of the software, where the screen shows the list of image frames extracted from the video scene. The input for this experiment is a radio-controlled toy car moving across the scene from left to right. Figure 2-4 shows the vehicle from frame 9, when it first appears in the scene, until it reaches the ending mark point. For each frame, the detected vehicle position is recorded together with the frame timestamp. With this information plus the positions of the starting and ending mark points, a simple manual calculation gives the vehicle speed in each frame. Table 1 lists the frame numbers with their timestamps and the vehicle speed in each frame; the last row of the table shows the average vehicle speed.
Figure 2-4. Screen at frame 9
4.2 Effectiveness Test
The effectiveness test was performed on the same setup as the usability proof, but with more attention paid to the correctness of the results. The experiments were run on different scenes, varying the factors we consider to affect the correctness of the system, such as image resolution, the size of the vehicle in each image frame, a moving recording camera, and a complex background. Table 1 shows the results using a moving toy car at a resolution of 640x480. Table 2 shows the results using a scene with a real moving vehicle.
4.3 Performance Test
In this stage we focus on testing the execution time of the system while it performs the automatic analysis process. Following the same idea as in the previous subsections, the tests were run under different conditions, varying the factors we consider to affect performance. For convenience, we used the same factors and scenes as in the effectiveness test. Table 2 shows the results of the performance test.
V. Conclusion
根據(jù)前一節(jié)的介紹,我們的實驗已經達到3目標。第一目標是可用性測試,我們可以很明顯說系統(tǒng)可以完全符合這一目標。該系統(tǒng)是能夠檢測運動車輛的速度在一個視頻場景。我們的問題為了改善系統(tǒng)是剩下的2的目標,這是這個系統(tǒng)的性能和效率。
基于實驗結果是正確的,該系統(tǒng)是基于仍然太過形式的數(shù)據(jù)。在這一點上,我們分析了實驗結果為了考慮的重要因素,它會影響系統(tǒng)的正確性,如下所示。復雜的背景——我們已經定義
這個問題是我們的假設條件。但在現(xiàn)實生活中應用程序,視頻場景不能固定。該系統(tǒng)必須能夠處理任何類型的背景,甚至非靜態(tài)背景。
視頻場景的大小——的一個重要因素,對系統(tǒng)效果影響的因素,是視頻場景的大小。較大的圖像提供了更多的處理信息。
汽車的大小——關于我們確定車輛的位置的方法,我們正在使用形象差異化區(qū)分車輛從靜態(tài)背景。這是工作好只要大小的車輛不是太小。一個非常小的車輛可以導致系統(tǒng)無法區(qū)分車輛和噪聲。當這種情況發(fā)生時,檢測過程可以是錯誤的。
固定的特征標志點-標記點是計算過程中非常重要的。給了錯誤的特性標記點,系統(tǒng)可能無法識別正確的位置的標記點。
穩(wěn)定的亮度水平,處理視頻場景在不穩(wěn)定的亮度水平意味著我們工作在不同的圖像和背景在每一個采樣圖像。這樣做的結果可能是一個意想不到的錯誤的檢測過程。
輸入視頻的顏色數(shù)量——灰色定標過程是使用只是一個簡單的算法。這意味著灰色定標過程不是專為太許多顏色級別(比如1.6這種圖像)。
車輛的方向——基于實驗結果,我們試圖移動車輛扭轉方向。這樣做的結果是,檢測過程是考慮到負數(shù)的車輛速度。
在每個場景限制車輛的數(shù)量——我們已經提出這是限制的規(guī)范。該系統(tǒng)已經實現(xiàn),僅支持單一移動車輛在視頻場景。擁有一輛以上的車朝著同樣的場景
會導致系統(tǒng)提供一個錯誤的結果。
English Original
Vehicle Speed Detection System
Chomtip Pornpanomchai Kaweepap Kongkittisan
Abstract
This research intends to develop the vehicle speed detection system using image processing technique. Overall works are the software development of a system that requires a video scene, which consists of the following components: moving vehicle, starting reference point and ending reference point. The system is designed to detect the position of the moving vehicle in the scene and the position of the reference points and calculate the speed of each static image frame from the detected positions. The vehicle speed detection from a video frame system consists of six major components: 1) Image Acquisition, for collecting a series of single images from the video scene and storing them in the temporary storage. 2) Image Enhancement, to improve some characteristics of the single image in order to provide more accuracy and better future performance. 3) Image Segmentation, to perform the vehicle position detection using image differentiation. 4) Image Analysis, to analyze the position of the reference starting point and the reference ending point, using a threshold technique. 5) Speed Detection, to calculate the speed of each vehicle in the single image frame using the detection vehicle position and the reference point positions, and 6) Report, to convey the information to the end user as readable information.
The experimentation has been made in order to assess three qualities: 1) Usability, to prove that the system can determine vehicle speed under the specific conditions laid out.
2) Performance, and 3) Effectiveness. The results show that the system works with highest performance at resolution 320x240. It takes around 70 seconds to detect a moving vehicle in a video scene.
Keywords- Vehicle Speed Detection, Video Frame Differentiation.
I. INTRODUCTION
The idea of using the video camera to measure the vehicle speed has been proposed to improve the current speed detection approach, which relies too much on radar equipment. The use of radar equipment to detect speed has spread widely into different kinds of industries. But the equipment itself has some disadvantages, which cannot be fixed no matter how far the technology is improved, as long as the equipment is still based on the radar approach.
The way the radar operates is known as Doppler shift phenomenon. We probably experience it daily. Doppler
shift occurs when sound is generated by, or reflected off of, a moving vehicle. Doppler shift in the extreme creates sonic booms and when those sound waves bounce back to the wave generator, the frequency of the sound will be changed, and scientists use that variation to calculate the speed of a moving vehicle. However, this technique still has some disadvantages such as the cost of equipment, which is the most important reason to find other compensating equipment that can reduce the cost of investment. Image processing technology can serve this requirement.
Image processing is the technology, which is based on the software component that does not require the special hardware. With a typical video recording device and a normal computer, we can create a speed detection device. By using the basic scientific velocity theory, we can calculate the speed of a moving vehicle in the video scene from the known distance and time, which the vehicle has moved beyond.
Few image processing key methodologies have been applied to this project. The image differentiation is used in the vehicle detection process, image thresholding for the segmentation process and region filling to find the vehicle boundaries. However, the project is still in the prototype mode, which requires more and more research and development in order to overcome the system limitation and enhance the performance of software to be able to perform to real-world application.
II. LITERATURE REVIEWS
Many researchers have applied a variety of techniques to detecting and measuring vehicle speed. All the techniques are based on hardware equipment and computer software, as follows:
2.1 Electronic Hardware & Computer Software
Yong-Kul Ki et al. [1] used double-loop detector hardware and Visual C++ software to measure vehicle speed in Korea. Joel L. et al. [2], J. Pelegri et al. [3] and Ryusuke Koide et al. [4] proposed magnetic sensors combined with computer software to detect vehicle speed. Harry H. Cheng et al. [5] used a laser-based non-intrusive detection system to measure vehicle speed in real time. Z. Osman et al. [6] applied microwave signals to detect vehicle speed. Jianxin Fang et al. [7] used continuous-wave radar to detect, classify and measure the speed of vehicles.
2.2 Image Processing & Computer Software
Huei-Yung Lin and Kun-Jhih Li [8] used blurred images to measure vehicle speed. Shisong Zhu et al. [9] proposed car speed measurement from the traffic video signal. Bram Alefs and David Schreiber [10] applied AdaBoost detection and Lucas-Kanade template matching techniques to measuring vehicle speed. S. Pumrin and D.J. Dailey [11] presented a methodology for automated speed measurement.
Our system will use a video recorder to record traffic in a video scene. After that we will use a distance measurement to calculate a vehicle speed.
III. METHODOLOGY
This part introduces our approach to creating a system for vehicle speed detection from a video scene. We start with the overall framework of the system, then describe each component in the framework and the basic techniques we use in each component.
3.1 Overview of Vehicle Speed Detection Framework
The hardware requirement for the vehicle speed detection system is shown in Figure 1(a). The system consists of a normal IBM/PC connected to an uncalibrated camera. The input of the system must be the scene of a moving vehicle. The scene must have a known distance frame, which consists of the starting point, the end point and the moving vehicle, as displayed in Figure 1(b). The basic idea of the system is to calculate the vehicle speed from the known distance and the time between the moment the vehicle first passes the starting point and the moment it finally reaches the end point.
3.2 Vehicle Speed Detection System Structure Chart
To provide a deeper understanding of the details in each operation of the vehicle speed detection system, we firstly introduce the structure of the system as shown in Figure 2. And we will then elaborate on how each working module is constructed.
Based on the structure chart in Figure 2, our system consists of 6 major components, which are 1) Image Acquisition, 2) Image Enhancement, 3) Image Segmentation, 4) Image Analysis, 5) Speed Calculation, and 6) Report. Each component has the following details.
Figure 2. Structure chart of vehicle speed detection system
3.2.1 Image Acquisition
We have decided to use the Microsoft DirectShow library as our tool to receive the input to the system. Microsoft DirectShow provides a technology called the Filter Graph Manager, which accepts video streams as input regardless of their format. Using the Filter Graph Manager, we have no need to worry about the format or the source of the media. The filter graph performs at the device driver level in order to stream multimedia data through the media system. The filter graph provides a structure for multimedia filters used specifically by the automotive platform. The filter graph is constructed of three filter types, which are the source filter, the decoder filter and the render filter. Those three filters perform as low-level media drivers to receive, process and provide the same data format for all media to the output level. Our Image Acquisition component is in charge of calling the filter graph, grabbing single frames from the video stream, and buffering each single image to the memory storage.
3.2.2 Image Enhancement
We first experimented with a couple of algorithms to improve our image quality for the next steps, such as noise reduction and image smoothing. But the experimental results were not very good, because all those methods were time-consuming. So we cut the operations that are not useful to our analyzing process; the two remaining operations are Image Scaling and Gray Scaling.
Image Scaling is used in order to provide the possibility of having the various sizes of input formats. Understanding the format of the images helps us to determine the time that will be used to process each single image and display to the output device.
Regarding the variety of input formats, color is one of the key factors with a great impact on the system. The image color in each input format can be up to 36 million colors, which makes the analyzing process difficult. To reduce this difficulty, Gray Scaling has been brought into the process. Converting the colored image to a gray-level image means that we cut away the millions of color levels: images with 36 million color levels can be transformed into 24 levels of gray without losing the abstraction.
3.2.3 Image Segmentation
This operation performs image segmentation for the moving vehicle. To segment the moving vehicle from the image sequence, we have decided to use the image differentiation approach. All images in the sequence must pass through the image enhancement process, which means they are all gray-scaled images. The first image of the gray-scaled sequence is selected as the reference frame. The next step is to subtract the chosen reference frame from every image in the sequence; the result of the subtraction is a binary image of the movements. Our approach to determining the vehicle position is to find the biggest area in the vertical space, which we declare to be the prospective vehicle entry point. From the newly discovered entry point, the region-growing method is applied; it gives us the area of the real vehicle. The area of the vehicle is saved into a memory data structure called the vehicle coordinate.
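The differencing-and-entry-point idea above can be sketched as follows. This is a hedged illustration rather than the paper's implementation: images are plain lists of gray values, and the "biggest area in the vertical space" is approximated by the column containing the most motion pixels; all names are ours.

```python
def difference_mask(frame, reference, threshold=10):
    """Binary motion mask: 1 where the frame differs from the reference."""
    return [[1 if abs(p - r) > threshold else 0
             for p, r in zip(f_row, r_row)]
            for f_row, r_row in zip(frame, reference)]

def vehicle_entry_column(mask):
    """Return the column with the most motion pixels, standing in for the
    'biggest area in the vertical space' used as the vehicle entry point."""
    counts = [sum(col) for col in zip(*mask)]
    return counts.index(max(counts))
```

A region-growing pass would then expand from that entry column to recover the full vehicle area before it is stored as the vehicle coordinate.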
3.2.4 Image Analysis
The Image Analysis process is responsible for finding the positions of the mark points in the reference frame. The gray-scaled reference frame received from the image enhancement process is used as the input of this process. Referring to the framework in Figure 1(a) and Figure 1(b), the mark points must be on the dark shaded line, so that the image thresholding method can be used to distinguish the mark points from the background. After thresholding has been applied to the reference frame, we have a binary image containing only the two mark points in black on a white background. The binary image in this step is inverted and sent to the image segmentation process to find the boundary of the vehicle itself. The result of the segmentation process will be the 1st mark point, because the segmentation determines the biggest area in the vertical space as the vehicle coordinate. So the next step is to populate a new image without the 1st mark point; the newly populated image is sent to the image segmentation process to find the 2nd mark point. When both mark-point positions have been received from the image segmentation process, the process decides which is the starting point and which is the end point. The result of the process is the position of the starting point and the ending point, which will be used in the speed detection.
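The thresholding step for locating the two mark points can be illustrated with a simplified sketch. Instead of the paper's two repeated segmentation passes, this version thresholds the gray reference frame and splits the dark pixels into a left and a right cluster; the function names and the clustering shortcut are our assumptions, not the authors' method.

```python
def find_mark_points(gray, threshold=8):
    """Threshold a gray reference frame and return the centroids of the
    dark mark-point pixels, ordered left to right (start, end)."""
    # Collect coordinates of pixels darker than the threshold.
    dark = [(x, y) for y, row in enumerate(gray)
                   for x, v in enumerate(row) if v < threshold]
    if not dark:
        return None
    # Split into left/right clusters around the mean x coordinate
    # (assumes exactly two mark points, one on each side of the scene).
    mean_x = sum(x for x, _ in dark) / len(dark)
    left = [p for p in dark if p[0] <= mean_x]
    right = [p for p in dark if p[0] > mean_x]
    centroid = lambda pts: (sum(x for x, _ in pts) / len(pts),
                            sum(y for _, y in pts) / len(pts))
    return centroid(left), centroid(right)
```

The left centroid plays the role of the starting point and the right centroid the ending point for the speed calculation that follows.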
3.2.5 Speed Detection
The previous processes have already provided us with the position of the vehicle in each image frame and also the positions of the mark points found in the reference frame. The speed of the vehicle in each image is calculated using the position of the vehicle together with the positions of the reference points and the given timestamps. From these calculations, a summary is made as almost the final step, giving the average speed of the vehicle from the moment it first appears between the two mark points until it moves out of range. Figure 3 gives a more visual explanation of the algorithm for finding the vehicle speed.
Figure 3. Diagram displaying all the variables used in the speed detection process
Based on the diagram to calculate vehicle speed in Figure 3, we can write the equations to find our vehicle speed as shown below.
Distance between the vehicle and the starting point, measured in kilometers:
Distance = Df * (D / Dx) * (Pn - P0) ... (1)
Time the vehicle spent in order to move to Pn, in hours:
Time = Tf * (tn - t0) ... (2)
Vehicle speed, in kilometers per hour:
Speed = Distance / Time ... (3)
where
D is the real distance between the two marking points (start point and end point), measured in meters
Dx is the distance between the two marking points, measured in pixels
X is the width of the video scene, measured in pixels
Y is the height of the video scene, measured in pixels
P0 is the right-most vehicle position at time t = 0, measured in pixels
Pn is the right-most vehicle position at time t = n, measured in pixels
t0 is the tickler (timestamp) saved at time t = 0, measured in milliseconds
tn is the tickler (timestamp) saved at time t = n, measured in milliseconds
Df is the distance conversion factor from meter to kilometer, which is (1.00/1000.00)
Tf is the time conversion factor from millisecond to hour, which is (1.00/(1000.00*60.00*60.00))
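Equations (1)-(3) and the two conversion factors can be collected into one small function. This is a worked sketch under the definitions above (the function and parameter names are ours, not the paper's): for example, D = 10 m, Dx = 200 px, a displacement of 100 px and an elapsed time of 500 ms give roughly 36 km/h.

```python
def vehicle_speed_kmh(D, Dx, p0, pn, t0_ms, tn_ms):
    """Average speed (km/h) from equations (1)-(3).

    D:            real distance between the two mark points, in meters
    Dx:           distance between the mark points in the image, in pixels
    p0, pn:       vehicle position (pixels) at times t0 and tn
    t0_ms, tn_ms: timestamps in milliseconds
    """
    Df = 1.0 / 1000.0                    # meters -> kilometers
    Tf = 1.0 / (1000.0 * 60.0 * 60.0)    # milliseconds -> hours
    distance_km = Df * (D / Dx) * (pn - p0)   # equation (1)
    time_h = Tf * (tn_ms - t0_ms)             # equation (2)
    return distance_km / time_h               # equation (3)
```

For instance, vehicle_speed_kmh(10, 200, 0, 100, 0, 500) evaluates to approximately 36 km/h, matching a hand calculation of (0.005 km) / (500/3,600,000 h).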
3.2.6 Report
Report process is the last process, which provides the end-user readable result of calculation. The format of output can be either the text description or the chart displaying the speed of the vehicle when it passes the mark point.
IV. EXPERIMENTAL RESULT
In this section, the experimentation results are presented in order to prove that vehicle speed detection from a video scene is applicable. We first present the experimentation result that demonstrates how to use our system to capture the speed of a vehicle in a video scene.