
Research paper : A secure and reliable next generation mobility (Y. Satoh et al.), Synthesiology - English edition Vol.2 No.2 (2009)

Figure 3 shows the exterior view of the stereo omni-directional camera, and Table 1 lists its major specifications. The collection of individual cameras is called the camera-head, and its basic form is a regular dodecahedron. To measure 3D information, three cameras are installed on each face of the dodecahedron. Since a sufficient distance between the cameras (called the stereo baseline; 50 mm in SOS) is necessary to obtain parallax, keeping the camera-head small was an issue. To solve this, as shown on the right of Fig. 3, three cameras are mounted on a T-shaped arm (this set is called the stereo camera unit). By arranging the units three-dimensionally so that the faces of the dodecahedron interleave without blocking the cameras' views, the 50 mm stereo baseline is maintained while the camera-head is downsized to a diameter of 116 mm, about the size of a fist. The total number of cameras is 3 cameras × 12 faces = 36 cameras. All cameras are synchronized so that they capture images at exactly the same time.

The images obtained from the camera-head are transferred to a personal computer (PC) over two optical fiber cables at 1.25 Gbps. On the PC, the 36 images are DMA-transferred to main memory in a contiguous, aligned layout, and users are notified of the top address with a pointer. The transferred images can be accessed freely through this pointer.

A preliminary experiment in which the device was actually mounted on the electric wheelchair revealed that the vibration transmitted to the camera-head was greater than expected, so we strengthened the camera-head mounting and changed the imaging device.
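The contiguous-buffer access scheme described above can be sketched as follows. This is a minimal illustration, not the SOS driver code: the 8-bit grayscale pixel format and the `camera_view` helper are assumptions for the sake of the example; only the image size (640 × 480) and camera count (36) come from the text.

```python
import numpy as np

# Hypothetical layout: 36 VGA frames stored back to back in one DMA buffer,
# as described in the text. We simulate the buffer the driver would expose.
W, H, N_CAMERAS = 640, 480, 36
FRAME_BYTES = W * H  # assuming one byte per pixel (8-bit grayscale)

buffer = np.zeros(FRAME_BYTES * N_CAMERAS, dtype=np.uint8)

def camera_view(buf, index):
    """Return camera `index` as an H x W view into the shared buffer.

    No copy is made: this is the pointer-arithmetic access the text
    describes, starting from the notified top address.
    """
    start = index * FRAME_BYTES
    return buf[start:start + FRAME_BYTES].reshape(H, W)

img7 = camera_view(buffer, 7)
img7[0, 0] = 255  # writes through to the underlying buffer (it is a view)
```

Because each frame is a fixed-size slice of one contiguous block, any of the 36 synchronized images can be reached in constant time from the single base pointer.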
The initial model employed a CMOS imaging device with a rolling shutter (the shutter is released line by line, as in a camera tube; the structure is simple, but motion produces slight distortion because of the time difference between the top and bottom of the image). Under the severe vibration, this distortion appeared in the images and affected the accuracy of the 3D measurement. We therefore switched to a CMOS imaging device with a global shutter (the shutter is released simultaneously for the entire image), a high-performance device that had recently become available.

3.2 Stereo image processing

Distance can be calculated by the principle of triangulation from the parallax between images shot by multiple cameras. This is analogous to how human eyes perceive distance using the parallax between the two eyes. Although simple in principle, two points make the implementation difficult.

(1) Calibration of the stereo camera: To measure distance accurately, it is necessary to know the actual values of camera parameters such as focal length, lens center, and lens distortion, as well as the actual positional arrangement of the multiple cameras.

(2) Search for corresponding points: Correspondence is found between points of high similarity among the images shot by multiple cameras (that is, points assumed to be the same point in the real world), and the displacement between corresponding points is the parallax. Objects near the camera have large parallax, while distant objects have small parallax. Since correspondence must be found for every pixel in an image, the processing cost is extremely high.

For (1), in the stereo omni-directional camera all parameters are obtained accurately during manufacturing using a general calibration method. The camera-head has a sturdy structure, so no readjustment is necessary after manufacturing.
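The corresponding-point search of (2) and the subsequent triangulation can be illustrated with a toy one-row example. This is a generic sum-of-absolute-differences matcher, not the SOS implementation; the synthetic image rows, the search window, and the pixel focal length of 600 px are made-up values (only the 50.0 mm baseline comes from Table 1).

```python
import numpy as np

def best_disparity(left_row, right_row, x, win=2, max_d=8):
    """Find the disparity at pixel x of the left row by minimising the
    sum of absolute differences (SAD) over a small window."""
    patch = left_row[x - win:x + win + 1].astype(int)
    best, best_cost = 0, float("inf")
    for d in range(max_d + 1):
        if x - d - win < 0:
            break  # candidate window would fall off the image
        cand = right_row[x - d - win:x - d + win + 1].astype(int)
        cost = np.abs(patch - cand).sum()
        if cost < best_cost:
            best, best_cost = d, cost
    return best

# Synthetic rectified pair: the right row is the left row shifted by 3 px,
# so a point at x in the left image appears at x - 3 in the right image.
left = (np.arange(40) * 5).astype(np.uint8)
right = np.roll(left, -3)

d = best_disparity(left, right, x=20)        # recovers d = 3
depth_mm = 600 * 50.0 / d                    # Z = f * B / d, hypothetical f = 600 px
```

Nearby objects yield large `d` and small `Z`, distant objects the opposite, matching the behaviour described in (2); repeating this search for every pixel is what makes the processing cost so high.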
In fact, the camera-head has been mounted on the electric wheelchair for over three years and has not required readjustment to date. For (2), we initially considered building dedicated hardware because the processing cost was extremely high. However, considering the rapid advancement of high-speed PCs, we chose a software implementation. Indeed, in the roughly three years since the commencement of the project, the computation speed of the stereo image processing increased about fivefold purely on account of improved PC performance. Since still more speed was needed for the software to run on a small wheelchair-mountable PC, a further roughly twofold acceleration was achieved by employing parallel computation and by thoroughly eliminating redundant computations.

Fig. 3 Stereo omni-directional system. The left photo shows the camera-head (diameter of 116 mm). The right photo shows the stereo camera unit; its three cameras are arranged at right angles to each other in a single plane (stereo baseline 50.0 mm).

Table 1 Major specifications of the stereo omni-directional system.

Basic form: Regular dodecahedron
Imaging device: 1/4" CMOS (global shutter)
Device resolution: 640 (H) × 480 (V) pixels
Focal length of each camera: 1.9 mm
Angle of view of each camera: 101° (H) × 76° (V)
Stereo baseline length: 50.0 mm
Frame rate: 15 fps (30 fps when color image only)
Camera-head diameter: 116 mm (diameter of circumscribed circle)
Weight: About 480 g (camera-head and support)
Power consumption: About 9 W (12 V, 750 mA)
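The parallel-computation speed-up mentioned above can be sketched in outline. This is not the SOS software: the per-row cost function is a trivial stand-in, and the thread count and random test images are arbitrary; the point is only that per-pixel stereo matching decomposes naturally into independent rows that can be processed concurrently.

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

H, W = 480, 640  # VGA resolution, as in Table 1
rng = np.random.default_rng(0)
left = rng.integers(0, 256, (H, W), dtype=np.uint8)
right = rng.integers(0, 256, (H, W), dtype=np.uint8)

def cost_row(y):
    # Stand-in per-row matching cost: absolute difference of the two rows.
    # A real matcher would run the corresponding-point search here.
    return y, np.abs(left[y].astype(np.int16) - right[y].astype(np.int16))

# Each image row is independent, so rows can be dispatched to worker
# threads; the real system would also reuse window sums between adjacent
# pixels to eliminate redundant computation.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(cost_row, range(H)))

cost = np.stack([results[y] for y in range(H)])  # H x W cost map
```

Because the rows share no state, the same decomposition works whether the workers are threads, processes, or SIMD lanes, which is why a pure-software implementation could keep scaling with PC performance.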
