To estimate the camera-head pose parameters, the standard method is to use an acceleration sensor and a gyroscope, but these give only relative estimates, and cumulative error is a problem. We therefore developed and implemented a method that estimates the absolute pose of the camera-head from an omni-directional image at high speed [11]. Specifically, all edges (segments with the highest brightness gradient) are extracted from the omni-directional image. Next, the edge directions are plotted onto a voting space, in which two large peaks appear. This is because the everyday environment contains many vertical and horizontal edges: for example, tabletops and the boundaries between ceiling and wall give horizontal edges, while pillars and the uprights of bookshelves give vertical edges. The positions at which these peaks appear in the voting space indicate the pose of the camera-head. Estimation will of course go wrong in a scene such as a forest of diagonally growing trees, but such errors can be detected by also using a gyroscope and an acceleration sensor.

In this method, by using look-up tables as much as possible, correction of the coordinate system and estimation of the pose can be accomplished in about 10 ms. Figure 6 shows an example of image correction based on an actual estimate of the camera-head pose. The upper image is the image before correction; since the camera-head is mounted on the electric wheelchair in a tilted position, the image appears distorted. The lower image shows the same data after geometric conversion using the estimated camera-head pose parameters.

Fig. 6 Estimation of camera-head pose and tilt correction. The pose of the camera-head is estimated from the vertical and horizontal edges of the omni-directional image. The upper image is the one before correction; because the image is omni-directional, the effect of the tilt appears as a sine curve. The black object in the middle is the electric wheelchair. The lower image is corrected according to the estimated parameters, so that the sideways direction corresponds to the horizontal and the up-down direction to the vertical.
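To make the voting step concrete, the following is a minimal sketch of the idea in Python: edge pixels vote their orientations into a histogram, and the dominant peaks, spaced roughly 90 degrees apart because they come from the scene's vertical and horizontal edge families, reveal the tilt. The sketch works on a single perspective view; the actual method operates on the full omni-directional geometry, and the threshold and bin counts here are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of edge-direction voting (single perspective view).
# Thresholds and bin counts are illustrative assumptions.
import numpy as np
import cv2

def estimate_roll(gray, mag_percentile=95, bins=180):
    """Estimate the roll angle (radians) from edge-direction votes."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag = np.hypot(gx, gy)
    strong = mag > np.percentile(mag, mag_percentile)  # keep strong edges

    # Edge orientation is perpendicular to the gradient; fold into
    # [0, pi) so opposite gradients vote for the same edge direction.
    theta = (np.arctan2(gy[strong], gx[strong]) + np.pi / 2) % np.pi

    # Voting space: vertical and horizontal scene edges produce two
    # large peaks about 90 degrees apart.
    votes, bin_edges = np.histogram(theta, bins=bins, range=(0.0, np.pi),
                                    weights=mag[strong])
    peak = bin_edges[np.argmax(votes)] + (bin_edges[1] - bin_edges[0]) / 2

    # The peak's offset from the nearest multiple of 90 degrees is the
    # camera roll relative to the scene's vertical/horizontal edges.
    return ((peak + np.pi / 4) % (np.pi / 2)) - np.pi / 4
```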
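The roughly 10 ms figure rests on the look-up-table pattern: the per-pixel coordinate conversion is computed once for an estimated pose and reused for every frame, so per-frame correction reduces to a single table-driven resampling. The sketch below illustrates this for an equirectangular image; the projection model, rotation convention, and function names are our assumptions for illustration, not the implementation described in the paper.

```python
# Sketch of LUT-based tilt correction for an equirectangular image.
# Projection model and rotation convention are assumptions.
import numpy as np
import cv2

def build_remap_lut(h, w, roll, pitch):
    """Precompute, for every output pixel, where to sample the input."""
    v, u = np.mgrid[0:h, 0:w].astype(np.float32)
    az = (u / w) * 2 * np.pi - np.pi        # azimuth of output-pixel ray
    el = np.pi / 2 - (v / h) * np.pi        # elevation of output-pixel ray
    xyz = np.stack([np.cos(el) * np.cos(az),     # ray as 3-D unit vector
                    np.cos(el) * np.sin(az),
                    np.sin(el)], axis=-1)
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    R = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]], np.float32) @ \
        np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]], np.float32)
    x, y, z = np.moveaxis(xyz @ R.T, -1, 0)      # rotate by estimated pose
    # Back-project to input pixel coordinates: this is the look-up table.
    map_u = ((np.arctan2(y, x) + np.pi) / (2 * np.pi)) * w
    map_v = ((np.pi / 2 - np.arcsin(np.clip(z, -1, 1))) / np.pi) * h
    return map_u.astype(np.float32), map_v.astype(np.float32)

# Once per estimated pose:
#   map_u, map_v = build_remap_lut(h, w, roll, pitch)
# Then each frame is corrected by one cheap table lookup:
#   corrected = cv2.remap(frame, map_u, map_v, cv2.INTER_LINEAR)
```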
3.5 Risk detection

Figure 7 shows a visualization of the omni-directional distance information obtained by the stereo omni-directional camera. The distance information obtained from each stereo camera unit arranged on the regular dodecahedron is coordinate-converted and mapped onto an integrated coordinate system whose origin is the center of the camera-head. Figure 7 shows the same data, captured in one shot, observed from three virtual viewpoints. With the stereo omni-directional camera, such omni-directional distance information can be obtained at 15 frames/s (angular resolution of 360/512 degrees, or about 0.7 degrees; about 300,000 points are captured at once).

Fig. 7 Omni-directional distance information. The same data is seen from three virtual viewpoints. There are about 300,000 observation points. Three-dimensional data for all directions is obtained 15 times per second.

Risk detection in the environment of the electric wheelchair is performed directly on this omni-directional distance information. The detailed algorithm is described in the referenced paper [11]. Basically, with the height of the floor set at 0, all obstacles within the range from -0.5 m (lower than the floor) to 1.6 m high are detected. Whether a detected obstacle is actually a barrier to the wheelchair depends on the direction in which the wheelchair is moving. A decision area that switches according to the direction of the joystick is therefore set, as shown in Fig. 8, and when an obstacle enters the deceleration or stop area, the chair automatically decelerates or stops. In the experiments for this paper, the diameters of the decision areas were 1.2 m (for deceleration) and 0.4 m (for stop). For straight forward motion (F0), the decision area is rectangular to allow passage through narrow corridors. In F+1~F+2, where the wheelchair turns while moving forward, the amount of turn is expected to change continuously with the user's joystick maneuver, so a fan-shaped decision area is set to handle this probabilistic spread. In F+2, where the amount of turn is greater, collisions on the inner side of the turn and on the outer side of the turn must be considered in addition to obstacles straight ahead. The decision area is therefore widened on the inner side of the turn, and a stop area is also set on the outer side, opposite the direction of the turn.
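As a concrete illustration of this decision logic, the following sketch first integrates one stereo unit's points into the head-centered frame, then filters the cloud to the stated height band and tests it against a rectangular forward (F0) decision area sized by the experimental diameters. The area shapes, the treatment of the diameters as rectangle dimensions, and all function names are simplifying assumptions, not the paper's implementation.

```python
# Sketch of the risk-detection decision for one omni-directional scan.
# Area geometry and names are illustrative assumptions.
import numpy as np

DECEL_DIAMETER = 1.2   # m, deceleration area (value from the experiment)
STOP_DIAMETER = 0.4    # m, stop area (value from the experiment)

def integrate_unit(points_local, R_unit, t_unit):
    """Map one stereo unit's (N, 3) points into the head-centered frame,
    given that unit's calibrated rotation and offset (placeholders here)."""
    return points_local @ R_unit.T + t_unit

def check_risk(points, joystick_heading):
    """Return 'stop', 'decelerate', or 'go' for a head-centered cloud
    with the floor at z = 0 and x/y in metres."""
    # 1. Obstacle candidates: from -0.5 m (below floor level, e.g. a
    #    drop-off) up to 1.6 m above the floor.
    z = points[:, 2]
    obstacles = points[(z > -0.5) & (z < 1.6)]

    # 2. Rotate into the travel direction chosen by the joystick so the
    #    decision area can be laid out along the +x axis.
    c, s = np.cos(-joystick_heading), np.sin(-joystick_heading)
    x = c * obstacles[:, 0] - s * obstacles[:, 1]
    y = s * obstacles[:, 0] + c * obstacles[:, 1]

    # 3. Rectangular forward (F0) area; the turning modes F+1/F+2 would
    #    swap in a fan-shaped region here instead.
    ahead = x > 0
    if np.any(ahead & (x < STOP_DIAMETER) & (np.abs(y) < STOP_DIAMETER / 2)):
        return "stop"
    if np.any(ahead & (x < DECEL_DIAMETER) & (np.abs(y) < DECEL_DIAMETER / 2)):
        return "decelerate"
    return "go"
```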