Vol.4 No.2 2011

Research paper: ARGUS: Adaptive Recognition for General Use System (N. Otsu et al.)

invariant feature extraction theory, a geometrical transformation that operates on the pattern function f while keeping category correspondence unchanged (called an invariant transformation) is represented by an operator T(λ), and the features invariant under it, namely the invariant functionals x = φ[f], are sought:

φ[T(λ)f] − φ[f] = 0    (1)

Using operator analysis based on Lie group theory, the invariant features corresponding to a given invariant transformation are found as elementary solutions of a partial differential equation derived as a necessary and sufficient condition[1][2]. In this manner, a pattern can ideally be treated in unity as a single point x in the invariant feature vector space, with the invariant features serving as fundamental features for recognition that abstract away extraneous information.

2.2.2 Discriminant feature extraction (statistical aspect)

However, actual patterns are subject to variations and noise, and are distributed according to a probability distribution p(x|Cj) for each category class Cj. The next stage, discriminant feature extraction theory, considers a mapping y = Ψ(x) of the invariant feature vector x to a new feature vector y of reduced dimension, and derives an optimum mapping that optimizes an evaluation criterion on y, such as discrimination of the category classes. So-called multivariate analysis methods (such as discriminant analysis) are usually formulated as linear mappings, whereas neural networks or kernel methods are used for certain nonlinear mappings. In fact, the ultimate optimum nonlinear discriminant mapping is obtained directly by variational calculus as the following formula[1][2]:
y = ΨN(x) = Σ_{j=1}^{K} P(Cj|x) cj    (2)

This result shows that pattern discrimination is closely related to the Bayes posterior probability P(Cj|x), and it suggests the essential framework of Bayesian inference behind pattern recognition. Here, the cj are vectors that represent each category in the target space Y; in the case of discriminant analysis, they are derived as eigenvectors of the between-class covariance matrix in the original space X. The dimension of the resulting optimum discriminant space is thus essentially determined by the number of classes, coming to K − 1 dimensions.

In real-life applications, it is necessary to make appropriate simplifications of these theoretical frameworks according to practical requirements.

3 Approach and conditions for a construction method

In considering the approach toward a flexible vision system and a method for constructing it, the following three points can be mentioned as basic conditions required of the vision system (Fig. 4):

R1: Shift-invariance
R2: Frame-additivity
R3: Adaptive trainability

Fig. 4 Shift-invariance (R1: x unchanged under a shift) and frame-additivity (R2: x = x1 + x2)[6]

The results of recognition or measurement should be the same regardless of where the recognized or measured object is in the image frame. Thus, the first condition R1 demands that a feature x extracted from the pattern does not depend on the position of the object (it is invariant under a parallel shift). Size scaling, rotation, and other transformations can also be considered as invariant transformations; however, since a parallel shift is the most fundamental, it was made a required condition.

The next condition R2 requires that the features for the entire frame be the sum of the local features of the individual objects.
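The conditions R1 and R2 can be checked numerically with a toy feature. The sketch below, a hypothetical choice for illustration rather than the paper's actual feature set, sums products of each pixel with a few displaced neighbors over the whole frame; such a sum is shift-invariant (exactly so under circular shift) and additive over well-separated objects.

```python
import numpy as np

def local_feature(img, displacements=((0, 1), (1, 0), (1, 1))):
    """Sum over the frame of products of each pixel with displaced neighbors.

    Summing over the whole frame makes the feature independent of where
    an object sits (R1, exact under circular shift) and additive over
    non-overlapping objects (R2).
    """
    feats = [img.sum()]                      # 0th-order term
    for dy, dx in displacements:
        shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        feats.append((img * shifted).sum())  # correlation-type terms
    return np.array(feats)

rng = np.random.default_rng(0)
frame = np.zeros((16, 16))
obj = rng.random((4, 4))
frame[2:6, 2:6] = obj                        # object at one position

moved = np.zeros((16, 16))
moved[9:13, 5:9] = obj                       # same object, shifted

# R1: shift-invariance -- same feature vector wherever the object is.
assert np.allclose(local_feature(frame), local_feature(moved))

# R2: frame-additivity -- two separated objects give the sum of features.
assert np.allclose(local_feature(frame + moved),
                   local_feature(frame) + local_feature(moved))
```

The additivity holds exactly here because the two objects (and their displaced copies) do not overlap; overlapping objects would contribute cross terms.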
Frame-additivity also follows from R1; it asks for a feature representation that is convenient (linear) for recognition, especially for counting, so that the subsequent processing becomes simple and fast.

Unlike ordinary methods, in which feature extraction is given as a heuristic procedure and the construction method changes whenever the recognition task changes, the last condition R3 requires that a new feature y suited to the task be automatically constructed (synthesized) from the initial feature x in an optimal manner by learning from examples; that is, the method should be a general-purpose formulation whose structure is indifferent to changes in the task and is adaptively optimized.

In addition, it is desirable that a feature extraction method constructed to meet these conditions require little computation, so that real-time processing is possible.
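The optimal mapping of Eq. (2) can likewise be made concrete with a small numerical sketch. The class densities, priors, and representative vectors cj below are hypothetical stand-ins chosen only for the demonstration; the point is that y = Σj P(Cj|x)cj is a point in the simplex spanned by the cj and is pulled toward cj wherever class j dominates.

```python
import numpy as np

# Three 1-D Gaussian classes with equal variance and equal priors
# (illustrative assumptions, not values from the paper).
means = np.array([-2.0, 0.0, 2.0])
priors = np.array([1/3, 1/3, 1/3])
c = np.eye(3)                          # representative vector c_j per class

def posterior(x):
    """Bayes posteriors P(C_j | x) for the assumed Gaussian classes."""
    lik = np.exp(-0.5 * (x - means) ** 2)
    w = priors * lik
    return w / w.sum()

def discriminant_map(x):
    """Eq. (2): y = sum_j P(C_j|x) c_j, a point in the simplex of the c_j."""
    return posterior(x) @ c

y = discriminant_map(-2.0)
# Deep inside class 1's region, the map is pulled toward c_1.
assert y.argmax() == 0
# With c_j the standard basis, the components of y are the posteriors,
# hence nonnegative and summing to one.
assert np.isclose(y.sum(), 1.0) and (y >= 0).all()
```

With K = 3 classes, the image of this map lies in a 2-dimensional simplex, matching the K − 1 dimensions of the optimum discriminant space noted above.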

