Vol.4 No.2 2011
Research paper : ARGUS: Adaptive Recognition for General Use System (N. Otsu et al.)

are, and it is optimally constructed (as a weighted sum) by learning from examples with the multivariate data analysis technique, the statistical feature extraction of the latter stage. I have thus revised this passage to make it a little easier to understand.

5 Percentage of correct answers in pattern recognition

Question (Kanji Ueda)

Why does the percentage of correct answers not reach 100 %? Or, in what kinds of cases is 100 % possible? Recognizing fully that your research gives superior results compared with other researchers and research to date, this question is intended to deepen the discussion of the research as a Synthesiology paper.

Answer (Nobuyuki Otsu)

Real-world patterns, e.g., an "a/i" in audio or a "dog/cat" in an image, have diverse variations and noise, and the feature (observed) values derived from them, such as frequency or color, are generally distributed stochastically. Viewed this way, even when categories are separable, the tails of those distributions approach each other and can overlap near the class boundaries. It is therefore normal that 100 % is not achieved even on the learning samples. Of course, the more valid features are extracted and integrated, the closer the rate approaches 100 % asymptotically. However, it is more realistic to keep the feature extraction down to a finite number from a cost standpoint; it is a question of cost effectiveness. If it is a simple identification problem, then a 100 % correct rate is indeed possible. For example, 100 yen and 10 yen coins can be identified and classified by their feature values (e.g., diameter and weight) because those values differ deterministically by design, which is what allows the implementation in vending machines (though erroneous recognitions do sometimes occur).
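The two situations just described (overlapping stochastic feature distributions vs. deterministically separated ones) can be illustrated with a small simulation. This is only an illustrative sketch, not the paper's method: the class means, standard deviations, and coin diameters below are assumed values chosen for the demonstration.

```python
import random

random.seed(0)

# Two classes whose 1-D feature distributions overlap:
# class A ~ N(0, 1), class B ~ N(2, 1).  The best single threshold
# sits midway (x = 1), yet the tails still cross it, so even the
# optimal decision rule misclassifies some samples.
n = 10_000
a = [random.gauss(0.0, 1.0) for _ in range(n)]
b = [random.gauss(2.0, 1.0) for _ in range(n)]
threshold = 1.0
correct = sum(x < threshold for x in a) + sum(x >= threshold for x in b)
accuracy = correct / (2 * n)
print(f"overlapping classes:    {accuracy:.3f}")   # about 0.84, never 1.0

# A simple identification problem: two coin types whose diameters
# (hypothetical values) differ by far more than the measurement
# noise, so a single threshold separates them essentially perfectly.
small = [random.gauss(22.6, 0.05) for _ in range(n)]   # "100 yen"-like
large = [random.gauss(23.5, 0.05) for _ in range(n)]   # "10 yen"-like
mid = (22.6 + 23.5) / 2
correct2 = sum(x < mid for x in small) + sum(x >= mid for x in large)
accuracy2 = correct2 / (2 * n)
print(f"separable coin classes: {accuracy2:.3f}")      # effectively 1.0
```

The first accuracy converges to the Bayes rate for these two Gaussians (about 84 %) no matter how many samples are drawn, while the second reaches 100 % because the class-conditional distributions do not overlap in practice.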
This paper has presented a scheme incorporating a general-purpose approach aimed at more difficult, advanced recognition problems.

6 Application examples

Question (Motoyuki Akamatsu)

Eight application examples were discussed in Chapter 5, and I understand that the argument for a general-purpose system is based on these examples. However, in these examples only HLAC and CHLAC are common to all, while for discriminant feature extraction by multivariate data analysis different techniques have been used, namely factor analysis, multiple regression analysis, discriminant analysis, k-NN classification, principal component analysis, the AR model, and canonical correlation analysis, among others. Although there are partial explanations of which technique is optimal for each task, I look forward to an organized description of the basic thinking and theory behind choosing a technique according to the task. I believe this would help readers understand which technique to apply when solving their own tasks.

Answer (Nobuyuki Otsu)

As you pointed out, I have used HLAC/CHLAC features as basic initial features (invariant features), and various multivariate data analyses for their optimal integration (a linear weighted sum) according to the task. As your concern is important for helping readers who are unfamiliar with multivariate data analysis, I have revised the paper to include a correspondence table that had previously been omitted owing to limited space.
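The two-stage scheme named in the answer — shift-invariant initial features, then a task-dependent multivariate analysis that learns their optimal linear weighted sum — can be sketched as follows. This is an illustrative toy only: a few hand-picked local autocorrelation masks stand in for the full HLAC mask set, plain least squares stands in for the various discriminant analyses listed in the question, and the two "texture" classes are synthetic data invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_autocorr_features(img):
    """A few low-order local autocorrelation features, summed over
    position (hence shift-invariant).  Illustrative subset only,
    not the full HLAC mask set used in the paper."""
    c = img[1:-1, 1:-1]                              # center pixels
    return np.array([
        c.sum(),                                     # 0th order
        (c * img[1:-1, 2:]).sum(),                   # 1st order: right neighbor
        (c * img[2:, 1:-1]).sum(),                   # 1st order: lower neighbor
        (c * img[1:-1, 2:] * img[2:, 1:-1]).sum(),   # 2nd order
    ], dtype=float)

# Toy two-class data: "dense" vs "sparse" random binary images.
def sample(dense, n=50):
    p = 0.6 if dense else 0.2
    return [(rng.random((16, 16)) < p).astype(float) for _ in range(n)]

X = np.array([local_autocorr_features(im) for im in sample(True) + sample(False)])
y = np.array([1.0] * 50 + [-1.0] * 50)

# Stage 2: learn the optimal linear integration of the features.
# Least squares on +/-1 targets is a simple stand-in for the
# discriminant analyses in the paper; the classifier is a learned
# weighted sum  w . f(img) + bias.
A = np.hstack([X, np.ones((len(y), 1))])             # append bias column
w, *_ = np.linalg.lstsq(A, y, rcond=None)

pred = np.sign(A @ w)
acc = (pred == y).mean()
print("training accuracy:", acc)
```

The design point the sketch tries to convey is the one made in the answer: the initial features are fixed and generic, and only the final weighting is re-learned per task, which is what makes the scheme general purpose.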