991 results for Mutual recognition


Relevance:

20.00%

Publisher:

Abstract:

The author of the book Wielka szachownica. Cele polityki amerykańskiej (Warsaw 1998) views the mutual relations among the actors of the international arena as a grand chessboard on which "Grand Politics" (the game) is played out between them. In his view, the world situation unfolds on a single chessboard (the international arena), and the participants in the "game" occupy the positions of pawns. It is thus a peculiar game of chess in which the stronger player gains prestige, money, and power, while the weaker one loses everything, ending up with marginal significance on the global chessboard. J. Nye, in turn, highlights three levels of this same chessboard, namely military power, economic power, and the "soft means of political influence," around which international politics revolves. His trilogy devoted to the perception of state power should be required reading, addressed above all to politicians and statesmen, with the message that they should apply the factors of power indicated by the author in practice, which will help them govern the state better. It is also a book for anyone interested in politics and its questions concerning the perception of power. In this study I have concentrated on J. Nye's three most important books, which analyze the attributes of power and explain its significance: Bound to Lead. The Changing Nature of American Power (New York 1991), The Paradox of American Power. Why the World's Only Superpower Can't Go It Alone (Oxford 2002), and Soft Power. The Means to Success in World Politics (New York 2004; Polish edition: Soft Power. Jak osiągnąć sukces w polityce światowej, Warsaw 2007). These works form the basis for understanding how the power of the United States is perceived. Although the author also addresses other states, it is the great-power status of the USA, as the most powerful country in the world, possessing all the factors that strengthen its power, that forms the core of J. Nye's reflections.

Relevance:

20.00%

Publisher:

Abstract:

The present work examines the beginnings of ancient hermeneutics. More specifically, it discusses the connection between the rise of the practice of allegoresis, on the one hand, and the emergence of the first theory of figurative language, on the other. Thus, this book investigates the specific historical and cultural circumstances that enabled the ancient Greeks not only to discover the possibility of allegorical interpretation, but also to treat figurative language as a philosophical problem. By posing difficulties in understanding the enigmatic sense of various esoteric doctrines, poems, oracles and riddles, figurative language created the context for theoretical reflection on the meaning of these “messages”. Hence, ancient interpreters began to ponder over the nature and functions of figurative (“enigmatic”) language as well as over the techniques of its proper use and interpretation. Although the practice of allegorical interpretation was closely linked to the development of the whole of ancient philosophy, the present work covers only the period from the 6th to the 4th century B.C. It concentrates, then, on the philosophical and cultural consequences of allegoresis in the classical age. The main thesis advocated here is that the ancient Greeks were inclined to regard allegory as a cognitive problem rather than merely as a stylistic or a literary one. When searching for the hidden meanings of various esoteric doctrines, poems, oracles and riddles, ancient interpreters of these “messages” assumed allegory to be the only tool suitable for articulating certain matters. In other words, it was their belief that the use of figurative language resulted from the necessity of expressing things that were otherwise inexpressible. The present work has been organized in the following manner. The first part contains historical and philological discussions that provide the point of departure for more philosophical considerations.
This part consists of two introductory chapters. Chapter one situates the practice of allegorical interpretation at the borderline of two different traditions: the rhetorical-grammatical and the hermeneutical. In order to clearly differentiate between the two, chapter one distinguishes between allegory and allegoresis, on the one hand, and allegoresis and exegesis, on the other. While pointing to the conventionality (and even arbitrariness) of such distinctions, the chapter argues, nevertheless, for their heuristic usefulness. The remaining part of chapter one focuses on a historical and philological reconstruction of the most important conceptual tools of ancient hermeneutics. Discussing the semantics of such terms as allēgoría, hypónoia, ainigma and symbolon proves important for at least two crucial reasons. Firstly, it reveals the mutual affinity between allegoresis and divination, i.e., practices that are inherently connected with the need to discover the latent meaning of the “message” in question (whether poem or oracle). Secondly, these philological analyses bring to light the specificity of the ancient understanding of such concepts as allegory or symbol. It goes without saying that antiquity employed these terms in a manner quite disparate from modernity. Chapter one concludes with a discussion of ancient views on the cognitive value of figurative (“enigmatic”) language. Chapter two focuses on the role that allegoresis played in the process of transforming mythos into logos. It is suggested here that it was the practice of allegorical interpretation that made it possible to preserve the traditional myths as an important point of reference for the whole of ancient philosophy. 
Thus, chapter two argues that the existence of a clear opposition between mythos and logos in Preplatonic philosophy is highly questionable in light of the indisputable fact that the Presocratics, Sophists and Cynics were profoundly convinced about the cognitive value of mythos (this conviction was also shared by Plato and Aristotle, but their attitude towards myth was more complex). Consequently, chapter two argues that in Preplatonic philosophy, myth played a function analogous to the concepts discussed in chapter one (i.e., hidden meanings, enigmas and symbols), for in all these cases, ancient interpreters found tools for conveying issues that were otherwise difficult to convey. Chapter two concludes with a classification of various types of allegoresis. Whilst chapters one and two serve as a historical and philological introduction, the second part of this book concentrates on the close relationship between the development of allegoresis, on the one hand, and the flowering of philosophy, on the other. Thus, chapter three discusses the crucial role that allegorical interpretation came to play in Preplatonic philosophy, chapter four deals with Plato’s highly complex and ambivalent attitude to allegoresis, and chapter five is devoted to Aristotle’s original approach to the practice of allegorical interpretation. It is evident that allegoresis was of paramount importance for the ancient thinkers, irrespective of whether they valued it positively (Preplatonic philosophers and Aristotle) or negatively (Plato). Beginning with the 6th century B.C., the ancient practice of allegorical interpretation was motivated by two distinct interests. On the one hand, the practice of allegorical interpretation reflects the more or less “conservative” attachment to the authority of the poet (whether Homer, Hesiod or Orpheus).
The purpose of this apologetic allegoresis is to exonerate poetry from the charges leveled at it by the first philosophers and, though to a lesser degree, historians. Generally, these allegorists seek to save the traditional paideia that builds on the works of the poets. On the other hand, the practice of allegorical interpretation also reflects the more or less “progressive” desire to make original use of the authority of the poet (whether Homer, Hesiod or Orpheus) so as to promote a given philosophical doctrine. The objective of this instrumental allegoresis is to exculpate philosophy from the accusations brought against it by the more conservative circles. Needless to say, these allegorists significantly contribute to the process of gradually replacing the mythical view of the world with a more philosophical explanation. The present book suggests that it is the philosophy of Aristotle that should be regarded as a sort of acme in the development of ancient hermeneutics. The reasons for this are twofold. On the one hand, the Stagirite values the practice of allegoresis positively, thus rehabilitating the tradition of Preplatonic philosophy against Plato. On the other hand, Aristotle initiates the theoretical reflection on figurative (“enigmatic”) language. Hence, in Aristotle we encounter not only the practice of allegoresis, but also the theory of allegory (although the philosopher does not use the term allēgoría). Given all this, the significance of Aristotle’s work cannot be overestimated. First of all, the Stagirite introduces the concept of metaphor into the philosophical considerations of his time. From that moment onwards, the phenomenon of figurative language becomes an important philosophical issue. After Aristotle, the preponderance of thinkers would feel obliged to specify the rules for the appropriate use of figurative language and the techniques of its correct interpretation.
Furthermore, Aristotle ascribes to metaphor (and to various other “excellent” sayings) the function of increasing and enhancing our knowledge. Thus, according to the Stagirite, figurative language is not only an ornamental device, but can also have significant explanatory power. Finally, Aristotle observes that figurative expressions cause words to become ambiguous. In this context, the philosopher notices that ambiguity can enrich the language of a poet, but it can also hinder a dialectical discussion. Accordingly, Aristotle is inclined to value polysemy either positively or negatively. Importantly, however, the Stagirite is perfectly aware of the fact that in natural languages ambiguity is unavoidable. This is why Aristotle initiates a systematic reflection on the phenomenon of ambiguity and distinguishes its various kinds. In Aristotle, ambiguity is, then, both a problem that needs to be identified and a tool that can help in elucidating intricate philosophical issues. This unique approach to ambiguity and figurative (“enigmatic”) language enabled Aristotle to formulate invaluable intuitions that still await appropriate recognition.

Relevance:

20.00%

Publisher:

Abstract:

This paper introduces an algorithm that uses boosting to learn a distance measure for multiclass k-nearest neighbor classification. Given a family of distance measures as input, AdaBoost is used to learn a weighted distance measure, that is, a linear combination of the input measures. The proposed method can be seen both as a novel way to learn a distance measure from data, and as a novel way to apply boosting to multiclass recognition problems that does not require output codes. In our approach, multiclass recognition of objects is reduced to a single binary recognition task, defined on triples of objects. Preliminary experiments with eight UCI datasets yield no clear winner among our method, boosting using output codes, and k-nn classification using an unoptimized distance measure. Our algorithm did achieve lower error rates on some of the datasets, which indicates that, in some domains, it may lead to better results than existing methods.
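As a toy sketch (not the paper's implementation), the triple-based formulation can be illustrated in a few lines of Python. The two-feature data, the per-dimension base measures, and the round count are all invented for the example; the weak hypothesis for a triple (q, a, b) simply asks whether a given measure places q closer to the same-class object a than to the other-class object b:

```python
import math

# Toy two-feature training set (all values invented for the example)
train = [((0.0, 5.0), 'A'), ((0.2, 4.8), 'A'),
         ((3.0, 0.1), 'B'), ((3.1, 0.0), 'B')]

# Input family of distance measures: here, one per feature dimension
measures = [lambda x, y: abs(x[0] - y[0]),
            lambda x, y: abs(x[1] - y[1])]

# Training triples (q, a, b): a shares q's class, b does not.  The single
# binary task is "does the measure place q closer to a than to b?"
triples = [(q, a, b)
           for q, cq in train
           for a, ca in train if ca == cq and a != q
           for b, cb in train if cb != cq]

def boost_distance_weights(measures, triples, rounds=5):
    """AdaBoost sketch: each weak hypothesis is one base measure; its
    weighted error counts the triples it orders incorrectly."""
    w = [1.0 / len(triples)] * len(triples)
    alphas = [0.0] * len(measures)
    for _ in range(rounds):
        errs = [sum(wi for wi, (q, a, b) in zip(w, triples)
                    if d(q, a) >= d(q, b)) for d in measures]
        j = min(range(len(measures)), key=errs.__getitem__)
        e = min(max(errs[j], 1e-9), 1.0 - 1e-9)
        alpha = 0.5 * math.log((1.0 - e) / e)
        alphas[j] += alpha
        d = measures[j]
        # emphasize the triples the chosen measure got wrong
        w = [wi * math.exp(alpha if d(q, a) >= d(q, b) else -alpha)
             for wi, (q, a, b) in zip(w, triples)]
        total = sum(w)
        w = [wi / total for wi in w]
    return alphas

alphas = boost_distance_weights(measures, triples)

def learned_distance(x, y):
    # the learned measure: a weighted linear combination of the inputs
    return sum(a * d(x, y) for a, d in zip(alphas, measures))
```

On this toy data the learned combination keeps same-class points closer than cross-class points, which is exactly the property the k-nn classifier needs.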

Relevance:

20.00%

Publisher:

Abstract:

A framework for the simultaneous localization and recognition of dynamic hand gestures is proposed. At the core of this framework is a dynamic space-time warping (DSTW) algorithm that aligns a pair of query and model gestures in both space and time. For every frame of the query sequence, feature detectors generate multiple hand region candidates. Dynamic programming is then used to compute both a global matching cost, which is used to recognize the query gesture, and a warping path, which aligns the query and model sequences in time and also finds the best hand candidate region in every query frame. The proposed framework includes translation-invariant recognition of gestures, a desirable property for many HCI systems. The performance of the approach is evaluated on a dataset of hand-signed digits gestured by people wearing short-sleeve shirts, in front of a background containing other non-hand skin-colored objects. The algorithm simultaneously localizes the gesturing hand and recognizes the hand-signed digit. Although DSTW is illustrated in a gesture recognition setting, the proposed algorithm is a general method for matching time series that allows multiple candidate feature vectors to be extracted at each time step.
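The dynamic-programming core can be sketched as follows. This is a simplified stand-in, not the paper's code: features are one-dimensional toy values, the cost function is supplied by the caller, and only the global matching cost is returned (the full method would also recover the warping path and per-frame hand candidates by backtracking). The key difference from plain DTW is the extra candidate dimension k at each query frame:

```python
def dstw(model, query_candidates, dist):
    """Dynamic space-time warping sketch.
    model: one feature vector per model frame.
    query_candidates: per query frame, a list of candidate feature
    vectors (e.g. multiple hand-region hypotheses).
    Staying on the same query frame keeps the same candidate k; moving
    to a new query frame may pick any of its candidates."""
    INF = float('inf')
    M, Q = len(model), len(query_candidates)
    K = max(len(c) for c in query_candidates)
    # D[i][j][k]: best cost aligning model[:i+1] with query[:j+1],
    # using candidate k at query frame j
    D = [[[INF] * K for _ in range(Q)] for _ in range(M)]
    for k, c in enumerate(query_candidates[0]):
        D[0][0][k] = dist(model[0], c)
    for i in range(M):
        for j in range(Q):
            for k, c in enumerate(query_candidates[j]):
                if i == 0 and j == 0:
                    continue
                best = INF
                if i > 0:                      # advance model frame only
                    best = min(best, D[i - 1][j][k])
                if j > 0:                      # advance query frame only
                    best = min(best, min(D[i][j - 1]))
                if i > 0 and j > 0:            # advance both
                    best = min(best, min(D[i - 1][j - 1]))
                if best < INF:
                    D[i][j][k] = best + dist(model[i], c)
    return min(D[M - 1][Q - 1])
```

With a query whose correct candidate at every frame matches the model exactly, the global cost is zero even though each frame also carries a spurious candidate.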

Relevance:

20.00%

Publisher:

Abstract:

Hand signals are commonly used in applications such as giving instructions to a pilot for airplane takeoff, or a foreman on the ground directing a crane operator. A new algorithm for recognizing hand signals from a single camera is proposed. Typically, tracked 2D feature positions of hand signals are matched to 2D training images. In contrast, our approach matches the 2D feature positions to an archive of 3D motion capture sequences. The method avoids explicit reconstruction of the 3D articulated motion from 2D image features. Instead, the matching between the 2D and 3D sequences is done by backprojecting the 3D motion capture data onto 2D. Experiments demonstrate the effectiveness of the approach in an example application: recognizing six classes of basketball referee hand signals in video.
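A minimal sketch of the backprojection idea, with invented data: a pinhole camera projects each 3D motion-capture frame onto the image plane, and the observed 2D track is scored against each archived sequence without ever reconstructing 3D from the images. Camera parameters, the cost function, and the tiny two-class archive are assumptions for the example:

```python
def project(point3d, focal=1.0):
    # pinhole projection sketch: camera at the origin looking down +z (z > 0)
    x, y, z = point3d
    return (focal * x / z, focal * y / z)

def sequence_cost(track2d, mocap3d):
    """Sum of squared 2D residuals between tracked feature positions and
    the backprojected 3D motion-capture frames."""
    cost = 0.0
    for obs_frame, mocap_frame in zip(track2d, mocap3d):
        for (u, v), p in zip(obs_frame, mocap_frame):
            pu, pv = project(p)
            cost += (u - pu) ** 2 + (v - pv) ** 2
    return cost

def classify(track2d, archive):
    # archive: {signal name: 3D mocap sequence}; lowest backprojection cost wins
    return min(archive, key=lambda name: sequence_cost(track2d, archive[name]))
```

In a real system the camera pose would also be searched over; here it is fixed to keep the sketch short.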

Relevance:

20.00%

Publisher:

Abstract:

Modal matching is a new method for establishing correspondences and computing canonical descriptions. The method is based on the idea of describing objects in terms of generalized symmetries, as defined by each object's eigenmodes. The resulting modal description is used for object recognition and categorization, where shape similarities are expressed as the amounts of modal deformation energy needed to align the two objects. In general, modes provide a global-to-local ordering of shape deformation and thus allow for selecting which types of deformations are used in object alignment and comparison. In contrast to previous techniques, which required correspondence to be computed with an initial or prototype shape, modal matching utilizes a new type of finite element formulation that allows for an object's eigenmodes to be computed directly from available image information. This improved formulation provides greater generality and accuracy, and is applicable to data of any dimensionality. Correspondence results with 2-D contour and point feature data are shown, and recognition experiments with 2-D images of hand tools and airplanes are described.
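The flavor of eigenmode-based correspondence can be conveyed with a much simpler stand-in for the finite element formulation: a Gaussian proximity matrix built from point features, whose eigenvectors provide a per-point modal signature. The proximity kernel, sigma, and mode count are all assumptions for this sketch, and taking absolute values is a crude way to sidestep eigenvector sign ambiguity:

```python
import numpy as np

def modal_description(points, sigma=1.0, n_modes=3):
    """Sketch of a modal shape description: eigenmodes of a Gaussian
    proximity matrix stand in for finite-element eigenmodes."""
    P = np.asarray(points, float)
    d2 = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)
    H = np.exp(-d2 / (2 * sigma ** 2))
    w, V = np.linalg.eigh(H)
    order = np.argsort(w)[::-1]      # global-to-local mode ordering
    modes = V[:, order[:n_modes]]
    return np.abs(modes)             # absolute value masks sign ambiguity

def correspondences(pts_a, pts_b):
    # match each point of A to the B point with the closest modal signature
    A, B = modal_description(pts_a), modal_description(pts_b)
    return [int(np.argmin(((B - a) ** 2).sum(1))) for a in A]
```

Because the proximity matrix depends only on pairwise distances, a translated copy of a shape yields the same modal signatures, so correspondences are recovered without any initial alignment.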

Relevance:

20.00%

Publisher:

Abstract:

A new deformable shape-based method for color region segmentation is described. The method includes two stages: over-segmentation using a traditional color region segmentation algorithm, followed by deformable model-based region merging via grouping and hypothesis selection. During the second stage, region merging and object identification are executed simultaneously. A statistical shape model is used to estimate the likelihood of region groupings and model hypotheses. The prior distribution on deformation parameters is precomputed using principal component analysis over a training set of region groupings. Once trained, the system autonomously segments deformed shapes from the background, while not merging them with similarly colored adjacent objects. Furthermore, the recovered parametric shape model can be used directly in object recognition and comparison. Experiments in segmentation and image retrieval are reported.

Relevance:

20.00%

Publisher:

Abstract:

Many real-world image analysis problems, such as face recognition and hand pose estimation, involve recognizing a large number of classes of objects or shapes. Large margin methods, such as AdaBoost and Support Vector Machines (SVMs), often provide competitive accuracy rates, but at the cost of evaluating a large number of binary classifiers, thus making it difficult to apply such methods when thousands or millions of classes need to be recognized. This thesis proposes a filter-and-refine framework, whereby, given a test pattern, a small number of candidate classes is identified efficiently at the filter step, and computationally expensive large margin classifiers are used to evaluate these candidates at the refine step. Two different filtering methods are proposed: ClassMap and OVA-VS (One-vs.-All classification using Vector Search). ClassMap is an embedding-based method that works for both boosted classifiers and SVMs and tends to map patterns and their associated classes close to each other in a vector space. OVA-VS maps OVA classifiers and test patterns to vectors based on the weights and outputs of the weak classifiers of the boosting scheme. At runtime, finding the strongest-responding OVA classifier becomes a classical vector search problem, where well-known methods can be used to gain efficiency. In our experiments, the proposed methods achieve significant speed-ups, in some cases up to two orders of magnitude, compared to exhaustive evaluation of all OVA classifiers. This was achieved in hand pose recognition and face recognition systems where the number of classes ranges from 535 to 48,600.
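The filter-and-refine control flow itself is tiny and worth seeing in isolation. In this sketch the scoring functions are placeholders supplied by the caller (ClassMap and OVA-VS would provide the cheap score in practice); the point is that the expensive classifier runs only k times instead of once per class:

```python
def filter_and_refine(query, classes, cheap_score, expensive_score, k=3):
    """Filter-and-refine sketch: a cheap score ranks all classes and keeps
    the top-k candidates; the expensive classifier only evaluates those."""
    candidates = sorted(classes, key=lambda c: cheap_score(query, c))[:k]
    return max(candidates, key=lambda c: expensive_score(query, c))
```

With 1,000 classes and k = 3, the expensive score is computed three times rather than a thousand, which is where the reported order-of-magnitude speed-ups come from (assuming the filter rarely discards the true class).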

Relevance:

20.00%

Publisher:

Abstract:

A mechanism is proposed that integrates low-level (image processing), mid-level (recursive 3D trajectory estimation), and high-level (action recognition) processes. It is assumed that the system observes multiple moving objects via a single, uncalibrated video camera. A novel extended Kalman filter formulation is used in estimating the relative 3D motion trajectories up to a scale factor. The recursive estimation process provides a prediction and error measure that is exploited in higher-level stages of action recognition. Conversely, higher-level mechanisms provide feedback that allows the system to reliably segment and maintain the tracking of moving objects before, during, and after occlusion. The 3D trajectory, occlusion, and segmentation information are utilized in extracting stabilized views of the moving object. Trajectory-guided recognition (TGR) is proposed as a new and efficient method for adaptive classification of action. The TGR approach is demonstrated using "motion history images" that are then recognized via a mixture-of-Gaussians classifier. The system was tested in recognizing various dynamic human outdoor activities, e.g., running, walking, roller blading, and cycling. Experiments with synthetic data sets are used to evaluate stability of the trajectory estimator with respect to noise.
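The paper's recursive estimator is an extended Kalman filter over 3D trajectories; a one-dimensional linear constant-velocity filter is enough to show the predict/update cycle and the innovation (prediction error) signal that the higher-level stages exploit. The state model, noise values, and data below are invented for the sketch:

```python
def kalman_track(measurements, q=1e-3, r=1e-2):
    """Constant-velocity Kalman filter sketch (1-D stand-in for the EKF).
    State: (position x, velocity v).  Returns the final state and the
    per-step innovations, which serve as the prediction-error measure."""
    x, v = measurements[0], 0.0
    P = [[1.0, 0.0], [0.0, 1.0]]            # state covariance
    errors = []
    for z in measurements[1:]:
        # predict: x' = x + v, v' = v;  P' = F P F^T + Q
        x, v = x + v, v
        P = [[P[0][0] + P[0][1] + P[1][0] + P[1][1] + q, P[0][1] + P[1][1]],
             [P[1][0] + P[1][1], P[1][1] + q]]
        # innovation: this is the error signal fed to higher-level stages
        err = z - x
        errors.append(err)
        # update with measurement H = [1, 0]
        S = P[0][0] + r
        K = (P[0][0] / S, P[1][0] / S)
        x, v = x + K[0] * err, v + K[1] * err
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
    return x, v, errors
```

On clean constant-velocity input the innovations shrink toward zero; a sudden jump in the innovation is exactly the kind of cue the feedback mechanisms use to flag occlusion or segmentation failure.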

Relevance:

20.00%

Publisher:

Abstract:

The performance of different classification approaches is evaluated using a view-based approach for motion representation. The view-based approach uses computer vision and image processing techniques to register and process the video sequence. Two motion representations, called Motion Energy Images and Motion History Images, are then constructed. These representations collapse the temporal component so that no explicit temporal analysis or sequence matching is needed. Statistical descriptions are then computed using moment-based features and dimensionality reduction techniques. For these tests, we used 7 Hu moments, which are invariant to scale and translation. Principal Components Analysis is used to reduce the dimensionality of this representation. The system is trained using different subjects performing a set of examples of every action to be recognized. Given these samples, K-nearest neighbor, Gaussian, and Gaussian mixture classifiers are used to recognize new actions. Experiments are conducted using instances of eight human actions (i.e., eight classes) performed by seven different subjects. Comparisons of the performance among these classifiers under different conditions are analyzed and reported. Our main goals are to test this dimensionality-reduced representation of actions and, more importantly, to use this representation to compare the advantages of different classification approaches in this recognition task.
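The first two Hu invariants can be computed directly from image moments, which shows where the translation and scale invariance comes from: central moments remove the centroid, and normalizing by powers of the mass removes scale. The grid representation below is an assumption for the sketch (any 2-D intensity array works), and only two of the seven invariants are shown:

```python
def raw_moment(img, p, q):
    # m_pq = sum over pixels of I(x, y) * x^p * y^q
    return sum(img[y][x] * (x ** p) * (y ** q)
               for y in range(len(img)) for x in range(len(img[0])))

def hu_moments(img):
    """First two Hu invariants of a grayscale grid (list of rows)."""
    m00 = raw_moment(img, 0, 0)
    xc, yc = raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00
    def mu(p, q):   # central moments: translation invariance
        return sum(img[y][x] * (x - xc) ** p * (y - yc) ** q
                   for y in range(len(img)) for x in range(len(img[0])))
    def eta(p, q):  # normalized central moments: scale invariance
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return (h1, h2)
```

Translation invariance is exact; scale invariance is exact only in the continuous limit, so on small discrete grids a rescaled blob yields approximately, not identically, equal invariants.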

Relevance:

20.00%

Publisher:

Abstract:

A combined 2D, 3D approach is presented that allows for robust tracking of moving people and recognition of actions. It is assumed that the system observes multiple moving objects via a single, uncalibrated video camera. Low-level features are often insufficient for detection, segmentation, and tracking of non-rigid moving objects. Therefore, an improved mechanism is proposed that integrates low-level (image processing), mid-level (recursive 3D trajectory estimation), and high-level (action recognition) processes. A novel extended Kalman filter formulation is used in estimating the relative 3D motion trajectories up to a scale factor. The recursive estimation process provides a prediction and error measure that is exploited in higher-level stages of action recognition. Conversely, higher-level mechanisms provide feedback that allows the system to reliably segment and maintain the tracking of moving objects before, during, and after occlusion. The 3D trajectory, occlusion, and segmentation information are utilized in extracting stabilized views of the moving object that are then used as input to action recognition modules. Trajectory-guided recognition (TGR) is proposed as a new and efficient method for adaptive classification of action. The TGR approach is demonstrated using "motion history images" that are then recognized via a mixture-of-Gaussians classifier. The system was tested in recognizing various dynamic human outdoor activities: running, walking, roller blading, and cycling. Experiments with real and synthetic data sets are used to evaluate stability of the trajectory estimator with respect to noise.

Relevance:

20.00%

Publisher:

Abstract:

Nearest neighbor classifiers are simple to implement, yet they can model complex non-parametric distributions and provide state-of-the-art recognition accuracy in OCR databases. At the same time, they may be too slow for practical character recognition, especially when they rely on similarity measures that require computationally expensive pairwise alignments between characters. This paper proposes an efficient method for computing an approximate similarity score between two characters based on their exact alignment to a small number of prototypes. The proposed method is applied to both online and offline character recognition, where similarity is based on widely used and computationally expensive alignment methods: Dynamic Time Warping and the Hungarian method, respectively. In both cases significant recognition speedup is obtained at the expense of only a minor increase in recognition error.
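The general prototype trick can be sketched with a simpler, related approximation than the paper's alignment composition: align each object exactly against only a few prototypes, then approximate pairwise similarity from those scores. The bound used here (a Lipschitz-style lower bound, valid when the exact measure is a metric) is an illustrative stand-in, not the paper's method:

```python
def prototype_embedding(obj, prototypes, exact_dist):
    """Run the expensive exact alignment only against the prototypes."""
    return [exact_dist(obj, p) for p in prototypes]

def approx_similarity(a, b, prototypes, exact_dist):
    # Lipschitz-style bound: for a metric d, |d(a,p) - d(b,p)| <= d(a,b),
    # so the max over prototypes is a cheap lower bound on d(a, b)
    ea = prototype_embedding(a, prototypes, exact_dist)
    eb = prototype_embedding(b, prototypes, exact_dist)
    return max(abs(x - y) for x, y in zip(ea, eb))
```

With m prototypes, classifying a query against a database of n characters needs m exact alignments instead of n, and the approximate scores can rank candidates before any remaining exact comparisons.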

Relevance:

20.00%

Publisher:

Abstract:

Nearest neighbor retrieval is the task of identifying, given a database of objects and a query object, the objects in the database that are the most similar to the query. Retrieving nearest neighbors is a necessary component of many practical applications, in fields as diverse as computer vision, pattern recognition, multimedia databases, bioinformatics, and computer networks. At the same time, finding nearest neighbors accurately and efficiently can be challenging, especially when the database contains a large number of objects, and when the underlying distance measure is computationally expensive. This thesis proposes new methods for improving the efficiency and accuracy of nearest neighbor retrieval and classification in spaces with computationally expensive distance measures. The proposed methods are domain-independent, and can be applied in arbitrary spaces, including non-Euclidean and non-metric spaces. In this thesis particular emphasis is given to computer vision applications related to object and shape recognition, where expensive non-Euclidean distance measures are often needed to achieve high accuracy. The first contribution of this thesis is the BoostMap algorithm for embedding arbitrary spaces into a vector space with a computationally efficient distance measure. Using this approach, an approximate set of nearest neighbors can be retrieved efficiently - often orders of magnitude faster than retrieval using the exact distance measure in the original space. The BoostMap algorithm has two key distinguishing features with respect to existing embedding methods. First, embedding construction explicitly maximizes the amount of nearest neighbor information preserved by the embedding. Second, embedding construction is treated as a machine learning problem, in contrast to existing methods that are based on geometric considerations. 
The second contribution is a method for constructing query-sensitive distance measures for the purposes of nearest neighbor retrieval and classification. In high-dimensional spaces, query-sensitive distance measures allow for automatic selection of the dimensions that are the most informative for each specific query object. It is shown theoretically and experimentally that query-sensitivity increases the modeling power of embeddings, allowing embeddings to capture a larger amount of the nearest neighbor structure of the original space. The third contribution is a method for speeding up nearest neighbor classification by combining multiple embedding-based nearest neighbor classifiers in a cascade. In a cascade, computationally efficient classifiers are used to quickly classify easy cases, and classifiers that are more computationally expensive and also more accurate are only applied to objects that are harder to classify. An interesting property of the proposed cascade method is that, under certain conditions, classification time actually decreases as the size of the database increases, a behavior that is in stark contrast to the behavior of typical nearest neighbor classification systems. The proposed methods are evaluated experimentally in several different applications: hand shape recognition, off-line character recognition, online character recognition, and efficient retrieval of time series. In all datasets, the proposed methods lead to significant improvements in accuracy and efficiency compared to existing state-of-the-art methods. In some datasets, the general-purpose methods introduced in this thesis even outperform domain-specific methods that have been custom-designed for such datasets.
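The cascade idea from the third contribution is simple enough to show in full: each stage either answers confidently or defers to the next, more expensive stage. Stage functions and the confidence convention are assumptions for the sketch:

```python
def cascade_classify(x, stages):
    """Cascade sketch: each stage returns (label, confident).  Cheap early
    stages settle easy cases; the final (most expensive, most accurate)
    stage decides whatever remains."""
    for classify in stages[:-1]:
        label, confident = classify(x)
        if confident:
            return label
    return stages[-1](x)[0]
```

If most queries are easy, average classification time is dominated by the cheap stages; and since a larger database makes embedding-based early stages more discriminative, this is one way classification time can fall as the database grows.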

Relevance:

20.00%

Publisher:

Abstract:

Ongoing research at Boston University has produced computational models of biological vision and learning that embody a growing corpus of scientific data and predictions. Vision models perform long-range grouping and figure/ground segmentation, and memory models create attentionally controlled recognition codes that intrinsically combine bottom-up activation and top-down learned expectations. These two streams of research form the foundation of novel dynamically integrated systems for image understanding. Simulations using multispectral images illustrate road completion across occlusions in a cluttered scene and information fusion from incorrect labels that are simultaneously inconsistent and correct. The CNS Vision and Technology Labs (cns.bu.edu/visionlab and cns.bu.edu/techlab) are further integrating science and technology through analysis, testing, and development of cognitive and neural models for large-scale applications, complemented by software specification and code distribution.

Relevance:

20.00%

Publisher:

Abstract:

Both animals and mobile robots, or animats, need adaptive control systems to guide their movements through a novel environment. Such control systems need reactive mechanisms for exploration, and learned plans to efficiently reach goal objects once the environment is familiar. How reactive and planned behaviors interact together in real time, and are released at the appropriate times, during autonomous navigation remains a major unsolved problem. This work presents an end-to-end model to address this problem, named SOVEREIGN: A Self-Organizing, Vision, Expectation, Recognition, Emotion, Intelligent, Goal-oriented Navigation system. The model comprises several interacting subsystems, governed by systems of nonlinear differential equations. As the animat explores the environment, a vision module processes visual inputs using networks that are sensitive to visual form and motion. Targets processed within the visual form system are categorized by real-time incremental learning. Simultaneously, visual target position is computed with respect to the animat's body. Estimates of target position activate a motor system to initiate approach movements toward the target. Motion cues from animat locomotion can elicit orienting head or camera movements to bring a new target into view. Approach and orienting movements are alternately performed during animat navigation. Cumulative estimates of each movement, based on both visual and proprioceptive cues, are stored within a motor working memory. Sensory cues are stored in a parallel sensory working memory. These working memories trigger learning of sensory and motor sequence chunks, which together control planned movements. Effective chunk combinations are selectively enhanced via reinforcement learning when the animat is rewarded. The planning chunks effect a gradual transition from reactive to planned behavior.
The model can read out different motor sequences under different motivational states and learns more efficient paths to rewarded goals as exploration proceeds. Several volitional signals automatically gate the interactions between model subsystems at appropriate times. A 3-D visual simulation environment reproduces the animat's sensory experiences as it moves through a simplified spatial environment. The SOVEREIGN model exhibits robust goal-oriented learning of sequential motor behaviors. Its biomimetic structure explicates a number of brain processes which are involved in spatial navigation.