931 results for Automatic theorem proving
Abstract:
The digital divide continues to challenge political and academic circles worldwide. A range of policy solutions is briefly evaluated, from laissez-faire on the right to “arithmetic” egalitarianism on the left. The article recasts the digital divide as a problem for the social distribution of presumptively important information (e.g., electoral data, news, science) within postindustrial society. Endorsing in general terms the left-liberal approach of differential or “geometric” egalitarianism, it seeks to invest this with greater precision, and therefore utility, by means of a possibly original synthesis of the ideas of John Rawls and R. H. Tawney. It is argued that, once certain categories of information are accorded the status of “primary goods,” their distribution must then comply with principles of justice as articulated by those major 20th-century exponents of ethical social democracy. The resultant Rawls-Tawney theorem, if valid, might augment the portfolio of options for interventionist information policy in the 21st century.
Abstract:
Hardy, N. W., Barnes, D. P., & Lee, L. H. (1989). Automatic diagnosis of task faults in flexible manufacturing systems. Robotica, 7(1), 25-35.
Abstract:
Lee, M., Hardy, N., & Barnes, D. P. (1984). Research into automatic error recovery (pp. 65-69). Paper presented at the 4th International Conference on Robot Vision and Sensory Controls, London, United Kingdom.
Abstract:
Meng Q. and Lee M.H., Automatic Error Recovery in Behaviour-Based Assistive Robots with Learning from Experience, in Proc. INES 2001, 5th IEEE Int. Conf. on Intelligent Engineering Systems, Helsinki, Finland, Sept 2001, pp. 291-296.
Abstract:
Liu, Yonghuai. Automatic 3D free form shape matching using the graduated assignment algorithm. Pattern Recognition, vol. 38, no. 10, pp. 1615-1631, 2005.
Abstract:
Q. Meng and M.H. Lee, 'Biologically inspired automatic construction of cross-modal mapping in robotic eye/hand systems', IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2006), pp. 4742-49, Beijing, 2006.
Abstract:
R. Marti, R. Zwiggelaar, C.M.E. Rubin, 'Automatic point correspondence and registration based on linear structures', International Journal of Pattern Recognition and Artificial Intelligence 16 (3), 331-340 (2002)
Abstract:
C.R. Bull, N.J.B. McFarlane, R. Zwiggelaar, C.J. Allen and T.T. Mottram, 'Inspection of teats by colour image analysis for automatic milking systems', Computers and Electronics in Agriculture 15 (1), 15-26 (1996)
Abstract:
Gough, John, 'Quantum Stratonovich Stochastic Calculus and the Quantum Wong-Zakai Theorem', Journal of Mathematical Physics, 47, 113509 (2006).
Abstract:
http://www.archive.org/details/christversuskris014648mbp
Abstract:
In gesture and sign language video sequences, hand motion tends to be rapid, and hands frequently appear in front of each other or in front of the face. Thus, hand location is often ambiguous, and naive color-based hand tracking is insufficient. To improve tracking accuracy, some methods employ a prediction-update framework, but such methods require careful initialization of model parameters, and tend to drift and lose track in extended sequences. In this paper, a temporal filtering framework for hand tracking is proposed that can initialize and reset itself without human intervention. In each frame, simple features like color and motion residue are exploited to identify multiple candidate hand locations. The temporal filter then uses the Viterbi algorithm to select among the candidates from frame to frame. The resulting tracking system can automatically identify video trajectories of unambiguous hand motion, and detect frames where tracking becomes ambiguous because of occlusions or overlaps. Experiments on video sequences of several hundred frames in duration demonstrate the system's ability to track hands robustly, to detect and handle tracking ambiguities, and to extract the trajectories of unambiguous hand motion.
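The candidate-selection step described above lends itself to a compact dynamic-programming formulation. The sketch below is a minimal illustration under assumed inputs (per-frame lists of candidate (x, y) locations with colour/motion scores), not the authors' implementation: a Viterbi pass treats each candidate as a state, rewards hand-like scores, and penalises large frame-to-frame jumps, so the cheapest path through the trellis becomes the selected trajectory.

    import numpy as np

    def viterbi_track(candidates, scores, jump_penalty=0.05):
        """Select one candidate hand location per frame with a Viterbi pass.

        candidates : list of (N_t, 2) arrays of (x, y) candidates per frame
        scores     : list of (N_t,) arrays, higher = more hand-like (colour/motion cue)
        jump_penalty : weight on squared displacement between consecutive frames
        Returns the selected (x, y) trajectory as a (T, 2) array.
        """
        T = len(candidates)
        cost = [-scores[0]]                       # accumulated cost per candidate
        back = [np.zeros(len(candidates[0]), dtype=int)]
        for t in range(1, T):
            # pairwise squared distances between frame t-1 and frame t candidates
            d2 = ((candidates[t][None, :, :] - candidates[t - 1][:, None, :]) ** 2).sum(-1)
            total = cost[t - 1][:, None] + jump_penalty * d2 - scores[t][None, :]
            back.append(total.argmin(axis=0))     # best predecessor for each candidate
            cost.append(total.min(axis=0))
        # backtrack the cheapest path
        idx = int(np.argmin(cost[-1]))
        path = [candidates[-1][idx]]
        for t in range(T - 1, 0, -1):
            idx = back[t][idx]
            path.append(candidates[t - 1][idx])
        return np.array(path[::-1])

Frames in which several paths have nearly equal accumulated cost could then be flagged as ambiguous, in the spirit of the occlusion handling the abstract describes.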
Abstract:
We developed an automated system that registers chest CT scans temporally. Our registration method matches corresponding anatomical landmarks to obtain initial registration parameters. The initial point-to-point registration is then generalized to an iterative surface-to-surface registration method. Our "goodness-of-fit" measure is evaluated at each step in the iterative scheme until the registration performance is sufficient. We applied our method to register the 3D lung surfaces of 11 pairs of chest CT scans and report promising registration performance.
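The iterative surface-to-surface stage is closely related to the classical iterative closest point (ICP) scheme. The sketch below is a generic ICP loop under assumed inputs (two 3-D point sets sampled from the lung surfaces), not the paper's exact registration method; the RMS closest-point residual stands in for a "goodness-of-fit" measure and stops the iteration once it no longer improves.

    import numpy as np
    from scipy.spatial import cKDTree

    def icp_rigid(source, target, iters=50, tol=1e-4):
        """Iteratively align `source` (N,3) onto `target` (M,3) with a rigid transform."""
        R, t = np.eye(3), np.zeros(3)
        tree = cKDTree(target)
        prev_rms = np.inf
        moved = source.copy()
        for _ in range(iters):
            dists, idx = tree.query(moved)            # closest-point correspondences
            matched = target[idx]
            # Kabsch/SVD: best rigid transform mapping `moved` onto `matched`
            mu_s, mu_m = moved.mean(0), matched.mean(0)
            H = (moved - mu_s).T @ (matched - mu_m)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R_step = Vt.T @ D @ U.T
            t_step = mu_m - R_step @ mu_s
            moved = moved @ R_step.T + t_step
            R, t = R_step @ R, R_step @ t + t_step    # accumulate the transform
            rms = np.sqrt((dists ** 2).mean())        # simple goodness-of-fit proxy
            if prev_rms - rms < tol:                  # registration good enough
                break
            prev_rms = rms
        return R, t, rms

In the same spirit as the paper, a landmark-based point-to-point fit would supply the initial R and t before this surface-level refinement.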
Abstract:
An automated system for detection of head movements is described. The goal is to label relevant head gestures in video of American Sign Language (ASL) communication. In the system, a 3D head tracker recovers head rotation and translation parameters from monocular video. Relevant head gestures are then detected by analyzing the length and frequency of the motion signal's peaks and valleys. Each parameter is analyzed independently, because a number of relevant head movements in ASL are associated with major changes around a single rotational axis. No explicit training of the system is necessary. Currently, the system can detect "head shakes." In experimental evaluation, classification performance is compared against ground-truth labels obtained from ASL linguists. Initial results are promising, as the system matches the linguists' labels in a significant number of cases.
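Because each rotation parameter is analysed independently, detecting a head shake reduces to counting alternating peaks and valleys in a one-dimensional yaw signal. The sketch below is a hedged illustration, not the described system: it assumes a per-frame yaw angle from the tracker and uses hypothetical amplitude and frequency thresholds (min_amp, min_cycles).

    import numpy as np
    from scipy.signal import find_peaks

    def detect_head_shake(yaw, fps=30.0, min_amp=3.0, min_cycles=2, win_s=1.0):
        """Flag frames where the yaw signal oscillates like a head shake.

        yaw        : (T,) head yaw angle in degrees, one value per frame
        min_amp    : minimum peak/valley prominence (degrees) to count
        min_cycles : minimum number of back-and-forth cycles within the window
        Returns a boolean (T,) array, True where a shake is detected.
        """
        win = int(win_s * fps)
        peaks, _ = find_peaks(yaw, prominence=min_amp)      # local maxima
        valleys, _ = find_peaks(-yaw, prominence=min_amp)   # local minima
        extrema = np.sort(np.concatenate([peaks, valleys]))
        shake = np.zeros(len(yaw), dtype=bool)
        for t in range(len(yaw)):
            lo, hi = max(0, t - win // 2), min(len(yaw), t + win // 2)
            n = np.count_nonzero((extrema >= lo) & (extrema < hi))
            if n >= 2 * min_cycles:      # enough alternating motion in the window
                shake[lo:hi] = True
        return shake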
Abstract:
A model for representing music scores in a form suitable for general processing by a music-analyst-programmer is proposed and implemented. Typical input to the model consists of one or more pieces of music which are encoded in a file-based score representation. File-based representations are in a form unsuited for general processing, as they do not provide a suitable level of abstraction for a programmer-analyst. Instead, a representation is created giving a programmer's view of the score. This frees the analyst-programmer from implementation details that otherwise would form a substantial barrier to progress. The score representation uses an object-oriented approach to create a natural and robust software environment for the musicologist. The system is used to explore ways in which it could benefit musicologists. Methodologies for analysing music corpora are presented in a series of analytic examples which illustrate some of the potential of this model. Proving hypotheses or performing analysis on corpora involves the construction of algorithms. Some unique aspects of using this score model for corpus-based musicology are:
- Algorithms impose a discipline which arises from the necessity for formalism.
- Automatic analysis enables musicologists to complete tasks that otherwise would be infeasible because of limitations of their energy, attentiveness, accuracy and time.
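As a rough idea of the "programmer's view" such a model provides, the toy sketch below (not the thesis's actual representation) wraps notes in simple objects and runs a small corpus query, counting melodic intervals, without exposing any file-based encoding details.

    from dataclasses import dataclass

    @dataclass
    class Note:
        pitch: int       # MIDI pitch number
        onset: float     # start time in beats
        duration: float  # length in beats

    class Score:
        """A programmer's view of a piece: a list of Notes, independent of
        the file-based encoding it was parsed from."""
        def __init__(self, title, notes):
            self.title = title
            self.notes = sorted(notes, key=lambda n: n.onset)

        def melodic_intervals(self):
            """Successive pitch differences in semitones, a typical analytic query."""
            p = [n.pitch for n in self.notes]
            return [b - a for a, b in zip(p, p[1:])]

    # A tiny "corpus" analysis: how often does each melodic interval occur?
    corpus = [Score("Example piece",
                    [Note(60, 0, 1), Note(62, 1, 1), Note(64, 2, 1), Note(62, 3, 1)])]
    counts = {}
    for score in corpus:
        for ivl in score.melodic_intervals():
            counts[ivl] = counts.get(ivl, 0) + 1
    print(counts)   # e.g. {2: 2, -2: 1}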
Abstract:
The advent of modern wireless technologies has seen a shift in focus towards the design and development of educational systems for deployment through mobile devices. The use of mobile phones, tablets and Personal Digital Assistants (PDAs) is steadily growing across the educational sector as a whole. Mobile learning (mLearning) systems developed for deployment on such devices hold great significance for the future of education. However, mLearning systems must be built around the particular learner's needs, based on both their motivation to learn and subsequent learning outcomes. This thesis investigates how biometric technologies, in particular accelerometer and eye-tracking technologies, could effectively be employed within the development of mobile learning systems to facilitate the needs of individual learners. The creation of personalised learning environments must enable improved learning outcomes for users, particularly at an individual level. Therefore, consideration is given to individual learning-style differences within the electronic learning (eLearning) space. The overall area of eLearning is considered, and areas such as biometric technology and educational psychology are explored for the development of personalised educational systems. This thesis explains the basis of the author's hypotheses and presents the results of several studies carried out throughout the PhD research period. These results show that both accelerometer and eye-tracking technologies can be employed as a Human-Computer Interaction (HCI) method in the detection of student learning styles, facilitating the provision of automatically adapted eLearning spaces. Finally, the author provides recommendations for developers creating adaptive mobile learning systems through the employment of biometric technology as a user interaction tool within mLearning applications. Further research paths are identified and a roadmap for future research in this area is defined.