969 results for Human Machine Interfaces
Abstract:
Mottling is one of the key defects in offset printing. It can be defined as unwanted unevenness of print; in this work, the diameter of a mottle spot is taken to lie between 0.5 and 10.0 mm. There are several types of mottling, but the reason behind the problem is still not fully understood. Several commercial machine vision products for the evaluation of print unevenness have been presented, and two of the methods used in these products have been implemented in this thesis: the cluster method and the band-pass method. The properties of the human visual system have been taken into account in the implementation of both. The index produced by the cluster method is a weighted sum of the number of spots found, and the index produced by the band-pass method is a weighted sum of the coefficients of variation of gray levels for each spatial band. Both methods produce larger indices for visually poor samples, so they can discern good samples from poor ones; the difference between the indices for good and poor samples is slightly larger with the cluster method. However, without samples evaluated by human experts, the goodness of these results remains questionable. This comparison is left to the next phase of the project.
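As a rough illustration of the band-pass index just described, the sketch below band-pass filters a grayscale print image with differences of Gaussians and forms a weighted sum of per-band coefficients of variation. The filter scales and the uniform weights are illustrative assumptions, not the parameters used in the thesis.

import numpy as np
from scipy import ndimage

def bandpass_mottle_index(gray, sigmas=(1, 2, 4, 8, 16), weights=None):
    """Weighted sum of coefficients of variation of gray levels per spatial band."""
    gray = gray.astype(float)
    if weights is None:
        weights = np.ones(len(sigmas) - 1)  # assumed uniform band weights
    index = 0.0
    for w, (s1, s2) in zip(weights, zip(sigmas, sigmas[1:])):
        # A difference of Gaussians isolates one spatial frequency band.
        band = ndimage.gaussian_filter(gray, s1) - ndimage.gaussian_filter(gray, s2)
        cv = band.std() / (gray.mean() + 1e-9)  # coefficient of variation in this band
        index += w * cv
    return index

rng = np.random.default_rng(0)
print(bandpass_mottle_index(rng.random((128, 128)) * 255))  # larger = more uneven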
Abstract:
Print quality and the printability of paper are very important attributes in modern printing applications. In prints containing images, high print quality is a basic requirement. Tone unevenness and non-uniform glossiness of printed products are the most disturbing factors influencing overall print quality. These defects are caused by non-ideal interactions of paper, ink and printing devices in high-speed printing processes. Since print quality is a perceptive characteristic, measuring unevenness in accordance with human vision is a significant problem. This thesis studies the mottling phenomenon, a printing defect characterized by a spotty, non-uniform appearance in solid printed areas. Print mottle is usually the result of uneven ink laydown or non-uniform ink absorption across the paper surface, and is especially visible in mid-tone imagery or areas of uniform color, such as solids and continuous-tone screen builds. Using existing knowledge of visual perception and known methods for quantifying print tone variation, a new method for print unevenness evaluation is introduced. The method is compared to previous results in the field and is supported by psychometric experiments. Pilot studies estimate the effect of the optical characteristics of the paper, measured prior to printing, on the unevenness of the printed area after printing. Instrumental methods for print unevenness evaluation have been compared, and the results indicate that the proposed method corresponds better to visual evaluation. The method has been successfully implemented as an industrial application and has proven to be a reliable substitute for visual expertise.
Abstract:
The papermaking industry has been continuously developing intelligent solutions to characterize the raw materials it uses, to control the manufacturing process in a robust way, and to guarantee the desired quality of the end product. Thanks to much-improved imaging techniques and image-based analysis methods, it has become possible to look inside the manufacturing pipeline and propose more effective alternatives to human expertise. This study focuses on the development of image analysis methods for the pulping process of papermaking. Pulping starts with disintegrating wood into a fiber suspension, which is subsequently bleached, mixed with additives and chemicals, and finally dried and shipped to papermaking mills. At each stage of the process it is important to analyze the properties of the raw material to guarantee product quality. As the first contribution of this thesis, a framework for characterizing fibers, the main component of the pulp suspension, is proposed based on microscopic images. The framework allows computation of fiber length and curl index, which correlate well with ground-truth values. The second contribution, a bubble detection method, was developed to estimate the gas volume at the delignification stage of the pulping process from high-resolution in-line imaging. The gas volume was estimated accurately and the solution enabled just-in-time process termination, although accurate estimation of bubble size categories remained challenging. As the third contribution, optical flow computation was studied and the methods were successfully applied to pulp flow velocity estimation from double-exposed images. Finally, a framework for classifying dirt particles in dried pulp sheets, including semisynthetic ground-truth generation, feature selection, and a performance comparison of state-of-the-art classification techniques, was proposed as the fourth contribution and successfully tested on semisynthetic and real-world pulp sheet images. Together, these four contributions assist in developing integrated, factory-level vision-based process control.
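A minimal sketch of the fiber measurements named above, assuming the fiber centerline has already been traced from the microscopic image and that the curl index is defined as contour length over end-to-end distance minus one, a common convention that may differ from the thesis's exact formulation:

import numpy as np

def fiber_length_and_curl(centerline):
    """Length and curl index from an ordered list of (x, y) centerline points."""
    pts = np.asarray(centerline, dtype=float)
    segments = np.diff(pts, axis=0)
    contour_length = np.linalg.norm(segments, axis=1).sum()  # true fiber length
    end_to_end = np.linalg.norm(pts[-1] - pts[0])            # straight-line span
    curl_index = contour_length / end_to_end - 1.0
    return contour_length, curl_index

# Example: a gently curved fiber (a 60-degree arc) has a small positive curl index.
theta = np.linspace(0, np.pi / 3, 50)
fiber = np.column_stack([np.cos(theta), np.sin(theta)]) * 100
print(fiber_length_and_curl(fiber))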
Abstract:
The subject of this thesis is the automatic compression of Finnish sentences with machine learning, such that the compressed sentences remain grammatical and retain their essential meaning. There are many possible uses for the compression of natural-language sentences; the focus here is the generation of television program subtitles, which are often compressed versions of the original script of the program. The main part of the thesis consists of machine learning experiments for automatic sentence compression using different approaches to the problem. The machine learning methods used are linear-chain conditional random fields (CRFs) and support vector machines. We also examine which automatic text analysis methods provide useful features for the task. The data, supplied by Lingsoft Inc., consists of subtitles in both compressed and uncompressed form. The models are compared to a baseline system from the literature, both automatically and with human evaluation, given the potentially subjective nature of the output. The best result is achieved with a CRF sequence classifier using a rich feature set. All of the text analysis methods help classification, and the most useful is morphological analysis.
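The CRF approach can be pictured as per-token keep/drop sequence labeling. The sketch below uses the sklearn-crfsuite package with a deliberately tiny toy corpus and feature set; the thesis's rich feature set (including the morphological analysis mentioned above) and Lingsoft's data are not reproduced here.

import sklearn_crfsuite

def token_features(tokens, i):
    """A deliberately small, illustrative feature dict for one token."""
    return {
        "word": tokens[i].lower(),
        "is_first": i == 0,
        "is_last": i == len(tokens) - 1,
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
    }

# Toy parallel data: a full sentence plus per-token KEEP/DROP labels.
sentences = [["the", "very", "old", "dog", "barked", "quite", "loudly"]]
labels = [["KEEP", "DROP", "KEEP", "KEEP", "KEEP", "DROP", "KEEP"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sentences]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, labels)

# Compress by keeping only the tokens labeled KEEP.
pred = crf.predict(X)[0]
print([w for w, tag in zip(sentences[0], pred) if tag == "KEEP"])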
Abstract:
We introduce a procedure to infer the repeated-game strategies that generate actions in experimental choice data. We apply the technique to a set of experiments in which human subjects play a repeated Prisoner's Dilemma. The technique suggests that two types of strategies underlie the data.
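The paper's inference procedure is not detailed in this abstract, but the idea can be pictured by scoring candidate memory-one strategies against a subject's observed play, as in this hypothetical sketch; the candidate set and the accuracy-based scoring rule are assumptions for illustration.

# Candidate memory-one strategies: each maps the opponent's previous move
# (None on the first round) to this round's action, 'C' or 'D'.
CANDIDATES = {
    "always_cooperate":       lambda prev: "C",
    "always_defect":          lambda prev: "D",
    "tit_for_tat":            lambda prev: prev if prev else "C",
    "suspicious_tit_for_tat": lambda prev: prev if prev else "D",
}

def best_fit_strategy(own_actions, opp_actions):
    """Return the candidate whose predictions best match the subject's play."""
    scores = {}
    for name, rule in CANDIDATES.items():
        hits = sum(
            rule(opp_actions[t - 1] if t > 0 else None) == own_actions[t]
            for t in range(len(own_actions))
        )
        scores[name] = hits / len(own_actions)
    return max(scores, key=scores.get), scores

# A subject who mirrors the opponent's last move looks like tit-for-tat.
print(best_fit_strategy(list("CCDDC"), list("CDDCC")))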
Abstract:
In this work, we explore the feasibility of giving machines the ability to predict, in a human-machine interaction (HMI) context, a user's emotion and its intensity, instantaneously and for a wide variety of situations. More specifically, an application called the emotional machine was developed, capable of "understanding" the meaning of a situation based on the Ortony, Clore and Collins (OCC) theoretical model of emotion appraisal. The machine can also predict users' emotional reactions by combining improved versions of k-nearest neighbors and neural networks. An empirical procedure was carried out to acquire data, which supplied consistent knowledge to the chosen learning algorithms and made it possible to test the machine's performance. The results obtained show that the proposed emotional machine is capable of producing good predictions. Such an achievement could encourage its future use in fields that exploit automatic emotion recognition.
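As a rough sketch of the combination described, the following code averages the predictions of a k-nearest-neighbors regressor and a small neural network on synthetic appraisal-style features; the plain scikit-learn estimators and simple averaging stand in for the thesis's improved versions of both algorithms.

import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 4))                      # assumed OCC-style appraisal features
y = X @ np.array([0.5, -0.2, 0.8, 0.1])       # stand-in emotion-intensity target

knn = KNeighborsRegressor(n_neighbors=5).fit(X, y)
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

def predict_intensity(x):
    """Average the two learners' outputs; a weighted blend is equally plausible."""
    x = np.atleast_2d(x)
    return (knn.predict(x) + mlp.predict(x)) / 2

print(predict_intensity([0.2, 0.4, 0.6, 0.8]))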
Abstract:
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.
Abstract:
More and more research on Human-Machine Interaction (HMI) attempts fine-grained analyses of the interaction in order to bring out what influences user behavior. Both in evaluating performance and in evaluating user experience, particular attention is now paid to emotional and cognitive reactions during the interaction. Standard qualitative approaches are limited, because they rely on observation and on interviews conducted after the interaction, limiting the precision of the diagnosis. Since user experience and emotional reactions are highly dynamic and contextualized in nature, evaluation approaches must be as well in order to permit a precise diagnosis of the interaction. This thesis presents a quantitative and dynamic evaluation approach that contextualizes users' reactions so as to identify their antecedents in the interaction with a system. To this end, the work is organized around three axes. 1) Automatic recognition of the user's goals and task structure, using eye-tracking measures and activity in the environment, via machine learning. 2) Inference of psychological constructs (arousal, emotional valence and cognitive load) through the analysis of physiological signals. 3) Diagnosis of the interaction based on the dynamic coupling of the two preceding operations. The ideas and development of our approach are illustrated by their application in two experimental contexts: electronic commerce and simulation-based learning. We also present the complete software tool that was implemented to allow evaluation professionals (e.g., ergonomists, game designers, trainers) to use the proposed approach for HMI evaluation. It is designed to ease the triangulation of the measurement devices involved in this work and to integrate with classical interaction evaluation methods (e.g., questionnaires and the coding of observations).
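A minimal sketch of the first axis, assuming task recognition is framed as classifying fixed windows of eye-tracking measures with a standard classifier; the features, window summary and random-forest choice are illustrative, not the thesis's actual pipeline.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(fix_durations, saccade_lengths):
    """Summarize one window of fixations/saccades into a feature vector."""
    return [
        np.mean(fix_durations), np.std(fix_durations),   # attention stability
        np.mean(saccade_lengths), len(fix_durations),    # scanning breadth, rate
    ]

rng = np.random.default_rng(1)
# Toy data: 'reading' windows (long fixations) vs 'searching' (long saccades).
X = ([window_features(rng.normal(250, 30, 20), rng.normal(2, 0.5, 20)) for _ in range(50)]
     + [window_features(rng.normal(150, 30, 20), rng.normal(8, 1.0, 20)) for _ in range(50)])
y = ["reading"] * 50 + ["searching"] * 50

clf = RandomForestClassifier(random_state=0).fit(X, y)
probe = window_features(rng.normal(240, 30, 20), rng.normal(2.2, 0.5, 20))
print(clf.predict([probe]))  # expected: ['reading']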
Abstract:
This paper presents the application of wavelet processing to handwritten character recognition. To attain a high recognition rate, robust feature extractors and powerful classifiers that are invariant to the variability of human writing are needed. The proposed scheme consists of two stages: a feature extraction stage based on the Haar wavelet transform, and a classification stage that uses a support vector machine classifier. Experimental results show that the proposed method is effective.
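A minimal sketch of the two-stage scheme using PyWavelets and scikit-learn; the decomposition level, the use of the coarse approximation sub-band as the feature vector, and the RBF-kernel SVM settings are assumptions for illustration.

import numpy as np
import pywt
from sklearn.svm import SVC

def haar_features(image, level=2):
    """Multi-level 2-D Haar transform; keep the low-pass approximation as features."""
    coeffs = pywt.wavedec2(image, "haar", level=level)
    return coeffs[0].ravel()  # coarse approximation sub-band

rng = np.random.default_rng(0)

def sample(kind):
    """Toy 'characters': two synthetic 16x16 stroke patterns with noise."""
    img = np.zeros((16, 16))
    if kind == 0:
        img[:, 7:9] = 1.0        # vertical stroke, like '1'
    else:
        img[7:9, :] = 1.0        # horizontal stroke, like '-'
    return img + rng.normal(0, 0.1, img.shape)

X = [haar_features(sample(k)) for k in [0, 1] * 50]
y = [0, 1] * 50
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([haar_features(sample(0))]))  # expected: [0]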
Abstract:
In 1950, the English mathematician Alan Mathison Turing proposed the basis of what some authors consider the test that a machine must pass to establish that it can think. This test is basically a game; nevertheless, it has had great influence on the development of theories of how the mind works. The specifications of the game and some of its repercussions for conceptions of thinking, consciousness and human will form the branches of a path that takes us through the beginnings of artificial intelligence, past some of its singular manifestations, and culminates in the posing of certain restrictions on its foundations.
Abstract:
Rationale. Smokers modify their smoking behaviour when switching from their usual product to cigarettes of higher or lower tar and nicotine yield. Objective. The aims of the current study were to assess the influence of varying nicotine yields, at constant 'tar' yield, on human puffing measures, on nicotine deliveries under human smoking conditions, and on the sensory response to mainstream cigarette smoke. These assessments allow an evaluation of the degree of compensation and of the various possible causes of any changes. Methods. The participants were 13 regular smokers of commercial or hand-rolled cigarettes. They were tested with four cigarettes exhibiting a wide range of nicotine to 'tar' ratios at a relatively constant 'tar' yield. Their smoking behaviour was monitored by placing the test cigarettes into an orifice-type holder/flowmeter attached to a custom-built smoker behaviour analyser. In addition, a comprehensive sensory evaluation of the products was carried out. Results. The differences in the nicotine to 'tar' ratios of the samples did not significantly influence the puffing behaviour patterns, i.e. puff number and interval, total and average puff volume, integrated pressure and puff duration. The pre- to post-smoking exhaled CO boosts were likewise not significantly influenced by the experimental samples. However, the nicotine yields obtained by the smokers were significantly influenced by the machine-smoked nicotine yields, or equivalently the nicotine to 'tar' ratios, of the samples; the machine-smoked nicotine yields were highly correlated with the nicotine yields obtained under human smoking conditions. In the sensory evaluation, the samples differed significantly only in the intensity of the impact. Conclusion. These observations imply that these puffing variables are not controlled by the nicotine yield of the cigarette.
Abstract:
It is now possible to directly link the human nervous system to a computer and thence onto the Internet. From an electronic and mental viewpoint this means that the Internet becomes an extension of the human nervous system (and vice versa). Such a connection on a regular or mass basis will have far-reaching effects on society. In this article the authors discuss their own practical implant self-experimentation, especially insofar as it relates to extending the human nervous system. Trials involving an intercontinental link-up are described. As well as the technical aspects of the work, the social, moral and ethical issues, as perceived by the authors, are weighed against potential technical gains. The authors also look at the technical limitations inherent in the co-evolution of Internet-implanted individuals, as well as the future distribution of intelligence between human and machine.
Abstract:
The perspex machine arose from the unification of projective geometry with the Turing machine. It uses a total arithmetic, called transreal arithmetic, that contains real arithmetic and allows division by zero. Transreal arithmetic is redefined here. The new arithmetic has both a positive and a negative infinity, which lie at the extremes of the number line, and a number, nullity, that lies off the number line. We prove that nullity, 0/0, is a number. Hence a number may have one of four signs: negative, zero, positive, or nullity. It is, therefore, impossible to encode the sign of a number in one bit, as floating-point arithmetic attempts to do, resulting in the difficulty of having both positive and negative zeros and NaNs. Transrational arithmetic is consistent with Cantor arithmetic. In an extension to real arithmetic, the product of zero, an infinity, or nullity with its reciprocal is nullity, not unity. This avoids the usual contradictions that follow from allowing division by zero. Transreal arithmetic has a fixed algebraic structure and does not admit options as IEEE floating-point arithmetic does. Most significantly, nullity has a simple semantics that is related to zero: zero means "no value" and nullity means "no information." We argue that nullity is as useful to a manufactured computer as zero is to a human computer. The perspex machine is intended to offer one solution to the mind-body problem by showing how the computable aspects of mind, and perhaps the whole of mind, relate to the geometrical aspects of body, and perhaps the whole of body. We review some of Turing's writings and show that he held the view that his machine has spatial properties; in particular, that it has the property of being a 7D lattice of compact spaces. Thus, we read Turing as believing that his machine relates computation to geometrical bodies. We simplify the perspex machine by substituting an augmented Euclidean geometry for projective geometry. This leads to a general-linear perspex machine which is very much easier to program than the original perspex machine. We then show how to map the whole of perspex space into a unit cube. This allows us to construct a fractal of perspex machines with the cardinality of a real-numbered line or space. This fractal is the universal perspex machine. It can solve, in unit time, the halting problem for itself and for all perspex machines instantiated in real-numbered space, including all Turing machines. We cite an experiment that has been proposed to test the physical reality of the perspex machine's model of time, but we make no claim that the physical universe works this way or that it has the cardinality of the perspex machine. We leave it that the perspex machine provides an upper bound on the computational properties of physical things, including manufactured computers and biological organisms, that have a cardinality no greater than the real-number line.
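A minimal sketch of total division under the rules stated above (0/0 yields nullity, a nonzero number over zero yields a signed infinity, and nullity propagates); this illustrates the stated semantics only, not a full implementation of transreal arithmetic.

# Nullity is represented by a sentinel; infinities by Python floats.
NULLITY = "nullity"
INF, NEG_INF = float("inf"), float("-inf")

def trans_div(a, b):
    """Total division: every pair of transreal inputs yields a transreal result."""
    if a == NULLITY or b == NULLITY:
        return NULLITY                      # nullity means 'no information'
    if b == 0:
        if a == 0:
            return NULLITY                  # 0/0 is the number nullity
        return INF if a > 0 else NEG_INF    # nonzero/0 is a signed infinity
    if b in (INF, NEG_INF):
        if a in (INF, NEG_INF):
            return NULLITY                  # an infinity times its reciprocal is nullity
        return 0.0
    return a / b

for a, b in [(1, 0), (-3, 0), (0, 0), (5, 2), (INF, INF)]:
    print(f"{a} / {b} = {trans_div(a, b)}")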
Abstract:
Stroke is a leading cause of disability, particularly affecting older people. Although the causes of stroke are well known and it is possible to reduce the associated risks, there is still a need to improve rehabilitation techniques. Early studies in the literature suggest that early, intensive therapies can enhance a patient's recovery. According to the physiotherapy literature, attention and motivation are key factors for motor relearning following stroke. Machine-mediated therapy offers the potential to improve outcomes for stroke patients engaged in rehabilitation for upper limb motor impairment. Haptic interfaces are a particular group of robots that are attractive due to their ability to interact safely with humans. They can enhance traditional therapy tools, provide therapy "on demand" and can produce accurate, objective measurements of a patient's progression. Our recent studies suggest that tele-presence and VR-based systems can motivate patients to exercise for longer periods of time. The creation of human-like trajectories is essential for retraining upper limb movements of people who have lost manipulation functions following stroke. By coupling models of human arm movement with haptic interfaces and VR technology it is possible to create a new class of robot-mediated neuro-rehabilitation tools. This paper provides an overview of different approaches to robot-mediated therapy and describes a system based on haptics and virtual reality visualisation techniques, with particular emphasis on control strategies for interaction derived from minimum-jerk theory and on the use of virtual and mixed reality based exercises.
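The minimum-jerk theory mentioned above yields the classic fifth-order polynomial profile for point-to-point reaching (Flash and Hogan); the sketch below evaluates it for an example reach, with the start point, end point and duration chosen arbitrarily.

import numpy as np

def minimum_jerk(x0, xf, duration, t):
    """Position along a minimum-jerk path at time t in [0, duration]."""
    tau = np.clip(t / duration, 0.0, 1.0)           # normalized time
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5      # smooth 0 -> 1 blend
    return x0 + (xf - x0) * s

# Example: a 0.8 s reach from 0 cm to 25 cm, sampled at 5 instants.
for t in np.linspace(0.0, 0.8, 5):
    print(f"t={t:.2f}s  x={minimum_jerk(0.0, 25.0, 0.8, t):.2f} cm")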
Abstract:
Current force-feedback haptic interface devices are generally limited to the display of low-frequency, high-amplitude spatial data. A typical device consists of a low-impedance framework of one or more degrees of freedom (dof), allowing a user to explore a pre-defined workspace via an end effector such as a handle, thimble, probe or stylus. The movement of the device is then constrained using high-gain positional feedback, reducing the apparent dof of the device and conveying the illusion of hard contact to the user. Such devices are, however, limited to a narrow bandwidth of frequencies, typically below 30 Hz, and are not well suited to the display of surface properties such as object texture. This paper details a device that augments an existing force-feedback haptic display with a vibrotactile display, thus providing a means of conveying the low-amplitude, high-frequency spatial information of object surface properties. Haptics is the study of human touch and of interaction with the external environment via touch. Information from the human sense of touch can be classified into two categories, cutaneous and kinesthetic. Cutaneous information is provided via the mechanoreceptive nerve endings in the glabrous skin of the human hand; it is primarily a means of relaying information regarding small-scale details in the form of skin stretch, compression and vibration.
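The two rendering regimes described here can be sketched as a stiff virtual wall from high-gain position feedback with a superimposed high-frequency, low-amplitude vibrotactile term for texture; the gains and frequencies below are illustrative values, not the paper's.

import math

K_WALL = 2000.0   # N/m, high-gain spring conveying 'hard' contact
A_TEX = 0.3       # N, small texture vibration amplitude
F_TEX = 250.0     # Hz, well above the ~30 Hz force-feedback bandwidth

def contact_force(x, wall_x, t):
    """Force on the end effector at position x (m) and time t (s)."""
    penetration = wall_x - x
    if penetration <= 0:
        return 0.0                                # free space: no force
    wall = K_WALL * penetration                   # stiff spring resists entry
    texture = A_TEX * math.sin(2 * math.pi * F_TEX * t)
    return wall + texture                         # wall plus texture overlay

for x in (0.001, -0.0005, -0.002):                # wall placed at x = 0
    print(f"x={x:+.4f} m -> F={contact_force(x, 0.0, 0.01):.2f} N")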