915 results for Computer vision syndrome


Relevance: 100.00%

Publisher:

Abstract:

Strawberries harvested for processing as frozen fruits are currently de-calyxed manually in the field. This process requires the removal of the stem cap with green leaves (i.e. the calyx) and incurs many disadvantages when performed by hand. Not only does it require maintaining cutting-tool sanitation, it also increases labor time and the exposure of the de-capped strawberries before in-plant processing, leading to labor inefficiency and decreased harvest yield. By moving calyx removal from the fields to the processing plants, this new practice would reduce field labor and improve management and logistics while increasing annual yield. As labor prices continue to increase, the strawberry industry has shown great interest in the development and implementation of an automated calyx removal system. In response, this dissertation describes the design, operation, and performance of a full-scale automatic vision-guided intelligent de-calyxing (AVID) prototype machine. The AVID machine uses commercially available equipment to produce a relatively low-cost automated de-calyxing system that can be retrofitted into existing food processing facilities. The dissertation is divided into five sections. The first two include a machine overview and a 12-week processing-plant pilot study. Results of the pilot study indicate the AVID machine is able to de-calyx grade-1-with-cap conical strawberries at roughly 66 percent output weight yield at a throughput of 10,000 pounds per hour. The remaining three sections describe in detail the three main components of the machine: a strawberry loading and orientation conveyor, a machine vision system for calyx identification, and a synchronized multi-waterjet knife calyx removal system. In short, the loading system uses rotational energy to orient the conical strawberries, the machine vision system determines cut locations through real-time RGB feature extraction, and the high-speed multi-waterjet knife system uses direct-drive actuation to position 30,000 psi cutting streams at precise coordinates for calyx removal. Based on the observations and studies performed within this dissertation, the AVID machine appears to be a viable option for automated high-throughput strawberry calyx removal. A summary of future tasks and further improvements is discussed at the end.
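The abstract does not include source code; as a rough illustration of how a cut location could be derived from real-time colour feature extraction (assuming an OpenCV pipeline with hypothetical thresholds, not the dissertation's actual implementation), the sketch below segments the green calyx from the red berry and returns a cut row just below the calyx boundary.

```python
import cv2
import numpy as np

def find_cut_row(bgr_frame):
    """Illustrative only: locate a horizontal cut line just below the green calyx.

    Assumes the berry is oriented tip-down with the calyx at the top of the
    frame; the HSV thresholds are hypothetical and would need calibration.
    """
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    # Green calyx mask (hue roughly 35-85 on OpenCV's 0-179 scale).
    calyx = cv2.inRange(hsv, (35, 60, 40), (85, 255, 255))
    calyx = cv2.morphologyEx(calyx, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    rows = np.where(calyx.any(axis=1))[0]
    if rows.size == 0:
        return None  # no calyx detected in this frame
    # Cut a few pixels below the lowest calyx pixel so the calyx is fully removed.
    return int(rows.max()) + 5
```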

Relevance: 100.00%

Publisher:

Abstract:

One of the most visionary goals of Artificial Intelligence is to create a system able to mimic and eventually surpass the intelligence observed in biological systems, including, ambitiously, that observed in humans. The main distinctive strength of humans is their ability to build a deep understanding of the world by learning continuously and drawing from their experiences. This ability, found in varying degrees in all intelligent biological beings, allows them to adapt and properly react to changes by incrementally expanding and refining their knowledge. Arguably, achieving this ability is one of the main goals of Artificial Intelligence and a cornerstone of the creation of intelligent artificial agents. Modern Deep Learning approaches have allowed researchers and industry to achieve great advances toward the resolution of many long-standing problems in areas like Computer Vision and Natural Language Processing. However, while the current age of renewed interest in AI has allowed for the creation of extremely useful applications, a concerningly limited effort is being directed towards the design of systems able to learn continuously. The biggest problem that hinders an AI system from learning incrementally is the catastrophic forgetting phenomenon. This phenomenon, first described in the 1990s, naturally occurs when classic learning paradigms are applied to Deep Learning architectures that learn incrementally from a stream of experiences. This dissertation revolves around the Continual Learning field, a sub-field of Machine Learning research that has recently made a comeback following the renewed interest in Deep Learning approaches. The work focuses on a comprehensive view of continual learning, considering algorithmic, benchmarking, and application aspects of the field. The dissertation also touches on community aspects such as the design and creation of research tools aimed at supporting Continual Learning research, and the theoretical and practical aspects of public competitions in this field.
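The dissertation surveys the field rather than a single method; purely as an illustration of why catastrophic forgetting arises and of one standard mitigation (experience replay, not a technique attributed to the thesis), the sketch below keeps a small buffer of past examples and mixes them into each new training batch.

```python
import random

class ReplayBuffer:
    """Tiny experience-replay buffer (reservoir sampling) for continual learning.

    Training only on the newest experience overwrites weights tuned for earlier
    tasks (catastrophic forgetting); replaying a sample of stored examples
    alongside new data is one common mitigation.
    """

    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.storage = []
        self.seen = 0

    def add(self, example):
        self.seen += 1
        if len(self.storage) < self.capacity:
            self.storage.append(example)
        else:
            # Reservoir sampling keeps a uniform sample of everything seen so far.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.storage[j] = example

    def sample(self, batch_size):
        return random.sample(self.storage, min(batch_size, len(self.storage)))

# Usage sketch: train on each new batch plus a replayed mini-batch.
# for batch in new_experience:
#     replay = buffer.sample(len(batch))
#     train_step(model, batch + replay)
#     for example in batch:
#         buffer.add(example)
```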

Relevance: 100.00%

Publisher:

Abstract:

Vision systems are powerful tools that play an increasingly important role in modern industry, detecting errors and maintaining product standards. With the wider availability of affordable industrial cameras, computer vision algorithms have been increasingly applied to the monitoring of industrial manufacturing processes. Until a few years ago, industrial computer vision applications relied only on ad-hoc algorithms designed for the specific object and acquisition setup being monitored, with a strong focus on co-designing the acquisition and processing pipeline. Deep learning has overcome these limits, providing greater flexibility and faster re-configuration. In this work, the process to be inspected is the formation of packs of vials entering a freeze-dryer, a common scenario in pharmaceutical active-ingredient packaging lines. To ensure that the machine produces proper packs, a vision system is installed at the entrance of the freeze-dryer to detect possible anomalies, with execution times compatible with the production specifications. Further constraints come from the sterility and safety standards required in pharmaceutical manufacturing. This work presents an overview of the production line, with particular focus on the vision system designed, and of all the trials conducted to obtain the final performance. Transfer learning, which alleviates the need for large amounts of training data, combined with data augmentation methods based on the generation of synthetic images, was used to effectively increase performance while reducing the cost of data acquisition and annotation. The proposed vision algorithm is composed of two main subtasks, devoted respectively to vial counting and discrepancy detection. The first was trained on more than 23k vials (about 300 images) and tested on 5k more (about 75 images), whereas 60 training images and 52 testing images were used for the second.
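The abstract does not name the network or framework used; as a generic sketch of the transfer-learning-with-augmentation recipe it describes (assuming PyTorch/torchvision, a hypothetical two-class "ok vs. anomaly" dataset, and illustrative hyperparameters), one might fine-tune a pretrained backbone like this:

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Augmentation stands in for scarce real images; the parameters are illustrative.
train_tf = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomAffine(degrees=5, translate=(0.05, 0.05)),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical folder layout: data/train/{ok,anomaly}/*.png
train_set = datasets.ImageFolder("data/train", transform=train_tf)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

# Transfer learning: reuse ImageNet features, retrain only the classifier head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # ok vs. anomaly

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```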

Relevance: 100.00%

Publisher:

Abstract:

Computer Vision Syndrome (CVS): 1) a set of complaints triggered by sustained near fixation, experienced during or after computer use; 2) a disorder characterized by repetitive near-vision strain, resulting in ocular and non-ocular symptoms. Relevance of the study: telecommunications workers perform prolonged near-fixation tasks, which can give rise to complaints of visual fatigue due to the stress placed on accommodative convergence. Objectives of the study: 1) to identify which binocular vision parameters are most affected by prolonged computer use; 2) to compare binocular vision in two groups of individuals, with and without ocular symptoms.

Relevance: 100.00%

Publisher:

Abstract:

Many kinds of problems are suspected to be associated with video display terminal (VDT) work. The most frequently suspected and discussed are eye strain and irritation symptoms, the overall workload of VDT work, and problems of visual ergonomics. Vision- and eye-related problems in VDT work are very common; they are referred to as Computer Vision Syndrome (CVS). The purpose of our thesis was to study how different gaze angles affect visual strain symptoms and existing deficiencies of the visual system. In the experiment, the display was placed at three different gaze angles: 15 degrees above the horizontal line, at the horizontal line, and 15 degrees below the horizontal line. The study compared the age groups 20-39 and 40-60 years. The thesis is quantitative. The study population consisted of 80 persons. The data obtained with the VSQ and SSQ questionnaires and with the measurements were analysed in SPSS using the Wilcoxon signed-rank test and the Mann-Whitney U test. Looking at the mean SSQ symptom scores of the whole study population, the symptoms were found to have intensified during the task to a statistically significant degree. The greatest increase in symptoms was observed at the +15 degree gaze angle. Dividing the SSQ symptoms into different groups revealed statistically significant differences, particularly for eye strain symptoms. The -15 degree gaze angle caused the smallest increase in symptom scores during the task in the eye strain and disorientation symptom groups. Considering the eye strain and disorientation symptoms of the whole group, VDT work can be concluded to increase strain symptoms, since the significance level here was statistically highly significant. Both overall and by symptom group, it was noteworthy that the 20-40-year-olds experienced VDT work as more straining. Based on the measurement results, the amplitude of accommodation and the convergence ability were significantly weaker after the task. The mean tear film break-up time of the whole study population after the experiment was below the normal value. Our partner organisation can make use of the results of our thesis in a larger study. The thesis supports our professional competence when working as vision care specialists.
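The analysis was run in SPSS; as an illustrative equivalent in Python (hypothetical file and column names, not the study's data), the paired before/after symptom comparison and the between-age-group comparison could be expressed as follows:

```python
import pandas as pd
from scipy import stats

# Hypothetical data: one row per participant with SSQ totals before/after the
# VDT task and an age-group label ("20-39" or "40-60").
df = pd.read_csv("ssq_scores.csv")

# Within-subject change during the task: Wilcoxon signed-rank test.
w_stat, w_p = stats.wilcoxon(df["ssq_before"], df["ssq_after"])

# Between-group comparison of the symptom increase: Mann-Whitney U test.
delta = df["ssq_after"] - df["ssq_before"]
young = delta[df["age_group"] == "20-39"]
older = delta[df["age_group"] == "40-60"]
u_stat, u_p = stats.mannwhitneyu(young, older)

print(f"Wilcoxon p={w_p:.4f}, Mann-Whitney p={u_p:.4f}")
```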

Relevance: 100.00%

Publisher:

Abstract:

Objective: To establish the correlation between lighting conditions, viewing angle, contrast discrimination and visual acuity and the appearance of visual symptoms in computer operators. Materials and methods: Cross-sectional, correlational study of a sample of 136 administrative workers of a call center belonging to a health-care institution in the city of Bogotá, using a questionnaire that assessed sociodemographic and occupational variables, applying the computer vision symptom scale (CVSS17), performing a medical evaluation, and measuring illumination and the operator-to-screen distance. With the data collected, a bivariate statistical analysis was performed and the correlation between lighting conditions, viewing angle, contrast discrimination and visual acuity and the appearance of visual symptoms associated with computer use was established. The analysis was carried out with measures of central tendency and dispersion and with the parametric Pearson or non-parametric Spearman correlation coefficient; normality was first assessed with the Shapiro-Wilk test. Statistical tests were evaluated at a significance level of 5% (p<0.05). Results: The mean age of the study participants was 36.3 years, with a range of 22 to 57 years, and the predominant gender was female (79.4%). Visual symptoms associated with computer-screen use were found in 59.6%, the most frequent being epiphora (70.6%), photophobia (67.6%) and ocular burning (54.4%). A significant inverse correlation was found between illumination levels and reports of photophobia (p=0.02; r=0.262). On the other hand, no significant correlation was found between the reported symptoms and viewing angle, visual acuity or contrast discrimination. Conclusion: The workplace lighting conditions of the study group are related to reports of photophobia. An association was also found between visual symptoms and sociodemographic variables, specifically gender, screen photophobia, visual fatigue and photophobia.
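The abstract describes its analysis pipeline only in prose; a minimal Python sketch of the same idea (hypothetical file and column names, not the study's data) is a normality check followed by the choice of Pearson or Spearman correlation:

```python
import pandas as pd
from scipy import stats

# Hypothetical data: one row per worker with workplace illuminance (lux) and a
# photophobia score from a CVSS17-style questionnaire.
df = pd.read_csv("cvss17_workers.csv")

def correlate(x, y, alpha=0.05):
    """Use Pearson if both variables look normal (Shapiro-Wilk), else Spearman."""
    normal = stats.shapiro(x).pvalue > alpha and stats.shapiro(y).pvalue > alpha
    if normal:
        r, p = stats.pearsonr(x, y)
        return "pearson", r, p
    r, p = stats.spearmanr(x, y)
    return "spearman", r, p

method, r, p = correlate(df["illuminance_lux"], df["photophobia_score"])
print(f"{method}: r={r:.3f}, p={p:.3f}")
```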

Relevance: 100.00%

Publisher:

Abstract:

Topic 8: Data display screens. Voluntary activity no. 5.

Relevance: 100.00%

Publisher:

Abstract:

Background: The large number of people who work with computers and wear contact lenses raises the question of whether the combination of these two risk factors for visual health can aggravate Computer Vision Syndrome. The objective of this review is to synthesize the scientific knowledge on ocular and visual alterations related to computer exposure in contact lens wearers. Methods: Review of scientific articles (2003-2013) in Spanish and English, through a bibliographic search in Medline via PubMed and in Scopus. Results: The initial search yielded 114 papers; after applying inclusion/exclusion criteria, six articles were included. All of them show that alterations while using the computer are more frequent in contact lens wearers, with prevalences ranging from 16.9% to 95.0%, than in non-wearers, whose prevalence ranges from 9.9% to 57.5%, and with a four-times-higher probability of suffering dry eye [OR: 4.07 (95% CI: 3.52-4.71)]. Conclusions: Computer users suffer more ocular and visual alterations when they also wear contact lenses, but the studies are scarce and not conclusive. Further research is needed to analyse the influence of lens type and conditions of use on symptoms as well as on tear quality and the ocular surface. Silicone hydrogel lenses are the ones associated with greater comfort.

Relevance: 90.00%

Publisher:

Abstract:

in RoboCup 2007: Robot Soccer World Cup XI

Relevance: 90.00%

Publisher:

Abstract:

Several studies have shown that people with disabilities benefit substantially from access to a means of independent mobility and assistive technology. Researchers are using technology originally developed for mobile robots to create easier-to-use wheelchairs. With this kind of technology, people with disabilities can gain a degree of independence in performing daily-life activities. In this work, a computer vision system able to drive a wheelchair with a minimal number of finger commands is presented. The user's hand is detected and segmented with a Kinect camera, and the fingertips are extracted from the depth information and used as wheelchair commands.
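The abstract gives no implementation details; as a rough sketch of how fingertips might be extracted from a Kinect depth frame (assuming an OpenCV pipeline and hypothetical depth thresholds, not the authors' actual code), one could segment the nearest blob and count convexity peaks:

```python
import cv2
import numpy as np

def count_fingertips(depth_mm):
    """Illustrative only: count fingertip candidates in a depth frame (uint16, mm).

    Assumes the hand is the closest object, roughly 0.5-0.8 m from the sensor;
    the thresholds are hypothetical and would need calibration.
    """
    mask = cv2.inRange(depth_mm, 500, 800)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # Deep convexity defects sit in the gaps between extended fingers.
    deep = sum(1 for i in range(defects.shape[0]) if defects[i, 0, 3] / 256.0 > 20)
    return deep + 1  # n gaps between fingers -> n + 1 fingertips
```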

Relevance: 90.00%

Publisher:

Abstract:

Hand gesture recognition, being a natural way of human-computer interaction, is an area of active research in computer vision and machine learning. It is an area with many different possible applications, giving users a simpler and more natural way to communicate with robots and system interfaces without the need for extra devices. Thus, the primary goal of gesture recognition research is to create systems that can identify specific human gestures and use them to convey information or to control devices. For that, vision-based hand gesture interfaces require fast and extremely robust hand detection and gesture recognition in real time. In this study we try to identify hand features that, in isolation, respond best in various human-computer interaction situations. The extracted features are used to train a set of classifiers with the help of RapidMiner in order to find the best learner. A dataset with our own gesture vocabulary, consisting of 10 gestures recorded from 20 users, was created for later processing. Experimental results show that the radial signature and the centroid distance are the features that, when used separately, obtain the best results, with accuracies of 91% and 90.1% respectively, obtained with a Neural Network classifier. These two methods also have the advantage of being simple in terms of computational complexity, which makes them good candidates for real-time hand gesture recognition.
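The abstract names the centroid distance feature without defining it; a minimal sketch (assuming an OpenCV hand contour and a hypothetical sampling size, not the paper's exact formulation) of how such a feature vector could be computed is:

```python
import cv2
import numpy as np

def centroid_distance_signature(contour, n_samples=64):
    """Distance from the contour centroid to n_samples points along the contour.

    Normalising by the maximum distance makes the signature scale-invariant;
    the sampling size is an illustrative choice.
    """
    pts = contour.reshape(-1, 2).astype(np.float32)
    m = cv2.moments(contour)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    d = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)
    # Resample to a fixed length so every hand yields the same feature size.
    idx = np.linspace(0, len(d) - 1, n_samples).astype(int)
    sig = d[idx]
    return sig / sig.max()
```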

Relevance: 90.00%

Publisher:

Abstract:

Hand gestures are a powerful means of human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages compared with traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning, able to be used with any interface for human-computer interaction. The proposed solution is mainly composed of three modules: a pre-processing and hand segmentation module, a static gesture interface module and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems can be the same for all applications, which facilitates implementation. For hand posture recognition, an SVM (Support Vector Machine) model was trained and used, achieving a final accuracy of 99.4%. For dynamic gestures, an HMM (Hidden Markov Model) model was trained for each gesture that the system could recognize, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications. To validate the proposed framework, two applications were implemented. The first is a real-time system able to interpret Portuguese Sign Language. The second is an online system able to help a robotic soccer referee judge a game in real time.
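Neither the features nor the training code are given in the abstract; a compact sketch of the static-posture stage (assuming scikit-learn, precomputed hand-shape feature vectors such as the centroid distance signature above, and illustrative hyperparameters) could be:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical data: one fixed-length shape descriptor per hand-posture image.
X = np.load("posture_features.npy")   # shape (n_samples, n_features)
y = np.load("posture_labels.npy")     # integer class label per sample

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# RBF-kernel SVM with feature standardisation.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```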

Relevance: 90.00%

Publisher:

Abstract:

Vision-based hand gesture recognition is an area of active research in computer vision and machine learning. Being a natural way of human interaction, it is an area in which many researchers are working, with the goal of making human-computer interaction (HCI) easier and more natural, without the need for any extra devices. Thus, the primary goal of gesture recognition research is to create systems that can identify specific human gestures and use them, for example, to convey information. For that, vision-based hand gesture interfaces require fast and extremely robust hand detection and gesture recognition in real time. Hand gestures are a powerful human communication modality with many potential applications, and in this context we have sign language recognition, the communication method of deaf people. Sign languages are not standard and universal, and their grammars differ from country to country. In this paper, a real-time system able to interpret the Portuguese Sign Language is presented and described. Experiments showed that the system was able to reliably recognize the vowels in real time, with an accuracy of 99.4% with one dataset of features and an accuracy of 99.6% with a second dataset of features. Although the implemented solution was only trained to recognize the vowels, it is easily extended to recognize the rest of the alphabet, providing a solid foundation for the development of any vision-based sign language recognition user interface system.

Relevance: 90.00%

Publisher:

Abstract:

Hand gestures are a powerful means of human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages compared with traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning, able to be used with any interface for human-computer interaction. The proposed solution is mainly composed of three modules: a pre-processing and hand segmentation module, a static gesture interface module and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems can be the same for all applications, which facilitates implementation. In order to test the proposed solutions, three prototypes were implemented. For hand posture recognition, an SVM model was trained and used, achieving a final accuracy of 99.4%. For dynamic gestures, an HMM model was trained for each gesture that the system could recognize, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications.
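For the dynamic-gesture stage, the abstract states that one HMM is trained per gesture and, implicitly, that the most likely model wins at recognition time; a minimal sketch of that scheme (assuming the hmmlearn library and hypothetical 2-D hand-trajectory observations, not the authors' setup) is:

```python
import numpy as np
from hmmlearn import hmm

def train_gesture_hmm(sequences, n_states=5):
    """Fit one Gaussian HMM to all training sequences of a single gesture.

    Each sequence is a (T_i, 2) array of hand positions; the lengths list tells
    hmmlearn where one sequence ends and the next begins.
    """
    X = np.concatenate(sequences)
    lengths = [len(s) for s in sequences]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=100)
    model.fit(X, lengths)
    return model

def classify(models, sequence):
    """Pick the gesture whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda name: models[name].score(sequence))

# Usage sketch with hypothetical data:
# models = {"wave": train_gesture_hmm(wave_seqs), "circle": train_gesture_hmm(circle_seqs)}
# print(classify(models, new_sequence))
```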

Relevance: 90.00%

Publisher:

Abstract:

We present a computer vision system that combines omnidirectional vision with structured light, with the aim of obtaining depth information over a 360-degree field of view. The approach proposed in this article combines an omnidirectional camera with a panoramic laser projector. The article shows how the sensor is modelled, and its accuracy is demonstrated by means of experimental results. The proposed sensor provides useful information for robot navigation applications, pipe inspection, 3D scene modelling, etc.
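The abstract does not reproduce the sensor model itself; as a generic reminder of how structured-light triangulation recovers depth (a simplified planar case with made-up calibration values, not the omnidirectional model of the paper), the range along a camera ray can be computed from the known laser plane:

```python
import numpy as np

def triangulate_depth(ray_dir, plane_normal, plane_point):
    """Range along a camera ray to its intersection with a known laser plane.

    ray_dir: unit direction of the back-projected pixel ray (camera at origin).
    plane_normal, plane_point: calibration of the laser sheet in the camera frame.
    Returns t such that the 3D point is t * ray_dir, or None if there is no
    valid intersection in front of the camera.
    """
    denom = np.dot(plane_normal, ray_dir)
    if abs(denom) < 1e-9:
        return None  # ray parallel to the laser plane
    t = np.dot(plane_normal, plane_point) / denom
    return t if t > 0 else None

# Usage sketch with hypothetical calibration: a laser sheet passing 0.2 m to the
# right of the camera, tilted 30 degrees, and a pixel ray slightly off-axis.
normal = np.array([np.cos(np.radians(30)), 0.0, np.sin(np.radians(30))])
point_on_plane = np.array([0.2, 0.0, 0.0])
ray = np.array([0.1, 0.0, 1.0])
ray /= np.linalg.norm(ray)
print(triangulate_depth(ray, normal, point_on_plane))  # range in metres
```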