940 results for Pick-by-Vision
Abstract:
Surgical interventions are usually performed in an operating room; however, the medical team's access to information during the intervention is limited. In conversations with the medical staff, we observed that they attach significant importance to improving direct access to information and communication through real-time queries during the procedure, since the current process is rather slow and there is a lack of interaction with the systems in the operating room. These systems can be integrated in the Cloud, adding new functionalities to the existing systems in which the medical records are processed. Therefore, such a communication system needs to be built upon information and interaction access specifically designed and developed to aid the medical specialists. Copyright 2014 ACM.
Abstract:
During the 117th General Assembly of South Carolina, the Commission for Minority Affairs (CMA) introduced the Student Achievement and Vision Education (SAVE) Proviso. The Proviso was so named to emphasize the importance of addressing student achievement by closing the gap that exists between majority and minority student performance and visioning students toward educational success through the implementation of the Education and Economic Development Act. This report documents the progress to date on the study; the impact of budget cuts on the CMA and complying agencies; the CMA's ability to complete the comprehensive study document using the most current information; and the need for further study beyond February 2009.
Abstract:
The neurons in the primary visual cortex that respond to the orientation of visual stimuli were discovered in the late 1950s (Hubel, D.H. & Wiesel, T.N. 1959. J. Physiol. 148:574-591), but how they achieve this response is poorly understood. Recently, experiments have demonstrated that the visual cortex may use the image processing techniques of cross- or auto-correlation to detect the streaks in random dot patterns (Barlow, H. & Berry, D.L. 2010. Proc. R. Soc. B. 278: 2069-2075). These experiments made use of sinusoidally modulated random dot patterns and of the so-called Glass patterns, where randomly positioned dot pairs are oriented in a parallel configuration (Glass, L. 1969. Nature. 223: 578-580). The image processing used by the visual cortex could be inferred from how the threshold of detection of these patterns in the presence of random noise varied as a function of the dot density in the patterns. In the present study, the detection thresholds have been measured for other types of patterns, including circular, hyperbolic, spiral and radial Glass patterns, and an indication of the type of image processing (cross- or auto-correlation) used by the visual cortex is presented. As a result, it is hoped that this study will contribute to an understanding of what David Marr called the ‘computational goal’ of the primary visual cortex (Marr, D. 1982. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. New York: Freeman).
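As a concrete illustration of the kind of image processing the abstract refers to, the sketch below (not taken from the paper; image size, dot counts and the pair offset are arbitrary) builds a translational Glass pattern in NumPy and shows that its auto-correlation has a secondary peak at the dot-pair offset, which is how a correlation-based mechanism could detect the pattern's structure.

```python
import numpy as np

# Build a translational Glass pattern: N random dots plus partners
# displaced by a fixed offset along the pattern's orientation.
rng = np.random.default_rng(0)
size, n_dots, offset = 256, 2000, (6, 3)      # illustrative values
img = np.zeros((size, size))
xy = rng.integers(0, size, size=(n_dots, 2))
img[xy[:, 0], xy[:, 1]] = 1.0
partners = (xy + offset) % size
img[partners[:, 0], partners[:, 1]] = 1.0

# Auto-correlation via the FFT: correlate the (mean-removed) image with itself.
f = np.fft.fft2(img - img.mean())
ac = np.real(np.fft.ifft2(f * np.conj(f)))

# The pair offset shows up as a secondary peak away from the origin.
ac[0, 0] = -np.inf                            # ignore the trivial zero-lag peak
peak = np.unravel_index(np.argmax(ac), ac.shape)
print("recovered offset:", peak)              # expected ~ (6, 3) or its mirror image
```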
Abstract:
Nowadays robotic applications are widespread and most manipulation tasks are efficiently solved. However, Deformable Objects (DOs) still represent a major limitation for robots. The main difficulty in DO manipulation is dealing with shape and dynamics uncertainties, which prevents the use of model-based approaches (since they are excessively computationally complex) and makes sensory data difficult to interpret. This thesis reports the research activities aimed at addressing applications in robotic manipulation and sensing of Deformable Linear Objects (DLOs), with a particular focus on electric wires. In all the works, a significant effort was devoted to studying effective strategies for analyzing sensory signals with various machine learning algorithms. The first part of the document focuses on the wire terminals, i.e. their detection, grasping, and insertion. First, a pipeline that integrates vision and tactile sensing is developed; then further improvements are proposed for each module. A novel procedure is proposed to gather and label massive amounts of training images for object detection with minimal human intervention. Together with this strategy, we extend a generic object detector based on Convolutional Neural Networks to orientation prediction. The insertion task is also extended by developing a closed-loop controller capable of guiding the insertion of a longer and curved segment of wire through a hole, where the contact forces are estimated by means of a Recurrent Neural Network. In the second part of the thesis, the interest shifts to the DLO shape. Robotic reshaping of a DLO is addressed by means of a sequence of pick-and-place primitives, while a decision-making process driven by visual data learns the optimal grasping locations by exploiting Deep Q-learning and finds the best releasing point. The success of the solution relies on a reliable interpretation of the DLO shape. For this reason, further developments are made on the visual segmentation.
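The grasp-selection idea can be illustrated with a toy tabular Q-learning loop, a simplified stand-in for the Deep Q-learning used in the thesis; the states, dynamics and rewards below are invented purely for illustration.

```python
import numpy as np

# Toy sketch: epsilon-greedy Q-learning over discretized grasp points on a DLO.
# State: index of the segment farthest from the target shape (toy abstraction);
# action: which of K candidate grasp points to pick next. Reward is the
# reduction in a toy shape-error measure. All names and values are illustrative.
rng = np.random.default_rng(1)
K = 10                                   # candidate grasp points along the wire
Q = np.zeros((K, K))                     # Q[state, action]
alpha, gamma, eps = 0.1, 0.9, 0.2

def step(state, action):
    """Toy dynamics: grasping near the worst segment reduces the error most."""
    error_drop = 1.0 - abs(state - action) / K + 0.1 * rng.standard_normal()
    next_state = rng.integers(K)         # new worst segment after the place
    return next_state, error_drop

state = rng.integers(K)
for _ in range(5000):
    action = rng.integers(K) if rng.random() < eps else int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(np.argmax(Q, axis=1))              # learned grasp choice per state
```

In the thesis the Q-function is approximated by a neural network fed with visual data rather than a table; the update rule sketched here is the same principle.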
Abstract:
Industrial robots are both versatile and high-performing, enabling the flexible automation typical of modern Smart Factories. For safety reasons, however, they must be confined inside closed fences and/or virtual safety barriers, to keep them strictly separated from human operators. This can be a limitation in scenarios where it is useful to combine human cognitive skills with the accuracy and repeatability of a robot, or simply to allow safe coexistence in a shared workspace. Collaborative robots (cobots), on the other hand, are intrinsically limited in speed and power in order to share workspace and tasks with human operators, and feature the very intuitive hand-guiding programming method. Cobots, however, cannot compete with industrial robots in terms of performance, and are thus useful only in a limited niche, where they can actually bring an improvement in productivity and/or in the quality of the work thanks to their synergy with human operators. The limitations of both the purely industrial and the collaborative paradigms can be overcome by combining industrial robots with artificial vision. In particular, vision can be exploited for real-time adjustment of the pre-programmed, task-based robot trajectory by means of the visual tracking of dynamic obstacles (e.g. human operators). This strategy allows the robot to modify its motion only when necessary, thus maintaining a high level of productivity while at the same time increasing its versatility. In addition, vision enables more intuitive programming paradigms for industrial robots as well, such as programming by demonstration. These possibilities offered by artificial vision enable, as a matter of fact, an efficacious and promising way of achieving human-robot collaboration, which has the advantage of overcoming the limitations of both the previous paradigms while keeping their strengths.
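A common way to turn tracked obstacle positions into a real-time trajectory adjustment is to scale the commanded speed with the tool-obstacle distance. The sketch below illustrates only this general idea and is not the thesis's actual strategy; the positions, thresholds and nominal velocity are made up.

```python
import numpy as np

# Minimal sketch (illustrative, not the thesis's controller): scale the robot's
# pre-programmed Cartesian velocity according to the distance between the
# tool and the closest tracked obstacle point (e.g. a human operator).
def speed_scale(tool_pos, obstacle_pts, d_stop=0.3, d_full=1.2):
    """Return a factor in [0, 1]: 0 inside d_stop, 1 beyond d_full."""
    d = np.min(np.linalg.norm(obstacle_pts - tool_pos, axis=1))
    return float(np.clip((d - d_stop) / (d_full - d_stop), 0.0, 1.0))

tool = np.array([0.5, 0.0, 0.8])                        # metres, illustrative
tracked = np.array([[1.0, 0.2, 0.9], [2.0, 1.0, 1.0]])  # points from the vision system
v_nominal = np.array([0.2, 0.0, 0.0])                   # m/s along the planned path
v_cmd = speed_scale(tool, tracked) * v_nominal
print(v_cmd)
```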
Abstract:
One of the most visionary goals of Artificial Intelligence is to create a system able to mimic and eventually surpass the intelligence observed in biological systems, including, ambitiously, the one observed in humans. The main distinctive strength of humans is their ability to build a deep understanding of the world by learning continuously and drawing from their experiences. This ability, which is found to various degrees in all intelligent biological beings, allows them to adapt and properly react to changes by incrementally expanding and refining their knowledge. Arguably, achieving this ability is one of the main goals of Artificial Intelligence and a cornerstone towards the creation of intelligent artificial agents. Modern Deep Learning approaches have allowed researchers and industry to achieve great advancements towards the resolution of many long-standing problems in areas like Computer Vision and Natural Language Processing. However, while this current age of renewed interest in AI has allowed for the creation of extremely useful applications, a concerningly limited effort is being directed towards the design of systems able to learn continuously. The biggest problem that hinders an AI system from learning incrementally is the catastrophic forgetting phenomenon. This phenomenon, discovered in the 1990s, naturally occurs in Deep Learning architectures when classic learning paradigms are applied to learning incrementally from a stream of experiences. This dissertation revolves around the Continual Learning field, a sub-field of Machine Learning research that has recently made a comeback following the renewed interest in Deep Learning approaches. The work focuses on a comprehensive view of continual learning, considering the algorithmic, benchmarking, and applied aspects of this field. The dissertation also touches on community aspects such as the design and creation of research tools aimed at supporting Continual Learning research, and the theoretical and practical aspects concerning public competitions in this field.
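As a concrete example of the kind of mechanism used to counter catastrophic forgetting, the sketch below implements experience replay with a reservoir buffer, one classic rehearsal strategy. It is a generic illustration, not the dissertation's method, and the data stream is synthetic.

```python
import random

# Minimal sketch of experience replay: keep a small reservoir of past (x, y)
# pairs and mix them into every batch drawn from the new experience, so the
# model keeps rehearsing old data while learning new data.
class ReservoirBuffer:
    def __init__(self, capacity=200):
        self.capacity, self.seen, self.data = capacity, 0, []

    def add(self, sample):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(sample)
        else:                                   # reservoir sampling keeps a
            j = random.randrange(self.seen)     # uniform sample of the stream
            if j < self.capacity:
                self.data[j] = sample

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

buffer = ReservoirBuffer()
for step in range(1000):                        # stream of experiences
    current = (f"x_{step}", f"y_{step}")        # stand-in for an (input, label) pair
    replay = buffer.sample(8)                   # old samples to rehearse
    batch = [current] + replay                  # a model would train on this batch
    buffer.add(current)
print(len(buffer.data), buffer.data[:3])
```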
Abstract:
This work aims to develop a neurogeometric model of stereo vision, based on the cortical architectures involved in 3D perception and on the neural mechanisms generated by retinal disparities. First, we provide a sub-Riemannian geometry for stereo vision, inspired by the work on the stereo problem by Zucker (2006) and using the sub-Riemannian tools introduced by Citti-Sarti (2006) for monocular vision. We present a mathematical interpretation of the neural mechanisms underlying the behavior of binocular cells that integrate monocular inputs. The natural compatibility between stereo geometry and neurophysiological models shows that these binocular cells are sensitive to position and orientation; therefore, we model their action in the space R3 x S2 equipped with a sub-Riemannian metric. Integral curves of the sub-Riemannian structure model the neural connectivity and can be related to the 3D analog of the psychophysical association fields for the 3D process of regular contour formation. Then, we identify 3D perceptual units in the visual scene: they emerge as a consequence of the random cortico-cortical connectivity of binocular cells. Considering a suitable stochastic version of the integral curves, we generate a family of kernels. These kernels represent the probability of interaction between binocular cells, and they are implemented as facilitation patterns to define the evolution in time of the neural population activity at a point. This activity is usually modeled through a mean field equation: stable steady-state solutions lead us to consider the associated eigenvalue problem. We show that three-dimensional perceptual units naturally arise from the discrete version of the eigenvalue problem associated with the integro-differential equation of the population activity.
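For readers unfamiliar with the mean-field formulation mentioned above, the following is a hedged sketch of the standard neural-field setup; the symbols are generic and do not reproduce the thesis's exact notation.

```latex
% Generic neural-field (mean-field) equation of the kind referred to in the
% abstract; symbols are illustrative, not the thesis's own.
\[
  \frac{\partial a(\xi,t)}{\partial t}
    = -a(\xi,t)
      + \int_{\mathbb{R}^3\times\mathbb{S}^2}
        \omega(\xi,\xi')\,\sigma\!\bigl(a(\xi',t)\bigr)\,d\xi'
      + h(\xi,t),
\]
where $\xi=(x,n)\in\mathbb{R}^3\times\mathbb{S}^2$, $\omega$ is the
connectivity kernel obtained from the stochastic integral curves, $\sigma$ is
a sigmoidal gain and $h$ is the feedforward input. Linearising around a
stationary state and discretising over the detected binocular cells reduces
the stability analysis to an eigenvalue problem
\[
  W v = \lambda v,
\]
whose leading eigenvectors single out the 3D perceptual units.
```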
Abstract:
Water Distribution Networks (WDNs) play a vital role in communities, ensuring well-being and supporting economic growth and productivity. The need for greater investment requires design choices that will impact the efficiency of management in the coming decades. This thesis proposes an algorithmic approach to address two related problems: (i) identifying the fundamental asset of large WDNs in terms of main infrastructure; and (ii) sectorizing large WDNs into isolated sectors while respecting the minimum service to be guaranteed to users. Two methodologies have been developed to meet these objectives, and they were subsequently integrated into an overall process that optimizes the sectorized configuration of the WDN, taking into account the need to treat problems (i) and (ii) within a global vision. With regard to problem (i), the methodology developed introduces the concept of a primary network and answers with a dual approach: connecting the main nodes of the WDN in terms of hydraulic infrastructure (reservoirs, tanks, pumping stations) and identifying hypothetical paths with minimal energy losses. The primary network thus identified can be used as an initial basis to design the sectors. The sectorization problem (ii) has been addressed with optimization techniques, through the development of a new dedicated Tabu Search algorithm able to deal with real WDN case studies. For this purpose, three new large WDN models have been developed in order to test the capabilities of the algorithm on different and complex real cases. The developed methodology also automatically identifies the deficient parts of the primary network and dynamically includes new edges in order to support a sectorized configuration of the WDN. The application of the overall algorithm to the new real case studies and to others from the literature has yielded applicable solutions even in specific complex situations.
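The general shape of a Tabu Search over sector assignments can be sketched as follows. This is a generic skeleton with a placeholder objective, not the dedicated algorithm developed in the thesis, whose hydraulic cost function and move set are far richer.

```python
import random

# Generic Tabu Search skeleton (illustrative): search over assignments of WDN
# nodes to sectors, forbidding the reversal of recent moves via a tabu list.
# cost() is a placeholder for the real hydraulic objective (e.g. penalties on
# violated minimum service pressure).
def cost(assignment):
    return sum((s - 1.5) ** 2 for s in assignment)   # stand-in objective

def tabu_search(n_nodes=20, n_sectors=4, iters=200, tenure=15, seed=0):
    rng = random.Random(seed)
    current = [rng.randrange(n_sectors) for _ in range(n_nodes)]
    best = list(current)
    tabu = {}                                        # (node, sector) -> expiry iteration
    for it in range(iters):
        moves = []
        for i in range(n_nodes):
            for s in range(n_sectors):               # move node i to sector s
                if s == current[i]:
                    continue
                trial = list(current)
                trial[i] = s
                c = cost(trial)
                # aspiration: a tabu move is allowed only if it beats the best so far
                if tabu.get((i, s), -1) < it or c < cost(best):
                    moves.append((c, i, s))
        if not moves:
            break
        c, i, s = min(moves)                         # best admissible neighbour
        tabu[(i, current[i])] = it + tenure          # forbid moving i back for a while
        current[i] = s
        if c < cost(best):
            best = list(current)
    return best

print(tabu_search())
```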
Abstract:
Vision systems are powerful tools that play an increasingly important role in modern industry to detect errors and maintain product standards. With the increased availability of affordable industrial cameras, computer vision algorithms have been increasingly applied to the monitoring of industrial manufacturing processes. Until a few years ago, industrial computer vision applications relied only on ad-hoc algorithms designed for the specific object and acquisition setup being monitored, with a strong focus on co-designing the acquisition and processing pipeline. Deep learning has overcome these limits, providing greater flexibility and faster re-configuration. In this work, the process to be inspected is the formation of packs of vials entering a freeze-dryer, a common scenario in pharmaceutical active-ingredient packaging lines. To ensure that the machine produces proper packs, a vision system is installed at the entrance of the freeze-dryer to detect possible anomalies, with execution times compatible with the production specifications. Other constraints come from the sterility and safety standards required in pharmaceutical manufacturing. This work presents an overview of the production line, with particular focus on the vision system designed, and of all the trials conducted to obtain the final performance. Transfer learning, which alleviates the need for a large amount of training data, combined with data augmentation methods consisting in the generation of synthetic images, was used to effectively increase performance while reducing the cost of data acquisition and annotation. The proposed vision algorithm is composed of two main subtasks, designed respectively for vial counting and discrepancy detection. The first was trained on more than 23k vials (about 300 images) and tested on 5k more (about 75 images), whereas 60 training images and 52 testing images were used for the second.
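A minimal sketch of the transfer-learning-plus-augmentation recipe described above follows. The backbone (ResNet-18), the two classes and all hyperparameters are illustrative assumptions rather than the actual system's choices, and the sketch assumes PyTorch and torchvision are available.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Minimal transfer-learning sketch in the spirit of the abstract (illustrative
# choices: ResNet-18 backbone, two classes "pack OK" / "anomaly").
model = models.resnet18(weights="IMAGENET1K_V1")   # ImageNet-pretrained backbone
for p in model.parameters():
    p.requires_grad = False                        # reuse the learned features
model.fc = nn.Linear(model.fc.in_features, 2)      # new task-specific head

# Data augmentation stands in for costly extra acquisitions: small geometric
# and photometric perturbations plausible for a fixed camera over the line.
augment = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomRotation(5),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
x = torch.randn(4, 3, 224, 224)                    # stand-in for augmented images
y = torch.tensor([0, 1, 0, 1])
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```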
Abstract:
Artificial Intelligence is reshaping the fashion industry in different ways. E-commerce retailers exploit their data through AI to enhance their search engines, make outfit suggestions and forecast the success of a specific fashion product. However, this is a challenging endeavour, as the data they possess is huge, complex and multi-modal. The most common way to search for fashion products online is by matching keywords with phrases in the product's description, which are often cluttered, inadequate and inconsistent across collections and sellers. A customer may also browse an online store's taxonomy, although this is time-consuming and does not guarantee finding relevant items. With the advent of Deep Learning architectures, particularly Vision-Language models, ad-hoc solutions have been proposed that model both the product image and its description to solve these problems. However, the suggested solutions do not effectively exploit the semantic or syntactic information of these modalities, nor the unique qualities and relations of clothing items. In this thesis, a novel approach is proposed to address these issues: it models and processes images and text descriptions as graphs in order to exploit the relations within and between each modality, and employs specific techniques to extract syntactic and semantic information. The results obtained show promising performance on different tasks when compared to current state-of-the-art deep learning architectures.
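To make the graph-based modelling concrete, the sketch below builds a toy heterogeneous graph for a single product, with image-region and text-token nodes plus intra- and cross-modal edges. The regions, tokens and edges are hand-crafted for illustration and do not reflect the thesis's actual pipeline; the sketch assumes networkx is available.

```python
import networkx as nx

# Illustrative sketch (not the thesis's model): represent one product as a
# heterogeneous graph whose nodes are image regions and description tokens,
# with intra-modality edges (spatial / syntactic) and cross-modal edges
# linking regions to the words that plausibly describe them.
regions = ["sleeve", "collar", "body"]                     # detected image regions
tokens = ["red", "cotton", "shirt", "long", "sleeve"]      # description tokens

g = nx.Graph()
g.add_nodes_from((f"img:{r}", {"modality": "image"}) for r in regions)
g.add_nodes_from((f"txt:{t}", {"modality": "text"}) for t in tokens)

# Intra-modality relations: spatial adjacency for regions, syntactic
# dependencies for tokens (both hard-coded here for illustration).
g.add_edges_from([("img:sleeve", "img:body"), ("img:collar", "img:body")])
g.add_edges_from([("txt:red", "txt:shirt"), ("txt:long", "txt:sleeve"),
                  ("txt:cotton", "txt:shirt")])

# Cross-modal relations: link a region to the tokens naming or qualifying it.
g.add_edges_from([("img:sleeve", "txt:sleeve"), ("img:sleeve", "txt:long"),
                  ("img:body", "txt:shirt")])

print(g.number_of_nodes(), g.number_of_edges())
# A graph neural network could then propagate information over these edges
# before matching the product against a query.
```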
Abstract:
The micellization of a homologous series of zwitterionic surfactants, a group of sulfobetaines, was studied using isothermal titration calorimetry (ITC) in the temperature range from 15 to 65 °C. The increase in both temperature and alkyl chain length leads to more negative values of ΔG°mic, favoring micellization. The entropic term (ΔS°mic) is predominant at lower temperatures, and above ca. 55-65 °C the enthalpic term (ΔH°mic) becomes prevalent, indicating a jointly driven process as the temperature increases. The interaction of these sulfobetaines with different polymers was also studied by ITC. Among the polymers studied, only two induced the formation of micellar aggregates at lower surfactant concentration: poly(acrylic acid), PAA, probably due to the formation of hydrogen bonds between the carboxylic group of the polymer and the sulfonate group of the surfactant, and poly(sodium 4-styrenesulfonate), PSS, probably due to the incorporation of the hydrophobic styrene group into the micelles. The prevalence of the hydrophobic rather than the electrostatic contributions to the interaction between sulfobetaine and PSS was confirmed by an increased interaction enthalpy in the presence of electrolytes (NaCl) and by the observation of a significant temperature dependence, the latter consistent with the proposed removal of hydrophobic groups from water.
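For reference, the thermodynamic quantities mentioned above are commonly related through the standard pseudo-phase-separation expressions below; this is a general textbook formulation, not equations quoted from the paper itself.

```latex
% Standard relations typically used to analyse ITC micellization data for
% nonionic/zwitterionic surfactants (not quoted from the paper):
\[
  \Delta G^{0}_{\mathrm{mic}} = RT \ln x_{\mathrm{cmc}}, \qquad
  \Delta G^{0}_{\mathrm{mic}} = \Delta H^{0}_{\mathrm{mic}}
                                - T\,\Delta S^{0}_{\mathrm{mic}},
\]
where $x_{\mathrm{cmc}}$ is the critical micelle concentration expressed as a
mole fraction and $\Delta H^{0}_{\mathrm{mic}}$ is read directly from the
titration enthalpogram; the entropic term then follows by difference, which is
how the crossover from entropy-driven to enthalpy-driven micellization with
increasing temperature is quantified.
```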
Abstract:
Basic phospholipases A2 (PLA2) are toxic and induce a wide spectrum of pharmacological effects, whereas the acidic enzyme types are not lethal or cause low lethality. It is therefore challenging to elucidate the mechanism of action of acidic phospholipases. This study used the acidic non-toxic Ba SpII RP4 PLA2 from Bothrops alternatus as an antigen to develop anti-PLA2 IgG antibodies in rabbits and used in vivo assays to examine the changes in crude venom activity when pre-incubated with these antibodies. Using Ouchterlony and western blot analyses on B. alternatus venom, we examined the specificity and sensitivity of phospholipase A2 recognition by the specific antibodies (anti-PLA2 IgG). Neutralisation assays using a non-toxic PLA2 antigen revealed unexpected results. The (indirect) haemolytic activity of the whole venom was completely inhibited, and all catalytically active phospholipases A2 were blocked. Myotoxicity and lethality were reduced when the crude venom was pre-incubated with anti-PLA2 immunoglobulins. CK levels in the skeletal muscle were significantly reduced at 6 h, and the muscular damage was more significant at this time point compared to 3 and 12 h. When four times the LD50 was used (224 μg), half of the animals treated with the venom-anti-PLA2 IgG mixture survived after 48 h. All assays performed with the specific antibodies revealed that Ba SpII RP4 PLA2 has a synergistic effect on whole-venom toxicity. IgG antibodies against the venom of the Argentinean species B. alternatus represent a valuable tool for elucidating the roles of acidic PLA2s, which appear to be purely digestive, and for further studies on immunotherapy and snake envenoming in affected areas in Argentina and Brazil.
Abstract:
Diabetic Retinopathy (DR) is a complication of diabetes that can lead to blindness if not readily discovered. Automated screening algorithms have the potential to improve the identification of patients who need further medical attention. However, the identification of lesions must be accurate to be useful for clinical application. The bag-of-visual-words (BoVW) algorithm employs a maximum-margin classifier in a flexible framework that is able to detect the most common DR-related lesions, such as microaneurysms, cotton-wool spots and hard exudates. BoVW makes it possible to bypass the need for pre- and post-processing of the retinographic images, as well as the need for specific ad hoc techniques for the identification of each type of lesion. An extensive evaluation of the BoVW model was performed using three large retinal image datasets (DR1, DR2 and Messidor) with different resolutions, collected by different healthcare personnel. The results demonstrate that the BoVW classification approach can identify different lesions within an image without having to use a different algorithm for each lesion, reducing processing time and providing a more flexible diagnostic system. Our BoVW scheme is based on sparse low-level feature detection with a Speeded-Up Robust Features (SURF) local descriptor, and mid-level features based on semi-soft coding with max pooling. The best BoVW representation for retinal image classification achieved an area under the receiver operating characteristic curve (AUC-ROC) of 97.8% (exudates) and 93.5% (red lesions), applying a cross-dataset validation protocol. When assessing the accuracy for detecting cases that require referral within one year, the sparse extraction technique associated with semi-soft coding and max pooling obtained an AUC of 94.2 ± 2.0%, outperforming current methods. These results indicate that, for retinal image classification tasks in clinical practice, BoVW equals and, in some instances, surpasses the results obtained using dense detection (widely believed to be the best choice in many vision problems) for the low-level descriptors.
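The mid-level encoding step can be sketched as follows. The descriptors are random stand-ins for SURF features, the codebook size is arbitrary, and the soft-assignment-plus-max-pooling coding is a simplified version of the semi-soft coding described in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

# Minimal BoVW sketch (illustrative): learn a visual codebook from local
# descriptors, then encode an image by soft-assigning its descriptors to the
# codewords and max-pooling the assignments into one feature vector.
rng = np.random.default_rng(0)
train_desc = rng.standard_normal((5000, 64))     # stand-in for SURF descriptors
codebook = KMeans(n_clusters=100, n_init=10, random_state=0).fit(train_desc)

def encode(descriptors, centers, sigma=8.0):
    """Soft assignment of each descriptor to the codewords, then max pooling."""
    d2 = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    soft = np.exp(-d2 / (2 * sigma ** 2))
    soft /= soft.sum(axis=1, keepdims=True)      # per-descriptor soft weights
    return soft.max(axis=0)                      # max pooling over descriptors

image_desc = rng.standard_normal((300, 64))      # descriptors of one retinal image
feature = encode(image_desc, codebook.cluster_centers_)
print(feature.shape)                             # (100,) mid-level representation
# A max-margin classifier (e.g. a linear SVM) would then be trained on these
# per-image feature vectors to detect each lesion type.
```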
Abstract:
Valproic acid (VPA) and trichostatin A (TSA) are known histone deacetylase inhibitors (HDACIs) with epigenetic activity that affect chromatin supra-organization, nuclear architecture, and cellular proliferation, particularly in tumor cells. In this study, chromatin remodeling with effects extending to heterochromatic areas was investigated by image analysis in non-transformed NIH 3T3 cells treated for different periods with different doses of VPA and TSA under conditions that indicated no loss of cell viability. Image analysis revealed chromatin decondensation that affected not only euchromatin but also heterochromatin, concomitant with decreased histone deacetylase activity and a general increase in histone H3 acetylation. Heterochromatin protein 1-α (HP1-α), identified immunocytochemically, was depleted from the pericentromeric heterochromatin following exposure to both HDACIs. Drastic changes affecting cell proliferation and micronucleation, but no alterations in CCND2 expression, in Bcl-2/Bax expression ratios, or in cell death, occurred following a 48-h exposure of the NIH 3T3 cells, particularly in response to higher doses of VPA. Our results demonstrate that even low doses of VPA (0.05 mM) and TSA (10 ng/ml) applied for 1 h can affect chromatin structure, including that of the heterochromatin areas, in non-transformed cells. HP1-α depletion, probably related to histone demethylation at H3K9me3, in addition to the effect of VPA and TSA on histone H3 acetylation, is induced in NIH 3T3 cells. Nevertheless, alterations in cell proliferation and micronucleation, possibly depending on mitotic spindle defects, require longer exposure to higher doses of VPA and TSA.
Abstract:
Yellowing is an undesirable phenomenon that is common in people with white and grey hair. Because white hair has no melanin, the pigment responsible for hair colour, the effects of photodegradation are more visible in this type of hair. The origin of yellowing and its relation to photodegradation processes are not properly established, and many questions remain open in this field. In this work, the photodegradation of grey hair was investigated as a function of the wavelength of the incident radiation, and its ultrastructure was determined, always comparing the results obtained for the white and black fibres present in grey hair with those of white wool. The results presented herein indicate that the photobehaviour of grey hair irradiated with a mercury lamp or with solar radiation depends on the wavelength range of the incident radiation and on the initial shade of yellow of the sample. Two types of grey hair were used: (1) blended grey hair (more yellow) and (2) grey hair from a single donor (less yellow). After exposure to a full-spectrum mercury lamp for 200 h, the blended white hair turned less yellow (the yellow-blue difference Δb* became negative, Δb* = -6), whereas the white hair from the single donor turned slightly yellower (Δb* = 2). In contrast, VIS+IR irradiation resulted in bleaching of both types of hair, whereas a thermal treatment (at 81 °C) caused yellowing of both types of hair, resulting in Δb* = 3 for blended white hair and Δb* = 9 for single-donor hair. The identity of the yellow chromophores was investigated by UV-Vis spectroscopy. The results obtained with this technique were contradictory, however, and it was not possible to obtain a simple correlation between the shade of yellow of the sample and the absorption spectra. In addition, the results are discussed in terms of the morphological differences between the pigmented and non-pigmented parts of grey hair, the yellowing and bleaching effects of grey hair, and the occurrence of dark reactions that follow irradiation.