Abstract:
Mottling is one of the key defects in offset printing. Mottling can be defined as unwanted unevenness of print. In this work, the diameter of a mottle spot is defined to be between 0.5 and 10.0 mm. There are several types of mottling, but the reason behind the problem is still not fully understood. Several commercial machine vision products for the evaluation of print unevenness have been presented. Two of the methods used in these products have been implemented in this thesis: one is the cluster method and the other is the band-pass method. The properties of the human visual system have been taken into account in the implementation of both methods. The index produced by the cluster method is a weighted sum of the number of spots found, and the index produced by the band-pass method is a weighted sum of the coefficients of variation of gray levels for each spatial band. Both methods produce larger indices for visually poor samples, so they can discern good samples from poor ones. The difference between the indices for good and poor samples is slightly larger with the cluster method. However, without samples evaluated by human experts, the goodness of these results is still questionable. This comparison is left to the next phase of the project.
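The band-pass index described above can be sketched as follows. This is a minimal illustration only: it assumes a simple moving-average decomposition into spatial bands and uniform placeholder weights, whereas the thesis's actual band definitions and human-visual-system weighting are not reproduced here.

```python
import numpy as np

def _blur(img, k):
    """Separable k-by-k moving-average blur with edge padding
    (constant-preserving, same-size output; k must be odd)."""
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    kern = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="valid"),
                              0, padded)
    return np.apply_along_axis(lambda r: np.convolve(r, kern, mode="valid"),
                               1, out)

def bandpass_mottle_index(gray, kernel_sizes=(1, 3, 7, 15), weights=None):
    """Weighted sum of per-band coefficients of variation of gray levels.
    kernel_sizes and uniform weights are illustrative placeholders."""
    gray = np.asarray(gray, dtype=float)
    if weights is None:
        weights = np.ones(len(kernel_sizes) - 1)   # placeholder: uniform
    blurred = [gray if k == 1 else _blur(gray, k) for k in kernel_sizes]
    mean = gray.mean()
    index = 0.0
    for w, fine, coarse in zip(weights, blurred, blurred[1:]):
        band = fine - coarse                        # one spatial band
        index += w * band.std() / (mean + 1e-12)    # coefficient of variation
    return float(index)
```

As in the thesis, a visually uneven (mottled) print yields a larger index than a uniform one, since each spatial band carries more gray-level variation.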
Abstract:
We conducted a study assessing the quality and speed of intubation with the Airtraq, used with its new iPhone AirView app, versus the King Vision in a manikin. The primary endpoint was the reduction of the time needed for intubation; secondary endpoints included the times needed for the individual steps of intubation. Thirty anaesthetists each performed three intubations with each device, in random order, on a difficult-airway manikin. Participants had a professional experience of 12 years; 60.0% had the Airtraq available in their hospital, 46.7% the King Vision, and 20.0% both. Median times to identify the glottis, insert the tube and ventilate the lungs were shorter with the Airtraq-AirView (median difference [IQR]: 1.1 [-1.3; 3.9], P = 0.019; 2.1 [-2.6; 9.4], P = 0.002; and 2.8 [-2.4; 11.5], P = 0.001, respectively). The median time for glottis visualization was significantly shorter with the Airtraq-AirView (5.3 [4.0; 8.4] versus 6.4 [4.6; 9.1]). The Cormack-Lehane grade before intubation was better with the King Vision (P = 0.03); no difference was noted during intubation, for subjective ease of device insertion, or for quality of epiglottis visualisation. Assessment of tracheal tube insertion was rated better with the Airtraq-AirView. The Airtraq-AirView allows faster identification of the landmarks and faster intubation in a difficult-airway manikin, although the clinical relevance of this finding remains to be studied. Anaesthetists rated the intubation better with the Airtraq-AirView.
Abstract:
Disease-causing variants of a large number of genes trigger inherited retinal degeneration leading to photoreceptor loss. Because cones are essential for daylight and central vision, such as reading, mobility, and face recognition, this review focuses on a variety of animal models for cone diseases. The pertinence of using these models to reveal genotype/phenotype correlations and to evaluate new therapeutic strategies is discussed. Interestingly, several large animal models recapitulate human diseases and can serve as a strong base from which to study the biology of disease and to assess the scale-up of new therapies. Examples of innovative approaches are presented, such as lentiviral-based transgenesis in pigs and adeno-associated virus (AAV) gene transfer into the monkey eye to investigate the neural circuitry plasticity of the visual system. The models reported herein permit the exploration of mechanisms common to different species and highlight pathways that may be specific to primates, including humans.
Abstract:
Following their detection and seizure by police and border guard authorities, false identity and travel documents are usually scanned, producing digital images. This research investigates the potential of these images to classify false identity documents, to highlight links between documents produced by the same modus operandi or the same source, and thus to support forensic intelligence efforts. Inspired by previous research on digital images of Ecstasy tablets, a systematic and complete method has been developed to acquire, collect, process and compare images of false identity documents. This first part of the article highlights the critical steps of the method and the development of a prototype that processes regions of interest extracted from the images. Acquisition conditions have been fine-tuned in order to optimise the reproducibility and comparability of images. Different filters and comparison metrics have been evaluated, and the performance of the method has been assessed using two calibration and validation sets of documents, comprising 101 Italian driving licenses and 96 Portuguese passports seized in Switzerland, some of which were known to come from common sources. Results indicate that the use of Hue and Edge filters, or their combination, to extract profiles from images, followed by comparison of the profiles with a Canberra distance-based metric, provides the most accurate classification of documents. The method is also quick, efficient and inexpensive. It can easily be operated from remote locations and shared among different organisations, which makes it very convenient for future operational applications. The method could serve as a fast first-line triage method that helps target more resource-intensive profiling methods (based, for instance, on a visual, physical or chemical examination of documents).
Its contribution to forensic intelligence and its application to several sets of false identity documents seized by police and border guards will be developed in a forthcoming article (part II).
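The Canberra distance-based comparison of profiles reported to perform best can be sketched as follows. The input vectors here stand in for the Hue/Edge-filter profiles, whose extraction from the document images is not reproduced; a smaller distance suggests a likelier common source.

```python
import numpy as np

def canberra_distance(p, q, eps=1e-12):
    """Canberra distance between two profiles:
    sum_i |p_i - q_i| / (|p_i| + |q_i|),
    with zero/zero terms counted as 0."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    denom = np.abs(p) + np.abs(q)
    # Where both components are (near) zero, the term contributes 0.
    safe = np.where(denom < eps, 1.0, denom)
    return float(np.sum(np.abs(p - q) / safe))
```

Because each term is normalised by the component magnitudes, the metric is sensitive to relative rather than absolute differences, which is one plausible reason it suits profiles extracted under controlled acquisition conditions.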
Abstract:
This thesis presents the calibration and comparison of two systems: a machine vision system that uses 3-channel RGB images and a line-scanning spectral system. Calibration is the process of checking and adjusting the accuracy of a measuring instrument by comparing it with standards. For the RGB system, self-calibrating methods for finding various parameters of the imaging device were developed. Color calibration was performed, and the colors produced by the system were compared to the known color values of the target. Software drivers for the Sony Robot were also developed, and a mechanical part to connect a camera to the robot was designed. For the line-scanning spectral system, methods for calibrating the alignment of the system and for measuring the dimensions of the scanned line were developed. Color calibration of the spectral system is also presented.
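A common way to perform the kind of color calibration described, comparing colors produced by the system with known target values, is to fit a linear correction matrix by least squares. The sketch below assumes that standard approach and uses illustrative chart values; it is not the thesis's actual procedure.

```python
import numpy as np

def fit_color_correction(measured, reference):
    """Fit a 3x3 matrix M minimising ||measured @ M - reference||_F,
    where measured and reference are (n, 3) arrays of RGB triplets
    taken from a calibration chart with known color values."""
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    M, *_ = np.linalg.lstsq(measured, reference, rcond=None)
    return M

def apply_correction(rgb, M):
    """Map device RGB values into the calibrated space."""
    return np.asarray(rgb, dtype=float) @ M
```

In practice more patches than unknowns are measured, so the fit averages out sensor noise; nonlinear device response would additionally require a tone-curve (gamma) correction before the linear fit.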
Abstract:
Sensor-based robot control allows manipulation in dynamic environments with uncertainties. Vision is a versatile low-cost sensory modality, but its low sample rate, high sensor delay and uncertain measurements limit its usability, especially in strongly dynamic environments. Force is a complementary sensory modality that allows accurate measurements of local object shape when a tooltip is in contact with the object. In multimodal sensor fusion, several sensors measuring different modalities are combined to give a more accurate estimate of the environment. As force and vision are fundamentally different sensory modalities that do not share a common representation, combining the information from these sensors is not straightforward. In this thesis, methods for fusing proprioception, force and vision are proposed. By making assumptions about object shape and modeling the uncertainties of the sensors, the measurements can be fused in an extended Kalman filter. The fusion of force and visual measurements makes it possible to estimate the pose of a moving target with an end-effector-mounted moving camera at a high rate and with high accuracy. The proposed approach takes the latency of the vision system into account explicitly to provide high-sample-rate estimates. The estimates also allow a smooth transition from vision-based motion control to force control. The velocity of the end-effector can be controlled by estimating the distance to the target by vision and determining a velocity profile that gives rapid approach with minimal force overshoot. Experiments with a 5-degree-of-freedom parallel hydraulic manipulator and a 6-degree-of-freedom serial manipulator show that integrating several sensor modalities can significantly increase the accuracy of the measurements.
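The fusion idea can be illustrated with a deliberately simplified scalar Kalman filter: a high-rate, low-noise "force" measurement and a sparse, noisier "vision" measurement both update a single position state, each weighted by its noise level. This is only a stand-in for the thesis's extended Kalman filter; the object-shape models and explicit vision-latency compensation are omitted.

```python
class ScalarKalman:
    """Minimal 1-D Kalman filter: one state (e.g. target position),
    constant-position motion model, measurements fused by noise level."""

    def __init__(self, x0=0.0, p0=1.0, q=1e-3):
        self.x = x0      # state estimate
        self.p = p0      # estimate variance
        self.q = q       # process noise (how fast the target may move)

    def predict(self):
        self.p += self.q                 # uncertainty grows between updates
        return self.x

    def update(self, z, r):
        """Fuse measurement z with variance r (small r = trusted sensor)."""
        k = self.p / (self.p + r)        # Kalman gain
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x
```

In a fusion loop, contact-phase force measurements would call `update` with a small `r` at a high rate, while delayed vision measurements would use a larger `r`; handling the vision delay properly, as in the thesis, additionally requires re-predicting from the measurement's timestamp.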
Abstract:
Research on color difference evaluation has been active over the past thirty years, and several color difference formulas have been developed for industrial applications. The aims of this thesis are to develop a color density measure, denoted comb g, and to propose color density based chromaticity difference formulas. Color density is derived from the discrimination ellipse parameters and color positions in the xy, xyY and CIELAB color spaces, and the color density based chromaticity difference formulas are compared with the line element formulas and the CIEDE2000 color difference formula. As a result of the thesis, color density is shown to represent the perceived color difference accurately, and it can be used to characterize a color by the attribute of perceived color difference from that color.
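For reference, the simplest CIELAB color difference against which such refined formulas are usually compared is the Euclidean ΔE*ab (CIE76). This is a minimal sketch of that baseline only; the thesis's color density formulas and the ellipse-based weighting of CIEDE2000 are not reproduced here.

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two
    (L*, a*, b*) triplets in CIELAB space."""
    return math.dist(lab1, lab2)
```

The shortcoming that motivates discrimination-ellipse-based work is that this Euclidean distance treats perceptual space as uniform, whereas a ΔE*ab of the same size is commonly reported to be more or less noticeable depending on where in color space it occurs.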