4 results for Annotation informatisée

in AMS Tesi di Laurea - Alm@DL - Università di Bologna


Relevance:

10.00%

Publisher:

Abstract:

Vision systems are powerful tools that play an increasingly important role in modern industry, where they are used to detect errors and maintain product standards. With the increased availability of affordable industrial cameras, computer vision algorithms have been applied more and more widely to the monitoring of industrial manufacturing processes. Until a few years ago, industrial computer vision applications relied only on ad-hoc algorithms designed for the specific object and acquisition setup being monitored, with a strong focus on co-designing the acquisition and processing pipeline. Deep learning has overcome these limits, providing greater flexibility and faster re-configuration. In this work, the process to be inspected is the formation of vial packs entering a freeze-dryer, a common scenario in pharmaceutical active ingredient packaging lines. To ensure that the machine produces proper packs, a vision system is installed at the entrance of the freeze-dryer to detect any anomalies, with execution times compatible with the production specifications. Further constraints come from the sterility and safety standards required in pharmaceutical manufacturing. This work presents an overview of the production line, with particular focus on the vision system designed, and of all the trials conducted to obtain the final performance. Transfer learning, which alleviates the requirement for a large amount of training data, combined with data augmentation methods consisting of the generation of synthetic images, was used to increase performance while reducing the cost of data acquisition and annotation. The proposed vision algorithm is composed of two main subtasks, devoted respectively to vial counting and discrepancy detection. The first was trained on more than 23k vials (about 300 images) and tested on 5k more (about 75 images), whereas 60 training images and 52 testing images were used for the second.
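The abstract does not specify the framework, backbone, or augmentation operations used; the following is a minimal sketch of the general pattern it describes (a pretrained backbone fine-tuned via transfer learning, with an augmentation pipeline standing in for the synthetic-image generation), assuming PyTorch/torchvision and a ResNet-18 chosen purely for illustration.

```python
# Hypothetical sketch of transfer learning with data augmentation for a
# pack-inspection task; backbone, transforms and hyperparameters are
# assumptions, not taken from the thesis.
import torch
import torch.nn as nn
from torchvision import models, transforms

# Augmentation pipeline standing in for the synthetic-image generation
# described in the abstract (simple photometric/geometric transforms).
train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomRotation(degrees=5),
    transforms.ToTensor(),
])

# Transfer learning: reuse a pretrained feature extractor, freeze it,
# and train only a small task-specific head (e.g. pack OK / anomaly).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```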

Relevance:

10.00%

Publisher:

Abstract:

Unmanned Aerial Vehicles (UAVs) equipped with cameras have been rapidly deployed in a wide range of applications, such as smart cities, agriculture, or search and rescue. Although UAV datasets exist, the number of open, high-quality UAV datasets is limited. We aim to overcome this lack of high-quality annotated data by developing a simulation framework for the parametric generation of synthetic data. The framework accepts input via a serializable format. The input specifies which environment preset is used and the objects to be placed in the environment, along with their position and orientation as well as additional information such as object color and size. The result is an environment able to produce UAV-typical data: the RGB image from the UAV's camera, plus the altitude, roll, pitch, and yaw of the UAV. Beyond the image generation process, we improve the photorealism of the resulting image data by using synthetic-to-real transfer learning methods. Transfer learning focuses on storing knowledge gained while solving one problem and applying it to a different, though related, problem. This approach has been widely researched in related fields, and results show it to be an interesting area to investigate. Since simulated images are easy to create and synthetic-to-real translation has shown good-quality results, we are able to generate pseudo-realistic images. Furthermore, object labels are inherently given, so we can extend the already existing UAV datasets with images of realistic quality and high-resolution metadata. During the development of this thesis we obtained a result of 68.4% on UAVid, which can be considered a new state-of-the-art result on this dataset.
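As an illustration of the serializable input the abstract mentions, the sketch below shows what such a scene description could look like. The field names, environment preset, and the choice of JSON are assumptions made for illustration, not the framework's actual schema.

```python
# Hypothetical scene description consumed by a simulation framework of the
# kind described above: environment preset, objects with pose/color/size,
# and the UAV pose whose metadata is returned alongside each rendered frame.
import json

scene = {
    "environment_preset": "urban_small",
    "objects": [
        {
            "type": "car",
            "position": [12.0, 3.5, 0.0],     # x, y, z in metres
            "orientation": [0.0, 0.0, 90.0],  # roll, pitch, yaw in degrees
            "color": "red",
            "size": 1.0,
        }
    ],
    "uav": {
        "altitude": 50.0,                     # metres above ground
        "roll": 0.0,
        "pitch": -30.0,
        "yaw": 180.0,
    },
}

# The framework would read such a file and emit the RGB image from the UAV
# camera together with the altitude, roll, pitch and yaw for each frame.
with open("scene.json", "w") as f:
    json.dump(scene, f, indent=2)
```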

Relevance:

10.00%

Publisher:

Abstract:

Wound management is a fundamental task in standard clinical practice. Automated solutions already exist for humans, but there is a lack of wound management applications for pets. A precise and efficient wound assessment helps improve diagnosis and increases the effectiveness of treatment plans for chronic wounds. The goal of this research was to propose an automated pipeline capable of segmenting natural light-reflected wound images of animals. Two datasets composed of light-reflected images were used in this work: the Deepskin dataset, 1564 human wound images obtained during routine dermatological exams, with 145 manually annotated images; and the Petwound dataset, a set of 290 wound photos of dogs and cats with no annotated images. Two implementations of the U-Net Convolutional Neural Network model were proposed for the automated segmentation. Active Semi-Supervised Learning techniques were applied to the human wound images to perform segmentation starting from 10% of annotated images. The same models were then trained, via Transfer Learning, adopting Active Semi-Supervised Learning on the unlabelled animal wound images. The combination of the two training strategies proved effective in generating large amounts of annotated samples (94% of Deepskin, 80% of Petwound) with minimal human intervention. The correctness of the automated segmentation was evaluated by clinical experts at each round of training, so the results obtained in this thesis stand as a reliable solution for correct wound image segmentation. The use of Transfer Learning and Active Semi-Supervised Learning minimizes the labelling effort required from clinicians, even requiring no initial manual annotation at all. Moreover, the performance of a model with a limited number of parameters suggests a smartphone-based application of this approach, helping the future standardization of light-reflected images as acknowledged medical images.
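A minimal sketch of one Active Semi-Supervised Learning round of the kind described above (train on the labelled pool, pseudo-label the unlabelled pool, have an expert validate the masks, grow the labelled set). The function names and data structures are placeholders injected as arguments, not code from the thesis.

```python
# Sketch of one Active Semi-Supervised Learning round; train/predict/expert
# callbacks are placeholders for the U-Net training, mask prediction and
# clinician validation steps described in the abstract.
from typing import Callable, Dict, Set, Any


def assl_round(
    train: Callable[[Dict[str, Any]], None],          # fit model on labelled pool
    predict: Callable[[str], Any],                     # predict a mask for one image
    expert_validates: Callable[[str, Any], bool],      # clinician accepts/rejects mask
    labelled: Dict[str, Any],
    unlabelled: Set[str],
):
    """One round: train, pseudo-label, expert-validate, grow the labelled set."""
    train(labelled)
    for img in list(unlabelled):
        mask = predict(img)
        if expert_validates(img, mask):   # expert check performed at each round
            labelled[img] = mask          # accepted pseudo-label joins training data
            unlabelled.remove(img)
    return labelled, unlabelled
```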

Relevance:

10.00%

Publisher:

Abstract:

This study investigated the benthic assemblages of coralligenous reefs at 6 sites off Chioggia, in the northern Adriatic Sea, comparing 2 different methods of analysing photographic samples: the grid method (overlaying a grid of 400 cells) and the random point method (random distribution of 100 points on the photo). For the first method, taxonomic recognition and percentage coverage estimation were performed manually using the photoQuad software. For the second, the CoralNet semi-automated web-based annotation system was applied; it allows assisted and supervised identification, the success rate of which gradually improves after initial software training. The results obtained with the two methods of analysing photographic samples differ slightly. The random point method gives lower species richness values and some differences in coverage estimations; all of this is reflected in the calculation of the biotic index: NAMBER values are significantly lower with the random point method and locally yield different classifications (3 out of 6 sites). However, the results obtained with the two methods are closely correlated with each other and depict a similar spatial trend. These results suggest caution in applying different, albeit similar, methods in the analysis of benthic assemblages aimed at environmental quality assessment.
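As a worked illustration of the random point method described above, the sketch below samples 100 random points on a photo and computes the fraction falling on each taxon. Reading the labels from a pre-classified label mask is an assumption made only for illustration; in CoralNet the identification at each point is assisted by the trained classifier and supervised by the operator.

```python
# Hedged sketch of random-point percent-cover estimation: sample n random
# pixel positions and count how many fall on each label. The label mask is
# an illustrative stand-in for the per-point identifications.
import numpy as np


def random_point_cover(label_mask: np.ndarray, n_points: int = 100, seed: int = 0):
    """Return percent cover per label from n random points on the image."""
    rng = np.random.default_rng(seed)
    h, w = label_mask.shape
    ys = rng.integers(0, h, size=n_points)
    xs = rng.integers(0, w, size=n_points)
    labels, counts = np.unique(label_mask[ys, xs], return_counts=True)
    return {int(lab): 100.0 * c / n_points for lab, c in zip(labels, counts)}


# Example: a toy 200x200 "photo" where label 1 covers the left half;
# the estimate should return roughly 50% cover for each label.
mask = np.zeros((200, 200), dtype=int)
mask[:, :100] = 1
print(random_point_cover(mask))
```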