5 results for colour-based segmentation
in AMS Tesi di Laurea - Alm@DL - Università di Bologna
Abstract:
In this work, a colorimetric indicator for food oxidation, based on the detection of gas-phase hexanal, has been developed. In recent years, the food packaging industry has evolved towards new generations of packaging, such as active and intelligent packaging. According to the literature (Pangloli P. et al. 2002), hexanal is the main product of the oxidation of a fatty acid, linoleic acid. It was therefore decided to analyse two kinds of potato chips, fried in two different oils with a high concentration of linoleic acid: olive oil and sunflower oil. Five different formulas were prepared and their colour change when exposed to gas-phase hexanal was evaluated. The formulas were first evaluated on filter paper labels. The next step was to select a thickener to add to the formula in order to coat a polypropylene film, which is more appropriate than filter paper for production at industrial scale. Three kinds of thickeners were tested: a cellulose derivative, an ethylene vinyl alcohol and a polyvinyl alcohol. To obtain the final labels with a self-adhesive layer, the polypropylene film carrying the selected formula and thickener was coated with a water-based adhesive. For both the filter paper and polypropylene labels, with and without the self-adhesive layer, the detection limit and the detection time were measured. For the selected formula on filter paper labels, the stability was evaluated during storage in the dark or in the light, in order to determine the storage time. Both potato chip samples, stored under the same conditions, were analysed using an optimised Headspace-Solid Phase Microextraction-Gas Chromatography-Mass Spectrometry (HS-SPME-GC-MS) method in order to determine the concentration of volatilized hexanal. With the aim of establishing whether hexanal can be considered an indicator of the end of potato chip shelf life, sensory evaluation was conducted on each day of HS-SPME-GC-MS analysis.
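The abstract does not say how the colour change of the labels was quantified; one common way to score a colorimetric indicator is the CIE76 colour difference (Delta-E) between images of the label before and after exposure. The sketch below is a minimal illustration under that assumption (file names and the visibility threshold are hypothetical, not taken from the thesis):

    # Hypothetical sketch: quantify label colour change as CIE76 Delta-E.
    # Assumes two aligned RGB photos of the same label, before/after hexanal exposure.
    import numpy as np
    from skimage import io
    from skimage.color import rgb2lab, deltaE_cie76

    def mean_delta_e(path_before: str, path_after: str) -> float:
        """Average per-pixel colour difference between two aligned label images."""
        before = io.imread(path_before)[..., :3] / 255.0   # drop alpha, scale to [0, 1]
        after = io.imread(path_after)[..., :3] / 255.0
        return float(np.mean(deltaE_cie76(rgb2lab(before), rgb2lab(after))))

    # Example usage (hypothetical files); a Delta-E above roughly 2-3 is usually
    # taken as a visually perceptible colour change.
    # print(mean_delta_e("label_t0.png", "label_t24h.png"))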
Abstract:
The aim of this thesis project is to automatically localize HCC tumors in the human liver and subsequently predict whether a tumor will undergo microvascular infiltration (MVI), the initial stage of metastasis development. The input data for the work were partially supplied by Sant'Orsola Hospital and partially downloaded from online medical databases. Two U-Net models were implemented for the automatic segmentation of the liver and of the HCC malignancies within it. The segmentation models were evaluated with the Intersection-over-Union (IoU) and Dice Coefficient (DC) metrics. The outcomes obtained for automatic liver segmentation are quite good (IoU = 0.82; DC = 0.35); the outcomes obtained for automatic tumor segmentation (IoU = 0.35; DC = 0.46) are, instead, affected by some limitations: it can be stated that the algorithm is almost always able to detect the location of the tumor, but it tends to underestimate its dimensions. The purpose of this step is to obtain the CT images of the HCC tumors needed for feature extraction. The 14 Haralick features calculated from the 3D GLCM, the 120 radiomic features and the patients' clinical information are collected to build a dataset of 153 features. The goal is then to build a model able to discriminate, based on the given features, the tumors that will undergo MVI from those that will not. This task can be seen as a classification problem: each tumor needs to be classified either as “MVI positive” or “MVI negative”. Feature selection techniques are implemented to identify the most descriptive features for the problem at hand, and then a set of classification models is trained and compared. The models with the best performances (around 80-84% ± 8-15%) turn out to be the XGBoost Classifier, the SGD Classifier and the Logistic Regression models (without penalization and with Lasso, Ridge or Elastic Net penalization).
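For reference, the two overlap metrics used to evaluate the U-Net segmentations can be computed from binary masks as follows; this is a generic sketch of the standard definitions, not the thesis code:

    # Standard overlap metrics for binary segmentation masks (prediction vs. ground truth).
    import numpy as np

    def iou(pred: np.ndarray, target: np.ndarray) -> float:
        """Intersection-over-Union of two boolean masks."""
        pred, target = pred.astype(bool), target.astype(bool)
        inter = np.logical_and(pred, target).sum()
        union = np.logical_or(pred, target).sum()
        return float(inter / union) if union else 1.0

    def dice(pred: np.ndarray, target: np.ndarray) -> float:
        """Dice coefficient: 2*|A & B| / (|A| + |B|)."""
        pred, target = pred.astype(bool), target.astype(bool)
        inter = np.logical_and(pred, target).sum()
        total = pred.sum() + target.sum()
        return float(2.0 * inter / total) if total else 1.0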
Abstract:
Unmanned Aerial Vehicles (UAVs) equipped with cameras have been rapidly deployed in a wide range of applications, such as smart cities, agriculture or search and rescue. Although UAV datasets exist, the number of open, high-quality UAV datasets is limited. We aim to overcome this lack of high-quality annotated data by developing a simulation framework for the parametric generation of synthetic data. The framework accepts input via a serializable format; the input specifies which environment preset is used and the objects to be placed in the environment, along with their position and orientation as well as additional information such as object color and size. The result is an environment able to produce typical UAV data: the RGB image from the UAV's camera and the altitude, roll, pitch and yaw of the UAV. Beyond the image generation process, we improve the photorealism of the resulting image data by using synthetic-to-real transfer learning methods. Transfer learning focuses on storing knowledge gained while solving one problem and applying it to a different, although related, problem. This approach has been widely researched in other related fields, and results show it to be an interesting area to investigate. Since simulated images are easy to create and synthetic-to-real translation has shown good-quality results, we are able to generate pseudo-realistic images. Furthermore, object labels are inherently available, so we are capable of extending the already existing UAV datasets with realistic-quality images and high-resolution metadata. During the development of this thesis we were able to produce a result of 68.4% on UAVid, which can be considered a new state-of-the-art result on this dataset.
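As an illustration of the kind of serializable scene description such a framework could accept, the sketch below builds one in Python and serialises it to JSON; all field names and values are assumptions made for illustration, not the framework's actual input schema:

    # Illustrative sketch of a serializable scene description for a UAV simulator
    # (environment preset plus placed objects with pose, color and size).
    import json
    from dataclasses import dataclass, field, asdict

    @dataclass
    class SceneObject:
        model: str            # e.g. "car", "person" (hypothetical asset names)
        position: tuple       # (x, y, z) in metres
        orientation: tuple    # (roll, pitch, yaw) in degrees
        color: str = "default"
        scale: float = 1.0

    @dataclass
    class SceneDescription:
        environment_preset: str              # e.g. "urban", "rural"
        objects: list = field(default_factory=list)

    scene = SceneDescription(
        environment_preset="urban",
        objects=[SceneObject("car", (12.0, 3.5, 0.0), (0.0, 0.0, 90.0), color="red")],
    )
    print(json.dumps(asdict(scene), indent=2))   # serializable input for the simulator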
Abstract:
Wound management is a fundamental task in standard clinical practice. Automated solutions already exist for humans, but there is a lack of wound management applications for pets. Precise and efficient wound assessment helps improve diagnosis and increase the effectiveness of treatment plans for chronic wounds. The goal of the research was to propose an automated pipeline capable of segmenting natural light-reflected wound images of animals. Two datasets composed of light-reflected images were used in this work: the Deepskin dataset, 1564 human wound images obtained during routine dermatological exams, of which 145 are manually annotated; and the PetWound dataset, a set of 290 wound photos of dogs and cats with no annotated images. Two implementations of the U-Net Convolutional Neural Network model were proposed for the automated segmentation. Active Semi-Supervised Learning techniques were applied to the human wound images to perform segmentation starting from 10% of annotated images. The same models were then trained, via Transfer Learning, adopting Active Semi-Supervised Learning on the unlabelled animal wound images. The combination of the two training strategies proved effective in generating large amounts of annotated samples (94% of Deepskin, 80% of PetWound) with minimal human intervention. The correctness of the automated segmentation was evaluated by clinical experts at each round of training, so the results obtained in this thesis stand as a reliable solution for correct wound image segmentation. The use of Transfer Learning and Active Semi-Supervised Learning minimizes the labelling effort required from clinicians, even requiring no initial manual annotation at all. Moreover, the performance of a model with a limited number of parameters suggests the implementation of a smartphone-based application for this task, helping the future standardization of light-reflected images as acknowledged medical images.
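A high-level sketch of one Active Semi-Supervised Learning round as described above (train on the annotated pool, propose masks for the unlabelled images, let a clinician accept or reject them, and grow the annotated pool) might look as follows; the model and review interfaces are placeholders, not the thesis implementation:

    # Sketch of an Active Semi-Supervised Learning loop with expert validation.
    # `model` exposes fit/predict, `expert_review` stands in for the clinician check;
    # both are placeholders, and `labelled` is a list of (image, mask) pairs.
    def active_semi_supervised_rounds(model, labelled, unlabelled, expert_review, n_rounds=5):
        """Iteratively grow the annotated set with model predictions validated by experts."""
        for _ in range(n_rounds):
            model.fit(labelled)                   # (re)train on everything annotated so far
            still_unlabelled = []
            for image in unlabelled:
                mask = model.predict(image)       # candidate segmentation (pseudo-label)
                if expert_review(image, mask):    # clinician accepts or rejects the mask
                    labelled.append((image, mask))
                else:
                    still_unlabelled.append(image)
            unlabelled = still_unlabelled
        return model, labelled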
Abstract:
This thesis develops AI methods as a contribution to computational musicology, an interdisciplinary field that studies music with computers. In systematic musicology a composition is defined as the combination of harmony, melody and rhythm. According to de La Borde, harmony alone "merits the name of composition". This thesis focuses on analysing harmony from a computational perspective. We concentrate on symbolic music representation and address the problem of formally representing chord progressions in western music compositions. Informally, chords are sets of pitches played simultaneously, and chord progressions constitute the harmony of a composition. Our approach combines ML techniques with knowledge-based techniques. We design and implement the Modal Harmony ontology (MHO) using OWL. It formalises one of the most important theories in western music: Modal Harmony Theory. We propose and experiment with different types of embedding methods to encode chords, inspired by NLP and adapted to the music domain, using statistical (extensional) knowledge from a large dataset of chord annotations (ChoCo), intensional knowledge from MHO, and a combination of the two. The methods are evaluated on two musicologically relevant tasks: chord classification and music structure segmentation. The former is verified by comparing the results of the Odd One Out algorithm with the classification obtained with MHO; good performance (accuracy: 0.86) is achieved. For the latter, we feed our embeddings into an RNN. Results show that the best performance (F1: 0.6) is achieved with embeddings that combine both approaches. Our method outperforms the state of the art (F1 = 0.42) for symbolic music structure segmentation. It is worth noting that embeddings based only on MHO almost equal the best performance (F1 = 0.58). We remark that these embeddings only require the ontology as input, as opposed to other approaches that rely on large datasets.
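As an illustration of an odd-one-out test over chord embeddings (the chord whose vector is least similar to the others is flagged), a minimal sketch is given below; the embed function and the chord labels are hypothetical, and the algorithm actually used in the thesis may differ in its details:

    # Sketch of an odd-one-out test over chord embeddings using cosine similarity.
    # `embed(chord)` is a hypothetical lookup returning a 1-D embedding vector.
    import numpy as np

    def odd_one_out(chords, embed):
        """Return the chord least similar (by mean cosine similarity) to the others."""
        vectors = np.stack([embed(c) for c in chords])
        unit = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
        sim = unit @ unit.T                      # pairwise cosine similarities
        np.fill_diagonal(sim, 0.0)
        mean_sim = sim.sum(axis=1) / (len(chords) - 1)
        return chords[int(np.argmin(mean_sim))]

    # e.g. odd_one_out(["C:maj", "G:maj", "F:maj", "C#:min"], embed) would be expected
    # to return "C#:min" if the embeddings capture harmonic proximity.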