7 results for labels
in AMS Tesi di Laurea - Alm@DL - Università di Bologna
Abstract:
In this work, a colorimetric indicator for food oxidation based on the detection of gas-phase hexanal has been developed. In recent years, the food packaging industry has evolved towards new generations of packaging, such as active and intelligent packaging. According to the literature (Pangloli P. et al. 2002), hexanal is the main oxidation product of a fatty acid, linoleic acid. Two kinds of potato chips were therefore analysed, fried in two different oils with a high concentration of linoleic acid: olive oil and sunflower oil. Five different formulas were prepared and their colour change when exposed to gas-phase hexanal was evaluated. The formulas were first evaluated on filter paper labels. The next step was to select a thickener to add to the formula in order to coat a polypropylene film, more suitable than filter paper for production at industrial scale. Three kinds of thickeners were tested: a cellulose derivative, an ethylene vinyl alcohol and a polyvinyl alcohol. To obtain the final labels with a self-adhesive layer, the polypropylene film carrying the selected formula and thickener was coated with a water-based adhesive. For both filter paper and polypropylene labels, with and without the self-adhesive layer, the detection limit and the detection time were measured. For the selected formula on filter paper labels, the stability was evaluated after storage in the dark and in the light, in order to determine the storage time. Both potato chip samples, stored under the same conditions, were analysed using an optimised Headspace Solid-Phase Microextraction Gas Chromatography-Mass Spectrometry (HS-SPME-GC-MS) method to determine the concentration of volatilised hexanal. To establish whether hexanal can be considered an indicator of the end of potato chip shelf life, sensory evaluation was conducted on each day of HS-SPME-GC-MS analysis.
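The abstract does not describe how the colour change was quantified; a common, minimal way to put a number on it is the CIE76 colour difference in CIELAB space. The sketch below assumes hypothetical L*a*b* readings for a label before and after hexanal exposure:

```python
import numpy as np

# Hypothetical CIELAB readings (L*, a*, b*) of an indicator label
# before and after exposure to gas-phase hexanal.
lab_reference = np.array([92.1, -1.4, 5.2])   # unexposed label
lab_exposed = np.array([78.6, 12.3, 21.7])    # label after exposure

# CIE76 colour difference: Euclidean distance in CIELAB space.
delta_e = np.linalg.norm(lab_exposed - lab_reference)

# Rule of thumb: a dE* above roughly 2-3 is noticeable to the naked eye,
# so a detection limit could be defined as the lowest hexanal
# concentration producing a dE* above such a threshold.
print(f"Colour change dE*76 = {delta_e:.1f}")
```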
Abstract:
In order to estimate depth through supervised deep learning-based stereo methods, it is necessary to have access to precise ground truth depth data. While the gathering of precise labels is commonly tackled by deploying depth sensors, this is not always a viable solution. For instance, in many applications in the biomedical domain, the choice of sensors capable of sensing depth at small distances with high precision on difficult surfaces (which exhibit non-Lambertian properties) is very limited. It is therefore necessary to find alternative techniques to gather ground truth data without having to rely on external sensors. In this thesis, two different approaches were tested to produce supervision data for biomedical images. The first aims to obtain input stereo image pairs and disparities through simulation in a virtual environment, while the second relies on a non-learned disparity estimation algorithm to produce noisy disparities, which are then filtered by means of hand-crafted confidence measures to create noisy labels for a subset of pixels. Of the two, the second approach, referred to in the literature as proxy labeling, has shown the best results and has even outperformed the non-learned disparity estimation algorithm used for supervision.
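The abstract names neither the classical matcher nor the confidence measures used; the sketch below illustrates the general proxy-labeling recipe, with OpenCV's SGBM as the non-learned algorithm and a left-right consistency check standing in for the hand-crafted confidence measure:

```python
import cv2
import numpy as np

def proxy_labels(left_gray, right_gray, lr_tol=1.0):
    """Produce sparse proxy disparity labels from a non-learned matcher,
    keeping only pixels that pass a left-right consistency check."""
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)

    # Disparity of the left image; SGBM returns fixed-point values * 16.
    disp_left = sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0

    # Disparity of the right image via the standard horizontal-flip trick.
    disp_right = sgbm.compute(right_gray[:, ::-1], left_gray[:, ::-1])
    disp_right = (disp_right.astype(np.float32) / 16.0)[:, ::-1]

    h, w = disp_left.shape
    ys = np.arange(h)[:, None].repeat(w, axis=1)
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    x_right = np.clip((xs - disp_left).round().astype(int), 0, w - 1)

    # Confidence: keep a pixel only if the two views agree on its disparity.
    consistent = np.abs(disp_left - disp_right[ys, x_right]) <= lr_tol
    valid = consistent & (disp_left > 0)

    return np.where(valid, disp_left, np.nan)  # NaN = no supervision here
```

The NaN pixels are simply excluded from the training loss, so the network is supervised only where the confidence measure trusts the classical algorithm.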
Abstract:
Unmanned Aerial Vehicles (UAVs) equipped with cameras have been rapidly deployed in a wide range of applications, such as smart cities, agriculture or search and rescue. Even though UAV datasets exist, the number of open, high-quality UAV datasets is limited. In this work, we aim to overcome this lack of high-quality annotated data by developing a simulation framework for the parametric generation of synthetic data. The framework accepts input via a serializable format. The input specifies which environment preset is used and which objects are to be placed in the environment, along with their position and orientation as well as additional information such as object colour and size. The result is an environment that is able to produce typical UAV data: RGB images from the UAV's camera, plus the altitude, roll, pitch and yaw of the UAV. Beyond the image generation process, we improve the photorealism of the resulting image data by using synthetic-to-real transfer learning methods. Transfer learning focuses on storing knowledge gained while solving one problem and applying it to a different, although related, problem. This approach has been widely researched in related fields and results demonstrate it to be an interesting area to investigate. Since simulated images are easy to create and synthetic-to-real translation has shown good-quality results, we are able to generate pseudo-realistic images. Furthermore, object labels are inherently given, so we are capable of extending the already existing UAV datasets with realistic-quality images and high-resolution metadata. During the development of this thesis we have been able to produce a result of 68.4% on UAVid, which can be considered a new state-of-the-art result on this dataset.
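The serializable input format is not shown in the abstract; the sketch below illustrates what such a parametric scene description and its loader could look like (the JSON schema and all field names are assumptions, not the framework's actual format):

```python
import json
from dataclasses import dataclass

# Hypothetical scene description: environment preset plus objects with
# position, orientation, colour and size, as the abstract describes.
SCENE_JSON = """
{
  "environment_preset": "urban_block",
  "objects": [
    {"type": "car", "position": [12.0, 3.5, 0.0],
     "orientation_deg": [0, 0, 90], "color": "red", "size": 1.0},
    {"type": "tree", "position": [-4.0, 8.0, 0.0],
     "orientation_deg": [0, 0, 0], "color": "green", "size": 1.3}
  ]
}
"""

@dataclass
class SceneObject:
    type: str
    position: list
    orientation_deg: list
    color: str
    size: float

def load_scene(text):
    spec = json.loads(text)
    objects = [SceneObject(**o) for o in spec["objects"]]
    return spec["environment_preset"], objects

preset, objects = load_scene(SCENE_JSON)
print(preset, [o.type for o in objects])
```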
Abstract:
The classification of 3D geometric data such as point clouds is an emerging topic in computer vision, as it finds application in many contexts in autonomous driving, robotics and augmented reality. Although a large number of sensors capable of acquiring real scans are available on the market, their annotation constitutes a bottleneck for dataset generation. To address this problem, domain adaptation is often used, exploiting annotated synthetic data. The goal of this thesis is the analysis and implementation of domain adaptation methods for point cloud classification by means of pseudo-labels. In particular, experiments were conducted within the RefRec framework, evaluating the possibility of replacing the pre-existing model with new deep learning architectures. Among these, a Transformer with input masking achieved results above the state of the art on the synthetic-to-real adaptation (ModelNet->ScanNet) examined in this thesis.
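RefRec's actual training procedure is more elaborate than this, but the core pseudo-labeling step can be sketched as follows, assuming a generic PyTorch point cloud classifier pre-trained on the synthetic source domain:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def make_pseudo_labels(model, real_loader, threshold=0.9):
    """Label unannotated real scans with the source-trained model,
    keeping only predictions above a confidence threshold."""
    model.eval()
    kept_points, kept_labels = [], []
    for points in real_loader:            # (B, N, 3) point clouds, no labels
        probs = F.softmax(model(points), dim=1)
        conf, pred = probs.max(dim=1)
        mask = conf >= threshold          # trust only confident predictions
        kept_points.append(points[mask])
        kept_labels.append(pred[mask])
    return torch.cat(kept_points), torch.cat(kept_labels)

# The selected (point cloud, pseudo-label) pairs would then be mixed with
# the annotated synthetic data to fine-tune the model on the target domain.
```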
Abstract:
The importance of product presentation in the marketing industry is well known. Labels are crucial for providing information to the buyer, but at a modest additional expense, a beautiful label with exquisite embellishments may also give the goods a sensation of high quality and elegance. Enhancing the capabilities of stamping machines is required to keep up with the increasing speed of production lines in the modern manufacturing industry and to offer new opportunities for customisation. It is in this context of improvements and refinements that this work takes place. The thesis was developed during an internship at Studio D, the firm that designs the mechanics of the machines produced by Cartes. The aim of this work is to study possible upgrades for the existing hot stamping machines. The main focus is centred on two objectives: first, evaluating the pressing forces generated by these machines and characterising how the mat used in the stamping process reacts to such forces; second, proposing a new configuration for the press mechanism in order to improve the rigidity and performance of the machines. The first objective is reached through a combined approach: the mat is roughly characterised with experimental data, while the frame of the machine is studied through FEM analysis. The results obtained are combined and used to upgrade a worksheet that allows the forces exerted by the machines to be estimated. The second objective is reached with the proposal of new, improved designs for the main components of the machines.
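The force-estimation worksheet is not reproduced in the abstract; one simple way the two characterisations could be combined is a springs-in-series model, as sketched below (all stiffness and stroke values are made-up placeholders, not data from the thesis):

```python
# Illustrative springs-in-series model: the press drive imposes a total
# displacement that is shared between the deformable mat and the machine
# frame; the resulting force follows from the combined stiffness.
k_mat = 2.0e6    # N/m, hypothetical stiffness from the experimental mat data
k_frame = 5.0e7  # N/m, hypothetical stiffness from the FEM frame study

# Equivalent stiffness of two springs in series: 1/k_eq = 1/k_mat + 1/k_frame
k_eq = 1.0 / (1.0 / k_mat + 1.0 / k_frame)

imposed_displacement = 0.5e-3  # m, hypothetical press stroke into the mat
force = k_eq * imposed_displacement

print(f"Estimated pressing force: {force:.0f} N")
```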
Abstract:
Privacy issues and data scarcity in the PET field call for efficient methods to expand datasets via the synthetic generation of new data that cannot be traced back to real patients and that are also realistic. In this thesis, machine learning techniques were applied to 1001 amyloid-beta PET images, each associated with an Alzheimer's disease diagnosis: 540 were evaluated as positive, 457 as negative and 4 as unknown. The Isomap algorithm was used as a manifold learning method to reduce the dimensions of the PET dataset; a numerical scale-free interpolation method was applied to invert the dimensionality reduction map. The interpolant was tested on the PET images via LOOCV, where the removed images were compared with the reconstructed ones using the mean SSIM index (MSSIM = 0.76 ± 0.06). The effectiveness of this measure is questioned, since it indicated slightly higher performance for a comparison method using PCA (MSSIM = 0.79 ± 0.06), which nonetheless gave reconstructed images of clearly poorer quality than those recovered by the numerical inverse mapping. Ten synthetic PET images were generated and, after being mixed with ten originals, were sent to a team of clinicians for a visual assessment of their realism; no significant agreement was found either between the clinicians and the true image labels or among the clinicians, meaning that original and synthetic images were indistinguishable. The future perspective of this thesis points to the improvement of the amyloid-beta PET research field by increasing the available data, overcoming the constraints of data acquisition and privacy issues. Potential improvements can be achieved via refinements of the manifold learning and inverse mapping stages of the PET image analysis, by exploring different combinations of algorithm parameters and by applying other non-linear dimensionality reduction algorithms. A final prospect of this work is the search for new methods to assess image reconstruction quality.
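The exact scale-free interpolation method is not specified in the abstract; the sketch below shows the shape of one LOOCV fold, substituting SciPy's RBF interpolator for the inverse map and scikit-image's SSIM for the comparison, on placeholder data:

```python
import numpy as np
from sklearn.manifold import Isomap
from scipy.interpolate import RBFInterpolator
from skimage.metrics import structural_similarity

# Placeholder data standing in for 1001 flattened PET images.
rng = np.random.default_rng(0)
images = rng.random((1001, 64 * 64))

# Dimensionality reduction with Isomap.
embedding = Isomap(n_components=10).fit_transform(images)

# Leave-one-out: drop image i, fit an inverse map (low-dim -> image space)
# on the rest, then reconstruct the held-out image from its embedding.
i = 0
train = np.delete(np.arange(len(images)), i)
inverse_map = RBFInterpolator(embedding[train], images[train])
reconstruction = inverse_map(embedding[i:i + 1])[0]

score = structural_similarity(
    images[i].reshape(64, 64), reconstruction.reshape(64, 64),
    data_range=1.0)
print(f"SSIM for held-out image: {score:.2f}")
```

Averaging the per-fold SSIM over all images yields the MSSIM figure reported in the abstract.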
Abstract:
The ability to create hybrid systems that blend different paradigms has now become a requirement for complex AI systems, which are usually made of more than one component. In this way, it is possible to exploit the advantages of each paradigm and the potential of different approaches, such as symbolic and non-symbolic ones. In particular, non-symbolic approaches are often exploited for their efficiency, effectiveness and ability to manage large amounts of data, while symbolic approaches are exploited to ensure explainability, fairness, and trustworthiness in general. The thesis lies in this context, in particular in the design and development of symbolic technologies that can be easily integrated with, and made interoperable with, other AI technologies. 2P-Kt is a symbolic ecosystem developed for this purpose: it provides a logic programming (LP) engine which can be easily extended and customised to deal with specific needs. The aim of this thesis is to extend 2P-Kt to support constraint logic programming (CLP) as one of the main paradigms for solving highly combinatorial problems, given a declarative problem description and a general constraint-propagation engine. A real case study concerning school timetabling is described to show a practical usage of the CLP(FD) library implemented. Since CLP represents only one particular scenario for extending LP to domain-specific scenarios, in this thesis we also present a more general framework: Labelled Prolog, which extends LP with labelled terms and, in particular, labelled variables. The designed framework shows how it is possible to frame all variations and extensions of LP under a single language, reducing the huge number of existing languages and libraries and focusing more on how to manage different domain needs using labels, which can be associated with every kind of term. The mapping of CLP into Labelled Prolog is also discussed, as well as the benefits of the provided approach.
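2P-Kt itself is a Kotlin-based ecosystem and its CLP(FD) library is not shown here; as a language-neutral illustration of the declarative-constraints-plus-labeling style that CLP(FD) supports, the following is a tiny finite-domain solver sketch for a toy timetabling problem (all names and constraints are invented):

```python
from itertools import product

# Toy CLP(FD)-style problem: assign each class a time slot from a finite
# domain so that classes sharing a teacher or a student group do not
# overlap. This mimics the constraint + labeling style of a CLP(FD) program.
classes = ["math_1A", "math_2B", "physics_1A"]
domains = {c: [1, 2, 3, 4] for c in classes}          # available slots
teacher = {"math_1A": "Rossi", "math_2B": "Rossi", "physics_1A": "Bianchi"}

def consistent(assignment):
    # Same teacher => different slots; same group (name suffix) => different slots.
    items = list(assignment.items())
    for (c1, s1), (c2, s2) in product(items, items):
        if c1 < c2 and s1 == s2:
            if teacher[c1] == teacher[c2] or c1[-2:] == c2[-2:]:
                return False
    return True

def label(assignment, remaining):
    """Depth-first labeling: pick a variable, try each domain value."""
    if not remaining:
        return assignment
    var, rest = remaining[0], remaining[1:]
    for value in domains[var]:
        trial = {**assignment, var: value}
        if consistent(trial):               # naive check instead of propagation
            solution = label(trial, rest)
            if solution:
                return solution
    return None

print(label({}, classes))
```

A real CLP(FD) engine would prune domains by constraint propagation between labeling steps rather than re-checking full assignments, but the declarative shape of the problem is the same.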