936 results for Data Acquisition Methods.


Relevance: 90.00%

Abstract:

In this work, mixed oxides were synthesized by two methods: polymeric precursor and gel-combustion. Lanthanum nickelate, lanthanum cobaltate, and lanthanum cuprate were synthesized by the polymeric precursor method, heat-treated at 300 °C for 2 hours, and calcined at 800 °C for 6 h in an air atmosphere. The gel-combustion method, using urea and citric acid as fuels, produced for each fuel the following oxides: lanthanum ferrate, lanthanum cobaltate, and lanthanum cobalt ferrate, which were submitted to a microwave-assisted combustion process lasting at most 10 min. The samples were characterized by thermogravimetric analysis, X-ray diffraction, N2 physisorption (BET method), and scanning electron microscopy. The catalytic depolymerization reactions of poly(methyl methacrylate) were performed in a silica reactor with a catalytic and heating system equipped with a data acquisition system and a gas chromatograph. Among the catalysts synthesized by the polymeric precursor method, lanthanum cuprate was the best for the depolymerization of the recycled polymer, reaching 100% conversion in the shortest time (554 min), while lanthanum nickelate was the best for the pure polymer, with 100% conversion in 314 min. For the gel-combustion method with urea as fuel, the best result for the pure polymer was obtained with lanthanum ferrate (100% conversion in 657 min), and for the recycled polymer with lanthanum cobaltate (100% conversion in 779 min). Using citric acid, the best result for the pure polymer was lanthanum ferrate (100% conversion in 821 min) and, for the recycled polymer, lanthanum ferrate with 98.28% conversion in 635 min.

Relevance: 90.00%

Abstract:

When it comes to information sets in real life, pieces of the whole set are often unavailable. This problem can originate from various causes and therefore follow different patterns. In the literature, it is known as Missing Data. The issue can be handled in several ways, from discarding incomplete observations, to estimating what the missing values originally were, to simply ignoring the fact that some values are missing. The methods used to estimate missing data are called Imputation Methods. The work presented in this thesis has two main goals. The first is to determine whether any interactions exist between Missing Data, Imputation Methods, and Supervised Classification algorithms when they are applied together. For this first problem, we consider a scenario in which the databases used are discrete, meaning that no relation between observations is assumed. These datasets underwent processes involving different combinations of the three components mentioned. The outcome showed that the missing data pattern strongly influences the results produced by a classifier. In some cases, the complex imputation techniques investigated in the thesis obtained better results than simple ones. The second goal of this work is to propose a new imputation strategy, this time constraining the specifications of the previous problem to a special kind of dataset, the multivariate Time Series. We designed new imputation techniques for this particular domain and combined them with some of the contrasted strategies tested in the previous chapter of this thesis. The time series were also subjected to processes involving missing data and imputation in order to finally propose an overall better imputation method. In the final chapter of this work, a real-world example is presented, describing a water quality prediction problem. The databases that characterize this problem contain genuinely missing values, which provides a real-world benchmark to test the algorithms developed in this thesis.
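As a hedged illustration of the kind of interaction studied here, the sketch below (using scikit-learn and the generic Iris dataset, not the thesis databases) masks values at random, imputes them with a simple and a more complex method, and compares the downstream classifier accuracy.

```python
# A minimal sketch (not the thesis pipeline) contrasting a simple and a more
# complex imputation method before supervised classification, using scikit-learn.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.impute import SimpleImputer, KNNImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# Simulate a "missing completely at random" pattern by masking 20% of the entries.
rng = np.random.default_rng(0)
mask = rng.random(X.shape) < 0.2
X_missing = X.copy()
X_missing[mask] = np.nan

for name, imputer in [("mean", SimpleImputer(strategy="mean")),
                      ("kNN", KNNImputer(n_neighbors=5))]:
    X_imputed = imputer.fit_transform(X_missing)
    score = cross_val_score(RandomForestClassifier(random_state=0),
                            X_imputed, y, cv=5).mean()
    print(f"{name} imputation -> mean CV accuracy: {score:.3f}")
```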

Relevance: 90.00%

Abstract:

The research aims to analyze tools and methods for the application of H-BIM, understanding their critical issues and providing useful solutions in this field. At the same time, its purpose is not limited to the simple production of semantically structured, parametric 3D models starting from a point cloud obtained with a digital survey; rather, it seeks to define the criteria and methods for applying H-BIM within the entire process. The chosen methodological approach starts from the state of the art on H-BIM, with a study of the current regulations on the subject and of the most relevant case studies. A complete critical review of the literature on BIM and H-BIM technology was conducted, analyzing experiences of BIM use in the global construction sector. Furthermore, in order to promote smart solutions within Facility Management, it was necessary to analyze the critical issues in current procedures, review the processes and methods for collecting and managing data, and identify the procedures needed to ensure successful implementation. The procedural and operational potential of the systematic use of digital innovations from a Facility Management perspective was highlighted, together with the study of data acquisition, processing, and post-production tools. Testing was carried out on specific case studies for the analysis of the Scan-to-BIM phase, differentiated by type of use, construction date, ownership, and location. The path followed made it possible to highlight the meaning and implications of using BIM in Facility Management, based on a differentiation of the applications of the BIM model as existing conditions vary. Finally, conclusions were drawn and recommendations formulated regarding the future use of H-BIM technology in the construction sector, in particular by defining the emerging frontier of the Digital Twin as a necessary vehicle in the future of Construction 4.0.

Relevance: 90.00%

Abstract:

In the last decade, manufacturing companies have been facing two significant challenges. First, digitalization requires adopting Industry 4.0 technologies and allows creating smart, connected, self-aware, and self-predictive factories. Second, the attention to sustainability requires evaluating and reducing the impact of the implemented solutions from economic and social points of view. In manufacturing companies, the maintenance of physical assets plays a critical role. Increasing the reliability and availability of production systems minimizes downtime; in addition, proper system functioning avoids production waste and potentially catastrophic accidents. Digitalization and new ICT technologies have assumed a relevant role in maintenance strategies: they allow assessing the health condition of machinery at any point in time, and they allow predicting its future behavior, so that maintenance interventions can be planned and the useful life of components can be exploited up to the instant before failure. This dissertation provides insights on Predictive Maintenance goals and tools in Industry 4.0 and proposes a novel data acquisition, processing, sharing, and storage framework that addresses typical issues encountered by machine producers and users. The research elaborates on two research questions that narrow down the potential approaches to data acquisition, processing, and analysis for fault diagnostics in evolving environments. The research activity is developed according to a research framework in which the research questions are addressed by research levers that are explored through research topics. Each topic requires a specific set of methods and approaches; however, the overarching methodological approach presented in this dissertation includes three fundamental aspects: the maximization of the quality level of input data, the use of Machine Learning methods for data analysis, and the use of case studies deriving from both controlled environments (laboratory) and real-world instances.
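As a hedged illustration only (not the framework proposed in the dissertation), the sketch below shows a common Machine Learning building block for fault diagnostics: an Isolation Forest trained on features from healthy machinery that flags anomalous sensor readings. All feature names and numbers are hypothetical.

```python
# A minimal sketch of ML-based fault diagnostics on acquired sensor data: an
# Isolation Forest flags anomalous vibration readings that could trigger a
# predictive-maintenance intervention. Data and features are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features extracted from acquired signals: RMS and peak amplitude.
healthy = rng.normal(loc=[1.0, 3.0], scale=[0.1, 0.3], size=(500, 2))
faulty = rng.normal(loc=[1.8, 5.5], scale=[0.2, 0.5], size=(20, 2))

model = IsolationForest(contamination=0.05, random_state=0).fit(healthy)

# -1 marks samples the model considers anomalous (potential incipient faults).
labels = model.predict(np.vstack([healthy[:5], faulty[:5]]))
print(labels)
```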

Relevance: 90.00%

Abstract:

In elite sport, technology has come to play a very important role in the analysis and evaluation of performance. In recent years, new technologies have emerged and pre-existing ones (i.e. accelerometers, gyroscopes, and video analysis software) have improved in terms of sampling, data acquisition, and sensor size, allowing sensors to be worn and embedded inside sports equipment. Technology has always been at the service of athletes as a support tool for reaching top sporting results. For this reason, the functional evaluation of the athlete combined with the use of technology aims to assess athletes' improvements by measuring physical condition and/or technical competence in a given sport. The objective of this thesis is to study the use of technological applications and to identify new methods of performance evaluation in some aquatic sports. The first part (chapters 1-5) focuses on the prototype technology called E-kayak and its various applications in sprint kayaking. In these works, the reliability of the data provided by the E-kayak system was verified against the systems reported in the literature; moreover, new parameters useful for understanding the paddler's performance model were investigated. The second part (chapter 6) concerns the kinematic analysis of the water polo player's vertical push, through 2D video analysis, to identify the force-velocity and power-velocity relationships directly in the water. This pilot study may provide useful indications for monitoring and conditioning strength and power directly in the water. Finally, the third part (chapters 7-8) focuses on identifying the Fibonacci sequence (the divine proportion) in front-crawl and butterfly swimming. The results of these studies suggest that the stroke rhythm maintained over middle/long distances plays a key role; moreover, the level of self-similarity increases with swimming technique.
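As a hedged illustration of how a force-velocity profile can be derived from video-based estimates, the sketch below fits a linear F-v relationship and the corresponding power-velocity peak; the numbers are hypothetical, not the thesis measurements.

```python
# A minimal sketch (hypothetical numbers) of estimating a force-velocity profile:
# a linear fit F(v) = F0 + slope*v, from which v0 and the peak power of the
# linear profile, Pmax = F0*v0/4, are derived.
import numpy as np

# Hypothetical push-off velocities (m/s) and estimated forces (N).
v = np.array([0.4, 0.6, 0.8, 1.0, 1.2])
F = np.array([620.0, 540.0, 470.0, 390.0, 310.0])

slope, F0 = np.polyfit(v, F, 1)   # slope is negative for a decreasing F-v line
v0 = -F0 / slope                  # extrapolated velocity at zero force
P_max = F0 * v0 / 4               # peak power of a linear F-v profile
print(f"F0 = {F0:.0f} N, v0 = {v0:.2f} m/s, Pmax = {P_max:.0f} W")
```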

Relevance: 90.00%

Abstract:

In the framework of industrial problems, the application of Constrained Optimization is known to have very good modeling capability and performance overall, and it stands as one of the most powerful, explored, and exploited tools to address prescriptive tasks. The number of applications is huge, ranging from logistics to transportation, packing, production, telecommunications, scheduling, and much more. The main reason behind this success is the remarkable effort put in over the last decades by the OR community to develop realistic models and devise exact or approximate methods to solve the largest variety of constrained or combinatorial optimization problems, together with the spread of computational power and easily accessible OR software and resources. On the other hand, technological advancements have led to a data wealth never seen before and increasingly push towards methods able to extract useful knowledge from it; among data-driven methods, Machine Learning techniques appear to be among the most promising, thanks to their successes in domains like Image Recognition, Natural Language Processing, and game playing, as well as to the amount of research involved. The purpose of the present research is to study how Machine Learning and Constrained Optimization can be used together to build systems able to leverage the strengths of both methods: this would open the way to exploiting decades of research on resolution techniques for COPs while constructing models able to adapt and learn from available data. In the first part of this work, we survey the existing techniques and classify them according to the type, method, or scope of the integration; subsequently, we introduce Moving Target, a novel and general algorithm devised to inject knowledge into learning models through constraints. In the last part of the thesis, two applications stemming from real-world projects, carried out in collaboration with Optit, are presented.
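As a hedged, generic illustration of injecting knowledge into a learning model through a constraint (explicitly not the Moving Target algorithm introduced in the thesis), the sketch below adds a soft penalty term to a least-squares fit; the constraint and data are invented for the example.

```python
# A generic, minimal sketch of constraint injection via a soft penalty on the
# training loss: loss = MSE + lam * min(b, 0)^2 enforces b >= 0 approximately.
# This is an illustration only, not the thesis's Moving Target algorithm.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=200)
y = 2.0 * x + 0.5 + rng.normal(0, 0.1, size=200)

# Domain constraint (assumed for illustration): the intercept b must be >= 0.
w, b, lr, lam = 0.0, -1.0, 0.1, 10.0
for _ in range(2000):
    pred = w * x + b
    grad_w = 2 * np.mean((pred - y) * x)
    grad_b = 2 * np.mean(pred - y) + lam * 2 * min(b, 0.0)  # penalty gradient
    w -= lr * grad_w
    b -= lr * grad_b
print(f"w = {w:.2f}, b = {b:.2f} (constraint b >= 0 enforced softly)")
```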

Relevance: 90.00%

Abstract:

The coastal ocean is a complex environment with extremely dynamic processes that require a high-resolution and cross-scale modeling approach in which all hydrodynamic fields and scales are considered integral parts of the overall system. In the last decade, unstructured-grid models have been used to advance in seamless modeling between scales. On the other hand, the data assimilation methodologies to improve the unstructured-grid models in the coastal seas have been developed only recently and need significant advancements. Here, we link the unstructured-grid ocean modeling to the variational data assimilation methods. In particular, we show results from the modeling system SANIFS based on SHYFEM fully-baroclinic unstructured-grid model interfaced with OceanVar, a state-of-art variational data assimilation scheme adopted for several systems based on a structured grid. OceanVar implements a 3DVar DA scheme. The combination of three linear operators models the background error covariance matrix. The vertical part is represented using multivariate EOFs for temperature, salinity, and sea level anomaly. The horizontal part is assumed to be Gaussian isotropic and is modeled using a first-order recursive filter algorithm designed for structured and regular grids. Here we introduced a novel recursive filter algorithm for unstructured grids. A local hydrostatic adjustment scheme models the rapidly evolving part of the background error covariance. We designed two data assimilation experiments using SANIFS implementation interfaced with OceanVar over the period 2017-2018, one with only temperature and salinity assimilation by Argo profiles and the second also including sea level anomaly. The results showed a successful implementation of the approach and the added value of the assimilation for the active tracer fields. While looking at the broad basin, no significant improvements are highlighted for the sea level, requiring future investigations. Furthermore, a Machine Learning methodology based on an LSTM network has been used to predict the model SST increments.
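For reference, a 3DVar scheme of this kind minimizes the standard variational cost function below (textbook notation, not taken from the thesis), where x_b is the background state, B and R the background and observation error covariances, H the observation operator, and y the observations.

```latex
% Standard 3DVar cost function (general form minimized by schemes such as OceanVar;
% notation is the conventional one, not reproduced from the thesis itself).
J(\mathbf{x}) = \frac{1}{2}\,(\mathbf{x}-\mathbf{x}_b)^{\mathrm{T}} \mathbf{B}^{-1} (\mathbf{x}-\mathbf{x}_b)
  + \frac{1}{2}\,\bigl(H(\mathbf{x})-\mathbf{y}\bigr)^{\mathrm{T}} \mathbf{R}^{-1} \bigl(H(\mathbf{x})-\mathbf{y}\bigr)
```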

Relevance: 90.00%

Abstract:

Vision systems are powerful tools playing an increasingly important role in modern industry to detect errors and maintain product standards. With the wider availability of affordable industrial cameras, computer vision algorithms have been increasingly applied to monitor industrial manufacturing processes. Until a few years ago, industrial computer vision applications relied only on ad-hoc algorithms designed for the specific object and acquisition setup being monitored, with a strong focus on co-designing the acquisition and processing pipeline. Deep learning has overcome these limits, providing greater flexibility and faster re-configuration. In this work, the process to be inspected is the formation of packs of vials entering a freeze-dryer, a common scenario in pharmaceutical active ingredient packaging lines. To ensure that the machine produces proper packs, a vision system is installed at the entrance of the freeze-dryer to detect possible anomalies, with execution times compatible with the production specifications. Other constraints come from the sterility and safety standards required in pharmaceutical manufacturing. This work presents an overview of the production line, with a particular focus on the vision system designed, and of all the trials conducted to obtain the final performance. Transfer learning, which alleviates the requirement for a large amount of training data, combined with data augmentation methods consisting in the generation of synthetic images, was used to effectively increase the performance while reducing the cost of data acquisition and annotation. The proposed vision algorithm is composed of two main subtasks, designed for vial counting and discrepancy detection, respectively. The first was trained on more than 23k vials (about 300 images) and tested on 5k more (about 75 images), whereas 60 training images and 52 testing images were used for the second.
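As a hedged sketch of the transfer learning plus data augmentation approach described above (an assumed setup, not the plant's actual vision pipeline), the code below freezes a pretrained backbone, replaces its head for a two-class pack inspection task, and defines an illustrative augmentation pipeline.

```python
# A minimal sketch of transfer learning with data augmentation, as commonly used
# to cut the cost of data acquisition and annotation in industrial inspection.
# Task, classes, and augmentation choices are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models, transforms

# Illustrative augmentation pipeline applied to training images.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Pretrained backbone with a new head for a binary "pack OK / discrepancy" task.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                     # freeze the pretrained features
model.fc = nn.Linear(model.fc.in_features, 2)   # only this layer is trained

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# Training would iterate over a DataLoader of augmented production images.
```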

Relevance: 90.00%

Abstract:

Recent developments in the field of artificial intelligence have allowed a more adequate classification of the EEG signal. In recent years it has been shown that excellent classification performance can be obtained using Machine Learning (ML) and Deep Learning (DL) techniques, the latter employing convolutional neural networks (CNNs). Deep Learning, however, requires a lot of training data, while EEG datasets are often limited, making it difficult to reach high performance. Data Augmentation methods can alleviate this problem: starting from real data, this technique allows the creation of artificial data, which is essential for increasing the size of the original dataset. The most common application is to use Data Augmentation to enlarge the training set, so that the model/neural network is trained on a larger number of samples, reducing classification errors. Starting from this idea, Data Augmentation has been applied in many fields, and in particular to EEG signal classification. This thesis first describes Data Augmentation methods developed over the years that are also applicable to EEG. It then presents some specific studies that apply Data Augmentation methods to improve the performance of EEG-based classifiers for sleep/wake state identification, emotion recognition, and motor imagery classification.
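As a hedged illustration of two simple augmentation transforms commonly reported for EEG, the sketch below generates new trials by additive Gaussian noise and random circular time shifting; the signal is synthetic, not a real EEG recording.

```python
# A minimal sketch of two simple EEG data-augmentation transforms (additive
# Gaussian noise and circular time shifting). The trial below is a synthetic
# stand-in, not real EEG data.
import numpy as np

rng = np.random.default_rng(0)
fs = 250                                  # assumed sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)  # mock trial

def add_noise(x, sigma=0.1):
    """Return a new trial with additive Gaussian noise."""
    return x + sigma * rng.standard_normal(x.shape)

def time_shift(x, max_shift=50):
    """Return a new trial circularly shifted by a random number of samples."""
    return np.roll(x, rng.integers(-max_shift, max_shift + 1))

augmented = [add_noise(eeg), time_shift(eeg)]
print(len(augmented), augmented[0].shape)
```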

Relevance: 90.00%

Abstract:

Privacy issues and data scarcity in the PET field call for efficient methods to expand datasets via the synthetic generation of new data that cannot be traced back to real patients and that are also realistic. In this thesis, machine learning techniques were applied to 1001 amyloid-beta PET images that had undergone a diagnosis of Alzheimer's disease: 540 evaluations were positive, 457 negative, and 4 unknown. The Isomap algorithm was used as a manifold learning method to reduce the dimensionality of the PET dataset, and a numerical scale-free interpolation method was applied to invert the dimensionality reduction map. The interpolant was tested on the PET images via LOOCV, where the removed images were compared with the reconstructed ones using the mean SSIM index (MSSIM = 0.76 ± 0.06). The effectiveness of this measure is questioned, since it indicated slightly higher performance for a comparison method based on PCA (MSSIM = 0.79 ± 0.06), which gave clearly poorer-quality reconstructed images than those recovered by the numerical inverse mapping. Ten synthetic PET images were generated and, after having been mixed with ten originals, were sent to a team of clinicians for a visual assessment of their realism; no significant agreement was found either between clinicians and the true image labels or among the clinicians, meaning that original and synthetic images were indistinguishable. The future perspective of this thesis is the improvement of the amyloid-beta PET research field by increasing the available data, overcoming the constraints of data acquisition and privacy issues. Potential improvements can be achieved by refining the manifold learning and inverse mapping stages of the PET image analysis, by exploring different combinations of algorithm parameters, and by applying other non-linear dimensionality reduction algorithms. A final prospect of this work is the search for new methods to assess image reconstruction quality.
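As a hedged sketch of the evaluation idea (dimensionality reduction, inverse mapping, and SSIM scoring of reconstructions), the code below uses PCA, since scikit-learn's Isomap exposes no inverse map; the thesis instead inverts Isomap with a custom scale-free interpolation. The images here are random stand-ins, not PET data.

```python
# A minimal sketch: reduce image dimensionality, map back, and score the
# reconstructions with SSIM. PCA stands in for the Isomap + custom inverse
# mapping used in the thesis; images are random, not PET scans.
import numpy as np
from sklearn.decomposition import PCA
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(0)
images = rng.random((100, 32, 32))            # mock images
X = images.reshape(100, -1)

pca = PCA(n_components=10).fit(X)
X_rec = pca.inverse_transform(pca.transform(X))
rec_images = X_rec.reshape(100, 32, 32)

scores = [ssim(a, b, data_range=1.0) for a, b in zip(images, rec_images)]
print(f"MSSIM = {np.mean(scores):.2f} ± {np.std(scores):.2f}")
```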

Relevance: 80.00%

Abstract:

Conventional near-infrared reflectance spectroscopy (NIRS) and hyperspectral imaging (HI) in the near-infrared region (1000-2500 nm) are evaluated and compared, using as the case study the determination of relevant properties related to the quality of natural rubber. The Mooney viscosity (MV) and plasticity indices (PI) of rubber (PI0, original plasticity; PI30, plasticity after accelerated aging; and PRI, the plasticity retention index after accelerated aging) were determined using multivariate regression models. Two hundred and eighty-six rubber samples were measured with conventional and hyperspectral near-infrared reflectance instruments in the range of 1000-2500 nm. The sample set was split into regression (n = 191) and external validation (n = 95) subsets. Three instruments were employed for data acquisition: a line-scanning hyperspectral camera and two conventional FT-NIR spectrometers. Sample heterogeneity was evaluated using hyperspectral images obtained with a resolution of 150 × 150 μm and principal component analysis. The probed sample area required to achieve representativeness (5 cm²; 24,000 pixels) was found to be equivalent to the average of 6 spectra for the 1 cm diameter circular probing window of one FT-NIR instrument; the other spectrophotometer can probe the whole sample in a single measurement. The results show that the rubber properties can be determined with very similar accuracy and precision by Partial Least Squares (PLS) regression models regardless of whether the spectral datasets are produced by HI-NIR or conventional FT-NIR. The best Root Mean Square Errors of Prediction (RMSEPs) of external validation for MV, PI0, PI30, and PRI were 4.3, 1.8, 3.4, and 5.3%, respectively. Though the quantitative results provided by the three instruments can be considered equivalent, the hyperspectral imaging instrument presents a number of advantages: it is about 6 times faster than conventional bulk spectrometers, it produces robust spectral data by ensuring sample representativeness, and it minimizes the effect of the presence of contaminants.
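As a hedged sketch of the chemometric workflow (mock spectra, not the rubber dataset), the code below fits a PLS regression model on a calibration subset and reports the RMSEP on an external validation subset of the same size as used here.

```python
# A minimal sketch of a PLS regression model evaluated with the Root Mean Square
# Error of Prediction (RMSEP) on an external validation subset. Spectra and the
# target property are simulated, not the actual rubber measurements.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((286, 200))                                   # mock NIR spectra
y = X[:, 50] * 40 + X[:, 120] * 20 + rng.normal(0, 1, 286)   # mock Mooney viscosity

X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=95, random_state=0)

pls = PLSRegression(n_components=8).fit(X_cal, y_cal)
y_pred = pls.predict(X_val).ravel()
rmsep = np.sqrt(np.mean((y_val - y_pred) ** 2))
print(f"RMSEP = {rmsep:.2f}")
```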

Relevance: 80.00%

Abstract:

Remotely sensed imagery has been widely used for land use/cover classification thanks to periodic data acquisition and the widespread use of digital image processing systems offering a wide range of classification algorithms. The aim of this work was to evaluate some of the most commonly used supervised and unsupervised classification algorithms under different landscape patterns found in Rondônia, including (1) areas of mid-size farms, (2) fish-bone settlements, and (3) a gradient of forest and Cerrado (Brazilian savannah). Comparison with a reference map based on the kappa statistic resulted in good to superior indicators (best results: K-means, κ = 0.68, 0.77, and 0.64; MaxVer, κ = 0.71, 0.89, and 0.70, respectively, for the three areas mentioned). The results show that choosing a specific algorithm requires taking into account both its capacity to discriminate among various spectral signatures under different landscape patterns and a cost/benefit analysis considering the different steps performed by the operator producing a land cover/use map. It is suggested that a more systematic assessment of the several implementation options of a specific project is needed prior to beginning a land use/cover mapping job.
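As a hedged sketch of the accuracy assessment step, the code below computes the kappa statistic between a classified map and a reference map; the labels are synthetic, not the Rondônia data.

```python
# A minimal sketch of map accuracy assessment with the kappa statistic:
# a classified land-cover map is compared pixel-by-pixel with a reference map.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
reference = rng.integers(0, 3, size=10_000)          # reference land-cover classes
classified = reference.copy()
noise = rng.random(reference.size) < 0.15            # simulate 15% disagreement
classified[noise] = rng.integers(0, 3, size=noise.sum())

kappa = cohen_kappa_score(reference, classified)
print(f"kappa = {kappa:.2f}")
```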

Relevance: 80.00%

Abstract:

Universidade Estadual de Campinas. Faculdade de Educação Física

Relevance: 80.00%

Abstract:

Universidade Estadual de Campinas. Faculdade de Educação Física