986 results for sistemi integrati, CAT tools, machine translation
Abstract:
This thesis analyzes different techniques for detecting active, constant jammers in a satellite uplink communication. The goal is to identify the presence of a jammer by observing a limited number of received samples. To this end, the following binary classifiers were implemented: support vector machine (SVM), multilayer perceptron (MLP), spectrum guarding, and autoencoder. Since these machine learning algorithms depend on the features they receive as input, particular attention was paid to feature selection. The accuracies of detectors trained on different types of information were therefore compared: raw time-domain signals, statistical features, wavelet transforms, and the cyclic spectrum. The patterns produced by extracting these features from the satellite signals can be high-dimensional, so the following dimensionality-reduction algorithms were applied before detection: principal component analysis (PCA) and linear discriminant analysis (LDA). The purpose of this step is not to discard the less relevant features, but to combine them so as to preserve as much information as possible while avoiding overfitting and underfitting. Numerical simulations showed that the cyclic spectrum provides the best features for detection but produces high-dimensional patterns, which is why dimensionality-reduction algorithms were necessary. In particular, PCA extracted better information than LDA, whose accuracies depended too strongly on the type of jammer used during training. Finally, the best-performing algorithm was the multilayer perceptron, which required short training times while achieving high accuracy values.
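The overall pipeline described above (feature extraction, PCA compression, then a binary MLP detector) can be sketched in a few lines with scikit-learn. This is a minimal illustration with synthetic placeholder data, not the thesis's actual satellite signals or hyperparameters:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder data: 1000 feature vectors (e.g. flattened cyclic-spectrum
# patterns); label 1 = jammer present, 0 = clean signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 512))
y = rng.integers(0, 2, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# PCA compresses the high-dimensional pattern before the MLP detector.
detector = make_pipeline(
    StandardScaler(),
    PCA(n_components=32),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
)
detector.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, detector.predict(X_te)))
```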
Abstract:
The following thesis investigates the issues involved in maintaining a Machine Learning model over time, concerning both the versioning of the model itself and of the data on which it is trained, and the tools for monitoring data and their distribution. The themes of Data Drift and Concept Drift are then explored, and the performance of some of the most popular Anomaly Detection techniques, such as VAE, PCA, and Monte Carlo Dropout, is evaluated.
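As an illustration of one of the techniques cited, PCA can serve as a simple drift/anomaly detector by monitoring reconstruction error on incoming batches. A minimal sketch, with synthetic data and an arbitrary threshold:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X_train = rng.normal(0, 1, size=(2000, 20))        # reference distribution
X_drifted = rng.normal(0.8, 1.3, size=(200, 20))   # simulated drifted batch

pca = PCA(n_components=5).fit(X_train)

def reconstruction_error(X):
    # Mean squared error between each sample and its PCA reconstruction.
    X_hat = pca.inverse_transform(pca.transform(X))
    return np.mean((X - X_hat) ** 2, axis=1)

baseline = reconstruction_error(X_train)
threshold = np.percentile(baseline, 99)  # arbitrary cut-off for this sketch
flagged = reconstruction_error(X_drifted) > threshold
print(f"{flagged.mean():.0%} of the new batch flagged as drifted")
```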
Abstract:
This thesis analyzes the performance of Timekettle's WT2 Plus machine interpreting system in a real teaching context. Specifically, three experiments were carried out at the Accademia Europea di Manga using two of the interpreting modes supported by the device. The aim is to assess the quality of a machine interpreting system in a real-world setting, given that studies evaluating the performance of these devices in the actual contexts for which they were developed are currently lacking. The first chapter retraces the history of machine interpreting, followed by an explanation of the technology underlying the systems available on the market and a review of the state of the art. It then briefly outlines the next evolutionary step in machine interpreting and analyzes three case studies similar to the one proposed. The chapter closes with the approach chosen to evaluate the device's performance. The second chapter, devoted to methodology, opens with an overview of the WT2 Plus and its manufacturer, and then describes the method used in the experiments. Specifically, the experiments were conducted during some of the ordinary teaching activities at the Accademia Europea di Manga, involving one English-speaking student and two members of the academic staff. The data collected are analyzed and discussed in chapters 3 and 4, respectively. The results suggest that the device is not, at present, suitable for a real teaching context, owing to the near-total failure of one of the selected modes, an interpreting output that is still too literal, the monotonous intonation of the synthetic voice, and the almost completely incorrect translation of technical terminology.
Abstract:
At the headquarters of the host company Alexide, the lack of a remote automatic control system for the building's entire HVAC (Heating, Ventilation and Air Conditioning) installation was identified, and the best solution proved to be transforming the building into a smart building. I therefore carried out this digital transformation by designing and developing a distributed system able to manage real-time data streams from environmental sensors. The system architecture was developed in C# on .NET, where the data required by the prediction model were collected. Specifically, the system uses data from the HVAC unit, from an indoor temperature sensor, and from the photovoltaic installation on the building. Communication between the distributed system and the HVAC unit takes place over ModBus, while the indoor temperature and photovoltaic readings are collected by sensors that transmit over MQTT; the same protocol is used as the main communication method within the system, relying on a message broker with a publish/subscribe model. The automation of the system also relies on a prediction model whose purpose is to forecast the building's indoor temperature over the coming hours as accurately as possible. For the prediction model I implemented and integrated into the system, I took inspiration from the Sequence to Sequence model introduced by Google in 2014. The developed model is structured as an encoder-decoder based on RNNs, specifically LSTM networks.
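The system itself was written in C# on .NET, but the publish/subscribe flow described above can be sketched generically. Below is a minimal Python subscriber using the paho-mqtt client (v2 callback API); the broker address, topic, and payload format are assumptions for illustration only:

```python
import json
import paho.mqtt.client as mqtt

BROKER = "broker.example.local"         # hypothetical broker address
TOPIC = "building/sensors/temperature"  # hypothetical topic name

def on_connect(client, userdata, flags, reason_code, properties):
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    # Assumes the sensor publishes JSON payloads like {"value": 21.5}.
    reading = json.loads(msg.payload)
    print(f"indoor temperature: {reading['value']} °C")

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883)
client.loop_forever()
```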
Abstract:
High-throughput screening of physical, genetic and chemical-genetic interactions brings important perspectives to the Systems Biology field, as the analysis of these interactions provides new insights into protein/gene function, cellular metabolic variations and the validation of therapeutic targets and drug design. However, such analysis depends on a pipeline connecting different tools that can automatically integrate data from diverse sources and produce a more comprehensive dataset that can be properly interpreted. We describe here the Integrated Interactome System (IIS), an integrative platform with a web-based interface for the annotation, analysis and visualization of the interaction profiles of proteins/genes, metabolites and drugs of interest. IIS works in four connected modules: (i) the Submission module, which receives raw data derived from Sanger sequencing (e.g. the two-hybrid system); (ii) the Search module, which enables the user to search for the processed reads to be assembled into contigs/singlets, or for lists of proteins/genes, metabolites and drugs of interest, and add them to the project; (iii) the Annotation module, which assigns annotations from several databases to the contigs/singlets or lists of proteins/genes, generating tables with automatic annotation that can be manually curated; and (iv) the Interactome module, which maps the contigs/singlets or the uploaded lists to entries in our integrated database, building networks that gather newly identified interactions, protein and metabolite expression/concentration levels, subcellular localization, computed topological metrics, and GO biological process and KEGG pathway enrichment. This module generates an XGMML file that can be imported into Cytoscape or visualized directly on the web. We developed IIS by integrating diverse databases, in response to the need for appropriate tools for the systematic analysis of physical, genetic and chemical-genetic interactions. IIS was validated with yeast two-hybrid, proteomics and metabolomics datasets, but it is also extendable to other datasets. IIS is freely available online at: http://www.lge.ibi.unicamp.br/lnbio/IIS/.
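For readers unfamiliar with the export format, XGMML is a small XML dialect that Cytoscape can import. A schematic two-node network written out with Python's standard library (illustrative only, not actual IIS output; node names are invented):

```python
import xml.etree.ElementTree as ET

# Build a toy XGMML graph: two proteins connected by one interaction edge.
graph = ET.Element("graph", label="demo-network",
                   xmlns="http://www.cs.rpi.edu/XGMML")
ET.SubElement(graph, "node", id="1", label="PROT_A")
ET.SubElement(graph, "node", id="2", label="PROT_B")
ET.SubElement(graph, "edge", source="1", target="2", label="PROT_A-PROT_B")

ET.ElementTree(graph).write("demo.xgmml",
                            xml_declaration=True, encoding="utf-8")
```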
Abstract:
This chapter provides a short review of quantum dot (QD) physics, applications, and perspectives. The main advantage of QDs over bulk semiconductors is that size becomes a control parameter for tailoring the optical properties of new materials. Size changes the confinement energy, which alters the optical properties of the material, such as absorption, refractive index, and emission bands. Therefore, by using QDs one can build several kinds of optical devices. One class of devices transforms electrons into photons, for use as active optical components in illumination and displays. Other devices transform photons into electrons, yielding QD solar cells or photodetectors. At the biomedical interface, the application of QDs, which is the most important aspect of this book, is based on fluorescence, which essentially transforms photons into photons of different wavelengths. This chapter introduces parameters important for QDs' biophotonic applications, such as photostability, excitation and emission profiles, and quantum efficiency. We also present the perspectives for the use of QDs in fluorescence lifetime imaging (FLIM) and Förster resonance energy transfer (FRET), so useful in modern microscopy, and show how to take advantage of the usually unwanted blinking effect to perform super-resolution microscopy.
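The size dependence mentioned above is commonly captured, to first order, by the effective-mass particle-in-a-sphere approximation, in which the band gap widens as the dot radius R shrinks (hence the blue shift of emission in smaller dots). A standard textbook expression, omitting the Coulomb correction, with m_e* and m_h* the electron and hole effective masses:

```latex
% Confinement contribution scales as 1/R^2: smaller dots emit bluer light.
E_{\mathrm{gap}}(R) \approx E_{\mathrm{gap}}^{\mathrm{bulk}}
  + \frac{\hbar^{2}\pi^{2}}{2R^{2}}
    \left(\frac{1}{m_{e}^{*}} + \frac{1}{m_{h}^{*}}\right)
```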
Abstract:
This article aimed to compare the accuracy of the linear measurement tools of different commercial software packages. Eight fully edentulous dry mandibles were selected for this study. Incisor, canine, premolar, first molar and second molar regions were selected. Cone beam computed tomography (CBCT) images were obtained with the i-CAT Next Generation. Linear bone measurements were performed by one observer on the cross-sectional images using three different software packages: XoranCat®, OnDemand3D® and KDIS3D®, all able to assess DICOM images. In addition, 25% of the sample was reevaluated to assess reproducibility. The mandibles were sectioned to obtain the gold standard for each region. Intraclass correlation coefficients (ICC) were calculated to examine the agreement between the two periods of evaluation; one-way analysis of variance with the post-hoc Dunnett test was used to compare each of the software-derived measurements with the gold standard. The ICC values were excellent for all software packages. The smallest differences between the software-derived measurements and the gold standard were obtained with OnDemand3D and KDIS3D (-0.11 and -0.14 mm, respectively), and the largest with XoranCat (+0.25 mm). However, there was no statistically significant difference between the measurements obtained with the different software packages and the gold standard (p > 0.05). In conclusion, linear bone measurements were not influenced by the software package used to reconstruct the image from CBCT DICOM data.
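In outline, the statistical comparison described (one-way ANOVA with Dunnett's post-hoc test against the gold standard) can be reproduced with SciPy, which provides scipy.stats.dunnett from version 1.11 onward. The arrays below are synthetic placeholders seeded with the reported mean offsets, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
gold = rng.normal(10.0, 0.5, size=40)               # physical measurements (mm)
xorancat = gold + rng.normal(0.25, 0.2, size=40)    # placeholder offsets
ondemand3d = gold + rng.normal(-0.11, 0.2, size=40)
kdis3d = gold + rng.normal(-0.14, 0.2, size=40)

# Dunnett's test compares each software package against the gold-standard control.
res = stats.dunnett(xorancat, ondemand3d, kdis3d, control=gold)
print(res.pvalue)  # one p-value per software package
```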
Abstract:
The present work aimed to create a methodology for evaluating the pesticide spraying process with the use of quality tools. Primary, secondary, and tertiary factors were listed and, with the support of the checklist tool, the evaluation list was drawn up. The factors labor, agricultural machine, material, and method were evaluated for 32 spraying processes before pesticide application; each factor received a score, with a total possible sum of 750 points. The mean scores for the factors were 78, 211, 49, 20, and 94 points. Summing the factor scores across the 32 processes, the minimum value found was 230 points and the maximum 620 points. With the proposed methodology, it is possible to identify which common causes in the processes can affect their outcome.
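In essence, the checklist reduces each factor to a sum of scored items. A toy sketch of such scoring (the factor items and point values below are invented for illustration, not the paper's actual checklist):

```python
# Illustrative checklist: each factor groups items scored by the evaluator.
checklist = {
    "labor":    {"operator trained": 20, "protective equipment": 15},
    "machine":  {"nozzles calibrated": 40, "no leaks": 30},
    "material": {"correct dosage": 25},
    "method":   {"weather checked": 20, "records kept": 10},
}

# Score per factor, then the process total used to rank the 32 processes.
factor_scores = {f: sum(items.values()) for f, items in checklist.items()}
total = sum(factor_scores.values())
print(factor_scores, "total:", total)
```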
Abstract:
PURPOSE: To evaluate the sensitivity and specificity of machine learning classifiers (MLCs) for glaucoma diagnosis using spectral domain OCT (SD-OCT) and standard automated perimetry (SAP). METHODS: Observational cross-sectional study. Sixty-two glaucoma patients and 48 healthy individuals were included. All patients underwent a complete ophthalmologic examination, achromatic standard automated perimetry (SAP) and retinal nerve fiber layer (RNFL) imaging with SD-OCT (Cirrus HD-OCT; Carl Zeiss Meditec Inc., Dublin, California). Receiver operating characteristic (ROC) curves were obtained for all SD-OCT parameters and global indices of SAP. Subsequently, the following MLCs were tested using parameters from the SD-OCT and SAP: Bagging (BAG), Naive-Bayes (NB), Multilayer Perceptron (MLP), Radial Basis Function (RBF), Random Forest (RAN), Ensemble Selection (ENS), Classification Tree (CTREE), Ada Boost M1 (ADA), Support Vector Machine Linear (SVML) and Support Vector Machine Gaussian (SVMG). Areas under the receiver operating characteristic curves (aROC) obtained for isolated SAP and OCT parameters were compared with those of MLCs using OCT+SAP data. RESULTS: Combining OCT and SAP data, the MLCs' aROCs varied from 0.777 (CTREE) to 0.946 (RAN). The best OCT+SAP aROC, obtained with RAN (0.946), was significantly larger than that of the best single OCT parameter (p < 0.05), but was not significantly different from the aROC obtained with the best single SAP parameter (p = 0.19). CONCLUSION: Machine learning classifiers trained on OCT and SAP data can successfully discriminate between healthy and glaucomatous eyes. The combination of OCT and SAP measurements improved the diagnostic accuracy compared with OCT data alone.
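A minimal version of the best-performing evaluation (a random forest trained on combined OCT+SAP features, scored by area under the ROC curve) might look as follows in scikit-learn; the feature values are synthetic placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 110                               # 62 glaucoma + 48 healthy, as in the study
X_oct = rng.normal(size=(n, 10))      # placeholder SD-OCT parameters
X_sap = rng.normal(size=(n, 3))       # placeholder SAP global indices
X = np.hstack([X_oct, X_sap])         # combined OCT+SAP feature set
y = np.r_[np.ones(62), np.zeros(48)]  # 1 = glaucoma, 0 = healthy

clf = RandomForestClassifier(n_estimators=200, random_state=0)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("mean aROC:", auc.mean())
```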
Abstract:
Visceral leishmaniasis (VL) is a widely spread zoonotic disease. In Brazil the disease is caused by Leishmania (Leishmania) infantum chagasi. Peridomestic sandflies acquire the etiological agent by feeding on the blood of infected reservoir animals, such as dogs or wildlife. The disease is endemic in Brazil, and epidemic foci have been reported in densely populated cities all over the country. Many clinical features of Leishmania infection are related to the host-parasite relationship, and many candidate virulence factors in parasites that cause VL have been studied, such as the A2 genes. The A2 gene was first isolated in 1994, and in 2005 three new alleles were described in Leishmania (Leishmania) infantum. In the present study we amplified by polymerase chain reaction (PCR) and sequenced the A2 gene from the genome of a clonal population of L. (L.) infantum chagasi VL parasites. The L. (L.) infantum chagasi A2 gene was amplified, cloned, and sequenced. The amplified fragment showed approximately 90% similarity with another A2 allele amplified in Leishmania (Leishmania) donovani and in L. (L.) infantum described in the literature. However, nucleotide translation shows differences in the protein amino acid sequence, which may be essential to determining the variability of A2 genes in the species of the L. (L.) donovani complex and represents an additional tool to help understand the role this gene family may play in establishing virulence and immunity in visceral leishmaniasis. This knowledge is important for the development of more accurate diagnostic tests and effective tools for disease control.
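The ~90% similarity figure comes from pairwise sequence comparison. As a hedged sketch, percent identity between two alleles can be computed with Biopython's PairwiseAligner; the sequences below are short toy stand-ins, not the actual A2 alleles:

```python
from Bio import Align

aligner = Align.PairwiseAligner()
aligner.mode = "global"

# Toy stand-ins for two A2 alleles; the real sequences are much longer.
seq_a = "ATGGCTGCTCCTTCTGCTCAA"
seq_b = "ATGGCTGCGCCTTCTACTCAA"

alignment = aligner.align(seq_a, seq_b)[0]
row_a, row_b = alignment[0], alignment[1]   # aligned strings with gaps
matches = sum(x == y for x, y in zip(row_a, row_b) if x != "-" and y != "-")
print(f"percent identity: {matches / alignment.length:.1%}")
```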
Abstract:
Due to the imprecise nature of biological experiments, biological data are often characterized by the presence of redundant and noisy data. This may be due to errors that occurred during data collection, such as contamination of laboratory samples. This is the case for gene expression data, where the equipment and tools currently used frequently produce noisy data. Machine Learning algorithms have been successfully used in gene expression data analysis. Although many Machine Learning algorithms can deal with noise, detecting and removing noisy instances from the training data set can help the induction of the target hypothesis. This paper evaluates the use of distance-based pre-processing techniques for noise detection in gene expression data classification problems. The evaluation analyzes the effectiveness of the investigated techniques in removing noisy data, measured by the accuracy obtained by different Machine Learning classifiers over the pre-processed data.
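A classical distance-based filter of the kind evaluated is Wilson's edited nearest-neighbor (ENN) rule: discard any training instance whose label disagrees with the prediction of its k nearest neighbors. A minimal sketch on synthetic data (the paper's exact technique set may differ):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def enn_filter(X, y, k=3):
    """Drop instances misclassified by their k nearest neighbors (Wilson's ENN)."""
    keep = np.zeros(len(y), dtype=bool)
    for i in range(len(y)):
        mask = np.arange(len(y)) != i          # leave the instance itself out
        knn = KNeighborsClassifier(n_neighbors=k).fit(X[mask], y[mask])
        keep[i] = knn.predict(X[i:i + 1])[0] == y[i]
    return X[keep], y[keep]

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(int)
y[rng.choice(200, 20, replace=False)] ^= 1     # inject 10% label noise
X_clean, y_clean = enn_filter(X, y)
print(f"kept {len(y_clean)} of {len(y)} instances")
```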
Abstract:
This paper proposes an architecture for machining process and production monitoring to be applied in machine tools with open computer numerical control (CNC). A brief description of the advantages of using open CNC for machining process and production monitoring is presented, with an emphasis on a CNC architecture using a personal computer (PC)-based human-machine interface. The proposed architecture uses CNC data and sensors to gather information about the machining process and production. It allows the development of different levels of monitoring systems with minimum investment, minimum need for sensor installation, and low intrusiveness to the process. Successful examples of the utilization of this architecture in a laboratory environment are briefly described. In conclusion, it is shown that a wide range of monitoring solutions can be implemented in production processes using the proposed architecture.
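The lowest monitoring level described, reading process data already exposed by the open CNC and raising alarms without extra sensors, can be sketched generically. The reader function below is a hypothetical placeholder, since the real interface depends on the CNC vendor's API:

```python
import time
import random

def read_cnc_data():
    # Hypothetical stand-in for an open-CNC data interface
    # (spindle load, feed rate, and similar process variables).
    return {"spindle_load_pct": random.uniform(20, 90),
            "feed_rate_mm_min": random.uniform(100, 800)}

def monitor(period_s=1.0, overload_pct=85.0):
    """Poll CNC data and raise a simple alarm: one level of the architecture."""
    while True:
        data = read_cnc_data()
        if data["spindle_load_pct"] > overload_pct:
            print("ALARM: spindle overload", data)
        time.sleep(period_s)

monitor()  # runs until interrupted
```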
Abstract:
This work proposes a new approach using a committee machine of artificial neural networks to classify masses found in mammograms as benign or malignant. Three shape factors, three edge-sharpness measures, and 14 texture measures are used for the classification of 20 regions of interest (ROIs) related to malignant tumors and 37 ROIs related to benign masses. A group of multilayer perceptrons (MLPs) is employed as a committee machine of neural network classifiers. The classification results are reached by combining the responses of the individual classifiers. Experiments involving changes in the learning algorithm of the committee machine are conducted. The classification accuracy is evaluated using the area A(z) under the receiver operating characteristic (ROC) curve. The A(z) result for the committee machine is compared with the A(z) results obtained using MLPs and single-layer perceptrons (SLPs), as well as a linear discriminant analysis (LDA) classifier. Tests are carried out using Student's t-distribution. The committee machine classifier outperforms the MLP, SLP, and LDA classifiers in the following cases: with the shape measure of spiculation index, the A(z) values of the four methods are, in order, 0.93, 0.84, 0.75, and 0.76; and with the edge-sharpness measure of acutance, the values are 0.79, 0.70, 0.69, and 0.74. Although the features with which improvement is obtained with the committee machine are not the same as those that provided the maximal value of A(z) (A(z) = 0.99 with some shape features, with or without the committee machine), they correspond to features that are not critically dependent on the accuracy of the boundaries of the masses, which is an important result. (c) 2008 SPIE and IS&T.
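The committee machine described combines the responses of several individually trained MLPs. A minimal sketch with scikit-learn, averaging the members' predicted probabilities over synthetic stand-ins for the shape, edge-sharpness, and texture features:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
X = rng.normal(size=(57, 20))         # 57 ROIs, 20 features as in the paper
y = np.r_[np.ones(20), np.zeros(37)]  # 20 malignant, 37 benign

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

# Committee: several MLPs with different initializations; average their votes.
committee = [MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                           random_state=seed).fit(X_tr, y_tr)
             for seed in range(5)]
proba = np.mean([m.predict_proba(X_te)[:, 1] for m in committee], axis=0)
print("committee Az:", roc_auc_score(y_te, proba))
```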
Abstract:
Balance problems in hemiparetic patients after stroke can be caused by different impairments in the physiological systems involved in postural control, including sensory afferents, movement strategies, biomechanical constraints, cognitive processing, and perception of verticality. Balance impairments and disabilities must be appropriately addressed. This article reviews the most common balance abnormalities in hemiparetic patients with stroke and the main tools used to diagnose them.