935 results for automatic summarization
Abstract:
The inferior alveolar nerve (IAN) lies within the mandibular canal, also referred to as the inferior alveolar canal in the literature. Detecting this nerve is important during maxillofacial surgery and when planning dental implants. The poor quality of cone-beam computed tomography (CBCT) and computed tomography (CT) scans and/or bone gaps within the mandible make this task harder, posing a challenge to the human experts who must detect the nerve manually and making the process time-consuming. This thesis therefore investigates two methods to automatically detect the IAN: a non-data-driven technique and a deep-learning method. The latter tracks the IAN position at each frame, leveraging detections obtained with the deep neural network CenterNet, fine-tuned for our task, together with temporal and spatial information.
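As a rough illustration of the tracking step, a nearest-neighbour association between per-slice detections could look as follows (our hedged reconstruction, not the thesis code; the detect callback, the distance threshold, and the hold-last-position policy are assumptions):

```python
# Sketch: run a per-slice detector (e.g. the fine-tuned CenterNet) and keep,
# at each slice, the detection closest to the previously tracked position.
import numpy as np

def track_ian(slices, detect, max_jump=10.0):
    """slices: iterable of 2D image arrays (consecutive CT/CBCT slices).
    detect: function mapping a slice to an (N, 2) array of candidate points."""
    track, prev = [], None
    for img in slices:
        candidates = detect(img)
        if len(candidates) == 0:
            track.append(prev)          # no detection: hold the last position
            continue
        if prev is None:
            best = candidates[0]        # initialise from the first detection
        else:
            d = np.linalg.norm(candidates - prev, axis=1)
            best = candidates[d.argmin()] if d.min() < max_jump else prev
        track.append(best)
        prev = best
    return track
```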
Abstract:
This work presents the state of the art of automated guided vehicles (AGVs), describing AGV types and the technologies they employ. AGV applications are illustrated through the work performed during an internship at Toyota. The experience acquired in implementing automatic forklifts is presented, along with the tools employed in realizing an AGV system. Moreover, the development of a Python program is described that retrieves data stored in a database and processes them to produce heatmaps of vehicle errors. The program has been tested live at customer sites, and the results obtained are explained. Finally, an analysis of natural navigation technology applied to Toyota's AGVs is presented. Natural navigation tests have been run in warehouses to estimate its capabilities and possible applications in the logistics field.
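A minimal sketch of the kind of heatmap program described above (the database file, table, and column names are hypothetical assumptions):

```python
# Fetch error records grouped by vehicle and error code, then plot an
# error-frequency heatmap.
import sqlite3
import numpy as np
import matplotlib.pyplot as plt

conn = sqlite3.connect("agv_logs.db")  # illustrative database file
rows = conn.execute(
    "SELECT vehicle_id, error_code, COUNT(*) FROM errors "
    "GROUP BY vehicle_id, error_code"
).fetchall()
conn.close()

vehicles = sorted({r[0] for r in rows})
codes = sorted({r[1] for r in rows})
grid = np.zeros((len(vehicles), len(codes)))
for v, c, n in rows:
    grid[vehicles.index(v), codes.index(c)] = n

plt.imshow(grid, aspect="auto")
plt.xticks(range(len(codes)), codes, rotation=90)
plt.yticks(range(len(vehicles)), vehicles)
plt.colorbar(label="error count")
plt.tight_layout()
plt.show()
```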
Abstract:
The purpose of this thesis is to present the concept of simulation for automatic machines and how it can be used to test and debug software implemented for an automatic machine. Simulation is used to detect errors and allows the code to be corrected before the machine is built. It permits testing different solutions and improving the software until an optimized version is obtained. Additionally, simulation can be used to keep track of a machine after installation, in order to improve the production process over the machine's life cycle. The central argument of this project is the advantage of using virtual commissioning to test the implemented software in a virtual environment. Such an environment helps avoid potential damage and reduces the time needed to get the machine ready to work. The use of virtual commissioning also allows different solutions to be tested without large losses of time and money; consequently, an optimized solution can be found after testing the different proposed solutions. The implemented software is based on the Object-Oriented Programming paradigm, which provides features such as encapsulation, modularity, and code reusability. This way of programming therefore yields simplified code that is easier to understand and debug, as well as more efficient. Finally, different communication protocols are implemented to allow communication between the real plant and the simulation model. With the data this communication provides, we can gather, in real time, all the information necessary for simulating and analyzing the production process, so as to improve it during the machine's life cycle.
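A toy sketch (entirely illustrative, not the thesis software) of the object-oriented structure advocated above: each machine component is an encapsulated, reusable class that the virtual-commissioning model composes and steps in time.

```python
class Conveyor:
    """Simulated conveyor: encapsulates its own state and behaviour."""
    def __init__(self, speed_m_s):
        self.speed = speed_m_s
        self.position = 0.0

    def step(self, dt):
        self.position += self.speed * dt

class Machine:
    """Modular composition of components, reusable across simulations."""
    def __init__(self, components):
        self.components = components

    def step(self, dt):
        for c in self.components:
            c.step(dt)

machine = Machine([Conveyor(0.5), Conveyor(0.8)])
for _ in range(10):
    machine.step(0.1)  # advance the simulation by 100 ms
```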
Abstract:
Progress in the field of long document summarization depends entirely on the availability of high-quality public datasets with texts of considerable length. It is therefore problematic that such datasets are often available only in English, a severe limitation when addressing languages with limited resources. To this end, we propose LAWSU-IT, a new judicial dataset for Italian long document summarization. LAWSU-IT is the first Italian summarization dataset to feature long documents and to address the judicial domain, and it was built through data cleaning procedures and targeted instance selection, with the goal of obtaining a high-quality long document summarization dataset. Moreover, multiple extractive and abstractive experimental baselines are proposed, using state-of-the-art models and text segmentation approaches. We hope this result will lead to further research and development in Italian long document summarization.
Abstract:
In the context of the classification of orbits in axisymmetric stellar systems, we present a new algorithm able to automatically classify orbits according to their nature. The algorithm applies the correlation integral method to the surface of section (SoS) of the orbit: by fitting the cumulative distribution function built from the consequents on the SoS, we obtain its logarithmic slope m, which is directly related to the orbit's nature: for slopes m ≈ 1 we expect the orbit to be regular, for slopes m ≈ 2 we expect it to be chaotic. This method gives a fast and reliable way to classify orbits and, furthermore, we provide an analytical expression for the probability that an orbit is regular or chaotic given the logarithmic slope m of its correlation integral. Although the method works well statistically, the underlying algorithm can fail in some cases, misclassifying individual orbits under peculiar circumstances. The performance of the algorithm benefits from a rich sampling of the traces on the SoS, which can be obtained with long numerical integrations of the orbits. Finally, we note that the algorithm does not differentiate between the subtypes of regular orbits, namely resonantly trapped and untrapped orbits; such a distinction would be a useful feature, which we leave for future work. Since the result of the analysis is a probability linked to a Gaussian distribution, by the very definition of the distribution some orbits, whatever their true nature, are classified as belonging to the opposite class and populate the probabilistic tails of the distribution. So while the method produces fair statistical results, it lacks absolute classification precision.
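A minimal sketch of the correlation-integral slope estimate described above (our illustration, not the thesis code; the function name, radius grid, and fitting range are assumptions):

```python
# Estimate the log-log slope m of the correlation integral C(r) built
# from the consequents (points) on the surface of section.
import numpy as np

def correlation_integral_slope(points, r_min=1e-3, r_max=1.0, n_r=30):
    # Pairwise distances between consequents on the SoS
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    iu = np.triu_indices(len(points), k=1)
    pair_d = dists[iu]

    # C(r): fraction of pairs closer than r, on a logarithmic radius grid
    radii = np.logspace(np.log10(r_min), np.log10(r_max), n_r)
    c = np.array([(pair_d < r).mean() for r in radii])
    mask = c > 0  # avoid log(0) at the smallest radii
    m, _ = np.polyfit(np.log(radii[mask]), np.log(c[mask]), 1)
    return m

# m close to 1 suggests a regular orbit, m close to 2 a chaotic one.
```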
Abstract:
This work analyzes the problem of soft labeling applied to multi-document summarization; in particular, several techniques are tested for extracting relevant sentences from the documents under consideration, in order to supply the summarization model with the most salient and informative sentences for the summary to be generated. This problem arises to cope with the limits of currently available summarization models, which can process only a limited number of sentences; hence the need to filter the most relevant information when working with long documents. To define the importance metric, syntactic and semantic methods and AMR graph-based representations are taken as references. The reference dataset is Multi-LexSum, which includes three granularities of legal text summarization. The analysis thus consists of extracting sentences from the documents, measuring the chosen metrics, and passing the result to the state-of-the-art model PRIMERA to produce the summary. The generated text is then compared with the provided target summary, considered optimal; working under these conditions, the goal is to define optimal upper-bound thresholds for the accuracy of the metrics, which could extend the work to more detailed analyses should these surpass the current state of the art.
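One plausible semantic relevance scorer of the kind mentioned above could be sketched as follows (an assumption on our part, not the thesis pipeline; the embedding model choice and centroid-similarity ranking are illustrative):

```python
# Rank sentences by cosine similarity to the document centroid and keep
# the top-k to feed the summarization model.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

def top_k_sentences(sentences, k=20):
    emb = model.encode(sentences, convert_to_tensor=True)
    centroid = emb.mean(dim=0, keepdim=True)
    scores = util.cos_sim(emb, centroid).squeeze(1)
    ranked = scores.argsort(descending=True)[:k]
    return [sentences[i] for i in sorted(ranked.tolist())]  # keep document order
```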
Abstract:
Combinatorial decision and optimization problems arise in numerous applications, such as logistics and scheduling, and can be solved with various approaches. Boolean Satisfiability and Constraint Programming solvers are among the most used, and their performance is significantly influenced by the model chosen to represent a given problem. This has led to the study of model reformulation methods, one of which is tabulation, which consists in rewriting the expression of a constraint in terms of a table constraint. To apply it, one should identify which constraints can help and which can hinder the solving process. So far this has been performed by hand, for example in MiniZinc, or automatically with manually designed heuristics, as in Savile Row. However, it has been shown that the performance of these heuristics differs across problems and solvers, in some cases helping and in others hindering the solving procedure. Meanwhile, recent work in the field of combinatorial optimization has shown that Machine Learning (ML) can be increasingly useful in model reformulation steps. This thesis aims to design an ML approach to identify the instances for which Savile Row's heuristics should be activated. Additionally, since the heuristics may miss some good tabulation opportunities, we perform an exploratory analysis towards an ML classifier able to predict whether or not a constraint should be tabulated. The results towards the first goal show that a random forest classifier leads to an increase in the performance of 4 different solvers. The experimental results on the second task show that an ML approach could improve the performance of a solver for some problem classes.
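A minimal sketch of the kind of instance-level classifier described above, assuming placeholder features and labels (the thesis's actual feature set is not reproduced here):

```python
# Train a random forest to predict whether activating Savile Row's
# tabulation heuristics helps on a given problem instance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# X: per-instance features (e.g. numbers of variables/constraints);
# y: 1 if activating the heuristics improved solving, else 0.
X = np.random.rand(200, 8)          # placeholder data
y = (X[:, 0] > 0.5).astype(int)     # placeholder labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
```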
Abstract:
Electric vehicles and the electronic components inside the vehicle are becoming increasingly important. Software, too, is starting to have a significant impact on modern high-end cars; therefore, a careful validation process needs to be implemented with the aim of releasing a bug-free product. As software complexity increases, the testing phases also become more demanding. Tests can be troublesome and, in some cases, boring despite being easy. The intelligence can be moved into test definition and writing rather than test execution. The aim of this document is to begin the definition of an automatic, modular testing system capable of executing test cycles on systems that interact with CAN networks and with DUTs that can be touched by a robotic arm. The document defines a first version of the system, in particular the hardware interface part, with the aim of taking logs and executing tests in an automated fashion, so that the test engineer can focus on test definition and analysis rather than on execution.
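A minimal sketch of one automated test step on the CAN bus (using the python-can library, an assumption on our part; the channel and message IDs are illustrative):

```python
# Send a stimulus frame on the CAN bus and log the DUT's response.
import can

with can.interface.Bus(channel="can0", bustype="socketcan") as bus:
    stimulus = can.Message(arbitration_id=0x123,
                           data=[0x01, 0x00],
                           is_extended_id=False)
    bus.send(stimulus)

    response = bus.recv(timeout=1.0)  # wait up to 1 s for the DUT's reply
    if response is not None:
        print(f"id=0x{response.arbitration_id:03X} data={response.data.hex()}")
    else:
        print("no response: log as test failure")
```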
Abstract:
High-throughput screening of physical, genetic and chemical-genetic interactions brings important perspectives to the Systems Biology field, as the analysis of these interactions provides new insights into protein/gene function, cellular metabolic variations and the validation of therapeutic targets and drug design. However, such analysis depends on a pipeline connecting different tools that can automatically integrate data from diverse sources and produce a more comprehensive dataset that can be properly interpreted. We describe here the Integrated Interactome System (IIS), an integrative platform with a web-based interface for the annotation, analysis and visualization of the interaction profiles of proteins/genes, metabolites and drugs of interest. IIS works in four connected modules: (i) Submission module, which receives raw data derived from Sanger sequencing (e.g. two-hybrid system); (ii) Search module, which enables the user to search for the processed reads to be assembled into contigs/singlets, or for lists of proteins/genes, metabolites and drugs of interest, and add them to the project; (iii) Annotation module, which assigns annotations from several databases to the contigs/singlets or lists of proteins/genes, generating tables with automatic annotation that can be manually curated; and (iv) Interactome module, which maps the contigs/singlets or the uploaded lists to entries in our integrated database, building networks that gather newly identified interactions, protein and metabolite expression/concentration levels, subcellular localization and computed topological metrics, and GO biological process and KEGG pathway enrichment. This module generates an XGMML file that can be imported into Cytoscape or visualized directly on the web. We developed IIS by integrating diverse databases, in response to the need for appropriate tools for the systematic analysis of physical, genetic and chemical-genetic interactions. IIS was validated with yeast two-hybrid, proteomics and metabolomics datasets, but it is also extendable to other datasets. IIS is freely available online at: http://www.lge.ibi.unicamp.br/lnbio/IIS/.
Abstract:
Often in biomedical research we deal with continuous (clustered) proportion responses ranging between zero and one that quantify the disease status of the cluster units. Interestingly, the study population might also consist of relatively disease-free as well as highly diseased subjects, contributing proportion values in the interval [0, 1]. Regression on a variety of parametric densities with support in (0, 1), such as beta regression, can assess important covariate effects; however, these densities are inappropriate in the presence of zeros and/or ones. To circumvent this, we introduce a class of general proportion densities and further augment it with the probabilities of zero and one, while controlling for the clustering. Our approach is Bayesian and presents a computationally convenient framework amenable to available freeware. Bayesian case-deletion influence diagnostics based on q-divergence measures are automatic from the Markov chain Monte Carlo output. The methodology is illustrated using both simulation studies and an application to a real dataset from a clinical periodontology study.
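For illustration, a zero-one augmented proportion density of the kind described above can be written in the following generic form (our sketch; g stands for a density on (0, 1) such as the general proportion density, and p0, p1 are the augmented probabilities of zero and one):

```latex
f_{\mathrm{aug}}(y) =
\begin{cases}
p_0, & y = 0, \\
p_1, & y = 1, \\
(1 - p_0 - p_1)\, g(y \mid \boldsymbol{\theta}), & y \in (0, 1).
\end{cases}
```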
Abstract:
Machado-Joseph disease (SCA3) is the most frequent spinocerebellar ataxia worldwide and is characterized by remarkable phenotypic heterogeneity. MRI-based studies in SCA3 have focused on the cerebellum and its connections, but little is known about cord damage in the disease and its clinical relevance. Our objective was to evaluate spinal cord damage in SCA3 through quantitative analysis of MRI scans. A group of 48 patients with SCA3 and 48 age- and gender-matched healthy controls underwent MRI on a 3T scanner. We used T1-weighted 3D images to estimate the cervical spinal cord area (CA) and eccentricity (CE) at three C2/C3 levels based on a semi-automatic image segmentation protocol. The Scale for the Assessment and Rating of Ataxia (SARA) was employed to quantify disease severity. The two groups, SCA3 and controls, differed significantly in CA (49.5 ± 7.3 vs 67.2 ± 6.3 mm², p < 0.001) and CE values (0.79 ± 0.06 vs 0.75 ± 0.05, p = 0.005). In addition, CA presented a significant correlation with SARA scores in the patient group (p = 0.010), whereas CE was not associated with SARA scores (p = 0.857). In the multiple variable regression, disease duration was the only variable associated with CA (coefficient = -0.629, p = 0.025). SCA3 is thus characterized by cervical cord atrophy and antero-posterior flattening, and the spinal cord areas did correlate with disease severity. This suggests that quantitative analysis of spinal cord MRI might be a useful biomarker in SCA3.
Abstract:
Base cutting and feeding into harvesters of plants lying close to the ground surface require an efficient sweeping action of the cutting mechanism. This is not the case for conventional sugarcane harvesters, whose rigid blades mounted on discs can contaminate the cane with dirt and damage the ratoons. The objective of this work was to simulate the sweeping performance of a segmented base cutter. The model was developed using the laws of dynamics. The simulation included two rotational speeds (400 and 600 rpm), two cutting heights (0.12 and 0.13 m) and two disc tilt angles (-10º and -12º). The simulated sweeping angle varied between 56º and 193º, which is very promising as a means of cutting and feeding cane sticks lying on the ground. Cutting height was the variable that affected the sweeping action the most. This behavior indicates the need for automatic control of the cutting disc height in order to maintain good sweeping performance as the harvester moves forward.
Abstract:
Currently, owing to the occurrence of environmental problems and the need for environmental preservation, both the territorial management of hydrographic basins and the conservation of natural resources have proven remarkably important. Thus, the main goal of this research is to gather and scrutinize socio-economic and technological data from the Mogi Guaçu River Hydrographic Basin (São Paulo, Brazil). The aim is to group municipalities with similar characteristics regarding the collected data, which may direct joint actions in hydrographic basin management. Both factor analysis and automatic hierarchical classification methods were used. Additionally, a Geographical Information System is applied to represent the outcomes of the aforementioned methods, through the development of a geo-referenced database, which allows obtaining categorically distributed information, including thematic maps of interest. The main characteristics adopted to group the municipalities were: agricultural area, sugarcane production, small farms, animal production, amount of agricultural machinery and equipment, and agricultural income. The methodology adopted in the Mogi Guaçu River Hydrographic Basin is analyzed with respect to its appropriateness for basin management, as well as its potential to assist studies by the São Paulo hydrographic basin groups toward regional development.
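A hedged sketch (not the paper's code) of the automatic hierarchical classification step: cluster municipalities by the agricultural characteristics listed above. The data values, number of municipalities, and number of groups are placeholders.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

# Rows: municipalities; columns: agricultural area, sugarcane production,
# small farms, animal production, machinery count, agricultural income.
features = np.random.rand(38, 6)                  # placeholder feature matrix
Z = linkage(zscore(features), method="ward")      # hierarchical clustering
groups = fcluster(Z, t=4, criterion="maxclust")   # e.g. four groups
print(groups)
```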
Abstract:
This paper describes the development of a relational database and a tool for viewing MODIS NDVI temporal profiles, using data from the MOD09Q1 product, specifically the surface bidirectional reflectance factor for the RED and NIR wavelengths, an 8-day temporal composite mosaic, and the quality band, over sugarcane fields in the state of São Paulo, for the analysis of late stubble-cane maturation. Historical data on yield, soil, variety and the location of each pixel for each monitored subregion were obtained from sugarcane farms. All data were integrated in a database developed in PostgreSQL. The tool was implemented in Java and allowed fast, automatic analysis of sugarcane phenological patterns. It was concluded that the MODIS NDVI temporal profile built from MOD09Q1 data is able to support the monitoring of phenological changes in sugarcane.
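For reference, the NDVI used in these temporal profiles is computed from the RED and NIR surface reflectances mentioned above (the standard definition, not specific to this paper):

```latex
\mathrm{NDVI} = \frac{\rho_{\mathrm{NIR}} - \rho_{\mathrm{RED}}}{\rho_{\mathrm{NIR}} + \rho_{\mathrm{RED}}}
```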
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física