975 results for Post processing
Abstract:
There have been over 3000 bridge weigh-in-motion (B-WIM) installations in 25 countries worldwide, which has led to vast improvements in the post-processing of B-WIM systems since the technique's introduction in the 1970s. This paper introduces a new low-power B-WIM system using fibre optic sensors (FOS). The system consists of a series of FOS attached to the soffit of an existing integral bridge with a single span of 19 m. The site selection criteria and the full installation process are detailed in the paper. A calibration method using live traffic at the bridge site was adopted, and based on this calibration the accuracy of the system was determined.
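The abstract does not describe the calibration algorithm itself; a common building block in B-WIM calibration is a Moses-style least-squares fit that recovers the bridge influence line from strain records of pre-weighed calibration trucks. The sketch below illustrates that idea only; all names, weights, and signals are hypothetical.

```python
import numpy as np

# Minimal sketch of a Moses-style B-WIM calibration step: recover the bridge
# influence line (IL) by least squares from the strain record of one
# pre-weighed calibration truck. Numbers are invented for illustration.

n_samples = 400                        # strain samples recorded during one crossing
n_il = 200                             # influence-line ordinates to estimate

# Known calibration truck: two axles, weights in kN, spacing in samples.
axle_weights = np.array([60.0, 110.0])
axle_offsets = np.array([0, 45])       # sample lag of each axle behind the first

# Build the linear model  strain = A @ il, where row i superposes the
# IL ordinate seen by every axle at time step i.
A = np.zeros((n_samples, n_il))
for w, d in zip(axle_weights, axle_offsets):
    for i in range(n_samples):
        j = i - d                      # IL ordinate under this axle at step i
        if 0 <= j < n_il:
            A[i, j] += w

# Synthetic "measured" strain: true IL (half-sine) plus sensor noise.
true_il = np.sin(np.pi * np.arange(n_il) / n_il)
strain = A @ true_il + np.random.normal(0.0, 0.5, n_samples)

# Least-squares estimate of the influence line from the calibration crossing.
il_hat, *_ = np.linalg.lstsq(A, strain, rcond=None)
print("max IL error:", np.max(np.abs(il_hat - true_il)))
```

Once the influence line is calibrated, the same linear model is inverted for unknown axle weights during normal traffic.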
Abstract:
We consider time-dependent convection-diffusion-reaction equations on time-dependent domains, where the motion of the domain boundary is known. The temporal evolution of the domain is handled by the ALE formulation, which remedies the drawbacks of the classical Eulerian and Lagrangian descriptions. The position of the boundary and its velocity are extended into the interior of the domain in such a way that strong mesh deformations are avoided. As higher-order time discretizations, continuous Galerkin-Petrov methods (cGP) and discontinuous Galerkin methods (dG) are applied to problems on time-dependent domains. Furthermore, the C1-continuous Galerkin-Petrov method and the C0-continuous Galerkin method are presented. Their solutions can also be obtained on time-dependent domains from the solution of the cGP problem or the dG problem, respectively, by a simple, uniform post-processing. For problems on fixed domains with convection and reaction terms that are constant in time, stability results and optimal error estimates for the post-processed solutions of the cGP and dG methods are given. For time-dependent convection-diffusion-reaction equations on time-dependent domains we present conservative and non-conservative formulations, with particular attention to the treatment of the time derivative and the mesh velocity. Stability and optimal error estimates for the conservative and non-conservative formulations semi-discretized in time are presented. Finally, the fully discretized problem is considered, where a finite element method is used for the spatial discretization of the convection-diffusion-reaction equations on time-dependent domains within the ALE framework. In addition, a local projection stabilization (LPS) is employed to account for the dominating convection. Furthermore, it is investigated numerically how the approximation of the domain velocity affects the accuracy of the time discretization methods.
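For orientation, the non-conservative ALE form of such a problem, as it is commonly written in the ALE literature (the notation here is generic and not taken from the thesis), reads:

```latex
% Non-conservative ALE form of a convection-diffusion-reaction equation on a
% moving domain \Omega(t); \mathbf{w} is the prescribed mesh (domain) velocity
% and \partial_t u|_{\mathcal{A}} the time derivative along the ALE map.
\partial_t u\big|_{\mathcal{A}}
  + \bigl(\mathbf{b} - \mathbf{w}\bigr)\cdot\nabla u
  - \varepsilon\,\Delta u + c\,u = f
  \qquad \text{in } \Omega(t),\; t \in (0,T].
```

The conservative variant additionally carries a divergence term involving the mesh velocity, which is where the treatment of the time derivative and the mesh velocity mentioned in the abstract differs.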
Abstract:
This thesis addresses the problem of planning low-altitude photogrammetric flights with UAVs; in particular, it presents a review of the main applications that allow the user to program transversal and longitudinal photogrammetric coverage of a given polygon with a commercial drone. The main topic is the management of a UAV photogrammetric flight through software applications that let the user enter the flight parameters according to the type of survey to be carried out. The final goal is to obtain a correct photogrammetric acquisition to be used for the creation of a digital model of the terrain or of an object through data post-processing. Proper flight configuration cannot disregard a basic knowledge of photogrammetry and of the mechanics of a UAV. The introductory chapters therefore cover the principles of analogue and digital photogrammetry, dwelling on topics useful for understanding the issues involved in designing an aerial photogrammetric survey. Particular attention is devoted to the notions of digital photogrammetry which, together with the Image Matching algorithms derived from Computer Vision, define the branch of Modern Photogrammetry. The central chapters examine and compare a series of commercial applications for smartphones and tablets, available for Apple and Android systems, in order to draw a brief concluding comparison in terms of accessibility, capabilities and intended use. For clarity, the acronyms by which drones are referred to in different contexts are defined unambiguously: UAV (Unmanned Aerial Vehicle), SAPR (Sistemi Aeromobili a Pilotaggio Remoto), RPAS (Remotely Piloted Aircraft System), ARP (Aeromobili a Pilotaggio Remoto).
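As a concrete illustration of the quantities such flight-planning apps compute, the sketch below derives ground sample distance, image footprint, and exposure/strip spacing from camera and overlap settings using standard photogrammetric relations; the camera parameters are hypothetical examples, not values from the thesis.

```python
# Sketch of basic photogrammetric flight-planning relations: ground sample
# distance (GSD), ground footprint, and photo/strip spacing from forward and
# side overlap. Camera parameters below are hypothetical examples.

sensor_width_mm = 13.2        # sensor width
image_width_px = 5472         # image width in pixels
image_height_px = 3648        # image height in pixels
focal_length_mm = 8.8         # lens focal length
flight_height_m = 60.0        # flying height above ground
forward_overlap = 0.80        # 80 % longitudinal overlap
side_overlap = 0.70           # 70 % transversal overlap

# Ground sample distance (metres per pixel).
gsd_m = (sensor_width_mm * flight_height_m) / (focal_length_mm * image_width_px)

# Ground footprint of a single image (metres).
footprint_w = gsd_m * image_width_px      # across track
footprint_h = gsd_m * image_height_px     # along track (assumed camera orientation)

# Spacing between successive exposures and between adjacent flight strips.
photo_base = footprint_h * (1.0 - forward_overlap)
strip_spacing = footprint_w * (1.0 - side_overlap)

print(f"GSD: {gsd_m * 100:.1f} cm/px")
print(f"footprint: {footprint_w:.1f} m x {footprint_h:.1f} m")
print(f"exposure spacing: {photo_base:.1f} m, strip spacing: {strip_spacing:.1f} m")
```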
Abstract:
The overwhelming amount and unprecedented speed of publication in the biomedical domain make it difficult for life science researchers to acquire and maintain a broad view of the field and gather all information that would be relevant for their research. As a response to this problem, the BioNLP (Biomedical Natural Language Processing) community of researchers has emerged and strives to assist life science researchers by developing modern natural language processing (NLP), information extraction (IE) and information retrieval (IR) methods that can be applied at large scale, to scan the whole publicly available biomedical literature and extract and aggregate the information found within, while automatically normalizing the variability of natural language statements. Among these tasks, biomedical event extraction has recently received much attention within the BioNLP community. Biomedical event extraction constitutes the identification of biological processes and interactions described in biomedical literature, and their representation as a set of recursive event structures. The 2009-2013 series of BioNLP Shared Tasks on Event Extraction has given rise to a number of event extraction systems, several of which have been applied at large scale (the full set of PubMed abstracts and PubMed Central Open Access full-text articles), leading to the creation of massive biomedical event databases, each containing millions of events. Since top-ranking event extraction systems are based on machine-learning approaches and are trained on the narrow-domain, carefully selected Shared Task training data, their performance drops when faced with the topically highly varied PubMed and PubMed Central documents. Specifically, false-positive predictions by these systems lead to the generation of incorrect biomolecular events, which are spotted by the end users. This thesis proposes a novel post-processing approach, utilizing a combination of supervised and unsupervised learning techniques, that can automatically identify and filter out a considerable proportion of incorrect events from large-scale event databases, thus increasing the general credibility of those databases. The second part of this thesis is dedicated to a system we developed for hypothesis generation from large-scale event databases, which is able to discover novel biomolecular interactions among genes/gene products. We cast the hypothesis generation problem as supervised network topology prediction, i.e., predicting new edges in the network, as well as types and directions for these edges, utilizing a set of features that can be extracted from large biomedical event networks. Routine machine learning evaluation results, as well as manual evaluation results, suggest that the problem is indeed learnable. This work won the Best Paper Award at the 5th International Symposium on Languages in Biology and Medicine (LBM 2013).
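To make the filtering idea concrete, the toy sketch below scores candidate events with a supervised classifier and discards low-confidence ones. It is only an illustration of the general approach: the features, data, and threshold are synthetic, and it does not reproduce the combined supervised/unsupervised method of the thesis.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy illustration of filtering likely false-positive events from a large
# event database with a supervised classifier. Features and labels below are
# synthetic; this is not the system described in the abstract.

rng = np.random.default_rng(0)

# Hypothetical per-event features: extractor confidence, trigger-word
# frequency, number of arguments, sentence length (all normalized).
X_train = rng.random((1000, 4))
y_train = (X_train[:, 0] > 0.3).astype(int)      # 1 = correct event (toy rule)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score unseen events and keep only those above a precision-oriented threshold.
X_new = rng.random((10, 4))
keep = clf.predict_proba(X_new)[:, 1] >= 0.7
print(f"kept {keep.sum()} of {len(keep)} candidate events")
```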
Abstract:
The purpose of this bachelor's thesis was to investigate the criticality of the fillet weld root. The topic arose from observations made during tests on structural hollow sections. The thesis reviews the design methods for fillet welds and the background research on how they are applied in practice to ultra-high-strength steels. The research methods used, and how method triangulation was achieved, are presented. The research question concerned the adequacy of the weld strength design. The investigations were carried out on statically loaded fillet welds. A laboratory test specimen and an FEM model of the fillet-welded pieces were produced and their results compared. In the laboratory test, DIC measurement was used as the measuring method, which allowed post-processing and the extraction of the desired data points. In the calculations, the largest stress concentrations occurred at the weld, but in the tensile test the specimen failed at the fusion line and at the weld toe of the attachment weld of the pulling lug. At this point the material model was found to be insufficient, because no parameters had been defined for the heat-affected zone.
Abstract:
Optical full-field measurement methods such as Digital Image Correlation (DIC) provide a new opportunity for measuring deformations and vibrations with high spatial and temporal resolution. However, application to full-scale wind turbines is not trivial: elaborate preparation of the experiment is vital, and sophisticated post-processing of the DIC results is essential. In the present study, a rotor blade of a 3.2 MW wind turbine is equipped with a random black-and-white dot pattern at four different radial positions. Two cameras are located in front of the wind turbine and the response of the rotor blade is monitored using DIC for different turbine operations. In addition, a Light Detection and Ranging (LiDAR) system is used to measure the wind conditions. Wind fields are created based on the LiDAR measurements and used to perform aeroelastic simulations of the wind turbine by means of advanced multibody codes. The results from the optical DIC system appear plausible when checked against common and expected results. In addition, the comparison of relative out-of-plane blade deflections shows good agreement between DIC results and aeroelastic simulations.
Abstract:
One of the most exciting discoveries in astrophysics of the last decade is the sheer diversity of planetary systems. These include "hot Jupiters", giant planets so close to their host stars that they orbit once every few days; "Super-Earths", planets with sizes intermediate to those of Earth and Neptune, of which no analogs exist in our own solar system; multi-planet systems with planets ranging from smaller than Mars to larger than Jupiter; planets orbiting binary stars; free-floating planets flying through the emptiness of space without any star; even planets orbiting pulsars. Despite these remarkable discoveries, the field is still young, and there are many areas about which precious little is known. In particular, we do not know the planets orbiting the Sun-like stars nearest to our own solar system, and we know very little about the compositions of extrasolar planets. This thesis provides developments in those directions, through two instrumentation projects.
The first chapter of this thesis concerns detecting planets in the Solar neighborhood using precision stellar radial velocities, also known as the Doppler technique. We present an analysis determining the most efficient way to detect planets considering factors such as spectral type, wavelengths of observation, spectrograph resolution, observing time, and instrumental sensitivity. We show that G and K dwarfs observed at 400-600 nm are the best targets for surveys complete down to a given planet mass and out to a specified orbital period. Overall we find that M dwarfs observed at 700-800 nm are the best targets for habitable-zone planets, particularly when including the effects of systematic noise floors caused by instrumental imperfections. Somewhat surprisingly, we demonstrate that a modestly sized observatory, with a dedicated observing program, is up to the task of discovering such planets.
We present just such an observatory in the second chapter, called the "MINiature Exoplanet Radial Velocity Array," or MINERVA. We describe the design, which uses a novel multi-aperture approach to increase stability and performance through lower system etendue, as well as keeping costs and time to deployment down. We present calculations of the expected planet yield, and data showing the system performance from our testing and development of the system at Caltech's campus. We also present the motivation, design, and performance of a fiber coupling system for the array, critical for efficiently and reliably bringing light from the telescopes to the spectrograph. We finish by presenting the current status of MINERVA, operational at Mt. Hopkins observatory in Arizona.
The second part of this thesis concerns a very different method of planet detection, direct imaging, which involves discovery and characterization of planets by collecting and analyzing their light. Directly analyzing planetary light is the most promising way to study their atmospheres, formation histories, and compositions. Direct imaging is extremely challenging, as it requires a high performance adaptive optics system to unblur the point-spread function of the parent star through the atmosphere, a coronagraph to suppress stellar diffraction, and image post-processing to remove non-common path "speckle" aberrations that can overwhelm any planetary companions.
To this end, we present the "Stellar Double Coronagraph," or SDC, a flexible coronagraphic platform for use with the 200" Hale telescope. It has two focal and pupil planes, allowing for a number of different observing modes, including multiple vortex phase masks in series for improved contrast and inner working angle behind the obscured aperture of the telescope. We present the motivation, design, performance, and data reduction pipeline of the instrument. In the following chapter, we present some early science results, including the first image of a companion to the star delta Andromeda, which had been previously hypothesized but never seen.
A further chapter presents a wavefront control code developed for the instrument, using the technique of "speckle nulling," which can remove optical aberrations from the system using the deformable mirror of the adaptive optics system. This code allows for improved contrast and inner working angles, and was written in a modular style so as to be portable to other high contrast imaging platforms. We present its performance on optical, near-infrared, and thermal infrared instruments on the Palomar and Keck telescopes, showing how it can improve contrasts by a factor of a few in less than ten iterations.
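The speckle-nulling loop is not spelled out in the abstract; its core estimation step can be illustrated with a single-speckle toy model, in which DM-injected probe speckles of known amplitude and several phases are used to estimate the unknown speckle's complex amplitude, which is then cancelled with an anti-phase command. The sketch below is a hedged illustration of that general technique, not the instrument's code.

```python
import numpy as np

# Toy, single-pixel illustration of speckle nulling: probe an unknown
# focal-plane speckle with DM "probe" speckles of known amplitude and four
# phases, estimate its complex amplitude from the measured intensities, then
# subtract the anti-phase correction. All values are synthetic.

rng = np.random.default_rng(1)
E_speckle = 0.8 * np.exp(1j * 2.1)               # unknown aberration speckle
probe_amp = 0.5                                  # known DM probe amplitude
phases = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])

# Measured intensities for each probe phase (with a little detector noise).
I = np.abs(E_speckle + probe_amp * np.exp(1j * phases))**2
I += rng.normal(0.0, 1e-3, I.size)

# |E + A e^{i phi}|^2 = |E|^2 + A^2 + 2 A Re(E e^{-i phi}); differencing
# opposite probe pairs isolates Re(E) and Im(E).
re_E = (I[0] - I[2]) / (4 * probe_amp)
im_E = (I[1] - I[3]) / (4 * probe_amp)
E_est = re_E + 1j * im_E

# Correction: command the DM to inject a speckle equal and opposite to E_est.
E_residual = E_speckle - E_est
print("residual speckle intensity:", np.abs(E_residual)**2)
```

In practice the same estimate is formed for many speckles at once across the dark hole, and the loop is iterated a handful of times, consistent with the "factor of a few in less than ten iterations" quoted above.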
One of the large challenges in direct imaging is sensing and correcting the electric field in the focal plane to remove scattered light that can be much brighter than any planets. In the last chapter, we present a new method of focal-plane wavefront sensing, combining a coronagraph with a simple phase-shifting interferometer. We present its design and implementation on the Stellar Double Coronagraph, demonstrating its ability to create regions of high contrast by measuring and correcting for optical aberrations in the focal plane. Finally, we derive how it is possible to use the same hardware to distinguish companions from speckle errors using the principles of optical coherence. We present results observing the brown dwarf HD 49197b, demonstrating the ability to detect it despite it being buried in the speckle noise floor. We believe this is the first detection of a substellar companion using the coherence properties of light.
Abstract:
This thesis focuses on advanced reconstruction methods and Dual Energy (DE) Computed Tomography (CT) applications for proton therapy, aiming at improving patient positioning and investigating approaches to deal with metal artifacts. To tackle the first goal, an algorithm for post-processing input DE images has been developed. The outputs are tumor- and bone-canceled images, which help in recognizing structures in the patient body. We proved that the positioning error is substantially reduced using contrast-enhanced images, thus suggesting the potential of such an application. If positioning plays a key role in the delivery, even more important is the quality of the planning CT. For that, modern CT scanners offer the possibility to tackle challenging cases, such as the treatment of tumors close to metal implants. Possible approaches for dealing with the artifacts introduced by such implants have been investigated experimentally at the Paul Scherrer Institut (Switzerland) by simulating several treatment plans on an anthropomorphic phantom. In particular, we examined cases in which no correction, manual correction, or the Iterative Metal Artifact Reduction (iMAR) algorithm was used to correct the artifacts, using both Filtered Back Projection and Sinogram Affirmed Iterative Reconstruction as image reconstruction techniques. Moreover, direct stopping-power calculation from DE images with iMAR has also been considered as an alternative approach. The delivered dose measured with Gafchromic EBT3 films was compared with the one calculated in the Treatment Planning System. Residual positioning errors, daily machine-dependent uncertainties and film quenching have been taken into account in the analyses. Although plans with multiple fields seemed more robust than single-field plans, results in general showed better agreement between prescribed and delivered dose when using iMAR, especially if combined with the DE approach. Thus, we proved the potential of these advanced algorithms in improving dosimetry for plans in the presence of metal implants.
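The abstract does not specify how the tumor- and bone-canceled images are formed; a common dual-energy building block is a weighted subtraction of the low- and high-kV images, with the weight chosen so that a selected material vanishes. The sketch below shows that idea with invented CT numbers; it is not the algorithm developed in the thesis.

```python
import numpy as np

# Toy dual-energy material-cancellation sketch: choose a weight w so that the
# combination  I_low - w * I_high  suppresses one material (here "bone").
# The HU values are invented for illustration only.

# Synthetic two-material phantom: CT numbers at low / high tube voltage.
hu_bone_low, hu_bone_high = 1200.0, 800.0
hu_soft_low, hu_soft_high = 60.0, 50.0

# Weight that cancels bone in the combined image.
w = hu_bone_low / hu_bone_high

img_low = np.array([[hu_bone_low, hu_soft_low],
                    [hu_soft_low, hu_bone_low]])
img_high = np.array([[hu_bone_high, hu_soft_high],
                     [hu_soft_high, hu_bone_high]])

bone_canceled = img_low - w * img_high   # bone pixels go to ~0, soft tissue stays non-zero
print(bone_canceled)
```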
Abstract:
Recent developments have led researchers to reconsider Lagrangian measurement techniques as an alternative to their Eulerian counterparts when investigating non-stationary flows. This thesis advances the state of the art of Lagrangian measurement techniques by pursuing three objectives: (i) developing new Lagrangian measurement techniques for difficult-to-measure, in situ flow environments; (ii) developing new post-processing strategies designed for unstructured Lagrangian data, as well as providing guidelines towards their use; and (iii) presenting the advantages that the Lagrangian framework has over its Eulerian counterpart in various non-stationary flow problems. Towards the first objective, a large-scale particle tracking velocimetry apparatus is designed for atmospheric surface layer measurements. Towards the second objective, two techniques, one for identifying Lagrangian Coherent Structures (LCS) and the other for characterizing entrainment directly from unstructured Lagrangian data, are developed. Finally, towards the third objective, the advantages of Lagrangian-based measurements are showcased in two unsteady flow problems: the atmospheric surface layer, and entrainment in a non-stationary turbulent flow. Through developing new experimental and post-processing strategies for Lagrangian data, and through showcasing the advantages of Lagrangian data in various non-stationary flows, this thesis helps investigators more easily adopt Lagrangian-based measurement techniques.
Abstract:
Bangla OCR (Optical Character Recognition) is long-awaited software for the Bengali community all over the world. Numerous efforts suggest that, due to the inherently complex nature of the Bangla alphabet and its word-formation process, developing a high-fidelity OCR that produces reasonably acceptable output still remains a challenge. One possible avenue of improvement is post-processing of the OCR's output; algorithms such as edit distance and the use of n-gram statistical information have been used to rectify misspelled words in language processing. This work presents the first known approach that uses these algorithms to replace misrecognized words produced by a Bangla OCR. The assessment is made on a set of fifty documents written in Bangla script and uses a dictionary of 541,167 words. The proposed correction model can correct several words, lowering the recognition error rate by 2.87% and 3.18% for the character-based n-gram and edit distance algorithms respectively. The developed system suggests a list of five alternatives for a misspelled word. It is found that in 33.82% of cases the correct word is the topmost suggestion of the five-word list for the n-gram algorithm, while with the edit distance algorithm the first word in the suggestion list is the proper match in 36.31% of cases. This work opens avenues of thought for possible improvements in the character recognition endeavour.
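A minimal sketch of the dictionary-based, edit-distance flavour of such post-correction is shown below: out-of-lexicon OCR tokens are ranked against dictionary entries by Levenshtein distance and the top five candidates are returned. The tiny English lexicon is a stand-in for the 541,167-word Bangla dictionary mentioned in the abstract.

```python
# Sketch of dictionary-based OCR post-correction: for each word not found in
# the lexicon, rank dictionary entries by edit distance and keep the top 5
# suggestions. Lexicon and test word below are placeholders.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def suggest(word: str, lexicon: list[str], k: int = 5) -> list[str]:
    """Top-k correction candidates for an out-of-lexicon OCR token."""
    if word in lexicon:
        return [word]
    return sorted(lexicon, key=lambda w: edit_distance(word, w))[:k]

lexicon = ["processing", "procession", "professing", "progressing", "pressing", "proceeding"]
print(suggest("proccessing", lexicon))
```

An n-gram variant would instead score candidates by the frequency of their character n-grams in a reference corpus, as the abstract describes.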
Abstract:
This thesis stems from the need to optimize, from an acoustic and performance point of view, a centrifugal fan already in use at the company. The first chapters analyse the problem from a theoretical point of view, while the third and fourth chapters address it from a computational point of view using CFD techniques. The first chapter gives a general treatment of centrifugal fans, focusing on the kinds of problems they encounter. The second chapter presents the theory underlying experimental measurements and acoustic analysis, together with a review of articles showing acoustic optimization techniques for centrifugal fans. The third chapter summarizes the theory underlying fluid dynamics and fluid dynamic studies. The fourth chapter explains how the fluid dynamic model was created. A steady-state analysis of the problem was chosen, using the Moving Reference Frame approach and treating the air as incompressible given the low Mach number. The acoustic analysis was carried out in post-processing using the Proudman model. Finally, the correlation between the three points of the fan's actual operating resistance curve was demonstrated, allowing the results obtained from the analysis of one of them to be extended to the other two. The fifth chapter analyses the results obtained from the CFD simulations and proposes several modifications of the geometry. The chosen modification yielded improved performance and lower noise. Finally, the conclusions suggest further possible directions for a more accurate investigation and optimization of the fan.
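The Proudman model referred to above estimates broadband acoustic power per unit volume from the local turbulence quantities. The sketch below shows the form commonly implemented in CFD post-processing (acoustic power from turbulent kinetic energy k and dissipation rate epsilon); the field values are synthetic placeholders, not results from the thesis.

```python
import numpy as np

# Hedged sketch of the Proudman broadband-noise estimate as commonly used in
# CFD post-processing: acoustic power per unit volume from local k and eps.

rho0 = 1.225          # air density [kg/m^3]
a0 = 340.0            # speed of sound [m/s]
alpha_eps = 0.1       # model constant (typical value)
P_ref = 1e-12         # reference acoustic power [W/m^3]

k = np.array([1.0, 5.0, 20.0])       # turbulent kinetic energy [m^2/s^2]
eps = np.array([50.0, 500.0, 5e3])   # dissipation rate [m^2/s^3]

Mt = np.sqrt(2.0 * k) / a0                   # turbulent Mach number
P_A = alpha_eps * rho0 * eps * Mt**5         # acoustic power per unit volume
Lp = 10.0 * np.log10(P_A / P_ref)            # acoustic power level [dB]
print(Lp)
```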
Abstract:
In the era of precision medicine and big medical data sharing, it is necessary to handle the workflow of digital radiological big data in a productive and effective way. In particular, it is now possible to extract information "hidden" in digital images in order to create diagnostic algorithms that help clinicians set up more personalized therapies, which are a particular target of modern oncological medicine. Digital images generated by the patient have a "texture" structure that is not visible but encoded; it is "hidden" because it cannot be recognized by sight alone. Thanks to artificial intelligence, pre- and post-processing software and the generation of mathematical calculation algorithms, a classification can be performed based on non-visible data contained in radiological images. Being able to calculate the volume of tissue body composition could lead to creating clustered classes of patients inserted in standard morphological reference tables, based on human anatomy distinguished by gender and age, and perhaps in the future also by race. Furthermore, the branch of "morpho-radiology" is a useful modality for solving problems regarding personalized therapies, which are particularly needed in the oncological field. Currently, oncological therapies are no longer based on generic drugs but on targeted, personalized therapy. The lack of gender- and age-specific therapy tables could be filled thanks to the application of morpho-radiology data analysis.
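One widely used way to quantify such non-visible "texture" is the grey-level co-occurrence matrix (GLCM), a standard radiomics building block. The sketch below computes a few GLCM features on a random patch with scikit-image (the functions are spelled graycomatrix/graycoprops in recent releases, greycomatrix/greycoprops in older ones); it is an illustration of the general idea, not the pipeline described in the abstract.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Illustrative sketch of texture-feature extraction via a grey-level
# co-occurrence matrix. The 8-bit patch below is random; in practice it would
# be a region of interest from a radiological image.

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```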
Abstract:
Turbulent plasmas inside tokamaks are modeled and studied using guiding center theory, applied to charged test particles, in a Hamiltonian framework. The equations of motion for the guiding center dynamics, under the conditions of a constant and uniform magnetic field and a turbulent electrostatic field, are derived by averaging over the fast gyroangle, at first and second order in the guiding center potential, using invertible changes of coordinates such as Lie transforms. The equations of motion are then made dimensionless, exploiting the temporal and spatial periodicities of the model chosen for the electrostatic potential. They are implemented numerically in Python, using the Fast Fourier Transform and its inverse. Improvements to the original Python scripts are made, notably the introduction of a power-law curve fit to account for anomalous diffusion, the possibility to integrate the equations in two steps to save computational time by removing trapped trajectories, and the implementation of multicolored stroboscopic plots to distinguish between trapped and untrapped guiding centers. Post-processing of the results is done in MATLAB. The values and ranges of the parameters chosen for the simulations are selected on the basis of numerous simulations used as feedback tools; in particular, a recurring value for the threshold used to detect trapped trajectories is evidenced. The effects of the Larmor radius, the amplitude of the guiding center potential and the intensity of its second-order term are studied by analyzing the diffusive regimes, the stroboscopic plots and the shape of the guiding center potentials. The main result is the identification of cases of anomalous diffusion depending on the values of the parameters (mostly the Larmor radius). The transitions between diffusive regimes are identified, the presence of highways for the super-diffusive trajectories is unveiled, and the influence of the charge on these transitions from diffusive to ballistic behavior is analyzed.
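The power-law fit mentioned above characterizes the diffusive regime by fitting the mean squared displacement of the guiding centers to MSD(t) = D t^alpha, with alpha = 1 for normal diffusion, alpha > 1 super-diffusive and alpha < 1 sub-diffusive. A minimal sketch of that fit, using synthetic random-walk trajectories rather than the thesis data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the power-law characterization of (anomalous) diffusion:
# fit MSD(t) = D * t**alpha to the ensemble mean squared displacement.
# Trajectories here are plain 2D random walks, so alpha should come out ~1.

rng = np.random.default_rng(2)
n_traj, n_steps, dt = 500, 1000, 0.1
steps = rng.normal(0.0, 1.0, size=(n_traj, n_steps, 2))
positions = np.cumsum(steps, axis=1)

t = dt * np.arange(1, n_steps + 1)
msd = np.mean(np.sum(positions**2, axis=2), axis=0)

def power_law(t, D, alpha):
    return D * t**alpha

(D, alpha), _ = curve_fit(power_law, t, msd, p0=(1.0, 1.0))
print(f"D = {D:.3f}, alpha = {alpha:.3f}")
```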
Abstract:
In recent years, developed countries have turned their attention to clean and renewable energy, such as wind energy and wave energy, which can be converted into electrical power. Accordingly, this thesis studies the numerical simulation of the dynamic response of wave energy converters (WECs) subjected to ocean waves. The study considers a two-body point absorber (2BPA) and an oscillating surge wave energy converter (OSWEC). The first aim is to mesh the bodies of these WECs and calculate their hydrostatic properties using the axiMesh.m and Mesh.m functions provided by NEMOH. The second aim is to calculate the first-order hydrodynamic coefficients of the WECs using the NEMOH BEM solver and to study the ability of this method to eliminate irregular frequencies. The third is to generate a *.h5 file for the 2BPA and OSWEC devices, in which all the hydrodynamic data are included. BEMIO, a pre- and post-processing tool developed by WEC-Sim, is used in this study to create the *.h5 files. The primary and final goal is to run the Wave Energy Converter SIMulator (WEC-Sim) to simulate the dynamic responses of the WECs studied in this thesis and to estimate their power performance at different sites located in the Mediterranean Sea and the North Sea. The hydrodynamic data obtained with the NEMOH BEM solver for the 2BPA and OSWEC devices are imported into WEC-Sim using BEMIO. Lastly, the power matrices and annual energy production (AEP) of the WECs are estimated for different sites located in the Sea of Sicily, Sea of Sardinia, Adriatic Sea, Tyrrhenian Sea, and the North Sea. For this purpose, NEMOH and WEC-Sim remain among the most practical tools for estimating the power generation of WECs numerically.
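Before running WEC-Sim, it can be useful to inspect the BEMIO-generated *.h5 hydrodynamic data file. The sketch below simply lists every dataset and its shape with h5py; it assumes nothing about the internal layout, and the file name is hypothetical.

```python
import h5py

# Minimal sketch for inspecting a BEMIO-style *.h5 hydrodynamic data file:
# recursively print every dataset, its shape and dtype.

def list_datasets(name, obj):
    if isinstance(obj, h5py.Dataset):
        print(f"{name}  shape={obj.shape}  dtype={obj.dtype}")

with h5py.File("2bpa_hydro.h5", "r") as f:   # hypothetical file name
    f.visititems(list_datasets)
```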
Abstract:
Activation functions within neural networks play a crucial role in Deep Learning, since they allow the network to learn complex and non-trivial patterns in the data. However, the ability to approximate non-linear functions is a significant limitation when implementing neural networks on a quantum computer to solve typical machine learning tasks. The main burden lies in the unitarity constraint on quantum operators, which forbids non-linearity and poses a considerable obstacle to developing such non-linear functions in a quantum setting. Nevertheless, several attempts to realize quantum activation functions have been made in the literature. Recently, the idea of QSplines has been proposed to approximate a non-linear activation function by implementing the quantum version of spline functions. Yet, QSplines suffer from various drawbacks. Firstly, the final function estimation requires a post-processing step; thus, the value of the activation function is not available directly as a quantum state. Secondly, QSplines need many error-corrected qubits and very long quantum circuits to be executed. These constraints do not allow the adoption of QSplines on near-term quantum devices and limit their generalization capabilities. This thesis aims to overcome these limitations by leveraging hybrid quantum-classical computation. In particular, several methods for Variational Quantum Splines are proposed and implemented, paving the way for the development of complete quantum activation functions and unlocking the full potential of quantum neural networks in the field of quantum machine learning.
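For context, the object that QSplines (and their variational counterparts) aim to reproduce is a spline approximation of a non-linear activation function. The sketch below shows that approximation purely classically with SciPy, as a reference point; it is not the variational quantum algorithm itself.

```python
import numpy as np
from scipy.interpolate import splrep, splev

# Classical illustration of the target computation: approximate a non-linear
# activation function (here a sigmoid) with a cubic B-spline, the function
# family that spline-based quantum activation approaches aim to encode.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x_knots = np.linspace(-6, 6, 25)                 # sample points for the fit
tck = splrep(x_knots, sigmoid(x_knots), k=3)     # cubic B-spline representation

x_eval = np.linspace(-6, 6, 200)
max_err = np.max(np.abs(splev(x_eval, tck) - sigmoid(x_eval)))
print(f"max approximation error: {max_err:.2e}")
```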