917 results for post-processing method
Abstract:
GPS technology is nowadays embedded in portable, low-cost electronic devices to track the movements of mobile objects. This development has greatly impacted the transportation field by creating a novel and rich source of traffic data on the road network. Although GPS devices promise to overcome problems such as underreporting, respondent fatigue, inaccuracies and other human errors in data collection, the technology is still relatively new and raises many issues for potential users. These issues tend to revolve around the following areas: reliability, data processing and the related applications. This thesis studies GPS tracking from the methodological, technical and practical aspects. It first evaluates the reliability of GPS-based traffic data using data from an experiment involving three different traffic modes (car, bike and bus) travelling along the road network. It then outlines the general procedure for processing GPS tracking data and discusses related issues uncovered by using real-world GPS tracking data of 316 cars. Thirdly, it investigates the influence of road network density on finding optimal locations for enhancing travel efficiency and decreasing travel cost. The results show that the geographical positioning is reliable. Velocity is slightly underestimated, whereas altitude measurements are unreliable. Post-processing techniques using auxiliary information are found to be necessary and important for resolving the inaccuracy of GPS data. The density of the road network influences the finding of optimal locations; this influence stabilizes at a certain level and does not deteriorate when the node density is higher.
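As an illustration of the kind of post-processing discussed above, the sketch below derives speeds from consecutive GPS fixes and smooths them with a median filter. The fix format and the filter choice are illustrative assumptions, not the thesis' actual pipeline.

```python
# Minimal sketch: speed from consecutive GPS fixes plus simple smoothing.
# Field layout (t_seconds, lat, lon) and the median filter are assumptions.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 fixes."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def speeds_from_fixes(fixes):
    """fixes: list of (t_seconds, lat, lon); returns one speed (m/s) per segment."""
    out = []
    for (t0, la0, lo0), (t1, la1, lo1) in zip(fixes, fixes[1:]):
        dt = t1 - t0
        if dt > 0:
            out.append(haversine_m(la0, lo0, la1, lo1) / dt)
    return out

def median_filter(values, window=3):
    """Median smoothing to suppress isolated GPS jitter spikes."""
    half = window // 2
    smoothed = []
    for i in range(len(values)):
        win = sorted(values[max(0, i - half): i + half + 1])
        smoothed.append(win[len(win) // 2])
    return smoothed
```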
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Graduate Program in Restorative Dentistry - ICT
Abstract:
Stress recovery techniques have been an active research topic in the last few years since, in 1987, Zienkiewicz and Zhu proposed a procedure called Superconvergent Patch Recovery (SPR). This procedure is a least-squares fit of stresses at superconvergent points over patches of elements, and it leads to enhanced stress fields that can be used for evaluating finite element discretization errors. In subsequent years, numerous improved forms of this procedure have been proposed, attempting to add equilibrium constraints to improve its performance. Later, another superconvergent technique, called Recovery by Equilibrium in Patches (REP), was proposed. In this case the idea is to impose equilibrium in a weak form over patches and solve the resulting equations by a least-squares scheme. In recent years another procedure, based on minimization of complementary energy, called Recovery by Compatibility in Patches (RCP), has been proposed. This procedure can in many ways be seen as the dual form of REP, as it essentially imposes compatibility in a weak form among a set of self-equilibrated stress fields. In this thesis a new insight into RCP is presented and the procedure is improved with the aim of obtaining convergent second-order derivatives of the stress resultants. In order to achieve this result, two different strategies and their combination have been tested. The first is to consider larger patches in the spirit of what is proposed in [4], and the second is to perform a second recovery on the recovered stresses. Some numerical tests in plane stress conditions are presented, showing the effectiveness of these procedures. Afterwards, a new recovery technique called Least Squares Displacements (LSD) is introduced. This new procedure is based on a least-squares interpolation of the nodal displacements resulting from the finite element solution. In fact, it has been observed that the major part of the error affecting the stress resultants is introduced when shape functions are differentiated in order to obtain strain components from displacements. The procedure proves to be ultraconvergent and is extremely cost-effective, as it needs as input only the nodal displacements coming directly from the finite element solution, avoiding any other post-processing otherwise required to obtain stress resultants with the traditional method. Numerical tests in plane stress conditions are then presented, showing that the procedure is ultraconvergent and leads to convergent first- and second-order derivatives of the stress resultants. Finally, the reconstruction of transverse stress profiles using First-order Shear Deformation Theory for laminated plates and the three-dimensional equilibrium equations is presented. It is shown that the accuracy of this reconstruction depends on the accuracy of the first and second derivatives of the stress resultants, which is not guaranteed by most available low-order plate finite elements. The RCP and LSD procedures are then used to compute convergent first- and second-order derivatives of the stress resultants, ensuring convergence of the reconstructed transverse shear and normal stress profiles respectively. Numerical tests are presented and discussed, showing the effectiveness of both procedures.
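As a companion to the abstract above, here is a minimal sketch of the least-squares patch fit that SPR-type recovery builds on: stresses sampled at the superconvergent (Gauss) points of the elements in a patch are fitted with a low-order polynomial, which is then evaluated at the patch node. The coordinates, sampled values and linear polynomial basis are illustrative assumptions, not the thesis' implementation.

```python
# Minimal SPR-style patch recovery: least-squares fit sigma ~ a0 + a1*x + a2*y
# over one patch, then evaluation at the patch node. All inputs are toy data.
import numpy as np

def recover_stress_at_node(gauss_xy, gauss_sigma, node_xy):
    """Fit a linear polynomial to sampled stresses and evaluate it at the node."""
    gauss_xy = np.asarray(gauss_xy, dtype=float)
    sigma = np.asarray(gauss_sigma, dtype=float)
    # Design matrix P = [1, x, y] at every sampling point of the patch
    P = np.column_stack([np.ones(len(gauss_xy)), gauss_xy[:, 0], gauss_xy[:, 1]])
    coeffs, *_ = np.linalg.lstsq(P, sigma, rcond=None)
    x, y = node_xy
    return coeffs @ np.array([1.0, x, y])

# Toy usage: four Gauss points surrounding a node at the origin
sigma_xx = recover_stress_at_node(
    gauss_xy=[(-0.5, -0.5), (0.5, -0.5), (0.5, 0.5), (-0.5, 0.5)],
    gauss_sigma=[10.2, 11.1, 10.8, 10.5],
    node_xy=(0.0, 0.0))
```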
Abstract:
Among the experimental methods commonly used to characterise the behaviour of a full-scale system, dynamic tests are the most complete and efficient procedures. A dynamic test is an experimental process that determines a set of characteristic parameters of the dynamic behaviour of the system, such as the natural frequencies of the structure, the mode shapes and the associated modal damping values. An assessment of these modal characteristics can be used both to verify the theoretical assumptions of the design and to monitor the performance of the structural system during its operational use. The thesis is structured in the following chapters. The first, introductory chapter recalls some basic notions of structural dynamics, focusing the discussion on systems with multiple degrees of freedom (MDOF), which can represent a generic real system under study when it is excited by a harmonic force or in free vibration. The second chapter is entirely devoted to the dynamic identification of a structure subjected to an experimental test in forced vibration. It first describes the construction of the FRF through the classical FFT of the recorded signal. A different method, also in the frequency domain, is subsequently introduced; it allows the FRF to be computed accurately using the geometric characteristics of the ellipse that represents the direct input-output comparison. The two methods are compared and attention is then focused on some advantages of the proposed methodology. The third chapter focuses on the study of real structures subjected to experimental tests in which the force is not known, as in an ambient or impact test. In this analysis we decided to use the CWT, which allows a simultaneous investigation of a generic signal x(t) in the time and frequency domains. The CWT is first applied to process free oscillations, with excellent results in terms of frequencies, damping and vibration modes. Its application in the case of ambient vibrations yields accurate modal parameters of the system, although some important observations must be made regarding the damping. The fourth chapter again addresses the post-processing of data acquired during a vibration test, this time through the application of the discrete wavelet transform (DWT). In the first part the results obtained with the DWT are compared with those obtained with the CWT. Particular attention is given to the use of the DWT as a tool for filtering the recorded signal; in the case of ambient vibrations, in fact, the signals are often affected by a significant level of noise. The fifth chapter focuses on another important aspect of the identification process: model updating. In this chapter, starting from the modal parameters obtained from ambient vibration tests performed on the Humber Bridge in England by the University of Porto in 2008 and by the University of Sheffield, an FE model of the bridge is defined, in order to establish which type of model captures the real dynamic behaviour of the bridge more accurately. The sixth chapter draws the conclusions of the presented research.
They concern the application of a frequency-domain method for evaluating the modal parameters of a structure and its advantages, the advantages of applying a procedure based on wavelet transforms in the identification process for tests with unknown input, and finally the problem of 3D modelling of systems with many degrees of freedom and with different types of uncertainty.
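The classical FFT-based FRF construction mentioned for the second chapter can be sketched, under assumptions, as the H1 estimator built from cross- and auto-spectra of the measured force and response. The signal names and the use of scipy.signal are illustrative, not the thesis' code.

```python
# Minimal sketch of an FRF estimate from forced-vibration records:
# H1(f) = Gxy(f) / Gxx(f), with x(t) the input force and y(t) the response.
import numpy as np
from scipy.signal import csd, welch

def frf_h1(x, y, fs, nperseg=1024):
    """Return frequencies and the H1 FRF estimate from input x and output y."""
    f, Gxy = csd(x, y, fs=fs, nperseg=nperseg)   # cross-spectrum input/output
    _, Gxx = welch(x, fs=fs, nperseg=nperseg)    # auto-spectrum of the input
    return f, Gxy / Gxx
```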
Abstract:
Over time, networks of permanent GNSS (Global Navigation Satellite System) stations have become an increasingly valuable support for satellite surveying techniques. They are at once an effective materialization of the reference frame and a useful aid to topographic surveying and to monitoring applications for deformation control. Alongside the now classical static post-processing applications, real-time measurements are increasingly used and requested by professional users. In all cases the determination of precise coordinates for the permanent stations is very important, to the point that it was decided to carry it out with different processing environments. Bernese, Gamit (which share the differenced approach) and Gipsy (which uses the undifferenced approach) were compared. The use of three software packages made it essential to identify a common processing strategy able to guarantee that the ancillary data and the physical parameters adopted would not become a source of divergence between the solutions obtained. The analysis of networks of national extent, or of local networks over long time spans, involves processing thousands if not tens of thousands of files; in addition, sometimes because of trivial errors, or in order to carry out scientific tests, it is often necessary to repeat the processing. Considerable resources were therefore invested in developing automatic procedures aimed, on the one hand, at preparing the data archives and, on the other, at analysing the results and comparing them whenever several solutions are available. These procedures were developed by processing the most significant datasets made available to DISTART (Dipartimento di Ingegneria delle Strutture, dei Trasporti, delle Acque, del Rilevamento del Territorio - Università di Bologna). It was thus possible, at the same time, to compute the positions of the permanent stations of some important local and national networks and to compare some of the most important scientific codes that perform this task. As regards the comparison between the different software packages, it was found that: • the solutions obtained with Bernese and Gamit (the two differenced codes) are always in excellent agreement; • the Gipsy solutions (undifferenced method) are almost always slightly more scattered than those of the other packages and sometimes show appreciable numerical differences from the other solutions, especially in the East coordinate; the differences are however contained within a few millimetres and the lines describing the trends are practically parallel to those of the other two codes; • the East bias between Gipsy and the differenced solutions is more evident for certain Antenna/Radome combinations and seems to be linked to the use of absolute calibrations by the different software packages. It must also be considered that Gipsy is considerably faster than the differenced codes and, above all, that with the undifferenced procedure the file of each station for each day is processed independently of the others, with an evident gain in flexibility: if an instrumental error is found at a single station, or if one decides to add or remove a station from the network, it is not necessary to recompute the whole network.
Together with the other networks it was possible to analyse the Rete Dinamica Nazionale (RDN), not only over the 28 days that led to its first definition, but also over four further 28-day intervals, spaced six months apart, so that the overall time span covered is two years. It was thus possible to verify that the RDN can be used to tie any Italian regional network into ITRF05 (International Terrestrial Reference Frame) despite the still limited time span. On the one hand, (purely indicative and unofficial) ITRF velocities of the RDN stations were estimated and, on the other, a test alignment of a regional network into ITRF via the RDN was carried out, verifying that there are no appreciable differences with respect to an alignment into ITRF performed via an adequate number of IGS/EUREF stations (International GNSS Service / European REference Frame, Sub-Commission for Europe of the International Association of Geodesy).
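The kind of automated solution comparison described above can be sketched, under assumptions, as fitting a linear trend to the daily East component obtained from two software packages and reporting the velocity difference and the mean offset (bias). Array names and units (days, millimetres) are illustrative assumptions, not the procedures developed in the thesis.

```python
# Minimal sketch: compare daily East coordinate series from two GNSS codes.
import numpy as np

def trend(days, east_mm):
    """Least-squares rate (mm/day) and intercept of a coordinate series."""
    rate, intercept = np.polyfit(days, east_mm, 1)
    return rate, intercept

def compare_solutions(days, east_soft_a, east_soft_b):
    """Velocity difference and mean bias between two solutions of one station."""
    rate_a, _ = trend(days, east_soft_a)
    rate_b, _ = trend(days, east_soft_b)
    bias = float(np.mean(np.asarray(east_soft_a) - np.asarray(east_soft_b)))
    return {"rate_diff_mm_per_day": rate_a - rate_b, "mean_bias_mm": bias}
```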
Abstract:
The identification of people by measuring traits of individual anatomy or physiology has led to a specific research area called biometric recognition. This thesis focuses on improving fingerprint recognition systems with respect to three important problems: fingerprint enhancement, fingerprint orientation extraction and the automatic evaluation of fingerprint algorithms. An effective extraction of salient fingerprint features depends on the quality of the input fingerprint: if the fingerprint is very noisy, no reliable set of features can be detected. A new fingerprint enhancement method, which is both iterative and contextual, is proposed. This approach detects high-quality regions in fingerprints, selectively applies contextual filtering and iteratively expands, like wildfire, towards low-quality regions. A precise estimation of the orientation field greatly simplifies the estimation of other fingerprint features (singular points, minutiae) and improves the performance of a fingerprint recognition system. Fingerprint orientation extraction is improved along two directions. First, after the introduction of a new taxonomy of fingerprint orientation extraction methods, several variants of baseline methods are implemented and, by highlighting the role of pre- and post-processing, we show how the extraction can be improved. Second, a new hybrid orientation extraction method, which follows an adaptive scheme, significantly improves orientation extraction in noisy fingerprints. Scientific papers typically propose recognition systems that integrate many modules, and therefore an automatic evaluation of fingerprint algorithms is needed to isolate the contributions that determine actual progress in the state of the art. The lack of a publicly available framework for comparing fingerprint orientation extraction algorithms motivates the introduction of a new benchmark area called FOE (including fingerprints and manually marked orientation ground truth), along with fingerprint matching benchmarks, in the FVC-onGoing framework. The success of this framework is demonstrated by relevant statistics: more than 1450 algorithms submitted and two international competitions.
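As background to the orientation-extraction work described above, the following is a minimal sketch of the classical gradient-based (averaged squared-gradient) orientation estimation that many baseline methods build on; it is not the hybrid method of the thesis, and the image array and block size are hypothetical inputs.

```python
# Minimal block-wise orientation field from averaged squared gradients.
import numpy as np

def orientation_field(img, block=16):
    """Return per-block ridge orientation (radians) of a grayscale fingerprint."""
    img = np.asarray(img, dtype=float)
    gy, gx = np.gradient(img)                 # pixel-wise gradients (rows, cols)
    h, w = img.shape
    rows, cols = h // block, w // block
    theta = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            sl = (slice(r * block, (r + 1) * block),
                  slice(c * block, (c + 1) * block))
            gxx = np.sum(gx[sl] ** 2 - gy[sl] ** 2)
            gxy = np.sum(2.0 * gx[sl] * gy[sl])
            # Dominant gradient direction; ridges run orthogonal to it
            theta[r, c] = 0.5 * np.arctan2(gxy, gxx) + np.pi / 2.0
    return theta
```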
Abstract:
The heart is a wonderful but complex organ: it uses electrochemical mechanisms to produce the mechanical energy that pumps blood throughout the body and allows the life of humans and animals. This organ can be subject to several diseases, and sudden cardiac death (SCD) is the most catastrophic manifestation of these diseases, responsible for the death of a large number of people throughout the world. It is estimated that 325000 Americans die of SCD every year. SCD most commonly occurs as a result of reentrant tachyarrhythmias (ventricular tachycardia (VT) and ventricular fibrillation (VF)), and the identification of those patients at higher risk of developing SCD has been a difficult clinical challenge. Nowadays a particular electrocardiogram (ECG) abnormality, "T-wave alternans" (TWA), is considered a precursor of lethal cardiac arrhythmias and sudden death, and a sensitive indicator of risk for SCD. TWA is defined as a beat-to-beat alternation in the shape, amplitude, or timing of the T-wave on the ECG, indicative of the underlying repolarization of cardiac cells [5]. In other words, TWA is the macroscopic effect of subcellular and cellular mechanisms involving ionic kinetics and the consequent depolarization and repolarization of the myocytes. Experimental work has shown that TWA on the ECG is a manifestation of an underlying alternation of long and short action potential durations (APDs), the so-called APD alternans, of cardiac myocytes in the myocardium. Understanding the mechanism of APD alternans is the first step towards preventing its occurrence. In order to investigate these mechanisms it is very important to understand that biological systems are complex systems and that their macroscopic properties arise from the nonlinear interactions among the parts: the whole is greater than the sum of the parts and cannot be understood by studying the single parts alone. In this sense the heart is a complex nonlinear system whose behaviour follows nonlinear dynamics; alternans, too, is a manifestation of a phenomenon typical of nonlinear dynamical systems, called a "period-doubling bifurcation". Over the past decade it has been demonstrated that electrical alternans in cardiac tissue is an important marker for the development of ventricular fibrillation and a significant predictor of mortality. It has been observed that acute exposure to a low concentration of calcium does not decrease the magnitude of alternans, and sustained ventricular fibrillation (VF) is still easily induced under these conditions. However, with prolonged exposure to a low concentration of calcium, alternans disappears, but VF is still inducible. This work is based on this observation and tries to clarify it. The aim of this thesis is to investigate the effect of hypocalcemia on spatial alternans and VF by performing experiments on canine hearts, perfusing them with a solution with physiological ionic concentrations and with a solution with a low calcium concentration (hypocalcemia); in order to investigate the so-called memory effect, the experimental protocol was modified along the way. The experiments were performed with the optical mapping technique, using a voltage-sensitive dye, and a custom-made Java code was used for post-processing. Finding the Nolasco and Dahlen criterion [8] inadequate for the prediction of alternans, and taking into account the experimental results, another criterion, which considers the memory effect, has been implemented.
The implementation of this criterion could be the first step in the creation of an AP-based method for discriminating who is at risk of developing VF. This work is divided into four chapters: the first is a brief presentation of the physiology of the heart; the second is a review of the major theories and discoveries in the study of cardiac dynamics; the third chapter presents an overview of the experimental activity and the optical mapping technique; the fourth chapter contains the presentation of the results and the conclusions.
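A minimal sketch of how APD alternans can be quantified from a measured beat series, by comparing even and odd beats (the signature of a 2:2, period-doubling rhythm). The APD values and threshold are illustrative, and this is not the memory-based criterion developed in the thesis.

```python
# Minimal even/odd-beat alternans detector for a sequence of APDs (ms).
import numpy as np

def alternans_magnitude(apd_ms):
    """Mean even-beat APD minus mean odd-beat APD, in ms."""
    apd = np.asarray(apd_ms, dtype=float)
    return float(np.mean(apd[0::2]) - np.mean(apd[1::2]))

def is_alternating(apd_ms, threshold_ms=2.0):
    """Flag a beat series whose even/odd split exceeds the chosen threshold."""
    return abs(alternans_magnitude(apd_ms)) > threshold_ms

# Toy usage: long-short-long-short pattern of roughly 10 ms amplitude
print(is_alternating([210, 200, 212, 199, 211, 201]))  # True
```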
Abstract:
This thesis proposes a methodology for studying and assessing the seismic behaviour of frame buildings. The method is based on non-linear analyses of equivalent MDOF stick-type models, in accordance with the classification given in the FEMA 440 report. The steps required to apply the method are described in the thesis. The methodology was validated by comparison with time-history analyses carried out on detailed three-dimensional models of the structures studied (detailed model). The engineering parameters considered in the comparison, with a view to using the proposed method within a Displacement-Based Design approach, are the global roof displacement, the interstorey drifts, the storey forces and the total base shear. The results of the analyses carried out on the equivalent stick models show good agreement, excellent in some cases, with those of the analyses carried out on the detailed three-dimensional models. The time-history analyses run on the stick models, however, allow a considerable saving in computational effort and in the time needed to post-process the results.
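The comparison step described above can be sketched, under assumptions, as computing the relative error between the peak responses of the stick model and of the detailed 3D model for each engineering demand parameter; the dictionary keys and response histories below are hypothetical.

```python
# Minimal comparison of peak engineering demand parameters between models.
import numpy as np

def peak_error(stick_hist, detailed_hist):
    """Relative error between peak absolute responses of the two models."""
    p_stick = np.max(np.abs(stick_hist))
    p_det = np.max(np.abs(detailed_hist))
    return (p_stick - p_det) / p_det

def compare_edps(stick, detailed):
    """stick/detailed: dicts mapping an EDP name to its time history."""
    return {name: peak_error(stick[name], detailed[name]) for name in detailed}
```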
Abstract:
The aim of my thesis is to parallelize the Weighted Histogram Analysis Method (WHAM), a popular algorithm used to calculate the free energy of a molecular system in Molecular Dynamics simulations. WHAM works in post-processing in cooperation with another algorithm called Umbrella Sampling. The purpose of Umbrella Sampling is to add a bias to the potential energy of the system in order to force it to sample a specific region of configurational space. N independent simulations are performed in order to sample the whole region of interest. Subsequently, the WHAM algorithm is used to estimate the original, unbiased energy of the system starting from the N atomic trajectories. The parallelization of WHAM has been carried out with CUDA, a language that allows code to run on the GPUs of NVIDIA graphics cards, which have a parallel architecture. The parallel implementation can significantly speed up WHAM execution compared with previous serial CPU implementations; the WHAM CPU code, in contrast, shows critical run times for very high numbers of interactions. The algorithm has been written in C++ and executed on UNIX systems equipped with NVIDIA graphics cards. The results were satisfactory, showing a performance increase when the model was executed on graphics cards with higher compute capability. Nonetheless, the GPUs used to test the algorithm are quite old and not designed for scientific computing. A further performance increase is likely if the algorithm were executed on GPU clusters with a high level of computational efficiency. The thesis is organized as follows: I first describe the mathematical formulation of the Umbrella Sampling and WHAM algorithms, with their applications to the study of ionic channels and to Molecular Docking (Chapter 1); then I present the CUDA architectures used to implement the model (Chapter 2); finally, the results obtained on model systems are presented (Chapter 3).
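For reference, here is a minimal serial sketch of the WHAM self-consistent iteration that a GPU version would parallelize: histograms from K umbrella windows, together with the bias potentials evaluated at the bin centres, are combined into an unbiased probability while the window free energies are updated until convergence. The array shapes, kT and tolerance are illustrative assumptions, not the thesis' C++/CUDA code.

```python
# Minimal serial WHAM iteration over K umbrella windows and B histogram bins.
import numpy as np

def wham(counts, bias, n_samples, kT=1.0, tol=1e-7, max_iter=10000):
    """counts: (K, B) histograms; bias: (K, B) U_k at bin centres; n_samples: (K,)."""
    K, B = counts.shape
    f = np.zeros(K)                              # window free energies f_k
    numer = counts.sum(axis=0)                   # total counts per bin
    boltz = np.exp(-bias / kT)                   # exp(-U_k(x_b) / kT)
    for _ in range(max_iter):
        # P(b) = sum_k n_k(b) / sum_k N_k exp((f_k - U_k(b)) / kT)
        denom = (n_samples[:, None] * np.exp(f / kT)[:, None] * boltz).sum(axis=0)
        p = np.where(denom > 0, numer / denom, 0.0)
        # exp(-f_k / kT) = sum_b P(b) exp(-U_k(b) / kT)
        f_new = -kT * np.log((boltz * p[None, :]).sum(axis=1))
        f_new -= f_new[0]                        # fix the arbitrary offset
        if np.max(np.abs(f_new - f)) < tol:
            f = f_new
            break
        f = f_new
    free_energy = -kT * np.log(np.where(p > 0, p, np.nan))
    return p, f, free_energy
```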
Abstract:
Near-surface geothermal energy makes an important contribution to climate and environmental protection in the field of renewable heat use. To optimise the technical use of near-surface geothermal energy, knowledge of the properties of the geological subsurface is decisive. This dissertation deals with the determination of various subsurface parameters at a borehole heat exchanger field. Investigations to determine the thermal conductivity, such as the enhanced Thermal Response Test (eTRT), were carried out, as well as subsurface temperature monitoring during the first year of operation. The monitoring showed no mutual interference between individual boreholes. A comparison between the planned and the actual heat demand of the first year of operation revealed a deviation of about 35%. This shows that the operating parameters of the installation can significantly affect its efficiency. The eTRT carried out in practice at the example site was checked for reproducibility by means of numerical modelling. For purely conductive heat transport in the subsurface, the maximum deviation of the measurement from the expected value was only about 6%, even under unfavourable conditions. The detection of layers with groundwater flow is also well reproduced in the models. The strong dependence of the test on a constant heat input remains problematic. Only the determination of the thermal conductivity from the relaxation behaviour of the subsurface yields sufficiently accurate results when the heat input fluctuates. The mathematical post-processing of faulty temperature curves offers a starting point for further research.
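The thermal-conductivity evaluation mentioned above can be illustrated with the standard infinite line-source approximation, in which the effective conductivity follows from the slope of the mean fluid temperature plotted against the logarithm of time. Heating power, borehole depth and the measurement arrays below are illustrative assumptions, not the eTRT evaluation used in the dissertation.

```python
# Minimal line-source TRT evaluation: lambda = Q / (4*pi*H*k),
# where k is the slope of the temperature versus ln(time) regression.
import numpy as np

def thermal_conductivity(t_seconds, temp_c, power_w, depth_m):
    """Effective ground thermal conductivity (W/mK) from a heating-phase TRT."""
    slope, _ = np.polyfit(np.log(np.asarray(t_seconds, dtype=float)), temp_c, 1)
    return power_w / (4.0 * np.pi * depth_m * slope)
```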