975 results for post-processing
Abstract:
Underground scenarios are among the most challenging environments for accurate and precise 3D mapping: hostile conditions such as the absence of Global Positioning System coverage, extreme lighting variations and geometrically smooth surfaces are to be expected. So far, state-of-the-art methods in underground modelling remain restricted to environments in which pronounced geometric features are abundant. This limitation is a consequence of the scan-matching algorithms used to solve the localization and registration problems. This paper contributes to extending modelling capabilities to structures characterized by uniform geometry and smooth surfaces, such as road and train tunnels. To achieve that, we combine state-of-the-art techniques from mobile robotics and propose a method for 6-DOF platform positioning in such scenarios, which is later used for environment modelling. A visual monocular Simultaneous Localization and Mapping (MonoSLAM) approach based on the Extended Kalman Filter (EKF), complemented by the introduction of inertial measurements in the prediction step, allows our system to localize itself over long distances using exclusively sensors carried on board a mobile platform. By feeding the EKF with inertial data we were able to overcome the major problem of MonoSLAM implementations, known as scale-factor ambiguity. Despite extreme lighting variations, reliable visual features were extracted with the SIFT algorithm and inserted directly into the EKF mechanism according to the Inverse Depth Parametrization. Wrong frame-to-frame feature matches were rejected through 1-Point RANSAC (Random Sample Consensus). The developed method was tested on a dataset acquired inside a road tunnel, and the navigation results were compared with a ground truth obtained by post-processing measurements from a high-grade Inertial Navigation System and L1/L2 RTK-GPS acquired outside the tunnel.
Results from the localization strategy are presented and analyzed.
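The key idea in the abstract above, injecting inertial measurements into the EKF prediction step to fix the metric scale, can be illustrated with a minimal sketch. This is my illustration, not the authors' code: a 1-D position/velocity state propagated with an accelerometer input, whose metric units (m/s²) are what anchor the scale.

```python
# Hypothetical sketch of an EKF prediction step driven by an inertial input.
# State x = [position, velocity]; the accelerometer reading enters through
# the control matrix B, so the predicted state carries metric scale.
import numpy as np

def ekf_predict(x, P, accel, dt, accel_var=0.01):
    """Propagate the state and covariance one step using an accel input."""
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])          # constant-velocity transition
    B = np.array([0.5 * dt**2, dt])     # control matrix for acceleration
    Q = accel_var * np.outer(B, B)      # process noise from accel noise
    x_pred = F @ x + B * accel          # metric scale enters here (m/s^2)
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

x = np.array([0.0, 1.0])                # start at origin, moving at 1 m/s
P = np.eye(2) * 0.1
x, P = ekf_predict(x, P, accel=0.5, dt=0.1)
```

In a full MonoSLAM filter the state would also hold camera orientation and inverse-depth feature parameters; the prediction mechanics are the same.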
Abstract:
The main objective of this thesis is the comparison of two CFD (Computational Fluid Dynamics) software packages for the simulation of atmospheric flows, with a view to their application to the study and characterization of wind farms. The packages in question are OpenFOAM (Open Field Operation and Manipulation), a generic open-source freeware, and Windie, a tool specialized in the study of wind farms. For this study we used the topography surrounding a wind farm located in Greece, for which results from a previously performed measurement campaign were available. To this end, procedures and tools complementary to OpenFOAM, developed by da Silva Azevedo (2013) and suited to pre-processing, data extraction and post-processing, were used in the simulation of the practical case. The computation conditions used in this work were limited to those used in flows previously simulated with the Windie software: turbulent, steady, incompressible and non-stratified flow, using the atmospheric RaNS (Reynolds-averaged Navier-Stokes) k-ε turbulence model. The results of both simulations, OpenFOAM and Windie, were compared with results from a measurement campaign, through the speed-up and turbulence intensity values at the anemometer positions.
Abstract:
Dissertation for the integrated master's degree in Biomedical Engineering (specialization in Medical Electronics)
Abstract:
Synchronization of data coming from different sources is of high importance in biomechanics to ensure reliable analyses. This synchronization can either be performed through hardware to obtain perfect matching of data, or post-processed digitally. Hardware synchronization can be achieved in many situations using trigger cables connecting the different devices; however, this is often impractical, and sometimes impossible, in outdoor situations. The aim of this paper is to describe a wireless system for outdoor use, allowing synchronization of different types of devices, potentially embedded and moving. In this system, each synchronization device is composed of: (i) a GPS receiver (used as time reference), (ii) a radio transmitter, and (iii) a microcontroller. These components are used to provide synchronized trigger signals at the desired frequency to the connected measurement device. The synchronization devices communicate wirelessly, are very lightweight and battery-operated, and are thus very easy to set up. They are adaptable to any measurement device equipped with either a trigger input or a recording channel. The accuracy of the system was validated using an oscilloscope. The mean synchronization error was found to be 0.39 μs, and pulses are generated with an accuracy of <2 μs. The system provides synchronization accuracy about two orders of magnitude better than commonly used post-processing methods, and does not suffer from any drift in trigger generation.
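The principle behind the system described above is that independent devices derive their trigger schedules from a shared GPS time reference, so no cable or message exchange is needed. The sketch below is an illustration of that principle only (function names and the whole-second anchoring rule are my assumptions, not the paper's firmware):

```python
# Illustrative sketch: two devices that never communicate produce the same
# trigger schedule because both anchor it to GPS time.
def trigger_times(gps_start_s, duration_s, freq_hz):
    """Return trigger instants aligned to whole GPS seconds.

    Anchoring every schedule to the next whole GPS second means any two
    receivers agree to within the GPS timing error, with no shared cable.
    """
    period = 1.0 / freq_hz
    n = int(duration_s * freq_hz)
    epoch = float(int(gps_start_s) + 1)   # next whole GPS second
    return [epoch + i * period for i in range(n)]

# Two devices powered on at slightly different GPS times:
a = trigger_times(gps_start_s=1234.56, duration_s=1.0, freq_hz=100)
b = trigger_times(gps_start_s=1234.98, duration_s=1.0, freq_hz=100)
assert a == b   # identical schedules despite different start times
```

On real hardware the microcontroller would emit these pulses against the GPS pulse-per-second signal rather than compute a list, but the alignment logic is the same.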
Abstract:
Introduction: A standardized three-dimensional ultrasonographic (3DUS) protocol is described that allows fetal face reconstruction. Ability to identify cleft lip with 3DUS using this protocol was assessed by operators with minimal 3DUS experience. Material and Methods: 260 stored volumes of fetal face were analyzed using a standardized protocol by operators with different levels of competence in 3DUS. The outcomes studied were: (1) the performance of post-processing 3D face volumes for the detection of facial clefts; (2) the ability of a resident with minimal 3DUS experience to reconstruct the acquired facial volumes, and (3) the time needed to reconstruct each plane to allow proper diagnosis of a cleft. Results: The three orthogonal planes of the fetal face (axial, sagittal and coronal) were adequately reconstructed with similar performance when acquired by a maternal-fetal medicine specialist or by residents with minimal experience (72 vs. 76%, p = 0.629). The learning curve for manipulation of 3DUS volumes of the fetal face corresponds to 30 cases and is independent of the operator's level of experience. Discussion: The learning curve for the standardized protocol we describe is short, even for inexperienced sonographers. This technique might decrease the length of anatomy ultrasounds and improve the ability to visualize fetal face anomalies.
Abstract:
Recent advances in CT technology have significantly improved the clinical utility of cardiac CT. Major efforts have been made to optimize image quality, standardize protocols and limit radiation exposure. Rapid progress in post-processing tools, dedicated not only to coronary artery assessment but also to the cardiac cavities, valves and veins, has extended the applications of cardiac CT. This potential can, however, only be used optimally by considering both the current appropriate indications for use and the current technical limitations. Coronary artery disease and the related ischemic cardiomyopathy remain the major application of cardiac CT and at the same time the most complex one. Integration of specific knowledge is mandatory for optimal use in this area, for asymptomatic as well as symptomatic patients, with specific regard to patients with acute chest pain. This review aims to propose a practical approach to implementing the appropriate indications in routine practice. Emerging indications and future directions are also discussed. Adequate preparation of the patient, training of physicians, and multidisciplinary interaction between the actors are the keys to successful implementation of cardiac CT in daily practice.
Abstract:
Background: Event-related potentials (ERPs) may be used as a highly sensitive way of detecting subtle degrees of cognitive dysfunction. On the other hand, impairment of cognitive skills is increasingly recognised as a hallmark of patients suffering from multiple sclerosis (MS). We sought to determine the psychophysiological pattern of information processing among MS patients with the relapsing-remitting form of the disease and low physical disability, considered as two subtypes: 'typical relapsing-remitting' (RRMS) and 'benign MS' (BMS). Furthermore, we subjected our data to a cluster analysis to determine whether MS patients and healthy controls could be differentiated in terms of their psychophysiological profile. Methods: We investigated MS patients with RRMS and BMS subtypes using ERPs acquired in the context of a Posner visual-spatial cueing paradigm. Specifically, our study aimed to assess ERP brain activity in response preparation (contingent negative variation, CNV) and stimulus processing in MS patients. Latency and amplitude of different ERP components (P1, eN1, N1, P2, N2, P3 and late negativity, LN) as well as behavioural responses (reaction time, RT; correct responses, CRs; and number of errors) were analyzed and then subjected to cluster analysis. Results: Both MS groups showed delayed behavioural responses and enhanced latency for long-latency ERP components (P2, N2, P3) as well as relatively preserved ERP amplitude, but BMS patients showed more pronounced performance deficits (fewer CRs and longer RTs) and abnormalities related to the latency (N1, P3) and amplitude of ERPs (eCNV, eN1, LN). However, RRMS patients also demonstrated abnormally high amplitudes related to the preparation performance period of the CNV (cCNV) and the post-processing phase (LN).
Cluster analyses revealed that RRMS patients appear to make up a relatively homogeneous group with moderate deficits mainly related to ERP latencies, whereas BMS patients appear to make up a rather more heterogeneous group with more severe information processing and attentional deficits. Conclusions: Our findings are suggestive of a slowing of information processing for MS patients that may be a consequence of demyelination and axonal degeneration, which also seems to occur in MS patients that show little or no progression in the physical severity of the disease over time.
Abstract:
The application of innovative technologies for quality analysis (proteomics) and for the processing of meat products (active packaging and high hydrostatic pressure) with the aim of optimizing the quality and safety of ready-to-eat meat products was evaluated. The results obtained with the proteomic analysis allowed the detection of peptides/proteins that are candidate protein markers of loin and ham quality. The detection of these markers in the raw material (fresh loin and ham) would help to predict the final quality of the processed meat products (cooked loin and dry-cured ham) and would provide a tool for the quality control of pork. However, validation of the role of these proteins in the final quality of the meat products is needed before they can be considered protein markers. In addition, the possibility of improving the food safety of no-salt-added fermented sausage (llonganissa) produced with the QDS® process through the use of innovative technologies (active packaging and high hydrostatic pressure) was studied. The no-salt-added sausage did not allow the growth of L. monocytogenes. However, the pathogen would be able to survive throughout the product's shelf life in case of recontamination. Antimicrobial packaging including nisin as a natural antimicrobial can be considered an effective method to improve the safety of the sausage studied. L. monocytogenes survived the high hydrostatic pressure treatment (600 MPa, 5 min, 12°C) owing to the product's low water activity and the presence of lactate in its formulation. For this reason, HHP would not be considered an appropriate treatment to reduce the presence of L. monocytogenes in this type of product.
Abstract:
We propose an edge detector based on the selection of well-contrasted pieces of level lines, following the proposal of Desolneux-Moisan-Morel (DMM) [1]. The DMM edge detector has the problem of over-representation, that is, every edge is detected several times in slightly different positions. In this paper we propose two modifications of the original DMM edge detector in order to solve this problem. The first modification is a post-processing of the output using a general method to select the best representative of a bundle of curves. The second modification is the use of Canny's edge detector instead of the norm of the gradient to build the statistics. The two modifications are independent and can be applied separately. Elementary reasoning and some experiments show that the best results are obtained when both modifications are applied together.
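The abstract does not spell out its "general method to select the best representative of a bundle of curves", so the following is only a plausible stand-in, not the paper's method: a medoid choice that keeps the curve with the smallest total distance to the others, collapsing near-duplicate detections into one.

```python
# Hedged sketch: collapse a bundle of near-duplicate curves to its medoid.
import numpy as np

def curve_distance(c1, c2):
    """Mean pointwise distance between two equally sampled curves."""
    return float(np.mean(np.linalg.norm(np.asarray(c1) - np.asarray(c2),
                                        axis=1)))

def best_representative(bundle):
    """Index of the medoid: the curve minimizing total distance to the rest."""
    totals = [sum(curve_distance(c, other) for other in bundle)
              for c in bundle]
    return int(np.argmin(totals))

# Three nearly identical level lines; the middle one is the medoid.
base = np.stack([np.linspace(0.0, 1.0, 5), np.zeros(5)], axis=1)
bundle = [base - 0.1, base, base + 0.1]
assert best_representative(bundle) == 1
```

A real implementation would first group detected level lines into bundles (e.g. by spatial proximity) before selecting one representative per bundle.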
Abstract:
Three-dimensional analysis of the entire sequence in ski jumping is recommended when studying the kinematics or evaluating performance. Camera-based systems which allow three-dimensional kinematics measurement are complex to set up and require extensive post-processing, usually limiting ski jumping analyses to small numbers of jumps. In this study, a simple method using a wearable inertial sensor-based system is described to measure the orientation of the lower-body segments (sacrum, thighs, shanks) and skis during the entire jump sequence. This new method combines the fusion of inertial signals and biomechanical constraints of ski jumping. Its performance was evaluated in terms of validity and sensitivity to different performances based on 22 athletes monitored during daily training. The validity of the method was assessed by comparing the inclination of the ski and the slope at the landing point, and reported an error of -0.2±4.8°. The validity was also assessed by comparison of characteristic angles obtained with the proposed system and reference values in the literature; the differences were smaller than 6° for 75% of the angles and smaller than 15° for 90% of the angles. The sensitivity to different performances was evaluated by comparing the angles between two groups of athletes with different jump lengths and by assessing the association between angles and jump lengths. The differences in technique observed between athletes and the associations with jump length agreed with the literature. In conclusion, these results suggest that this system is a promising tool for a generalization of three-dimensional kinematics analysis in ski jumping.
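The "fusion of inertial signals" mentioned above is typically built from a combination of gyroscope integration and accelerometer inclination. The sketch below is my illustration of that generic building block (a complementary filter), not the published algorithm, which additionally exploits ski-jumping-specific biomechanical constraints.

```python
# Minimal sketch: complementary filter fusing gyro rate and accel inclination.
def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """Fuse gyro rate (rad/s) and accelerometer angle (rad) samples.

    The gyro is trusted at high frequency (low drift over short times),
    the accelerometer at low frequency (drift-free gravity reference).
    """
    angle = accel_angles[0]             # initialize from gravity
    estimates = []
    for rate, acc_angle in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1 - alpha) * acc_angle
        estimates.append(angle)
    return estimates

# Static test case: constant true inclination of 0.5 rad.
est = complementary_filter([0.0] * 50, [0.5] * 50, dt=0.01)
assert abs(est[-1] - 0.5) < 1e-6
```

During flight phases the accelerometer no longer measures gravity cleanly, which is exactly where segment-specific constraints become necessary.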
Abstract:
Two-dimensional (2D) breath-hold coronary magnetic resonance angiography (MRA) has been shown to be a fast and reliable method to depict the proximal coronary arteries. Recent developments, however, allow for free-breathing navigator-gated and navigator-corrected three-dimensional (3D) coronary MRA. These 3D approaches have potential for improved signal-to-noise ratio (SNR) and allow for the acquisition of adjacent thin slices without the misregistration problems known from 2D approaches. Still, a major impediment of a 3D acquisition is the increased scan time. The purpose of this study was the implementation of a free-breathing navigator-gated and -corrected ultra-fast 3D coronary MRA technique, which allows for scan times of less than 5 minutes. Twelve healthy adult subjects were examined in the supine position using a navigator-gated and -corrected, ECG-triggered, ultra-fast 3D interleaved gradient echo planar imaging sequence (TFE-EPI). A 3D slab, consisting of 20 slices with a reconstructed slice thickness of 1.5 mm, was acquired with free breathing. The diastolic TFE-EPI acquisition block was preceded by a T2prep pre-pulse, a diaphragmatic navigator pulse, and a fat suppression pre-pulse. With a TR of 19 ms and an effective TE of 5.4 ms, the duration of the data acquisition window was 38 ms. The in-plane spatial resolution was 1.0-1.3 mm × 1.5-1.9 mm. In all cases, the entire left main (LM) and extensive portions of the left anterior descending (LAD) and right coronary artery (RCA) could be visualized, with an average scan time for the entire 3D-volume data set of 2:57 +/- 0:51 minutes. The average contiguous vessel length visualized was 53 +/- 11 mm (range: 42 to 75 mm) for the LAD and 84 +/- 14 mm (range: 62 to 112 mm) for the RCA. Contrast-to-noise between coronary blood and myocardium was 5.0 +/- 2.3 for the LM/LAD and 8.0 +/- 2.9 for the RCA, resulting in an excellent suppression of myocardium.
We present a new approach for free-breathing 3D coronary MRA, which allows for scan times superior to corresponding 2D coronary MRA approaches, and which takes advantage of the enhanced SNR of 3D acquisitions and the post-processing benefits of thin adjacent slices. The robust image quality and the short average scanning time suggest that this approach may be useful for screening the major coronary arteries or identification of anomalous coronary arteries. J. Magn. Reson. Imaging 1999;10:821-825.
Abstract:
The purpose of this project was to investigate the potential for collecting and using data from mobile terrestrial laser scanning (MTLS) technology that would reduce the need for traditional survey methods in the development of highway improvement projects at the Iowa Department of Transportation (Iowa DOT). The primary interest in investigating mobile scanning technology is to minimize the exposure of field surveyors to dangerous high-volume traffic situations. Issues investigated were cost, timeframe, accuracy, contracting specifications, data capture extents, data extraction capabilities and data storage issues associated with mobile scanning. The project area selected for evaluation was the I-35/IA 92 interchange in Warren County, Iowa. This project covers approximately one mile of I-35, one mile of IA 92, 4 interchange ramps, and the bridges within these limits. Delivered LAS and image files for this project totaled almost 31 GB. There is nearly a 6-fold increase in the size of the scan data after post-processing. Camera data, when enabled, produced approximately 900 MB of imagery data per mile using a 2-camera, 5-megapixel system. A comparison was done between 1823 points on the pavement that were surveyed by Iowa DOT staff using a total station and the same points generated through the MTLS process. The data acquired through the MTLS and data processing met the Iowa DOT specifications for engineering survey. A list of benefits and challenges is included in the detailed report. With the success of this project, it is anticipated that additional projects will be scanned for the Iowa DOT for use in the development of highway improvement projects.
Abstract:
Introduction: The field of connectomic research is growing rapidly, resulting from methodological advances in structural neuroimaging on many spatial scales. In particular, progress in Diffusion MRI data acquisition and processing has made macroscopic structural connectivity maps available in vivo through Connectome Mapping Pipelines (Hagmann et al, 2008), yielding so-called Connectomes (Hagmann 2005, Sporns et al, 2005). They exhibit both spatial and topological information that constrain functional imaging studies and are relevant in their interpretation. The need has grown for a special-purpose software tool supporting investigations of such connectome data by both clinical researchers and neuroscientists. Methods: We developed the ConnectomeViewer, a powerful, extensible software tool for visualization and analysis in connectomic research. It uses the newly defined container-like Connectome File Format, specifying networks (GraphML), surfaces (Gifti), volumes (Nifti), track data (TrackVis) and metadata. The use of Python as programming language allows it to be cross-platform and to have access to a multitude of scientific libraries. Results: Using a flexible plugin architecture, it is possible to enhance functionality for specific purposes easily. The following features are already implemented: * Ready use of libraries, e.g. for complex network analysis (NetworkX) and data plotting (Matplotlib). More brain connectivity measures will be implemented in a future release (Rubinov et al, 2009). * 3D view of networks with node positioning based on the corresponding ROI surface patch. Other layouts are possible. * Picking functionality to select nodes, select edges, get more node information (ConnectomeWiki), and toggle surface representations. * Interactive thresholding and modality selection of edge properties using filters. * Arbitrary metadata can be stored for networks, thereby allowing e.g. group-based analysis or meta-analysis. * Python shell for scripting.
Application data is exposed and can be modified or used for further post-processing. * Visualization pipelines using filters and modules can be composed with Mayavi (Ramachandran et al, 2008). * Interface to TrackVis to visualize track data; selected nodes are converted to ROIs for fiber filtering. The Connectome Mapping Pipeline (Hagmann et al, 2008) processed 20 healthy subjects into an average Connectome dataset. The figures show the ConnectomeViewer user interface using this dataset. Connections are shown that occur in all 20 subjects. The dataset is freely available from the homepage (connectomeviewer.org). Conclusions: The ConnectomeViewer is a cross-platform, open-source software tool that provides extensive visualization and analysis capabilities for connectomic research. It has a modular architecture, integrates the relevant datatypes and is completely scriptable. Visit www.connectomics.org to get involved as a user or developer.
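The abstract names NetworkX for complex network analysis and describes interactive thresholding of edge properties. As a flavour of the kind of scripting its Python shell enables, here is a small hedged sketch (the graph, weights and function name are made up, and this is not ConnectomeViewer's own API):

```python
# Sketch: threshold connectome edges by weight with NetworkX.
import networkx as nx

def threshold_graph(G, min_weight):
    """Return a copy of G keeping only edges with weight >= min_weight."""
    H = nx.Graph()
    H.add_nodes_from(G.nodes(data=True))          # keep all nodes
    H.add_edges_from((u, v, d) for u, v, d in G.edges(data=True)
                     if d.get("weight", 0.0) >= min_weight)
    return H

G = nx.Graph()
G.add_edge("A", "B", weight=0.9)   # strong connection, survives
G.add_edge("B", "C", weight=0.2)   # weak connection, filtered out
H = threshold_graph(G, min_weight=0.5)
assert ("A", "B") in H.edges()
```

Graph-theoretic measures (degree, clustering, shortest paths) can then be computed on `H` with standard NetworkX calls.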
Abstract:
The Iowa DOT has been using the AASHTO Present Serviceability Index (PSI) rating procedure since 1968 to rate the condition of pavement sections. A ride factor and a cracking and patching factor make up the PSI value. Crack and patch surveys have been done by sending crews out to measure and record the distress. Advances in video equipment and computers make it practical to videotape roads and do the crack and patch measurements in the office. The objective of the study was to determine the feasibility of converting the crack and patch survey operation to a video recording system with manual post-processing. The summary and conclusions are as follows: Video crack and patch surveying is a feasible alternative to the current crack and patch procedure. The cost per mile should be about 25 percent less than the current procedure. More importantly, the risk of accidents is reduced by getting the people and vehicles off the roadway and shoulder. Another benefit is the elimination of the negative public perceptions of the survey crew on the shoulder.