994 results for volume algorithm
Abstract:
Lifelong surveillance is not cost-effective after endovascular aneurysm repair (EVAR), but it is required to detect aortic complications that are fatal if untreated (type 1/3 endoleak, sac expansion, device migration). Aneurysm morphology determines the probability of aortic complications and therefore the need for surveillance, but existing analyses have proven incapable of identifying patients at sufficiently low risk to justify abandoning surveillance. This study aimed to improve the prediction of aortic complications through the application of machine-learning techniques. Patients undergoing EVAR at two centres were studied from 2004 to 2010. Aneurysm morphology had previously been studied to derive the SGVI Score for predicting aortic complications. Bayesian Neural Networks were designed using the same data to dichotomise patients into groups at low or high risk of aortic complications. Network training was performed only on patients treated at centre 1. External validation was performed by assessing network performance, independently of network training, on patients treated at centre 2. Discrimination was assessed by Kaplan-Meier analysis comparing aortic complications in predicted low-risk versus predicted high-risk patients. 761 patients aged 75 ± 7 years underwent EVAR at the two centres; mean follow-up was 36 ± 20 months. Neural networks were created incorporating neck angulation/length/diameter/volume; AAA diameter/area/volume/length/tortuosity; and common iliac tortuosity/diameter. A 19-feature network predicted aortic complications with excellent discrimination and external validation (5-year freedom from aortic complications in predicted low-risk vs. predicted high-risk patients: 97.9% vs. 63%; p < 0.0001). A Bayesian Neural Network algorithm can identify patients in whom it may be safe to abandon surveillance after EVAR. This proposal requires prospective study.
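As a rough illustration of the dichotomisation step this abstract describes, the sketch below trains a small neural classifier on morphological features and splits patients into predicted low- and high-risk groups. It is a minimal stand-in: the study used Bayesian Neural Networks, whereas this uses scikit-learn's MLPClassifier, and the helper names and 0.5 risk threshold are assumptions.

```python
# Minimal sketch, not the study's model: a plain MLP stands in for the
# Bayesian Neural Network; helper names and threshold are assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_risk_model(X_centre1, y_centre1):
    """Train only on centre-1 patients; centre 2 is held out for external
    validation, mirroring the study design. y = 1 marks an aortic complication."""
    model = make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
    )
    model.fit(X_centre1, y_centre1)
    return model

def dichotomise(model, X, threshold=0.5):
    """Split patients into predicted low-/high-risk groups, which the study
    then compared by Kaplan-Meier analysis of freedom from complications."""
    p_complication = model.predict_proba(X)[:, 1]
    return np.where(p_complication >= threshold, "high-risk", "low-risk")
```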
Abstract:
Traffic incidents are non-recurring events that can cause a temporary reduction in roadway capacity. They have been recognized as a major contributor to traffic congestion on our nation's highway systems. To alleviate their impacts on capacity, automatic incident detection (AID) has been applied as an incident management strategy to reduce the total incident duration. AID relies on an algorithm to identify the occurrence of incidents by analyzing real-time traffic data collected from surveillance detectors. Significant research has been performed to develop AID algorithms for incident detection on freeways; however, similar research on major arterial streets remains largely at the initial stage of development and testing. This dissertation research aims to identify design strategies for the deployment of an Artificial Neural Network (ANN)-based AID algorithm for major arterial streets. A section of the US-1 corridor in Miami-Dade County, Florida was coded in the CORSIM microscopic simulation model to generate data for both model calibration and validation. To better capture the relationship between the traffic data and the corresponding incident status, the Discrete Wavelet Transform (DWT) and data normalization were applied to the simulated data. Multiple ANN models were then developed for different detector configurations, historical data usage, and selections of traffic flow parameters. To assess the performance of different design alternatives, the model outputs were compared based on both detection rate (DR) and false alarm rate (FAR). The results show that the best models achieved a DR between 90% and 95%, a mean time to detect (MTTD) of 55 to 85 seconds, and a FAR below 4%. The results also show that a detector configuration including only the mid-block and upstream detectors performs almost as well as one that also includes a downstream detector. In addition, the DWT was found to improve model performance, and the use of historical data from previous time cycles improved the detection rate. Speed was found to have the most significant impact on the detection rate, while volume contributed the least. The results from this research provide useful insights into the design of AID for arterial street applications.
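Since the DWT-plus-normalisation preprocessing before the ANN is the core design idea here, a minimal sketch may help. It assumes the PyWavelets library, a db4 wavelet at level 2, and invented example data; the dissertation's actual wavelet family, decomposition level, and feature layout may differ.

```python
# Minimal sketch of DWT + normalisation preprocessing for an AID input vector.
# Wavelet family, level, and the example data are assumptions.
import numpy as np
import pywt

def dwt_features(series, wavelet="db4", level=2):
    """Keep the coarse approximation coefficients of a detector time series,
    smoothing out short-lived detector noise before classification."""
    return pywt.wavedec(series, wavelet, level=level)[0]

def normalise(x):
    """Scale a feature vector to [0, 1] so no traffic parameter dominates."""
    x = np.asarray(x, dtype=float)
    span = x.max() - x.min()
    return (x - x.min()) / span if span > 0 else np.zeros_like(x)

# Example: a simulated upstream-detector speed trace with a sudden drop, as
# an incident would produce; the result would feed the ANN classifier.
rng = np.random.default_rng(0)
speed = np.concatenate([rng.normal(60, 2, 48), rng.normal(35, 2, 16)])
x_input = normalise(dwt_features(speed))
```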
Abstract:
In oil and gas pipeline operations, gas, oil, and water phases move simultaneously through pipe systems. The mixture cools as it flows through subsea pipelines and forms a hydrate formation region, where hydrate crystals start to grow and may eventually block the pipeline. The potential for pipe blockage due to hydrate formation is one of the most significant flow-assurance problems in deep-water subsea operations. Due to the catastrophic safety and economic implications of hydrate blockage, it is important to accurately predict the simultaneous flow of gas, water, and hydrate particles in flowlines. Currently, few studies account for the simultaneous effects of hydrate growth and heat transfer on flow characteristics within pipelines. This thesis presents new and more accurate predictive models of multiphase flows in subsea pipelines to describe the simultaneous flow of gas, water, and hydrate particles through a pipeline. A growth rate model for the hydrate phase is presented and then used in the development of a new three-phase model. The conservation equations of mass, momentum, and energy are formulated to describe the physical phenomena of momentum and heat transfer between the fluid and the wall. The governing equations are solved using an analytical-numerical approach, with a Newton-Raphson method for the nonlinear equations. An algorithm was developed in MATLAB to solve the equations from the inlet to the outlet of the pipeline. The developed models are validated against a single-phase model with mixture properties, and the results of comparative studies show close agreement. The new model predicts the volume fraction and velocity of each phase, as well as the mixture pressure and temperature profiles along the length of the pipeline. The results from the hydrate growth model reveal the growth rate and the location where the initial hydrates start to form. Finally, to assess the impact of key parameters on the flow characteristics, parametric studies were conducted. The results show the effect of variations in pipe diameter, mass flow rate, inlet pressure, and inlet temperature on the flow characteristics and hydrate growth rates.
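The solution strategy described here, a Newton-Raphson iteration on the nonlinear conservation equations at each axial step while marching from inlet to outlet, can be sketched generically as below. The residual function is a placeholder for the thesis's three-phase mass/momentum/energy balances, which are not given in the abstract.

```python
# Minimal sketch of the Newton-Raphson driver; residual() is a placeholder
# for the thesis's three-phase conservation equations.
import numpy as np

def newton_raphson(residual, x0, tol=1e-8, max_iter=50, eps=1e-7):
    """Solve residual(x) = 0 for the state vector x (phase velocities and
    volume fractions, mixture pressure and temperature at one axial step)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            return x
        J = np.empty((r.size, x.size))
        for j in range(x.size):               # finite-difference Jacobian
            dx = np.zeros_like(x)
            dx[j] = eps
            J[:, j] = (residual(x + dx) - r) / eps
        x = x - np.linalg.solve(J, r)
    raise RuntimeError("Newton-Raphson did not converge")

# Marching from inlet to outlet: the converged state at one axial step
# serves as the initial guess x0 for the next step.
```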
Abstract:
A method of accurately controlling the position of a mobile robot using an external Large Volume Metrology (LVM) instrument is presented in this paper. By utilizing an LVM instrument such as a laser tracker in mobile robot navigation, many of the most difficult problems in mobile robot navigation can be simplified or avoided. Using the real-time position information from the laser tracker, a very simple navigation algorithm, and a low-cost robot, 5 mm repeatability was achieved over a volume of 30 m radius. A surface digitization scan of a wind turbine blade section was also demonstrated, illustrating possible applications of the method in manufacturing processes.
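The paper does not spell out its "very simple navigation algorithm", but a plausible minimal form is a proportional controller driven by the tracker's measured pose rather than wheel odometry. The sketch below is an assumption along those lines; the gains and function interface are illustrative.

```python
# Hypothetical sketch of feedback navigation from an external position
# source; the paper's actual controller is not specified. Gains are assumed.
import math

K_LIN, K_ANG = 0.5, 1.5   # proportional gains (assumed)

def navigation_step(pose, target):
    """pose = (x, y, heading) measured by the laser tracker; target = (x, y).
    Returns (linear_velocity, angular_velocity) drive commands."""
    x, y, theta = pose
    dx, dy = target[0] - x, target[1] - y
    heading_error = math.atan2(dy, dx) - theta
    # Wrap the heading error into [-pi, pi] so the robot turns the short way.
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))
    return K_LIN * math.hypot(dx, dy), K_ANG * heading_error
```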
Abstract:
Purpose: Computed Tomography (CT) is one of the standard diagnostic imaging modalities for the evaluation of a patient's medical condition. In comparison to other imaging modalities such as Magnetic Resonance Imaging (MRI), CT is a fast-acquisition imaging device with higher spatial resolution and a higher contrast-to-noise ratio (CNR) for bony structures. CT images are presented through a gray scale of independent values in Hounsfield units (HU), where higher HU values represent denser materials. High-density materials, such as metal, tend to erroneously increase the HU values around them due to reconstruction software limitations. This problem of increased HU values due to the presence of metal is referred to as metal artefacts. Hip prostheses, dental fillings, aneurysm clips, and spinal clips are a few examples of metal objects that are of clinical relevance. These implants create artefacts such as beam hardening and photon starvation that distort CT images and degrade image quality. This is of great significance because the distortions may cause improper evaluation of images and inaccurate dose calculation in the treatment planning system. Different algorithms are being developed to reduce these artefacts and improve image quality for both diagnostic and therapeutic purposes. However, very limited information is available about the effect of artefact correction on dose calculation accuracy. This research study evaluates the dosimetric effect of metal artefact reduction algorithms on severe artefacts in CT images, using the Gemstone Spectral Imaging (GSI)-based MAR algorithm, the projection-based Metal Artefact Reduction (MAR) algorithm, and the Dual-Energy method.
Materials and Methods: The Gemstone Spectral Imaging (GSI)-based and SMART Metal Artefact Reduction (MAR) algorithms are metal artefact reduction protocols embedded in two different CT scanner models by General Electric (GE), while the Dual-Energy imaging method was developed at Duke University. All three approaches were applied in this research for dosimetric evaluation on CT images with severe metal artefacts. The first part of the research used a water phantom with four iodine syringes. Two sets of plans, multi-arc and single-arc, were designed using the Volumetric Modulated Arc Therapy (VMAT) technique to avoid or minimize influences from high-density objects. The second part of the research used the projection-based MAR algorithm and the Dual-Energy method. Calculated doses (mean, minimum, and maximum) to the planning treatment volume (PTV) were compared, and the homogeneity index (HI) was calculated.
Results: (1) Without the GSI-based MAR application, a percent error between the mean dose and the absolute dose ranging from 3.4% to 5.7% per fraction was observed. In contrast, the error decreased to a range of 0.09% to 2.3% per fraction with the GSI-based MAR algorithm, a per-fraction difference of 1.7% to 4.2% between using and not using the algorithm. (2) A difference of 0.1% to 3.2% was observed for the maximum dose values, 1.5% to 10.4% for the minimum dose, and 1.4% to 1.7% for the mean dose. Homogeneity indices (HI) of 0.068 to 0.065 for the Dual-Energy method and 0.063 to 0.141 for the projection-based MAR algorithm were also calculated.
Conclusion: (1) The percent error without the GSI-based MAR algorithm may deviate by as much as 5.7%, undermining the goal of radiation therapy to provide precise treatment. The GSI-based MAR algorithm is therefore desirable due to its better dose calculation accuracy. (2) Based on direct numerical observation, there was no apparent deviation between the mean doses of the different techniques, but deviation was evident in the maximum and minimum doses. The HI for the Dual-Energy method nearly achieved the desirable null value. In conclusion, the Dual-Energy method gave better dose calculation accuracy to the planning treatment volume (PTV) for images with metal artefacts than images with or without the GE MAR algorithm applied.
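For reference, the two figures of merit compared throughout this study can be computed as below. Several homogeneity index definitions exist in the literature; the (Dmax − Dmin)/Dprescribed form shown is an assumption, not necessarily the one used in this study.

```python
# Minimal sketch of the reported metrics; the HI definition is assumed.
def percent_error(calculated_dose, absolute_dose):
    """Per-fraction percent error between TPS-calculated and measured dose."""
    return abs(calculated_dose - absolute_dose) / absolute_dose * 100.0

def homogeneity_index(d_max, d_min, d_prescribed):
    """Lower HI means a more homogeneous dose across the PTV; an ideal plan
    approaches the 'null' value mentioned in the conclusion."""
    return (d_max - d_min) / d_prescribed
```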
Abstract:
A three-dimensional finite volume, unstructured mesh (FV-UM) method for dynamic fluid–structure interaction (DFSI) is described. Fluid–structure interaction, as applied to flexible structures, has wide application in diverse areas such as flutter in aircraft, wind response of buildings, and flows in elastic pipes and blood vessels. It involves the coupling of fluid flow and structural mechanics, two fields that are conventionally modelled using dissimilar methods, so a single comprehensive computational model of both phenomena is a considerable challenge. Until recently, work in this area focused on one phenomenon and represented the behaviour of the other more simply. More recently, strategies for solving the full coupling between fluid and solid mechanics behaviour have been developed. A key contribution has been made by Farhat et al. [Int. J. Numer. Meth. Fluids 21 (1995) 807], employing FV-UM methods for solving the Euler flow equations, a conventional finite element method for the elastic solid mechanics, and the spring-based mesh procedure of Batina [AIAA paper 0115, 1989] for mesh movement. In this paper, we describe an approach which broadly exploits the three-field strategy described by Farhat for fluid flow, structural dynamics, and mesh movement but, in the context of DFSI, contains a number of novel features: a single mesh covering the entire domain, a Navier–Stokes flow, a single FV-UM discretisation approach for both the flow and solid mechanics procedures, an implicit predictor–corrector version of the Newmark algorithm, and a single code embedding the whole strategy.
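The Newmark algorithm mentioned among the novel features is a standard structural time integrator; a minimal average-acceleration form (β = 1/4, γ = 1/2) is sketched below for the linear system M·a + C·v + K·d = f. The paper's implicit predictor-corrector variant iterates such a step inside the fluid-structure coupling loop, which is omitted here.

```python
# Minimal sketch of one Newmark step for M*a + C*v + K*d = f; the paper's
# predictor-corrector variant and the DFSI coupling loop are not shown.
import numpy as np

def newmark_step(M, C, K, f, d, v, a, dt, beta=0.25, gamma=0.5):
    """Advance displacement d, velocity v, and acceleration a by one step dt."""
    # Predictors: state extrapolated without the unknown new acceleration.
    d_pred = d + dt * v + dt**2 * (0.5 - beta) * a
    v_pred = v + dt * (1.0 - gamma) * a
    # Solve the effective linear system for the new acceleration.
    A = M + gamma * dt * C + beta * dt**2 * K
    a_new = np.linalg.solve(A, f - C @ v_pred - K @ d_pred)
    # Correctors: fold the new acceleration back into the predicted state.
    return (d_pred + beta * dt**2 * a_new,
            v_pred + gamma * dt * a_new,
            a_new)
```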
Abstract:
This paper combines the idea of a hierarchical distributed genetic algorithm with different inter-agent partnering strategies. Cascading clusters of sub-populations are built from the bottom up, with higher-level sub-populations optimising larger parts of the problem. Hence, higher-level sub-populations search a larger search space with a lower resolution, whilst lower-level sub-populations search a smaller search space with a higher resolution. The effects of different partner selection schemes amongst the agents on solution quality are examined for two multiple-choice optimisation problems. It is shown that partnering strategies that exploit problem-specific knowledge are superior and can counter inappropriate (sub-)fitness measurements.
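A hedged sketch of the partnering idea: when an agent in one sub-population must combine its partial solution with a partner's from a sibling sub-population, the choice of partner follows a selection scheme. The scheme names and fitness interface below are illustrative, not the paper's exact formulation.

```python
# Illustrative sketch of inter-agent partner selection; scheme names and the
# fitness interface are assumptions, not the paper's exact formulation.
import random

def pick_partner(candidates, scheme, fitness=None):
    """Choose a partner's partial solution from a sibling sub-population."""
    if scheme == "random":            # no knowledge used
        return random.choice(candidates)
    if scheme == "best":              # fully exploit the (sub-)fitness ranking
        return max(candidates, key=fitness)
    if scheme == "tournament":        # a compromise between the two
        pool = random.sample(candidates, k=min(3, len(candidates)))
        return max(pool, key=fitness)
    raise ValueError(f"unknown partnering scheme: {scheme}")
```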
Abstract:
The Hybrid Monte Carlo algorithm is adapted to the simulation of a system of classical degrees of freedom coupled to non-self-interacting lattice fermions. Diagonalization of the Hamiltonian matrix is avoided by introducing a path-integral formulation of the problem in d + 1 Euclidean space–time. A perfect-action formulation allows one to work in continuum Euclidean time, without the need for a Trotter–Suzuki extrapolation. To demonstrate the feasibility of the method, we study the Double Exchange Model in three dimensions. The complexity of the algorithm grows only as the system volume, allowing simulations on lattices as large as 16³ on a personal computer. We conclude that the second-order paramagnetic–ferromagnetic phase transition of double exchange materials close to half-filling belongs to the universality class of the three-dimensional classical Heisenberg model.
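For readers unfamiliar with the method, a generic Hybrid (Hamiltonian) Monte Carlo update for classical fields x with action S(x) is sketched below: refresh the momenta, integrate a leapfrog trajectory, and accept or reject on the change in H = p²/2 + S(x). The paper's perfect-action fermionic path-integral weight is abstracted into the user-supplied action and gradient.

```python
# Generic HMC update sketch; the paper's perfect-action fermionic weight is
# abstracted into the user-supplied action(x) and grad_S(x).
import numpy as np

def hmc_update(x, action, grad_S, n_steps=20, dt=0.05, rng=None):
    rng = rng or np.random.default_rng()
    p = rng.normal(size=x.shape)                  # momentum refreshment
    h_old = 0.5 * np.sum(p**2) + action(x)
    x_new = x.copy()
    p_new = p - 0.5 * dt * grad_S(x_new)          # initial half kick
    for _ in range(n_steps - 1):
        x_new = x_new + dt * p_new                # drift
        p_new = p_new - dt * grad_S(x_new)        # full kick
    x_new = x_new + dt * p_new                    # final drift
    p_new = p_new - 0.5 * dt * grad_S(x_new)      # final half kick
    h_new = 0.5 * np.sum(p_new**2) + action(x_new)
    accept = rng.random() < np.exp(min(0.0, h_old - h_new))  # Metropolis test
    return (x_new, True) if accept else (x, False)
```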
Abstract:
Oscillometric blood pressure (BP) monitors are currently used to diagnose hypertension in both home and clinical settings. These monitors take BP measurements once every 15 minutes over a 24-hour period and provide a reliable and accurate system that is minimally invasive. Although intermittent cuff measurements have proven to be a good indicator of BP, a continuous BP monitor is highly desirable for the diagnosis of hypertension and other cardiac diseases; however, no such devices currently exist. A novel algorithm has been developed based on the Pulse Transit Time (PTT) method, which allows non-invasive and continuous BP measurement. PTT is defined as the time it takes the BP wave to propagate from the heart to a specified point on the body. After an initial BP measurement, PTT algorithms can track BP over short periods of time, known as calibration intervals; after this time has elapsed, a new BP measurement is required to recalibrate the algorithm. Using the PhysioNet database as a basis, the new algorithm was developed and tested on 15 patients, each tested 3 times over a period of 30 minutes. The BP predicted by the algorithm was compared to the arterial BP of each patient. It was established that the new algorithm is capable of tracking BP over 12 minutes without recalibration under the BHS standard, a 100% improvement over what had previously been reported. The algorithm was incorporated into a new system designed around its requirements and tested on three volunteers. The results mirrored those previously observed, providing accurate BP measurements when a 12-minute calibration interval was used. This new system provides a significant improvement over the existing method, allowing BP to be monitored continuously and non-invasively, on a beat-to-beat basis over 24 hours, adding major clinical and diagnostic value.
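The PTT-to-BP mapping itself is not given in the abstract, and published models vary (logarithmic, inverse, or inverse-square in PTT). As a hedged illustration of the calibrate-then-track cycle, the sketch below assumes a simple inverse relation BP = a/PTT + b with an assumed slope, re-fitting the offset at the start of each calibration interval.

```python
# Hypothetical inverse PTT-to-BP model; the algorithm's actual functional
# form and constants are not given in the abstract.
A_SLOPE = 25.0   # mmHg*s, assumed slope of the inverse relation

def calibrate(ptt_ref, bp_ref):
    """Fit the offset b from one reference cuff measurement; repeated at the
    start of every calibration interval (12 minutes in this work)."""
    return bp_ref - A_SLOPE / ptt_ref

def estimate_bp(ptt, b):
    """Track BP beat to beat from PTT; BP rises as PTT shortens."""
    return A_SLOPE / ptt + b

b = calibrate(ptt_ref=0.25, bp_ref=120.0)   # PTT in s, BP in mmHg
print(estimate_bp(0.24, b))                 # ~124 mmHg for a shorter PTT
```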
Abstract:
This thesis focuses on finding the optimum block cutting dimensions, in terms of environmental and economic factors, using a 3D algorithm for a limestone quarry in Foggia, Italy. The main environmental concerns of quarrying operations are energy consumption, material waste, and pollution; the main economic concerns are block recovery, selling prices, and production costs. Fractures adversely affect the block recovery ratio, but with a fracture model, block production can be optimized. In this research, the waste volume produced by quarrying was minimised to increase the recovery ratio and ensure economic benefits. SlabCutOpt is software developed at DICAM, University of Bologna, for block cutting optimization; it tests different cutting angles in the x-y-z planes to propose alternative cutting schemes. The program tests several block sizes and outputs the optimal result for each entry. Using SlabCutOpt, ten different block dimensions were analysed, and the results indicated the maximum number of non-intersecting blocks for each dimension. Block dimension number 1 (1 m × 1 m × 1 m) had the highest recovery ratio, 43%, and the highest total Relative Money Value (RMV), 22829. It also had the lowest waste volume for the total bench, 3953.25 m³. For cutting the total bench volume of 6932.25 m³, the diamond wire cutter had the lowest dust emission for the 2 m × 2 m × 2 m block dimension, at 24 m³. When compared with the Eco-Label standards, block dimensions with surface areas below 15 m² were found to meet the label's natural-resource waste criterion, which requires a minimum recovery of 25% [1]. Given the relative nature of production costs, together with the Eco-Label threshold, the research recommends selecting blocks with a surface area between 6 m² and 14 m².
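The reported figures are mutually consistent under the usual definition of the recovery ratio; a quick worked check, assuming recovery = (bench volume − waste volume) / bench volume:

```python
# Worked check of the reported recovery ratio, assuming
# recovery = (bench volume - waste volume) / bench volume.
bench_volume = 6932.25    # m^3, total bench
waste_volume = 3953.25    # m^3, waste for the 1 m x 1 m x 1 m dimension
recovery = (bench_volume - waste_volume) / bench_volume
print(f"recovery = {recovery:.0%}")   # ~43%, matching the reported value
```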
Abstract:
The neutrino mass ordering and the leptonic CP violation phase are key parameters of three-neutrino flavour mixing that are still to be determined. Measuring these parameters is the main goal of DUNE, a next-generation long-baseline neutrino experiment under construction in the United States. DUNE will feature a Near and a Far Detector site. An important component of the Near Detector complex is the SAND apparatus, which will include GRAIN, a novel liquid-argon detector that aims at imaging neutrino interactions using scintillation light. For this purpose, an innovative optical readout system based on Coded Aperture Masks is under study. This thesis work provides a first quantitative assessment of a 3D neutrino event reconstruction algorithm for GRAIN. The processing procedure is optimized, the reconstruction performance is evaluated, and promising results are obtained.
Abstract:
Lipidic mixtures present a particular phase-change profile highly affected by their unique crystalline structure. However, classical solid–liquid equilibrium (SLE) thermodynamic modeling approaches, which assume the solid phase to be a pure component, sometimes fail to describe the phase behavior correctly, and their inadequacy increases with the complexity of the system. To overcome some of these problems, this study describes a new procedure, the Crystal-T algorithm, to depict the SLE of fatty binary mixtures presenting solid solutions. Considering the non-ideality of both the liquid and solid phases, the algorithm determines the temperatures at which the first and the last crystal of the mixture melt. The evaluation focuses on experimental data measured and reported in this work for systems composed of triacylglycerols and fatty alcohols. The liquidus and solidus lines of the SLE phase diagrams were described using excess-Gibbs-energy-based equations, with the group-contribution UNIFAC model used to calculate the activity coefficients of both the liquid and solid phases. Very low deviations between theoretical and experimental data demonstrate the strength of the algorithm, broadening the scope of SLE modeling.
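For context, the Crystal-T algorithm builds on the classical SLE relation written with non-ideal liquid and solid phases. In a common simplified form (neglecting heat-capacity terms, which the study may treat differently), the equilibrium condition for component i reads:

```latex
\ln\frac{x_i^{L}\,\gamma_i^{L}}{x_i^{S}\,\gamma_i^{S}}
  = \frac{\Delta H_{\mathrm{fus},i}}{R}
    \left(\frac{1}{T_{\mathrm{m},i}} - \frac{1}{T}\right)
```

where $x_i$ and $\gamma_i$ are the mole fraction and activity coefficient of component $i$ in each phase (the $\gamma_i$ coming from UNIFAC here), $\Delta H_{\mathrm{fus},i}$ is the enthalpy of fusion, and $T_{\mathrm{m},i}$ the melting temperature. The melting of the first and last crystal then corresponds to solving this condition on the solidus and liquidus boundaries, respectively.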
Abstract:
This study tested whether myocardial extracellular volume (ECV) is increased in patients with hypertension and atrial fibrillation (AF) undergoing pulmonary vein isolation and whether there is an association between ECV and post-procedural recurrence of AF. Hypertension is associated with myocardial fibrosis, an increase in ECV, and AF. Data linking these findings are limited. T1 measurements pre-contrast and post-contrast in a cardiac magnetic resonance (CMR) study provide a method for quantification of ECV. Consecutive patients with hypertension and recurrent AF referred for pulmonary vein isolation underwent a contrast CMR study with measurement of ECV and were followed up prospectively for a median of 18 months. The endpoint of interest was late recurrence of AF. Patients had elevated left ventricular (LV) volumes, LV mass, left atrial volumes, and increased ECV (patients with AF, 0.34 ± 0.03; healthy control patients, 0.29 ± 0.03; p < 0.001). There were positive associations between ECV and left atrial volume (r = 0.46, p < 0.01) and LV mass and a negative association between ECV and diastolic function (early mitral annular relaxation [E'], r = -0.55, p < 0.001). In the best overall multivariable model, ECV was the strongest predictor of the primary outcome of recurrent AF (hazard ratio: 1.29; 95% confidence interval: 1.15 to 1.44; p < 0.0001) and the secondary composite outcome of recurrent AF, heart failure admission, and death (hazard ratio: 1.35; 95% confidence interval: 1.21 to 1.51; p < 0.0001). Each 10% increase in ECV was associated with a 29% increased risk of recurrent AF. In patients with AF and hypertension, expansion of ECV is associated with diastolic function and left atrial remodeling and is a strong independent predictor of recurrent AF post-pulmonary vein isolation.
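For reference, ECV is conventionally obtained from the pre- and post-contrast T1 values and the haematocrit (Hct); the standard CMR expression is given below, though the study's exact measurement protocol is not detailed in the abstract:

```latex
\mathrm{ECV} = (1 - \mathrm{Hct})\,
  \frac{1/T_{1,\mathrm{myo}}^{\mathrm{post}} - 1/T_{1,\mathrm{myo}}^{\mathrm{pre}}}
       {1/T_{1,\mathrm{blood}}^{\mathrm{post}} - 1/T_{1,\mathrm{blood}}^{\mathrm{pre}}}
```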