932 results for Mini-scale method


Relevance: 30.00%

Abstract:

Removal of dissolved salts and toxic chemicals from water, especially at levels of a few parts per million (ppm), is one of the most difficult problems in water treatment. Several methods are used for water purification, and the choice among them depends mainly on the salinity of the feed water, the source of energy, and the type of contaminants present. Distillation is an age-old method that can remove all types of dissolved impurities from contaminated water. In multiple-effect distillation (MED), the latent heat of steam is recycled several times so that many units of distilled water are produced from one unit of primary steam input. MED is already used in large-capacity plants for treating seawater, but the challenge lies in designing a system for small-scale operation that can treat a few cubic meters of water per day, suitable especially for rural communities where the available water is brackish. A small-scale MED unit with an extendable number of effects has been designed and analyzed for optimum yield in terms of total distillate produced. © 2010 Elsevier B.V.
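
The distillate yield of such a unit scales roughly with the number of effects. The following is a back-of-the-envelope sketch, not from the paper, assuming a fixed per-effect thermal efficiency; the 0.85 efficiency and the effect counts are illustrative assumptions.

# Rough yield estimate for a small multiple-effect distillation (MED) unit.
# The per-effect efficiency and effect counts are illustrative assumptions,
# not values from the study.

def med_distillate_per_kg_steam(n_effects, effect_efficiency=0.85):
    """Approximate kg of distillate produced per kg of primary steam.

    Each effect reuses the latent heat released by condensation in the
    previous effect, so an ideal n-effect unit would yield about n units of
    distillate per unit of steam; losses are folded into a per-effect factor.
    """
    yield_total = 0.0
    heat = 1.0  # latent heat carried by 1 kg of primary steam (normalized)
    for _ in range(n_effects):
        heat *= effect_efficiency   # thermal losses in this effect
        yield_total += heat         # distillate condensed in this effect
    return yield_total

for n in (1, 3, 5, 8):
    print(f"{n} effects -> ~{med_distillate_per_kg_steam(n):.2f} kg distillate per kg steam")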

Relevance: 30.00%

Abstract:

The Analytic Hierarchy Process (AHP) is one of the most popular methods used in Multi-Attribute Decision Making. It provides ratio-scale measurements of the priorities of elements on the various levels of a hierarchy. These priorities are obtained through pairwise comparisons of the elements on one level with respect to each element on the immediately higher level. The Eigenvector Method (EM) and several distance-minimizing methods, such as the Least Squares Method (LSM), the Logarithmic Least Squares Method (LLSM), the Weighted Least Squares Method (WLSM) and the Chi-Squares Method (X2M), are among the tools for computing the priorities of the alternatives. This paper studies a method for generating all the solutions of the LSM problem for 3 × 3 matrices. We observe non-uniqueness and rank reversal and present numerical results.
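
As a hedged illustration of two of these tools, the sketch below computes priority vectors for a made-up inconsistent 3 × 3 pairwise comparison matrix with the Eigenvector Method and with a numerical, local Least Squares solution; it does not reproduce the paper's procedure for enumerating all LSM solutions.

# Illustrative AHP priority computation for a 3x3 pairwise comparison matrix:
# Eigenvector Method (EM) and one local Least Squares Method (LSM) solution.
# The matrix is a made-up example; the paper's point is that several LSM
# solutions may exist, whereas a local optimizer returns only one of them.
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 2.0, 4.0],
              [1/2, 1.0, 3.0],
              [1/4, 1/3, 1.0]])

# Eigenvector Method: principal right eigenvector, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
w_em = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w_em = w_em / w_em.sum()

# Least Squares Method: minimize sum_ij (a_ij - w_i / w_j)^2 over positive
# weights summing to 1 (a single local solution from the chosen start point).
def lsm_objective(w):
    return sum((A[i, j] - w[i] / w[j]) ** 2 for i in range(3) for j in range(3))

res = minimize(lsm_objective, x0=np.full(3, 1/3),
               bounds=[(1e-6, 1.0)] * 3,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}])

print("EM priorities :", np.round(w_em, 4))
print("LSM priorities:", np.round(res.x, 4))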

Relevance: 30.00%

Abstract:

The Deccan Trap basalts are the remnants of a massive series of lava flows that erupted at the K/T boundary and covered 1–2 million km² of west-central India. This eruptive event is of global interest because of its possible link to the major mass extinction event, and there is much debate about the duration of this massive volcanic event. In contrast to isotopic or paleomagnetic dating methods, I explore an alternative approach to determine the lifecycle of the magma chambers that supplied the lavas, and extend the concept to obtain a tighter constraint on Deccan’s duration. My method relies on extracting time information from elemental and isotopic diffusion across zone boundaries in individual crystals. I determined elemental and Sr-isotopic variations across abnormally large (2–5 cm) plagioclase crystals from the Thalghat and Kashele “Giant Plagioclase Basalts” from the lowermost Jawhar and Igatpuri Formations respectively in the thickest Western Ghats section near Mumbai. I also obtained bulk rock major, trace and rare earth element chemistry of each lava flow from the two formations. Thalghat flows contain only 12% zoned crystals, with ⁸⁷Sr/⁸⁶Sr ratios of 0.7096 in the core and 0.7106 in the rim, separated by a sharp boundary. In contrast, all Kashele crystals have a wider range of ⁸⁷Sr/⁸⁶Sr values, with multiple zones. Geochemical modeling of the data suggests that the two types of crystals grew in distinct magmatic environments. Modeling intracrystalline diffusive equilibration between the core and rim of Thalghat crystals led me to obtain a crystal growth rate of 2.03 × 10⁻¹⁰ cm/s and a residence time of 780 years for the crystals in the magma chamber(s). Employing some assumptions based on field and geochronologic evidence, I extrapolated this residence time to the entire Western Ghats and obtained an estimate of 25,000–35,000 years for the duration of Western Ghats volcanism. This gave an eruptive rate of 30–40 km³/yr, which is much higher than any presently erupting volcano. This result will remain speculative until a similarly detailed analytical-modeling study is performed for the rest of the Western Ghats formations.
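
The quoted residence time follows from simple arithmetic on the stated growth rate, assuming growth across the full ~5 cm dimension of the largest crystals (a rough check, not a restatement of the modeling):

\[
t \;\approx\; \frac{L}{G} \;=\; \frac{5\ \mathrm{cm}}{2.03\times10^{-10}\ \mathrm{cm/s}} \;\approx\; 2.5\times10^{10}\ \mathrm{s} \;\approx\; 780\ \mathrm{yr}.
\]

The stated rate and duration together imply a Western Ghats lava volume of roughly \(30\text{--}40\ \mathrm{km^{3}/yr} \times 25{,}000\text{--}35{,}000\ \mathrm{yr} \approx 0.75\text{--}1.4\times10^{6}\ \mathrm{km^{3}}\); this is only the arithmetic implied by the numbers quoted above.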

Relevance: 30.00%

Abstract:

Carbon nanotubes (CNTs) could serve as potential reinforcement for metal matrix composites with improved mechanical properties. However, dispersion of CNTs in the matrix has been a longstanding problem, since they tend to form clusters to minimize their surface area. The aim of this study was to use plasma and cold spraying techniques to synthesize a CNT-reinforced aluminum composite with improved dispersion and to quantify the degree of CNT dispersion as it influences the mechanical properties. A novel spray-drying method was used to disperse CNTs in Al-12 wt.% Si pre-alloyed powder, which was used as feedstock for plasma and cold spraying. A new method for quantifying CNT distribution was developed: two parameters, the Dispersion parameter (DP) and the Clustering parameter (CP), are proposed based on image analysis and the distances between CNT centers. Nanomechanical properties were correlated with the dispersion of CNTs in the microstructure. Coating microstructure evolution is discussed in terms of splat formation, deformation and damage of CNTs, and the CNT/matrix interface. The effect of Si and CNT content on the reaction at the CNT/matrix interface was studied thermodynamically and kinetically. A pseudo phase diagram was computed that predicts the interfacial carbide formed by reaction between CNTs and the Al-Si alloy at the processing temperature. Kinetic analysis showed that Al4C3 forms with the Al-12 wt.% Si alloy while SiC forms with the Al-23 wt.% Si alloy. Mechanical properties at the nano, micro and macro scales were evaluated using nanoindentation and nanoscratch testing, microindentation, and bulk tensile testing, respectively. Nano- and micro-scale mechanical properties (elastic modulus, hardness and yield strength) showed improvement, whereas macro-scale mechanical properties were poor. This inversion of mechanical properties across length scales was attributed to porosity, CNT clustering, CNT-splat adhesion and Al4C3 formation at the CNT/matrix interface. The Dispersion parameter (DP) was more sensitive than the Clustering parameter (CP) in measuring the degree of CNT distribution in the matrix.
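
The abstract does not give the DP and CP formulas. Purely as an illustration of a distance-based dispersion measure computed from CNT centers identified by image analysis, the sketch below compares the observed mean nearest-neighbour spacing with that expected for a random point pattern (a Clark-Evans-type ratio, used here as a stand-in and not as the dissertation's definition).

# Hypothetical distance-based dispersion measure: from the (x, y) centers of
# CNTs segmented in a micrograph, compare the observed mean nearest-neighbour
# spacing with the spacing expected for a completely random (Poisson) pattern.
# This ratio is a stand-in, not the dissertation's DP or CP.
import numpy as np

def nearest_neighbor_dispersion(centers, field_area):
    centers = np.asarray(centers, dtype=float)
    n = len(centers)
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    mean_nn = d.min(axis=1).mean()                 # observed mean NN distance
    expected_nn = 0.5 / np.sqrt(n / field_area)    # Clark-Evans expectation for randomness
    return mean_nn / expected_nn                   # ~1 random, <1 clustered, >1 ordered

rng = np.random.default_rng(0)
clustered = rng.normal(loc=[5, 5], scale=0.5, size=(200, 2))   # clumped centers
random_pts = rng.uniform(0, 10, size=(200, 2))                 # well-dispersed centers
print("clustered :", round(nearest_neighbor_dispersion(clustered, 100.0), 2))
print("random    :", round(nearest_neighbor_dispersion(random_pts, 100.0), 2))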

Relevance: 30.00%

Abstract:

Developing analytical models that can accurately describe the behavior of Internet-scale networks is difficult. This is due, in part, to the heterogeneous structure, immense size and rapidly changing properties of today's networks. The lack of analytical models makes large-scale network simulation an indispensable tool for studying immense networks. However, large-scale network simulation has not been commonly used to study networks of Internet scale. This can be attributed to three factors: 1) current large-scale network simulators are geared towards simulation research and not network research, 2) the memory required to execute an Internet-scale model is exorbitant, and 3) large-scale network models are difficult to validate. This dissertation tackles each of these problems.

First, this work presents a method for automatically enabling real-time interaction, monitoring, and control of large-scale network models. Network researchers need tools that allow them to focus on creating realistic models and conducting experiments. However, this should not increase the complexity of developing a large-scale network simulator. This work presents a systematic approach to separating the concerns of running large-scale network models on parallel computers from the user-facing concerns of configuring and interacting with large-scale network models.

Second, this work deals with reducing the memory consumption of network models. As network models become larger, so does the amount of memory needed to simulate them. This work presents a comprehensive approach to exploiting structural duplications in network models to dramatically reduce the memory required to execute large-scale network experiments.

Lastly, this work addresses the issue of validating large-scale simulations by integrating real protocols and applications into the simulation. With an emulation extension, a network simulator operating in real time can run together with real-world distributed applications and services. As such, real-time network simulation not only alleviates the burden of developing separate models for applications in simulation, but, because real systems are included in the network model, it also increases the confidence level of network simulation. This work presents a scalable and flexible framework to integrate real-world applications with real-time simulation.
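
The second theme above rests on the observation that many model entities are structurally identical. As a minimal, hypothetical illustration of that general idea (not the dissertation's actual mechanism), the sketch below shares one immutable configuration object among many simulated nodes instead of storing a copy per node.

# Flyweight-style sketch of exploiting structural duplication: simulated nodes
# that are configured identically share a single immutable configuration object
# rather than each storing its own copy. Illustrative only; not the
# dissertation's mechanism.
class NodeConfig:
    __slots__ = ("bandwidth_bps", "queue_len", "routing_table")
    def __init__(self, bandwidth_bps, queue_len, routing_table):
        self.bandwidth_bps = bandwidth_bps
        self.queue_len = queue_len
        self.routing_table = routing_table     # shared, treated as read-only

_config_pool = {}
def shared_config(bandwidth_bps, queue_len, routing_table):
    key = (bandwidth_bps, queue_len, tuple(sorted(routing_table.items())))
    if key not in _config_pool:
        _config_pool[key] = NodeConfig(bandwidth_bps, queue_len, dict(routing_table))
    return _config_pool[key]

class Node:
    __slots__ = ("node_id", "config", "queue")
    def __init__(self, node_id, config):
        self.node_id = node_id
        self.config = config                   # shared state
        self.queue = []                        # per-node mutable state stays private

table = {f"10.0.{i}.0/24": "eth0" for i in range(64)}
nodes = [Node(i, shared_config(1_000_000_000, 1000, table)) for i in range(100_000)]
print("distinct config objects:", len(_config_pool))   # 1, despite 100,000 nodes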

Relevance: 30.00%

Abstract:

How children rate vegetables may be influenced by the preparation method. The primary objective of this study was to involve first-grade students in a cooking demonstration and to have them taste and rate vegetables both raw and cooked. First-grade children from two classes (N = 52; 18 boys and 34 girls, approximately half Hispanic) who had assented and whose parents had signed consent forms participated in the study. Using a hedonic scale, the students recorded their degree of liking for five commonly eaten vegetables, tasted first raw (pre-demonstration) and then cooked (post-demonstration). A food habit questionnaire was filled out by parents to evaluate their mealtime practices and beliefs about their child's eating habits. Paired-sample t-tests revealed significant differences in preferences for vegetables in their raw and cooked states. Several mealtime characteristics were significantly associated with children's vegetable preferences. Parents who reported being satisfied with how often the family eats evening meals together were more likely to report that their child eats adequate vegetables for their health (p = 0.026). Parents who stated that they were satisfied with their child's eating habits were more likely to report that their child was trying new foods (p < .001). Cooking demonstrations by nutrition professionals may be an important strategy that parents and teachers can use to promote vegetable intake. It is important that nutrition professionals provide guidance to parents on encouraging vegetable consumption so that parents can model healthy food consumption for their children.
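
For reference, the paired-samples comparison reported here is the standard test sketched below; the ratings are invented placeholder numbers, not the study's data.

# Illustrative paired-samples t-test of the kind reported in the study: each
# child rates the same vegetable raw and then cooked on a hedonic scale.
# The ratings below are made-up numbers, not the study's data.
from scipy import stats

raw_ratings    = [3, 2, 4, 1, 3, 2, 5, 2, 3, 4, 2, 1]   # pre-demonstration
cooked_ratings = [4, 3, 4, 2, 4, 3, 5, 3, 4, 4, 3, 2]   # post-demonstration

t_stat, p_value = stats.ttest_rel(raw_ratings, cooked_ratings)
# A small p-value indicates a systematic difference between raw and cooked ratings.
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")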

Relevance: 30.00%

Abstract:

This dissertation presents a calibration procedure for a pressure-velocity (P-V) probe. The dissertation is divided into four main chapters. The first chapter is divided into six main sections. In the first two, the wave equation in fluids and the velocity of sound in gases are derived; the third section contains a general solution of the wave equation in the case of plane acoustic waves. Sections four and five present the definitions of acoustic impedance and admittance and the practical units in which sound level is measured, i.e. the decibel scale. Finally, the last section of the chapter covers the theory behind the frequency analysis of a sound wave, including the analysis of sound in bands and the discrete Fourier analysis, with the definition of some important functions. The second chapter describes the different reference-field calibration procedures used to calibrate P-V probes, among them the progressive plane wave method, which is the one used in this work. The last section of the chapter contains a description of the working principles of the two transducers that were used, with a focus on the velocity transducer. The third chapter is devoted to the calibration setup and the instruments used for data acquisition and analysis. Since software routines were extremely important, this chapter includes a dedicated section on them, and the most heavily used proprietary routines are explained in detail. Finally, the work carried out is described in three distinct phases, and the acquired data and the results obtained are presented. All the graphs and data reported were obtained with Matlab® routines. The last chapter briefly summarizes the work and includes an excursus on a new probe and on how the procedure implemented in this dissertation could be applied in the case of a general field.
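
As a small illustration of the quantities handled in chapter one (the decibel scale and discrete Fourier analysis), the sketch below computes the sound pressure level of a synthetic tone re 20 µPa and locates its spectral peak; the signal and sampling rate are placeholders, and this is not one of the proprietary Matlab® routines described in the text.

# Sound pressure level on the decibel scale (re 20 uPa) and a discrete Fourier
# spectrum of a synthetic 1 kHz tone; the tone is a stand-in for measured data.
import numpy as np

fs = 48_000                                   # sampling rate, Hz
t = np.arange(fs) / fs
p = 0.2 * np.sin(2 * np.pi * 1000 * t)        # pressure signal, Pa

p_ref = 20e-6                                 # reference pressure, Pa
spl_db = 20 * np.log10(np.sqrt(np.mean(p ** 2)) / p_ref)

spectrum = np.fft.rfft(p) / len(p)
freqs = np.fft.rfftfreq(len(p), d=1 / fs)
peak_hz = freqs[np.argmax(np.abs(spectrum))]

print(f"SPL  = {spl_db:.1f} dB re 20 uPa")    # about 77 dB for a 0.2 Pa amplitude tone
print(f"peak = {peak_hz:.0f} Hz")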

Relevance: 30.00%

Abstract:

Background: Research suggests that patients presenting to hospital with self-cutting differ from those with intentional overdose in demographic and clinical characteristics. However, large-scale national studies comparing self-cutting patients with those using other self-harm methods are lacking. We aimed to compare hospital-treated self-cutting and intentional overdose, to examine the role of gender in moderating these differences, and examine the characteristics and outcomes of those patients presenting with combined self-cutting and overdose. Methods: Between 2003 and 2010, the Irish National Registry of Deliberate Self-Harm recorded 42,585 self-harm presentations to Irish hospital emergency departments meeting the study inclusion criteria. Data were obtained on demographic and clinical characteristics by independent data registration officers. Results: Compared with overdose only, involvement of self-cutting (with or without overdose) was significantly more common in males than females, with an overrepresentation of males aged <35 years. Independent of gender, involvement of self-cutting (with or without overdose) was significantly associated with younger age, city residence, repetition within 30 days and repetition within a year (females only). Factors associated with self-cutting as the sole method were no fixed abode/living in an institution, presenting outside 9 a.m. to 5 p.m., not consuming alcohol and repetition between 31 days and 1 year (males only). Conclusion: The demographic and clinical differences between self-harm patients underline the presence of different subgroups with implications for service provision and prevention of repeated self-harm. Given the relationship between self-cutting and subsequent repetition, service providers need to ensure that adequate follow-up arrangements and supports are in place for the patient.

Relevance: 30.00%

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even given the huge increases in n typically seen in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" is thus of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and it is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
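
As a minimal sketch of the latent-structure representation referred to here, the code below assembles the probability tensor of three categorical variables from a rank-k PARAFAC-type (latent class) factorization, P(y1, ..., yp) = Σ_h ν_h Π_j λ_j[h, y_j]; the dimensions and Dirichlet-sampled parameters are illustrative, and this is not the collapsed Tucker decomposition proposed in Chapter 2.

# Latent-class (PARAFAC-type) representation of the joint pmf of multivariate
# categorical data. Dimensions and parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
p, k, d = 3, 2, 4                      # 3 categorical variables, 2 latent classes, 4 levels each
nu = rng.dirichlet(np.ones(k))         # latent class weights
lam = [rng.dirichlet(np.ones(d), size=k) for _ in range(p)]   # per-variable response probabilities

# Assemble the full d**p probability tensor (tiny here; the point of low-rank
# factorizations is that one never needs to form it for realistic p and d).
tensor = np.zeros((d,) * p)
for h in range(k):
    tensor += nu[h] * np.einsum("a,b,c->abc", lam[0][h], lam[1][h], lam[2][h])

print("sums to one        :", bool(np.isclose(tensor.sum(), 1.0)))
print("nonnegative rank <= :", k)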

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and we provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and in other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
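
A minimal sketch of the basic object used in Chapter 5: waiting times between exceedances of a high threshold in a serially dependent series. The AR(1)-type series and the 95th-percentile threshold are illustrative choices, not part of the chapter's max-stable velocity construction.

# Waiting times between exceedances of a high threshold in a time-indexed,
# serially dependent toy series. Series and threshold are illustrative only.
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
x = np.zeros(n)
for t in range(1, n):                      # a serially dependent toy series
    x[t] = 0.7 * x[t - 1] + rng.standard_normal()

threshold = np.quantile(x, 0.95)
exceed_times = np.flatnonzero(x > threshold)
waiting_times = np.diff(exceed_times)      # gaps between successive exceedances

print("threshold          :", round(float(threshold), 2))
print("number exceedances :", exceed_times.size)
print("mean waiting time  :", round(float(waiting_times.mean()), 1))
print("short gaps (<= 5)  :", int((waiting_times <= 5).sum()), "-> clustering of extremes")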

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo. The Markov Chain Monte Carlo method is the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov Chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
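
As an illustrative, deliberately simple instance of the kind of kernel approximation such a framework covers, the sketch below runs a random-walk Metropolis chain for a toy normal-mean model in which the acceptance ratio uses a log-likelihood estimated from a random data subset scaled by n/m. The model, subsample size, and proposal scale are assumptions for illustration; this is not Chapter 6's framework.

# Approximate Metropolis-Hastings kernel: the log-likelihood in the acceptance
# ratio is estimated from a random subset of the data, scaled up by n/m.
# Toy unit-variance normal-mean model with a flat prior; illustrative only.
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(loc=2.0, scale=1.0, size=50_000)
n, m = data.size, 500                      # full data size, subsample size

def subsampled_loglik(theta):
    sub = rng.choice(data, size=m, replace=False)
    return (n / m) * np.sum(-0.5 * (sub - theta) ** 2)

theta, chain = 0.0, []
current = subsampled_loglik(theta)         # keep the current estimate until acceptance
for _ in range(5_000):
    prop = theta + 0.05 * rng.standard_normal()
    prop_ll = subsampled_loglik(prop)
    if np.log(rng.uniform()) < prop_ll - current:
        theta, current = prop, prop_ll
    chain.append(theta)

print("approximate posterior mean:", round(float(np.mean(chain[1000:])), 3))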

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
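
For concreteness, a compact sketch of the truncated-normal (Albert-Chib) data augmentation sampler for an intercept-only probit model in the rare-event regime described here; the sample size, number of successes, and flat prior are illustrative, and the lag-one autocorrelation printed at the end is only a crude stand-in for a spectral-gap analysis.

# Truncated-normal data augmentation (Albert-Chib) Gibbs sampler for an
# intercept-only probit model with large n and few successes. Illustrative
# sizes and flat prior; the high lag-1 autocorrelation illustrates slow mixing.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n, successes = 10_000, 20
y = np.zeros(n)
y[:successes] = 1.0

beta, draws = 0.0, []
for _ in range(2_000):
    # 1) z_i | beta, y_i: N(beta, 1) truncated to z > 0 if y_i = 1, z < 0 if y_i = 0
    u = rng.uniform(size=n)
    lower = stats.norm.cdf(-beta)                         # P(z < 0 | mean beta)
    z = np.where(
        y == 1,
        beta + stats.norm.ppf(lower + u * (1 - lower)),   # sample z > 0
        beta + stats.norm.ppf(u * lower),                 # sample z < 0
    )
    # 2) beta | z: N(mean(z), 1/n) under a flat prior
    beta = rng.normal(z.mean(), 1 / np.sqrt(n))
    draws.append(beta)

draws = np.array(draws[500:])
acf1 = np.corrcoef(draws[:-1], draws[1:])[0, 1]
print(f"posterior mean beta ~ {draws.mean():.3f}, lag-1 autocorrelation ~ {acf1:.3f}")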

Relevance: 30.00%

Abstract:

The full-scale base-isolated structure studied in this dissertation is the only base-isolated building in the South Island of New Zealand. It sustained hundreds of earthquake ground motions from September 2010 well into 2012. Several large earthquake responses were recorded in December 2011 by NEES@UCLA and by a GeoNet recording station near Christchurch Women's Hospital. The primary focus of this dissertation is to advance the state of the art of methods for evaluating the performance of seismically isolated structures and the effects of soil-structure interaction, by developing new data processing methodologies that overcome current limitations and by implementing advanced numerical modeling in OpenSees for direct analysis of soil-structure interaction.

This dissertation presents a novel method for recovering force-displacement relations within the isolators of building structures with unknown nonlinearities from sparse seismic-response measurements of floor accelerations. The method requires only direct matrix calculations (factorizations and multiplications); no iterative trial-and-error methods are required. The method requires a mass matrix, or at least an estimate of the floor masses. A stiffness matrix may be used, but is not necessary. Essentially, the method operates on a matrix of incomplete measurements of floor accelerations. In the special case of complete floor measurements of systems with linear dynamics, real modes, and equal floor masses, the principal components of this matrix are the modal responses. In the more general case of partial measurements and nonlinear dynamics, the method extracts a number of linearly-dependent components from Hankel matrices of measured horizontal response accelerations, assembles these components row-wise and extracts principal components from the singular value decomposition of this large matrix of linearly-dependent components. These principal components are then interpolated between floors in a way that minimizes the curvature energy of the interpolation. This interpolation step can make use of a reduced-order stiffness matrix, a backward difference matrix or a central difference matrix. The measured and interpolated floor acceleration components at all floors are then assembled and multiplied by a mass matrix. The recovered in-service force-displacement relations are then incorporated into the OpenSees soil structure interaction model.
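
A minimal sketch of the front end of this procedure, assembling Hankel matrices from (synthetic) floor-acceleration records, stacking them row-wise and taking the singular value decomposition to expose the dominant components; the two-mode test signal is invented, and the interpolation and mass-matrix steps described above are omitted.

# Build Hankel matrices from measured floor accelerations, stack them row-wise,
# and extract principal components via the SVD. The two-mode synthetic
# "measurements" are illustrative; interpolation between floors and the
# mass-matrix step are omitted.
import numpy as np

def hankel(signal, rows):
    cols = len(signal) - rows + 1
    return np.array([signal[i:i + cols] for i in range(rows)])

rng = np.random.default_rng(5)
t = np.linspace(0, 20, 2000)
modes = np.vstack([np.sin(2 * np.pi * 1.1 * t), np.sin(2 * np.pi * 3.4 * t)])
# accelerations "measured" at three floors = mixtures of the two modes + noise
mixing = np.array([[1.0, 0.3], [0.8, -0.5], [0.4, 0.9]])
measured = mixing @ modes + 0.05 * rng.standard_normal((3, t.size))

stacked = np.vstack([hankel(floor, rows=50) for floor in measured])
U, s, Vt = np.linalg.svd(stacked, full_matrices=False)

print("leading singular values:", np.round(s[:6], 1))
# A sharp drop after the first few singular values (here ~4: two sinusoids,
# each of Hankel rank 2) separates the dominant response components from noise.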

Numerical simulations of soil-structure interaction involving non-uniform soil behavior are conducted following the development of a complete soil-structure interaction model of Christchurch Women's Hospital in OpenSees. In these 2D OpenSees models, the superstructure is modeled as two-dimensional frames in the short-span and long-span directions, respectively. The lead rubber bearings are modeled as elastomeric bearing (Bouc-Wen) elements. The soil underlying the concrete raft foundation is modeled with linear elastic plane-strain quadrilateral elements. The non-uniformity of the soil profile is incorporated by extracting and interpolating the shear wave velocity profile from the Canterbury Geotechnical Database. The validity of the complete two-dimensional soil-structure interaction OpenSees model of the hospital is checked by comparing peak floor responses and force-displacement relations within the isolation system obtained from the OpenSees simulations against the recorded measurements. General explanations and implications, supported by displacement drifts, floor acceleration and displacement responses, and force-displacement relations, are presented to address the effects of soil-structure interaction.

Relevance: 30.00%

Abstract:

Purpose: Computed Tomography (CT) is one of the standard diagnostic imaging modalities for the evaluation of a patient’s medical condition. In comparison to other imaging modalities such as Magnetic Resonance Imaging (MRI), CT is a fast-acquisition imaging modality with higher spatial resolution and higher contrast-to-noise ratio (CNR) for bony structures. CT images are presented on a gray scale of values in Hounsfield units (HU), where higher HU values represent higher density. High-density materials, such as metal, tend to erroneously increase the HU values around them owing to limitations of the reconstruction software. This problem of increased HU values due to the presence of metal is referred to as metal artefact. Hip prostheses, dental fillings, aneurysm clips, and spinal clips are a few examples of clinically relevant metal objects. These implants create artefacts, such as beam hardening and photon starvation, that distort CT images and degrade image quality. This is of great significance because the distortions may lead to improper evaluation of the images and inaccurate dose calculation in the treatment planning system. Different algorithms are being developed to reduce these artefacts and improve image quality for both diagnostic and therapeutic purposes. However, very limited information is available about the effect of artefact correction on dose calculation accuracy. This research study evaluates the dosimetric effect of metal artefact reduction algorithms on CT images with severe artefacts, using the Gemstone Spectral Imaging (GSI)-based MAR algorithm, the projection-based Metal Artefact Reduction (MAR) algorithm, and the Dual-Energy method.

Materials and Methods: The Gemstone Spectral Imaging (GSI)-based and SMART Metal Artefact Reduction (MAR) algorithms are metal artefact reduction protocols embedded in two different CT scanner models by General Electric (GE), while the Dual-Energy Imaging Method was developed at Duke University. All three approaches were applied in this research for dosimetric evaluation on CT images with severe metal artefacts. The first part of the research used a water phantom with four iodine syringes. Two sets of plans, multi-arc and single-arc, were designed using the Volumetric Modulated Arc Therapy (VMAT) technique to avoid or minimize the influence of high-density objects. The second part of the research used the projection-based MAR algorithm and the Dual-Energy method. Calculated doses (mean, minimum, and maximum) to the planning treatment volume (PTV) were compared and the homogeneity index (HI) was calculated.

Results: (1) Without the GSI-based MAR application, a percent error between the mean dose and the absolute dose ranging from 3.4-5.7% per fraction was observed. In contrast, the error decreased to a range of 0.09-2.3% per fraction with the GSI-based MAR algorithm. There was a percent difference ranging from 1.7-4.2% per fraction between using and not using the GSI-based MAR algorithm. (2) A difference of 0.1-3.2% was observed for the maximum dose values, 1.5-10.4% for the minimum dose, and 1.4-1.7% for the mean doses. Homogeneity indices (HI) of 0.068-0.065 for the Dual-Energy method and 0.063-0.141 for the projection-based MAR algorithm were also calculated.
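
For reference, the arithmetic behind figures of this kind, percent error against a reference dose and one common homogeneity index definition, is sketched below; HI is defined in more than one way in the literature, so the (Dmax - Dmin) / Dmean form and the numbers used are assumptions rather than the thesis's own values.

# Percent error of a calculated dose against a measured reference dose, and one
# common homogeneity index definition. The HI form is an assumption (several
# definitions exist); the numbers are placeholders, not the thesis's data.
def percent_error(calculated, measured):
    return 100.0 * abs(calculated - measured) / measured

def homogeneity_index(d_max, d_min, d_mean):
    return (d_max - d_min) / d_mean

print(f"percent error: {percent_error(2.07, 2.00):.1f}%")
print(f"HI           : {homogeneity_index(2.10, 1.96, 2.03):.3f}")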

Conclusion: (1) The percent error without the GSI-based MAR algorithm may be as high as 5.7%. This error undermines the goal of radiation therapy to provide precise treatment. Thus, the GSI-based MAR algorithm is desirable because of its better dose calculation accuracy. (2) Based on direct numerical observation, there was no apparent deviation between the mean doses of the different techniques, but deviation was evident in the maximum and minimum doses. The HI for the Dual-Energy method nearly achieved the desirable null value. In conclusion, the Dual-Energy method gave better dose calculation accuracy to the planning treatment volume (PTV) for images with metal artefacts than either using or not using the GE MAR algorithm.

Relevance: 30.00%

Abstract:

The computational modeling of ocean waves and ocean-faring devices poses numerous challenges. Among these are the need to stably and accurately represent both the fluid-fluid interface between water and air as well as the fluid-structure interfaces arising between solid devices and one or more fluids. As techniques are developed to stably and accurately balance the interactions between fluid and structural solvers at these boundaries, a similarly pressing challenge is the development of algorithms that are massively scalable and capable of performing large-scale three-dimensional simulations on reasonable time scales. This dissertation introduces two separate methods for approaching this problem, with the first focusing on the development of sophisticated fluid-fluid interface representations and the second focusing primarily on scalability and extensibility to higher-order methods.

We begin by introducing the narrow-band gradient-augmented level set method (GALSM) for incompressible multiphase Navier-Stokes flow. This is the first use of the high-order GALSM for a fluid flow application, and its reliability and accuracy in modeling ocean environments is tested extensively. The method demonstrates numerous advantages over the traditional level set method, among these a heightened conservation of fluid volume and the representation of subgrid structures.

Next, we present a finite-volume algorithm for solving the incompressible Euler equations in two and three dimensions in the presence of a flow-driven free surface and a dynamic rigid body. In this development, the chief concerns are efficiency, scalability, and extensibility (to higher-order and truly conservative methods). These priorities informed a number of important choices: The air phase is substituted by a pressure boundary condition in order to greatly reduce the size of the computational domain, a cut-cell finite-volume approach is chosen in order to minimize fluid volume loss and open the door to higher-order methods, and adaptive mesh refinement (AMR) is employed to focus computational effort and make large-scale 3D simulations possible. This algorithm is shown to produce robust and accurate results that are well-suited for the study of ocean waves and the development of wave energy conversion (WEC) devices.

Relevance: 30.00%

Abstract:

People go through their lives making all kinds of decisions, and some of these decisions affect their demand for transportation, for example their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which has implications for traffic simulation as well. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into standard optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
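
As a hedged illustration of the dynamic-programming core of such models, the sketch below computes logsum value functions and downstream link-choice probabilities on a toy network in the spirit of a recursive-logit route choice model; the network, link utilities and fixed number of sweeps are made up, and this is not an implementation of the thesis's estimators.

# Dynamic-programming core of a recursive-logit-style route choice model:
# value functions satisfy V(k) = log sum_{a in A(k)} exp(v(a|k) + V(succ(a))),
# and link choice probabilities follow a logit over downstream values.
# The tiny network and (negative) link utilities are made up for illustration.
import math

succ = {  # node -> list of (next node, deterministic utility of that link)
    "A": [("B", -1.0), ("C", -1.5)],
    "B": [("D", -1.0)],
    "C": [("D", -0.5)],
    "D": [],   # destination (absorbing), V fixed at 0
}

V = {k: 0.0 for k in succ}
for _ in range(20):                 # value iteration; converges quickly on this toy DAG
    for k, arcs in succ.items():
        if arcs:
            V[k] = math.log(sum(math.exp(u + V[nxt]) for nxt, u in arcs))

def choice_probs(k):
    arcs = succ[k]
    w = [math.exp(u + V[nxt]) for nxt, u in arcs]
    return {nxt: wi / sum(w) for (nxt, _), wi in zip(arcs, w)}

print({k: round(v, 3) for k, v in V.items()})
print("P(next | A) =", {node: round(p, 3) for node, p in choice_probs("A").items()})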

Relevance: 30.00%

Abstract:

The MAREDAT atlas covers 11 types of plankton, ranging in size from bacteria to jellyfish. Together, these plankton groups determine the health and productivity of the global ocean and play a vital role in the global carbon cycle. Working within a uniform and consistent spatial and depth grid (map) of the global ocean, the researchers compiled thousands to tens of thousands of data points to identify regions of plankton abundance and scarcity as well as areas of data abundance and scarcity. At many of the grid points, the MAREDAT team accomplished the difficult conversion from abundance (numbers of organisms) to biomass (carbon mass of organisms). The MAREDAT atlas provides an unprecedented global data set for ecological and biogeochemical analysis and modeling, as well as a clear mandate for compiling additional existing data and for focusing future data-gathering efforts on key groups in key areas of the ocean. The present data set provides depth-integrated values of diazotroph nitrogen fixation rates, computed from a collection of source data sets.
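
A minimal sketch of the depth integration referred to in the last sentence, using the trapezoidal rule over discrete sample depths; the depths and volumetric rates are invented placeholders, not MAREDAT values.

# Depth-integrate a volumetric rate profile over the water column with the
# trapezoidal rule to obtain an areal (per m^2) value. Placeholder numbers.
import numpy as np

depths_m = np.array([0.0, 10.0, 25.0, 50.0, 100.0, 150.0])   # sample depths (m)
rate_per_m3 = np.array([12.0, 10.5, 7.0, 3.2, 0.8, 0.1])     # N2 fixation, umol N m^-3 d^-1

dz = np.diff(depths_m)
depth_integrated = float(np.sum(0.5 * (rate_per_m3[1:] + rate_per_m3[:-1]) * dz))
print(f"depth-integrated rate ~ {depth_integrated:.0f} umol N m^-2 d^-1")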