873 results for Multi-scale modeling
Abstract:
Purpose: To build a support vector machine (SVM) regression model that predicts survival time for patients treated with stereotactic radiosurgery for brain metastases.
Methods and Materials: This study used data from 481 patients, which were randomly divided into equally sized training and validation datasets. The SVM model used a Gaussian radial basis function (RBF) kernel, with the width of the epsilon-insensitive region and the cost parameter (C) controlling the amount of error tolerated by the model. The predictor variables for the SVM model consisted of the actual survival time of the patient, the number of brain metastases, the graded prognostic assessment (GPA) and Karnofsky Performance Scale (KPS) scores, the prescription dose, and the largest planning target volume (PTV). The response of the model was the patient's survival time. The resulting survival time predictions were analyzed against the actual survival times by single-parameter and two-parameter classification. The predicted mean survival times within each classification were compared with the actual values to obtain the confidence interval associated with the model's predictions. In addition to visualizing the data as plots of means with error bars, the correlation coefficients between the actual and predicted mean survival times were calculated at each step of the classification.
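A minimal sketch of this kind of ε-SVR with a Gaussian RBF kernel is shown below, assuming scikit-learn; the feature array, hyperparameter values, and synthetic survival times are placeholders for illustration, not the study's data or settings.

```python
# Hypothetical sketch of epsilon-SVR with an RBF kernel, as described above.
# Feature names and hyperparameter values are illustrative, not the study's.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Columns could stand for: number of metastases, GPA, KPS, prescription dose, largest PTV
X = rng.normal(size=(481, 5))
y = rng.exponential(scale=12.0, size=481)          # synthetic survival times (months)

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

# epsilon sets the width of the insensitive region; C penalizes errors outside it
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=1.0))
model.fit(X_train, y_train)

pred = model.predict(X_val)
corr = np.corrcoef(pred, y_val)[0, 1]              # correlation of predicted vs actual
print(f"validation correlation: {corr:.2f}")
```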
Results: The number of metastases and the KPS score were consistently shown to be the strongest predictors in the single-parameter classification and were subsequently used as first classifiers in the two-parameter classification. When the survival times were analyzed with the number of metastases as the first classifier, the best correlation was obtained for patients with 3 metastases, while patients with 4 or 5 metastases had significantly worse results. When the KPS score was used as the first classifier, patients with KPS scores of 60 and 90/100 had similarly strong correlation results. These mixed results are likely due to the limited data available for patients with more than 3 metastases or KPS scores of 60 or less.
Conclusions: The number of metastases and the KPS score both proved to be strong predictors of patient survival time. The model was less accurate for patients with more metastases and certain KPS scores due to the lack of training data.
A New Method for Modeling Free Surface Flows and Fluid-structure Interaction with Ocean Applications
Abstract:
The computational modeling of ocean waves and ocean-faring devices poses numerous challenges. Among these are the need to stably and accurately represent both the fluid-fluid interface between water and air as well as the fluid-structure interfaces arising between solid devices and one or more fluids. As techniques are developed to stably and accurately balance the interactions between fluid and structural solvers at these boundaries, a similarly pressing challenge is the development of algorithms that are massively scalable and capable of performing large-scale three-dimensional simulations on reasonable time scales. This dissertation introduces two separate methods for approaching this problem, with the first focusing on the development of sophisticated fluid-fluid interface representations and the second focusing primarily on scalability and extensibility to higher-order methods.
We begin by introducing the narrow-band gradient-augmented level set method (GALSM) for incompressible multiphase Navier-Stokes flow. This is the first use of the high-order GALSM for a fluid flow application, and its reliability and accuracy in modeling ocean environments are tested extensively. The method demonstrates numerous advantages over the traditional level set method, among them improved conservation of fluid volume and the representation of subgrid structures.
Next, we present a finite-volume algorithm for solving the incompressible Euler equations in two and three dimensions in the presence of a flow-driven free surface and a dynamic rigid body. In this development, the chief concerns are efficiency, scalability, and extensibility (to higher-order and truly conservative methods). These priorities informed a number of important choices: the air phase is replaced by a pressure boundary condition in order to greatly reduce the size of the computational domain, a cut-cell finite-volume approach is chosen in order to minimize fluid volume loss and open the door to higher-order methods, and adaptive mesh refinement (AMR) is employed to focus computational effort and make large-scale 3D simulations possible. This algorithm is shown to produce robust and accurate results that are well-suited for the study of ocean waves and the development of wave energy conversion (WEC) devices.
Abstract:
As a device, the laser is an elegant conglomerate of elementary physical theories and state-of-the-art techniques, spanning quantum mechanics, thermal and statistical physics, material growth and non-linear mathematics. The laser has been a commercial success in medicine and telecommunications while driving the development of highly optimised devices specifically designed for a plethora of uses. Due to their low cost and large-scale predictability, many aspects of modern life would not function without lasers. However, the laser is also a window into a system that is closely emulated by non-linear mathematical models; it has been an exceptional apparatus in the development of non-linear dynamics and is often used in the teaching of non-trivial mathematics. While single-mode semiconductor lasers have been well studied, a unified comparison of single- and two-mode lasers is still needed to extend the knowledge of semiconductor lasers and to test the limits of current models. Secondly, this work aims to utilise the optically injected semiconductor laser as a tool to study non-linear phenomena in other fields of study, namely ’rogue waves’, which have previously been witnessed in oceanography and are suspected of having non-linear origins. The first half of this thesis presents a reliable and fast technique to categorise the dynamical state of optically injected two-mode and single-mode lasers. Analysis of the experimentally obtained time-traces revealed regions of various dynamics and allowed the automatic identification of their respective stability. The method is also extended to the detection of regions containing bi-stabilities. The second half of the thesis presents an investigation into the origins of rogue waves in single-mode lasers. After confirming their existence in single-mode lasers, their distribution in time and sudden appearance in the time-series are studied to justify their name. An examination is also performed into the existence of paths that make rogue waves possible, and the impact of noise on their distribution is studied.
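As an illustration of the oceanographic criterion commonly borrowed for optical rogue waves (an event exceeding twice the significant wave height, i.e. twice the mean of the highest third of peaks), the hedged Python sketch below flags candidate events in a synthetic intensity time series; the data and threshold choice are assumptions, not the thesis's analysis.

```python
# Hedged sketch: flag "rogue" events in an intensity time series using the
# oceanographic criterion (a peak exceeding twice the significant wave height,
# i.e. twice the mean of the highest third of peaks). Data here are synthetic.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(1)
intensity = rng.gamma(shape=2.0, scale=1.0, size=100_000)   # stand-in for measured output power

peaks, _ = find_peaks(intensity)
heights = intensity[peaks]
top_third = np.sort(heights)[-len(heights) // 3:]           # highest third of peaks
significant_height = top_third.mean()

rogue = peaks[heights > 2.0 * significant_height]
print(f"{len(rogue)} candidate rogue events out of {len(peaks)} peaks")
```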
Abstract:
People go through their lives making all kinds of decisions, and some of these decisions affect their demand for transportation, for example, their choices of where to live and where to work, how and when to travel and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply to make predictions because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which has implications for traffic simulation as well. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into standard optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
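A minimal sketch of the dynamic-programming step underlying such dynamic discrete choice (recursive-logit style) models is given below: node values satisfy a logsum Bellman recursion, and link choice probabilities follow a logit over instantaneous utility plus downstream value. The toy network and utilities are invented for illustration.

```python
# Minimal sketch of the dynamic-programming step behind a recursive-logit style
# route choice model: the value of each node is the logsum of instantaneous
# utility plus downstream value. Network and utilities are toy values.
import numpy as np

# adjacency: node -> list of (next_node, deterministic utility of that link)
links = {
    0: [(1, -1.0), (2, -1.5)],
    1: [(3, -1.0)],
    2: [(3, -0.5)],
    3: [],            # destination (absorbing), value fixed at 0
}

V = {k: 0.0 for k in links}           # value functions
for _ in range(100):                  # fixed-point iteration of the logsum recursion
    V_new = {}
    for k, outgoing in links.items():
        if not outgoing:
            V_new[k] = 0.0
        else:
            V_new[k] = np.log(sum(np.exp(u + V[j]) for j, u in outgoing))
    if max(abs(V_new[k] - V[k]) for k in links) < 1e-10:
        V = V_new
        break
    V = V_new

# Link choice probabilities at node 0: logit over (utility + downstream value)
expo = [np.exp(u + V[j]) for j, u in links[0]]
probs = [e / sum(expo) for e in expo]
print(V, probs)
```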
Abstract:
With the introduction of new input devices, such as multi-touch surface displays, the Nintendo WiiMote, the Microsoft Kinect, and the Leap Motion sensor, among others, the field of Human-Computer Interaction (HCI) finds itself at an important crossroads that requires solving new challenges. Given the amount of three-dimensional (3D) data available today, 3D navigation plays an important role in 3D User Interfaces (3DUI). This dissertation deals with multi-touch, 3D navigation, and how users can explore 3D virtual worlds using a multi-touch, non-stereo, desktop display. The contributions of this dissertation include a feature-extraction algorithm for multi-touch displays (FETOUCH), a multi-touch and gyroscope interaction technique (GyroTouch), a theoretical model for multi-touch interaction using high-level Petri Nets (PeNTa), an algorithm to resolve ambiguities in the multi-touch gesture classification process (Yield), a proposed technique for navigational experiments (FaNS), a proposed gesture (Hold-and-Roll), and an experiment prototype for 3D navigation (3DNav). The verification experiment for 3DNav was conducted with 30 human subjects of both genders. The experiment used the 3DNav prototype to present a pseudo-universe, in which each user was required to find five objects using the multi-touch display and five objects using a game controller (GamePad). For the multi-touch display, 3DNav used a commercial library called GestureWorks in conjunction with Yield to resolve the ambiguity posed by the multiplicity of gestures reported by the initial classification. The experiment compared both devices. The task completion time with multi-touch was slightly shorter, but the difference was not statistically significant. The design of the experiment also included an equation that determined the subjects' level of video game console expertise, which was used to divide users into two groups: casual users and experienced users. The study found that experienced gamers performed significantly faster with the GamePad than casual users. When looking at the groups separately, casual gamers performed significantly better using the multi-touch display compared to the GamePad. Additional results are found in this dissertation.
Abstract:
Within Canada there are more than 2.5 million bundles of spent nuclear fuel, with approximately another 2 million bundles to be generated in the future. Canada, and every country around the world that has taken a decision on the management of spent nuclear fuel, has decided on long-term containment and isolation of the fuel within a deep geological repository. At depth, a deep geological repository consists of a network of placement rooms where the bundles will be located within a multi-layered system that incorporates engineered and natural barriers. The barriers will be placed in a complex thermal-hydraulic-mechanical-chemical-biological (THMCB) environment. A large database of material properties for all components in the repository is required to construct representative models. Within the repository, the sealing materials will experience elevated temperatures due to the thermal gradient produced by radioactive decay heat from the waste inside the container. Furthermore, high porewater pressure due to the depth of the repository, along with the possibility of elevated groundwater salinity, would cause the bentonite-based materials to be under transient hydraulic conditions. It is therefore crucial to characterize the sealing materials over a wide range of thermal-hydraulic conditions. A comprehensive experimental program has been conducted to measure the properties (mainly thermal properties) of all sealing materials involved in the Mark II concept at plausible thermal-hydraulic conditions. The thermal response of Canada’s concept for a deep geological repository has been modelled using the experimentally measured thermal properties. Plausible scenarios are defined and their effects on the container surface temperature as well as the surrounding geosphere are examined to assess whether the design criteria are met for the cases studied. The thermal response shows that even if all the materials are in a dried condition, the repository still performs acceptably as long as the sealing materials remain in contact.
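As a hedged illustration of the kind of first-order thermal check such models build on, the sketch below evaluates the classical constant line-heat-source solution for the temperature rise in a conducting medium; the property values and heat load are placeholders, not the measured Mark II data.

```python
# Hedged sketch: temperature rise around a constant line heat source in an
# infinite medium (classic conduction solution), usable as a first-order check
# on repository thermal models. Property values below are illustrative only.
import numpy as np
from scipy.special import exp1   # exponential integral E1

k = 2.0          # thermal conductivity of host rock, W/(m K)   (illustrative)
alpha = 1.0e-6   # thermal diffusivity, m^2/s                   (illustrative)
q = 300.0        # heat output per unit length of container, W/m (illustrative, constant)

def delta_T(r, t):
    """Temperature rise (K) at radius r (m) after time t (s) for a constant line source."""
    return q / (4.0 * np.pi * k) * exp1(r**2 / (4.0 * alpha * t))

seconds_per_year = 3.156e7
for t_yr in (1, 10, 100):
    print(f"{t_yr:>3} yr: {delta_T(0.5, t_yr * seconds_per_year):.1f} K at 0.5 m")
```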
Abstract:
Introduction: Quantitative and accurate measurements of fat and muscle in the body are important for the prevention and diagnosis of diseases related to obesity and muscle degeneration. Manually segmenting muscle and fat compartments in MR body-images is laborious and time-consuming, hindering implementation in large cohorts. In the present study, the feasibility and success-rate of a Dixon-based MR scan followed by an intensity-normalised, non-rigid, multi-atlas based segmentation was investigated in a cohort of 3,000 subjects. Materials and Methods: 3,000 participants in the in-depth phenotyping arm of the UK Biobank imaging study underwent a comprehensive MR examination. All subjects were scanned using a 1.5 T MR-scanner with the dual-echo Dixon Vibe protocol, covering neck to knees. Subjects were scanned with six slabs in the supine position, without a localizer. Automated body composition analysis was performed using the AMRA Profiler™ system to segment and quantify visceral adipose tissue (VAT), abdominal subcutaneous adipose tissue (ASAT) and thigh muscles. Technical quality assurance was performed and a standard set of acceptance/rejection criteria was established. Descriptive statistics were calculated for all volume measurements and quality assurance metrics. Results: Of the 3,000 subjects, 2,995 (99.83%) were analysable for body fat, 2,828 (94.27%) were analysable when body fat and one thigh were included, and 2,775 (92.50%) were fully analysable for body fat and both thigh muscles. Datasets that could not be analysed were mainly affected by missing slabs in the acquisition or by the patient being positioned so that large parts of the volume were outside the field-of-view. Discussion and Conclusions: In conclusion, this study showed that the rapid UK Biobank MR-protocol was well tolerated by most subjects and sufficiently robust to achieve a very high success-rate for body composition analysis. This research has been conducted using the UK Biobank Resource.
Abstract:
A compositional multivariate approach is used to analyse regional-scale soil geochemical data obtained as part of the Tellus Project generated by the Geological Survey of Northern Ireland (GSNI). The multi-element total concentration data presented comprise XRF analyses of 6862 rural soil samples collected at 20 cm depth on a non-aligned grid at one site per 2 km². Censored data were imputed using published detection limits. Using these imputed values for 46 elements (including LOI), each soil sample site was assigned to the regional geology map provided by GSNI, initially using the dominant lithology for the map polygon. Northern Ireland includes a diversity of geology representing a stratigraphic record from the Mesoproterozoic up to and including the Palaeogene. However, the advance of ice sheets and their meltwaters over the last 100,000 years has left at least 80% of the bedrock covered by superficial deposits, including glacial till and post-glacial alluvium and peat. The question is to what extent the soil geochemistry reflects the underlying geology or the superficial deposits. To address this, the geochemical data were transformed using centered log ratios (clr) to observe the requirements of compositional data analysis and avoid closure issues. Following this, compositional multivariate techniques, including compositional Principal Component Analysis (PCA) and the minimum/maximum autocorrelation factor (MAF) method, were used to determine the influence of the underlying geology on the soil geochemistry signature. PCA showed that 72% of the variation was accounted for by the first four principal components (PCs), implying “significant” structure in the data. Analysis of variance showed that only 10 PCs were necessary to classify the soil geochemical data. To consider an improvement over PCA that uses the spatial relationships of the data, a classification based on MAF analysis was undertaken using the first 6 dominant factors. Understanding the relationship between soil geochemistry and superficial deposits is important for environmental monitoring of fragile ecosystems such as peat. To explore whether peat cover could be predicted from the classification, the lithology designation was adapted to include the presence of peat, based on GSNI superficial deposit polygons, and linear discriminant analysis (LDA) was undertaken. Prediction accuracy for the LDA classification improved from 60.98% based on PCA using 10 principal components to 64.73% using MAF based on the 6 most dominant factors. The misclassification of peat may reflect degradation of peat-covered areas since the creation of the superficial deposit classification. Further work will examine the influence of underlying lithologies on elemental concentrations in peat composition and the effect of this in classification analysis.
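A minimal sketch of the centred log-ratio transform followed by PCA, the core preprocessing step described above, is given below; the compositional array is synthetic rather than the Tellus data.

```python
# Minimal sketch of the centred log-ratio (clr) transform followed by PCA,
# as applied to compositional geochemical data. The array below is synthetic.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
X = rng.dirichlet(alpha=np.ones(46), size=500)          # 500 samples x 46 parts, rows sum to 1

# clr: log of each part divided by the geometric mean of its row
geometric_mean = np.exp(np.log(X).mean(axis=1, keepdims=True))
clr = np.log(X / geometric_mean)

pca = PCA()
scores = pca.fit_transform(clr)
explained = pca.explained_variance_ratio_[:4].sum()
print(f"variance explained by first four PCs: {explained:.0%}")
```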
Abstract:
The branched vs. isoprenoid tetraether (BIT) index is based on the relative abundance of branched tetraether lipids (brGDGTs) and the isoprenoidal GDGT crenarchaeol. In Lake Challa sediments the BIT index has been applied as a proxy for local monsoon precipitation on the assumption that the primary source of brGDGTs is soil washed in from the lake's catchment. Since then, microbial production within the water column has been identified as the primary source of brGDGTs in Lake Challa sediments, meaning that either an alternative mechanism links BIT index variation with rainfall or that the proxy's application must be reconsidered. We investigated GDGT concentrations and BIT index variation in Lake Challa sediments at a decadal resolution over the past 2200 years, in combination with GDGT time-series data from 45 monthly sediment-trap samples and a chronosequence of profundal surface sediments.
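For reference, a hedged sketch of the BIT index in its commonly used form (summed branched GDGTs relative to branched GDGTs plus crenarchaeol, after Hopmans et al., 2004) is given below; the concentrations are placeholders, not Lake Challa measurements.

```python
# Hedged sketch: the BIT index in its commonly used form, i.e. summed branched
# GDGTs relative to branched GDGTs plus crenarchaeol. Values are placeholders.
def bit_index(brGDGT_Ia, brGDGT_IIa, brGDGT_IIIa, crenarchaeol):
    branched = brGDGT_Ia + brGDGT_IIa + brGDGT_IIIa
    return branched / (branched + crenarchaeol)

print(bit_index(brGDGT_Ia=4.0, brGDGT_IIa=2.5, brGDGT_IIIa=0.8, crenarchaeol=12.0))
```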
Our 2200-year geochemical record reveals high-frequency variability in GDGT concentrations, and therefore in the BIT index, superimposed on distinct lower-frequency fluctuations at multi-decadal to century timescales. These changes in BIT index are correlated with changes in the concentration of crenarchaeol but not with those of the brGDGTs. A clue for understanding the indirect link between rainfall and crenarchaeol concentration (and thus thaumarchaeotal abundance) was provided by the observation that surface sediments collected in January 2010 show a distinct shift in GDGT composition relative to sediments collected in August 2007. This shift is associated with increased bulk flux of settling mineral particles with high Ti / Al ratios during March–April 2008, reflecting an event of unusually high detrital input to Lake Challa concurrent with intense precipitation at the onset of the principal rain season that year. Although brGDGT distributions in the settling material are initially unaffected, this soil-erosion event is succeeded by a massive dry-season diatom bloom in July–September 2008 and a concurrent increase in the flux of GDGT-0. Complete absence of crenarchaeol in settling particles during the austral summer following this bloom indicates that no Thaumarchaeota bloom developed at that time. We suggest that increased nutrient availability, derived from the eroded soil washed into the lake, caused the massive bloom of diatoms and that the higher concentrations of ammonium (formed from breakdown of this algal matter) resulted in a replacement of nitrifying Thaumarchaeota, which in typical years prosper during the austral summer, by nitrifying bacteria. The decomposing dead diatoms passing through the suboxic zone of the water column probably also formed a substrate for GDGT-0-producing archaea. Hence, through a cascade of events, intensive rainfall affects thaumarchaeotal abundance, resulting in high BIT index values.
Decade-scale BIT index fluctuations in Lake Challa sediments exactly match the timing of three known episodes of prolonged regional drought within the past 250 years. Additionally, the principal trends of inferred rainfall variability over the past two millennia are consistent with the hydroclimatic history of equatorial East Africa, as has been documented from other (but less well dated) regional lake records. We therefore propose that variation in GDGT production originating from the episodic recurrence of strong soil-erosion events, when integrated over (multi-)decadal and longer timescales, generates a stable positive relationship between the sedimentary BIT index and monsoon rainfall at Lake Challa. Application of this paleoprecipitation proxy at other sites requires ascertaining the local processes that affect the production of crenarchaeol by Thaumarchaeota and of brGDGTs.
Abstract:
The European Union continues to exert a large influence on the direction of member states' energy policy. The 2020 targets for renewable energy integration have had a significant impact on the operation of current power systems, forcing a rapid change from fossil-fuel-dominated systems to those with high levels of renewable power. Additionally, the overarching aim of an internal energy market throughout Europe has placed, and will continue to place, importance on multi-jurisdictional co-operation regarding energy supply. Combining these renewable energy and multi-jurisdictional supply goals results in a complicated multi-vector energy system, where understanding the interactions between fossil fuels, renewable energy, interconnection and economic power system operation is increasingly important. This paper provides a novel and systematic methodology to fully understand the changing dynamics of interconnected energy systems from a gas and power perspective. A fully realistic unit commitment and economic dispatch model of the 2030 power systems in Great Britain and Ireland, combined with a representative gas transmission energy flow model, is developed. The importance of multi-jurisdictional integrated energy system operation in one of the most strategically important renewable energy regions is demonstrated.
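As a hedged illustration of the economic-dispatch building block inside such unit commitment models, the sketch below dispatches committed units in merit order until demand is met; the unit data are invented and the formulation is far simpler than the full model described above.

```python
# Illustrative sketch of merit-order economic dispatch: serve demand with the
# cheapest available units first. Unit data are invented for illustration.
units = [
    # (name, capacity MW, marginal cost EUR/MWh)
    ("wind",   900,  0.0),
    ("ccgt_1", 400, 55.0),
    ("ccgt_2", 400, 60.0),
    ("ocgt",   200, 95.0),
]

def dispatch(demand_mw):
    schedule, remaining = {}, demand_mw
    for name, cap, _cost in sorted(units, key=lambda u: u[2]):   # cheapest first
        output = min(cap, remaining)
        schedule[name] = output
        remaining -= output
        if remaining <= 0:
            break
    return schedule

print(dispatch(1500))   # e.g. wind 900, ccgt_1 400, ccgt_2 200
```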
Abstract:
A major weakness of the loading models proposed in recent years for pedestrians walking on flexible structures is the various uncorroborated assumptions made in their development. This applies to the spatio-temporal characteristics of pedestrian loading and to the nature of multi-object interactions. To alleviate this problem, a framework for the determination of localised pedestrian forces on full-scale structures is presented using wireless attitude and heading reference systems (AHRS). An AHRS comprises a triad of tri-axial accelerometers, gyroscopes and magnetometers managed by a dedicated data processing unit, allowing motion in three-dimensional space to be reconstructed. A pedestrian loading model based on a single-point inertial measurement from an AHRS is derived and shown to perform well against benchmark data collected on an instrumented treadmill. Unlike other models, the current model does not take any predefined form, nor does it require any extrapolation as to the timing and amplitude of pedestrian loading. In order to correctly assess the influence of the moving pedestrian on the behaviour of a structure, an algorithm for tracking the point of application of the pedestrian force is developed based on data from a single AHRS attached to a foot. A set of controlled walking tests with a single pedestrian is conducted on a real footbridge for validation purposes. A remarkably good match between the measured and simulated bridge response is found, confirming the applicability of the proposed framework.
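A hedged sketch of the basic physics behind a single-point inertial loading model is given below: treating the measured acceleration near the body centre of mass as that of a lumped mass, the vertical ground reaction force follows from Newton's second law; the signal, mass and sampling rate are illustrative placeholders, not the paper's model.

```python
# Hedged sketch: estimate the vertical ground reaction force from a single-point
# acceleration measurement near the body centre of mass via F(t) = m * (a(t) + g).
# The walking signal, body mass and sampling rate are synthetic placeholders.
import numpy as np

g = 9.81                      # gravitational acceleration, m/s^2
m = 75.0                      # pedestrian mass, kg (illustrative)
fs = 128.0                    # AHRS sampling rate, Hz (illustrative)
t = np.arange(0, 10, 1 / fs)

# synthetic vertical acceleration of the body centre of mass while walking at ~2 Hz
a_vertical = 0.3 * g * np.sin(2 * np.pi * 2.0 * t)

force = m * (a_vertical + g)  # vertical ground reaction force, N
print(f"force range: {force.min():.0f} to {force.max():.0f} N")
```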
Abstract:
Microturbines are among the most successfully commercialized distributed energy resources, especially when used for combined heat and power generation. However, the interrelated thermal and electrical system dynamic behaviors have not been fully investigated. This is technically challenging due to the complex thermo-fluid-mechanical energy conversion processes, which introduce multiple time-scale dynamics and strong nonlinearity into the analysis. To tackle this problem, this paper proposes a simplified model which can predict the coupled thermal and electric output dynamics of microturbines. Considering the time-scale differences of the various dynamic processes occurring within microturbines, the electromechanical subsystem is treated as a fast quasi-linear process, while the thermo-mechanical subsystem is treated as a slow process with high nonlinearity. A three-stage subspace identification method is utilized to capture the dominant dynamics and predict the electric power output. For the thermo-mechanical process, a radial basis function model trained by particle swarm optimization is employed to handle the strong nonlinear characteristics. Experimental tests on a Capstone C30 microturbine show that the proposed modeling method captures the system dynamics well and produces a good prediction of the coupled thermal and electric outputs in various operating modes.
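A hedged sketch of a Gaussian radial-basis-function regression of the kind mentioned above is given below; here the centres are fixed and the output weights are solved by least squares, whereas the paper tunes the RBF model with particle swarm optimization, which is not reproduced. The data are synthetic.

```python
# Hedged sketch of a Gaussian RBF regression model: fixed centres, output weights
# by least squares. The nonlinear target below stands in for a slow thermal response.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 200)[:, None]
y = np.sin(6 * x[:, 0]) + 0.05 * rng.normal(size=200)   # synthetic nonlinear response

centres = np.linspace(0, 1, 15)[:, None]
width = 0.08

def rbf_features(x):
    # Gaussian basis functions evaluated at each centre
    return np.exp(-((x - centres.T) ** 2) / (2 * width ** 2))

Phi = rbf_features(x)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)             # output-layer weights

y_hat = rbf_features(x) @ w
print("RMS error:", np.sqrt(np.mean((y - y_hat) ** 2)))
```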
Abstract:
Green energy and green technology are among the most quoted terms in the context of modern science and technology. Technology that is close to nature is a necessity for a modern world haunted by global warming and climatic alterations. Proper utilization of solar energy is one of the goals of the Green Energy Movement. The present thesis deals with work carried out in the field of nanotechnology and its possible use in various applications (employing natural dyes) such as solar cells. Unlike artificial dyes, natural dyes are readily available, easy to prepare, low in cost, non-toxic, environmentally friendly and fully biodegradable. Looking to the 21st century, the nano/micro sciences will be a chief contributor to scientific and technological developments. As nanotechnology progresses and complex nanosystems are fabricated, a growing impetus is being given to the development of multi-functional and size-dependent materials. The control of morphology, from the nano to the micrometer scale, associated with the incorporation of several functionalities, can yield entirely new smart hybrid materials. These are a special class of materials that provide a new route to improving the environmental stability of a material with interesting optical properties, opening up a wealth of opportunities for applications in the field of photonics. Zinc oxide (ZnO) is one such multipurpose material that has been explored for applications in sensing, environmental monitoring, bio-medical systems and communications technology. Understanding the growth mechanism and tailoring the morphology is essential for the use of ZnO crystals as nano/micro electromechanical systems and also as building blocks of other nanosystems.
Abstract:
This thesis presents the first multi-scale, multi-frequency study focused on the jet of the radio galaxy IC1531 (z=0.026), using the Chandra, XMM-Newton and Fermi satellites, with the aims of tracing its high-energy emission; identifying the radiative processes responsible for the observed emission and estimating the main physical parameters of the jet; and estimating the energetics of the jet on the different scales. The source was selected for the presence of an extended jet (≈5’’) observed in the radio and X-ray bands; moreover, it was reported as a possible counterpart of the gamma-ray source 3FGLJ0009.6-3211 in the third Fermi catalog (3FGL). The presence of γ-ray emission, confirmed by our study, is important for modeling the SED of the nuclear region. The X-ray emission of the nucleus is dominated by a component well reproduced by a power law with spectral index Γ=2.2. The analysis of the gamma-ray emission revealed variability on timescales of 5 days, from which it was possible to estimate the size of the emitting region. In addition, the study of the spectral energy distribution of the nuclear region of IC 1531 from the radio band to γ-rays is presented. The models allow us to determine the nature of the gamma-ray emission and to estimate the kinetic power of the jet on sub-pc scales. The observables were used to constrain the model parameters. The resulting modeling made it possible to estimate the physical parameters of the jet and the power it transports on sub-pc scales. The estimates at 151 MHz suggest that the jet has low velocities (Γ≤7) and an inclination angle to the line of sight of 10°<ϑ<20°; overall, the energy transport by the jet is efficient. The origin of the large-scale X-ray emission of the jet is consistent with synchrotron emission, which confirms the classification of IC1531 as a low-power MAGN source.
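A hedged sketch of the standard causality argument that links the ~5-day variability timescale to an upper bound on the emitting-region size, R ≤ c Δt δ / (1 + z), is given below; the Doppler factor is an assumed value, not one derived in the thesis.

```python
# Hedged sketch of the causality bound on the gamma-ray emitting region size:
# R <= c * Delta_t * delta / (1 + z). The Doppler factor is assumed for illustration.
c = 2.998e10          # speed of light, cm/s
z = 0.026             # redshift of IC 1531
dt = 5 * 86400.0      # variability timescale, s
delta = 3.0           # Doppler factor (assumed, illustrative)

R_max = c * dt * delta / (1 + z)
print(f"R <= {R_max:.2e} cm  (~{R_max / 3.086e18:.3f} pc)")
```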
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08