932 results for Subgrid Scale Model
Abstract:
The last two decades have seen intense scientific and regulatory interest in the health effects of particulate matter (PM). Influential epidemiological studies that characterize chronic exposure of individuals rely on monitoring data that are sparse in space and time, so they often assign the same exposure to participants in large geographic areas and across time. We estimate monthly PM during 1988-2002 in a large spatial domain for use in studying health effects in the Nurses' Health Study. We develop a conceptually simple spatio-temporal model that uses a rich set of covariates. The model is used to estimate concentrations of PM10 for the full time period and PM2.5 for a subset of the period. For the earlier part of the period, 1988-1998, few PM2.5 monitors were operating, so we develop a simple extension to the model that represents PM2.5 conditionally on PM10 model predictions. In the epidemiological analysis, model predictions of PM10 are more strongly associated with health effects than exposure estimates obtained from simpler approaches. Our modeling approach supports the application by estimating both fine-scale and large-scale spatial heterogeneity and by capturing space-time interaction through the use of monthly-varying spatial surfaces. At the same time, the model is computationally feasible, implementable with standard software, and readily understandable to the scientific audience. Despite simplifying assumptions, the model has good predictive performance and uncertainty characterization.
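The conditional step for the sparse early-period PM2.5 record can be illustrated with a toy regression: PM2.5 is modeled given PM10 predictions at co-located monitors, then predicted wherever only PM10 is available. The sketch below is a minimal illustration of that idea on simulated data, not the paper's actual spatio-temporal model or covariate set.

```python
import numpy as np

# Minimal sketch of the conditional PM2.5-given-PM10 idea: simulated data,
# not the paper's spatio-temporal model or its covariates.
rng = np.random.default_rng(0)
pm10_pred = rng.uniform(10, 60, size=200)            # PM10 model predictions (ug/m^3)
pm25_obs = 0.5 * pm10_pred + rng.normal(0, 3, 200)   # co-located PM2.5 observations

# Regress observed PM2.5 on predicted PM10 (intercept + slope).
X = np.column_stack([np.ones_like(pm10_pred), pm10_pred])
beta, *_ = np.linalg.lstsq(X, pm25_obs, rcond=None)

# Predict PM2.5 at locations/months where only PM10 predictions exist.
pm10_new = np.array([15.0, 35.0, 55.0])
pm25_hat = beta[0] + beta[1] * pm10_new
print(pm25_hat)
```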
Abstract:
Submicroscopic changes in chromosomal DNA copy number dosage are common and have been implicated in many heritable diseases and cancers. Recent high-throughput technologies have a resolution that permits the detection of segmental changes in DNA copy number that span thousands of base pairs across the genome. Genome-wide association studies (GWAS) may simultaneously screen for copy number-phenotype and SNP-phenotype associations as part of the analytic strategy. However, genome-wide array analyses are particularly susceptible to batch effects, as the logistics of preparing DNA and processing thousands of arrays often involve multiple laboratories and technicians, or changes over calendar time to the reagents and laboratory equipment. Failure to adjust for batch effects can lead to incorrect inference and requires inefficient post-hoc quality control procedures that exclude regions that are associated with batch. Our work extends previous model-based approaches for copy number estimation by explicitly modeling batch effects and using shrinkage to improve locus-specific estimates of copy number uncertainty. Key features of this approach include the use of diallelic genotype calls from experimental data to estimate batch- and locus-specific parameters of background and signal without the requirement of training data. We illustrate these ideas using a study of bipolar disorder and a study of chromosome 21 trisomy. The former has batch effects that dominate much of the observed variation in quantile-normalized intensities, while the latter illustrates the robustness of our approach to datasets where as many as 25% of the samples have altered copy number. Locus-specific estimates of copy number can be plotted on the copy-number scale to investigate mosaicism and guide the choice of appropriate downstream approaches for smoothing the copy number as a function of physical position. The software is open source and implemented in the R package CRLMM available at Bioconductor (http://www.bioconductor.org).
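The shrinkage idea named in the abstract can be illustrated with a standard empirical-Bayes calculation: batch-specific means for a locus are pulled toward an across-batch average, with small batches shrunk the most. The sketch below is a generic illustration on invented numbers, not the CRLMM implementation; the variance values are assumptions.

```python
import numpy as np

# Generic empirical-Bayes shrinkage of batch-specific locus means toward the
# across-batch mean -- an illustration of the idea, not CRLMM's actual model.
batch_means = np.array([2.10, 1.85, 2.40, 1.95])   # invented per-batch means
batch_sizes = np.array([80, 12, 45, 5])            # arrays per batch
sigma2 = 0.20                                      # within-batch variance (assumed)
tau2 = 0.05                                        # between-batch variance (assumed)

grand_mean = np.average(batch_means, weights=batch_sizes)
w = (batch_sizes / sigma2) / (batch_sizes / sigma2 + 1.0 / tau2)
shrunk = w * batch_means + (1.0 - w) * grand_mean  # small batches shrink most
print(shrunk)
```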
Abstract:
This paper presents the German version of the Short Understanding of Substance Abuse Scale (SUSS) [Humphreys et al.: Psychol Addict Behav 1996;10:38-44], the Verständnis von Störungen durch Substanzkonsum (VSS), and evaluates its psychometric properties. The VSS assesses clinicians' beliefs about the nature and treatment of substance use disorders, particularly their endorsement of psychosocial and disease orientation. The VSS was administered to 160 treatment staff members at 12 substance use disorder treatment programs in the German-speaking part of Switzerland. Because the confirmatory factor analysis of the VSS did not completely replicate the factorial structure of the SUSS, an exploratory factor analysis was undertaken. This analysis identified two factors: the Psychosocial model factor and a slightly different Disease model factor. The VSS Disease and Psychosocial subscales showed convergent and discriminant validity, as well as sufficient reliability.
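The exploratory step described above can be sketched with a standard two-factor EFA on simulated item responses. This is a generic illustration using scikit-learn's FactorAnalysis (varimax rotation requires scikit-learn >= 0.24), not the VSS data or the software used in the study.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Two-factor exploratory factor analysis on simulated questionnaire items --
# a generic illustration of the EFA step, not the VSS analysis itself.
rng = np.random.default_rng(2)
n_staff, n_items = 160, 10
latent = rng.normal(size=(n_staff, 2))                # two latent orientations
loadings = np.zeros((n_items, 2))
loadings[:5, 0] = 0.8                                 # items 1-5 load on factor 1
loadings[5:, 1] = 0.7                                 # items 6-10 load on factor 2
responses = latent @ loadings.T + 0.4 * rng.normal(size=(n_staff, n_items))

fa = FactorAnalysis(n_components=2, rotation="varimax")  # scikit-learn >= 0.24
fa.fit(responses)
print(np.round(fa.components_.T, 2))                  # estimated item loadings
```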
Abstract:
Constructing a 3D surface model from sparse-point data is a nontrivial task. Here, we report an accurate and robust approach for reconstructing a surface model of the proximal femur from sparse-point data and a dense-point distribution model (DPDM). The problem is formulated as a three-stage optimal estimation process. The first stage, affine registration, iteratively estimates a scale and a rigid transformation between the mean surface model of the DPDM and the sparse input points. The estimation results of the first stage are used to establish point correspondences for the second stage, statistical instantiation, which stably instantiates a surface model from the DPDM using a statistical approach. This surface model is then fed to the third stage, kernel-based deformation, which further refines the surface model. Handling outliers is achieved by consistently employing the least trimmed squares (LTS) approach with a roughly estimated outlier rate in all three stages. If an optimal value of the outlier rate is preferred, we propose a hypothesis testing procedure to automatically estimate it. We present here our validations using four experiments: (1) a leave-one-out experiment; (2) an experiment evaluating the present approach for handling pathology; (3) an experiment evaluating the present approach for handling outliers; and (4) an experiment reconstructing surface models of seven dry cadaver femurs using clinically relevant data, both without noise and with noise added. Our validation results demonstrate the robust performance of the present approach in handling outliers, pathology, and noise. An average 95-percentile error of 1.7-2.3 mm was found when the present approach was used to reconstruct surface models of the cadaver femurs from sparse-point data with noise added.
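The trimming idea behind LTS can be sketched in a few lines: fit, rank the residuals, refit on the best-fitting subset, and iterate. The snippet below is a simplified illustration on a 2-D line-fitting problem with an assumed outlier rate, not the paper's three-stage surface reconstruction.

```python
import numpy as np

def lts_fit(X, y, outlier_rate=0.1, n_iter=20):
    # Least trimmed squares via iterative refitting on the h best-fitting
    # points -- a simplified sketch of the trimming idea.
    n = len(y)
    h = int(np.ceil((1.0 - outlier_rate) * n))    # points kept each iteration
    keep = np.arange(n)
    for _ in range(n_iter):
        beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        resid = np.abs(y - X @ beta)
        keep = np.argsort(resid)[:h]              # retain the h smallest residuals
    return beta

# Example: a line with 15% gross outliers.
rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.2, 100)
y[:15] += 15.0                                    # contaminate 15% of the points
X = np.column_stack([np.ones_like(x), x])
print(lts_fit(X, y, outlier_rate=0.2))            # recovers roughly [1, 2]
```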
Abstract:
In this dissertation, the problem of creating effective control algorithms for large-scale Adaptive Optics (AO) systems for the new generation of giant optical telescopes is addressed. The effectiveness of AO control algorithms is evaluated in several respects, such as computational complexity, compensation error rejection, and robustness, i.e. reasonable insensitivity to system imperfections. The results of this research are summarized as follows: 1. Robustness study of the Sparse Minimum Variance Pseudo Open Loop Controller (POLC) for multi-conjugate adaptive optics (MCAO). An AO system model that accounts for various system errors has been developed and applied to check the stability and performance of the POLC algorithm, which is one of the most promising approaches for future AO systems control. It has been shown through numerous simulations that, despite the initial assumption that exact system knowledge is necessary for the POLC algorithm to work, it is highly robust against various system errors. 2. Predictive Kalman Filter (KF) and Minimum Variance (MV) control algorithms for MCAO. The limiting performance of the non-dynamic Minimum Variance and dynamic KF-based phase estimation algorithms for MCAO has been evaluated via Monte Carlo simulations. The validity of a simple near-Markov autoregressive phase dynamics model has been tested, and its ability to adequately predict the turbulence phase has been demonstrated for both single-conjugate and multi-conjugate AO. It has also been shown that, for MCAO, the more complicated KF approach yields no performance improvement over the much simpler MV algorithm. 3. Sparse predictive Minimum Variance control algorithm for MCAO. A temporal prediction stage has been added to the non-dynamic MV control algorithm in such a way that no additional computational burden is introduced. It has been confirmed through simulations that the use of phase prediction makes it possible to significantly reduce the system sampling rate, and thus the overall computational complexity, while keeping the system stable and effectively compensating for the measurement and control latencies.
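The contrast between the static MV reconstructor and a one-step temporal prediction can be shown schematically. The sketch below uses random matrices in place of real wavefront-sensor geometry and Kolmogorov statistics; the MV formula x_hat = C_x A^T (A C_x A^T + C_n)^(-1) y is the standard minimum-variance estimator, and the AR(1) coefficient is an assumed value.

```python
import numpy as np

rng = np.random.default_rng(4)
n_phase, n_meas = 20, 30
A = rng.normal(size=(n_meas, n_phase))    # stand-in for the WFS influence matrix
C_x = np.eye(n_phase)                     # phase covariance (Kolmogorov in practice)
C_n = 0.1 * np.eye(n_meas)                # measurement-noise covariance

# Static minimum-variance reconstructor.
R = C_x @ A.T @ np.linalg.inv(A @ C_x @ A.T + C_n)

x = rng.normal(size=n_phase)              # "true" phase at the current step
y = A @ x + rng.normal(scale=0.3, size=n_meas)
x_hat = R @ y                             # MV estimate from noisy measurements

a = 0.99                                  # assumed AR(1) temporal correlation
x_pred = a * x_hat                        # one-step near-Markov phase prediction
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```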
Abstract:
This dissertation presents competitive control methodologies for small-scale power systems (SSPS). An SSPS is a collection of sources and loads sharing a common network that can be isolated during terrestrial disturbances. Micro-grids, naval ship electric power systems (NSEPS), aircraft power systems and telecommunication power systems are typical examples of SSPS. An SSPS lacks a defined slack bus, which complicates the analysis and development of its control systems. In addition, a change in a load or source alters the real-time parameters of the system. The control system should therefore provide the flexibility required to ensure operation as a single aggregated system. In most SSPS, the sources and loads must be equipped with power electronic interfaces, which can be modeled as dynamically controllable quantities. The mathematical formulation of the micro-grid is carried out with the help of game theory, optimal control and the fundamental theory of electrical power systems. The micro-grid can then be viewed as a dynamical multi-objective optimization problem with nonlinear objectives and variables. Detailed analysis was performed on optimal solutions with regard to startup transient modeling, bus selection modeling and the level of communication within the micro-grids. In each approach a detailed mathematical model is formed to observe the system response. A differential game-theoretic approach was also used for modeling and optimization of startup transients. The startup transient controller was implemented with open-loop, PI and feedback control methodologies, and a hardware implementation was carried out to validate the theoretical results. The proposed game-theoretic controller shows higher performance than the traditional PI controller during startup. In addition, the optimal transient surface is necessary when implementing the feedback controller for the startup transient. Further, the experimental results are in agreement with the theoretical simulations. Bus selection and team communication were modeled with discrete and continuous game-theory models. Although players have multiple choices, the controller is capable of choosing the optimum bus, and the team communication structures are able to optimize the players' Nash equilibrium point. All mathematical models are based on the local information of the load or source. As a result, these models are the keys to developing accurate distributed controllers.
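The game-theoretic viewpoint can be illustrated with a toy two-player quadratic game solved by best-response iteration; the cost functions and coefficients below are invented for illustration and are not the dissertation's micro-grid model.

```python
import numpy as np

# Best-response iteration for a toy two-player quadratic game, illustrating
# the Nash-equilibrium viewpoint on distributed control decisions.
# Player i minimizes J_i(u_i, u_j) = 0.5*u_i^2 + c_i*u_i*u_j - b_i*u_i,
# whose best response is u_i = b_i - c_i*u_j.
c = np.array([0.3, 0.4])       # invented coupling coefficients
b = np.array([1.0, 2.0])       # invented local cost terms
u = np.zeros(2)
for _ in range(50):
    u[0] = b[0] - c[0] * u[1]  # player 1's best response to u2
    u[1] = b[1] - c[1] * u[0]  # player 2's best response to u1
print(u)                       # converges to the toy game's Nash equilibrium
```

Because |c1*c2| < 1 here, the alternating best responses contract to the fixed point (approximately u = [0.45, 1.82]), mirroring how each player needs only local information plus the other player's current action.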
Abstract:
As continued global funding and coordination are allocated toward the improvement of access to safe sources of drinking water, alternative solutions may be necessary to expand implementation to remote communities. This report evaluates two technologies used in a small water distribution system in a mountainous region of Panama: solar-powered pumping and flow-reducing discs. The two parts of the system function independently, but both were chosen for their ability to mitigate unique issues in the community. The design program NeatWork and flow-reducing discs were evaluated because they are tools taught to Peace Corps Volunteers in Panama. Even when ample water is available, mountainous terrain affects the pressure available throughout a water distribution system. Since the static head in the system varies only with the height of water in the tank, frictional losses from pipes and fittings must be exploited to balance out the inequalities caused by the uneven terrain. Reducing the maximum allowable flow to connections through the installation of flow-reducing discs can help retain enough residual pressure in the main distribution lines to provide reliable service to all connections. NeatWork was calibrated to measured flow rates by changing the orifice coefficient (θ), resulting in a value of 0.68, which is 10-15% higher than typical values for manufactured flow-reducing discs. NeatWork was used to model various system configurations to determine whether a single-sized flow-reducing disc could provide equitable flow rates throughout an entire system. There is a strong correlation between the optimum single-sized flow-reducing disc and the average elevation change throughout a water distribution system: the larger the elevation change across the system, the smaller the recommended uniform orifice size. Renewable energy can bridge the infrastructure gap and provide basic services at a fraction of the cost and time required to install transmission lines. Methods for the assessment of solar-powered pumping systems as a means of rural water supply are presented and evaluated. It was determined that manufacturer-provided product specifications can be used to appropriately design a solar pumping system, but care must be taken to ensure that sufficient water can be provided to the system despite variations in solar intensity.
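The role of the orifice coefficient can be made concrete with the standard orifice equation Q = θ·A·√(2gh). The sketch below uses the calibrated θ = 0.68 reported in the abstract; the disc diameter and residual head are assumed values chosen purely for illustration.

```python
import math

# Flow through a flow-reducing disc via the orifice equation Q = theta*A*sqrt(2*g*h).
theta = 0.68                           # calibrated orifice coefficient (from the abstract)
g = 9.81                               # gravitational acceleration (m/s^2)
d = 0.004                              # assumed orifice diameter: 4 mm
h = 25.0                               # assumed residual pressure head: 25 m of water
A = math.pi * (d / 2) ** 2             # orifice area (m^2)
Q = theta * A * math.sqrt(2 * g * h)   # volumetric flow (m^3/s)
print(f"{Q * 1000:.3f} L/s")           # ~0.19 L/s for these assumed values
```

Because Q grows with √h, connections at low elevations (high residual head) see only modestly larger flows than those higher up, which is what lets a well-chosen disc size equalize service across uneven terrain.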
Abstract:
Wind energy has been one of the fastest-growing sectors of the nation's renewable energy portfolio for the past decade, and the same tendency is projected for the upcoming years given the aggressive governmental policies for the reduction of fossil fuel dependency. The so-called Horizontal Axis Wind Turbine (HAWT) technologies have shown great technological promise and outstanding commercial penetration. Given this broad acceptance, the size of wind turbines has grown exponentially over time. However, safety and economic concerns have emerged as a result of the new design tendencies for massive-scale wind turbine structures, which present high slenderness ratios and complex shapes and are typically located in remote areas (e.g. offshore wind farms). In this regard, safe operation requires not only first-hand information on actual structural dynamic conditions under aerodynamic action, but also a deep understanding of the environmental factors in which these multibody rotating structures operate. Given the cyclo-stochastic patterns of the wind loading exerting pressure on a HAWT, a probabilistic framework is appropriate to characterize the risk of failure in terms of resistance and serviceability conditions at any given time. Furthermore, sources of uncertainty such as material imperfections, buffeting and flutter, aeroelastic damping, gyroscopic effects, and turbulence, among others, call for a more sophisticated mathematical framework that can properly handle all these sources of indetermination. The modeling complexity that arises from these characterizations demands a data-driven experimental validation methodology to calibrate and corroborate the model. To this end, System Identification (SI) techniques offer a spectrum of well-established numerical methods appropriate for stationary, deterministic, and data-driven numerical schemes, capable of predicting actual dynamic states (eigenrealizations) of traditional time-invariant dynamic systems. Consequently, a modified data-driven SI metric is proposed, based on the so-called Subspace Realization Theory, now adapted for stochastic, non-stationary, and time-varying systems, as is the case for a HAWT's complex aerodynamics. Simultaneously, this investigation explores the characterization of the turbine loading and response envelopes for critical failure modes of the structural components the wind turbine is made of. In the long run, both the aerodynamic framework (theoretical model) and the system identification (experimental model) will be merged in a numerical engine formulated as a search algorithm for model updating, known as the Adaptive Simulated Annealing (ASA) process. This iterative engine is based on a set of function minimizations computed by a metric called the Modal Assurance Criterion (MAC). In summary, the Thesis is composed of four major parts: (1) development of an analytical aerodynamic framework that predicts interacting wind-structure stochastic loads on wind turbine components; (2) development of a novel tapered-swept-curved Spinning Finite Element (SFE) that includes damped-gyroscopic effects and axial-flexural-torsional coupling; (3) a novel data-driven structural health monitoring (SHM) algorithm via stochastic subspace identification methods; and (4) a numerical search (optimization) engine based on ASA and MAC capable of updating the SFE aerodynamic model.
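The Modal Assurance Criterion that drives the model-updating engine is a standard formula: for real-valued mode shapes, MAC = |φ_a·φ_e|² / ((φ_a·φ_a)(φ_e·φ_e)). The snippet below computes it on invented mode-shape vectors; it is the textbook criterion, not the thesis's full ASA search.

```python
import numpy as np

def mac(phi_a, phi_e):
    # Modal Assurance Criterion between an analytical and an experimental
    # mode shape (real-valued case): 1 = perfectly correlated, 0 = orthogonal.
    return (phi_a @ phi_e) ** 2 / ((phi_a @ phi_a) * (phi_e @ phi_e))

phi_model = np.array([0.2, 0.5, 0.9, 1.0])   # invented analytical mode shape
phi_test = phi_model + 0.05 * np.random.default_rng(5).normal(size=4)
print(mac(phi_model, phi_test))              # near 1 for well-correlated modes
```

In a model-updating loop, the optimizer adjusts model parameters to drive MAC values for paired modes toward 1, which is the kind of function minimization the abstract attributes to the ASA engine.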
Abstract:
Software must be constantly adapted to changing requirements. The time scale, abstraction level and granularity of adaptations may vary from short-term, fine-grained adaptation to long-term, coarse-grained evolution. Fine-grained, dynamic and context-dependent adaptations can be particularly difficult to realize in long-lived, large-scale software systems. We argue that, in order to effectively and efficiently deploy such changes, adaptive applications must be built on an infrastructure that is not just model-driven, but is both model-centric and context-aware. Specifically, this means that high-level, causally-connected models of the application and the software infrastructure itself should be available at run-time, and that changes may need to be scoped to the run-time execution context. We first review the dimensions of software adaptation and evolution, and then we show how model-centric design can address the adaptation needs of a variety of applications that span these dimensions. We demonstrate through concrete examples how model-centric and context-aware designs work at the level of application interface, programming language and runtime. We then propose a research agenda for a model-centric development environment that supports dynamic software adaptation and evolution.
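The notion of scoping a change to the run-time execution context can be miniaturized as follows. This is a toy sketch of context-dependent behavior selection in the spirit the authors describe, not their infrastructure; the context names and the Renderer class are invented.

```python
from contextlib import contextmanager

# A minimal sketch of context-scoped adaptation: behavior variants are
# selected by whichever contexts are active at run-time.
_active_contexts = []

@contextmanager
def context(name):
    _active_contexts.append(name)
    try:
        yield
    finally:
        _active_contexts.pop()   # the adaptation ends with its scope

class Renderer:
    def render(self, doc):
        if "mobile" in _active_contexts:   # adaptation scoped to the context
            return f"[compact] {doc}"
        return f"[full] {doc}"

r = Renderer()
print(r.render("page"))                    # full layout by default
with context("mobile"):
    print(r.render("page"))                # adapted only inside the context
```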
Abstract:
BACKGROUND: Alveolar echinococcosis (AE) is a severe helminth disease affecting humans, which is caused by the fox tapeworm Echinococcus multilocularis. AE represents a serious public health issue in larger regions of China, Siberia, and other regions in Asia. In Europe, a significant increase in prevalence since the 1990s is not only affecting the historically documented endemic area north of the Alps but more recently also neighbouring regions previously not known to be endemic. The genetic diversity of the parasite population and its distribution in Europe have now been investigated with a view to generating a fine-tuned map of parasite variants occurring in Europe. This approach may serve as a model to study the parasite at a worldwide level. METHODOLOGY/PRINCIPAL FINDINGS: The genetic diversity of E. multilocularis was assessed based upon the tandemly repeated microsatellite marker EmsB in association with matching fox host geographical positions. Our study demonstrated a higher genetic diversity in the endemic areas north of the Alps when compared to other areas. CONCLUSIONS/SIGNIFICANCE: The study of the spatial distribution of E. multilocularis in Europe, based on 32 genetic clusters, suggests that Europe can be considered as a unique global focus of E. multilocularis, which can be schematically drawn as a central core located in Switzerland and Jura Swabe flanked by neighbouring regions where the parasite exhibits a lower genetic diversity. The transmission of the parasite into peripheral regions is governed by a "mainland-island" system. Moreover, the presence of similar genetic profiles in both zones indicated a founder event.
Abstract:
The breaking of synoptic-scale Rossby waves (RWB) at the tropopause level is central to the daily weather evolution in the extratropics and the subtropics. RWB leads to pronounced meridional transport of heat, moisture, momentum, and chemical constituents. RWB events are manifest as elongated and narrow structures in the tropopause-level potential vorticity (PV) field. A feature-based validation approach is used to assess the representation of Northern Hemisphere RWB in present-day climate simulations carried out with the ECHAM5-HAM climate model at three different resolutions (T42L19, T63L31, and T106L31) against the ERA-40 reanalysis data set. An objective identification algorithm extracts RWB events from the isentropic PV field and allows quantifying the frequency of occurrence of RWB. The biases in the frequency of RWB are then compared to biases in the time-mean tropopause-level jet wind speeds. The ECHAM5-HAM model captures the location of the RWB frequency maxima in the Northern Hemisphere at all three resolutions. However, at coarse resolution (T42L19) the overall frequency of RWB, i.e. the frequency averaged over all seasons and the entire hemisphere, is underestimated by 28%. The higher-resolution simulations capture the overall frequency of RWB much better, with a minor difference between T63L31 and T106L31 (frequency errors of −3.5% and +6%, respectively). The number of large-size RWB events is significantly underestimated by the T42L19 experiment and well represented in the T106L31 simulation. On the local scale, however, significant differences from ERA-40 are found in the higher-resolution simulations. These differences are regionally confined and vary with the season. The most striking difference between T106L31 and ERA-40 is that ECHAM5-HAM overestimates the frequency of RWB in the subtropical Atlantic in all seasons except spring. This bias maximum is accompanied by an equatorward extension of the subtropical westerlies.
Abstract:
Previous studies have highlighted the severity of detrimental effects for life on earth after an assumed regionally limited nuclear war. These effects are caused by climatic, chemical and radiative changes persisting for up to one decade. However, so far only a very limited number of climate model simulations have been performed, giving rise to the question of how realistic previous computations have been. This study uses the coupled chemistry climate model (CCM) SOCOL, which belongs to a different family of CCMs than those previously used, to investigate the consequences of such a hypothetical nuclear conflict. In accordance with previous studies, the present work assumes a scenario of a nuclear conflict between India and Pakistan, each using 50 warheads with an individual blasting power of 15 kt ("Hiroshima size") against the major population centers, resulting in the emission of tiny soot particles, which are generated in the firestorms expected in the aftermath of the detonations. Substantial uncertainties related to the calculation of likely soot emissions, particularly concerning assumptions of target fuel loading and targeting of weapons, have been addressed by simulating several scenarios, with soot emissions ranging from 1 to 12 Tg. Their high absorptivity with respect to solar radiation leads to a rapid self-lofting of the soot particles into the strato- and mesosphere within a few days after emission, where they remain for several years. Consequently, the model suggests that Earth's surface temperatures would drop by several degrees Celsius due to the shielding of solar irradiance by the soot, indicating a major global cooling. In addition, there is a substantial reduction of precipitation lasting 5 to 10 yr after the conflict, depending on the magnitude of the initial soot release. Extreme cold spells associated with an increase in sea ice formation are found during Northern Hemisphere winter, exposing the continental land masses of North America and Eurasia to a cooling of several degrees. In the stratosphere, the strong heating leads to an acceleration of catalytic ozone loss and, consequently, to enhancements of UV radiation at the ground. In contrast to surface temperature and precipitation changes, which show a linear dependence on the soot burden, there is a saturation effect with respect to stratospheric ozone chemistry. Soot emissions of 5 Tg lead to an ozone column reduction of almost 50% in northern high latitudes, while emitting 12 Tg only increases ozone loss by a further 10%. In summary, this study, though using a different chemistry climate model, corroborates the previous investigations with respect to the atmospheric impacts. In addition to these persistent effects, the present study draws attention to episodically cold phases, which would likely add to the severity of human harm worldwide. The best insurance against such a catastrophic development would be the delegitimization of nuclear weapons.
Abstract:
Asteroid 4 Vesta seems to be a major intact protoplanet, with a surface composition similar to that of the HED (howardite-eucrite-diogenite) meteorites. The southern hemisphere is dominated by a giant impact scar, but previous impact models have failed to reproduce the observed topography. The recent discovery that Vesta's southern hemisphere is dominated by two overlapping basins provides an opportunity to model Vesta's topography more accurately. Here we report three-dimensional simulations of Vesta's global evolution under two overlapping planet-scale collisions. We closely reproduce its observed shape, and provide maps of impact excavation and ejecta deposition. Spiral patterns observed in the younger basin Rheasilvia, about one billion years old, are attributed to Coriolis forces during crater collapse. Surface materials exposed in the north come from a depth of about 20 kilometres, according to our models, whereas materials exposed inside the southern double-excavation come from depths of about 60-100 kilometres. If Vesta began as a layered, completely differentiated protoplanet, then our model predicts large areas of pure diogenites and olivine-rich rocks. These are not seen, possibly implying that the outer 100 kilometres or so of Vesta is composed mainly of a basaltic crust (eucrites) with ultramafic intrusions (diogenites).
Abstract:
Reproducing the characteristics and the functional responses of the blood-brain barrier (BBB) in vitro represents an important task for the research community, and would be a critical biotechnological breakthrough. Pharmaceutical and biotechnology industries provide strong demand for inexpensive and easy-to-handle in vitro BBB models to screen novel drug candidates. Recently, it was shown that canonical Wnt signaling is responsible for the induction of BBB properties in the neonatal brain microvasculature in vivo. In the present study, following on from earlier observations, we have developed a novel in vitro model of the BBB that may be suitable for large-scale screening assays. This model is based on immortalized endothelial cell lines derived from murine and human brain, with no need for co-culture with astrocytes. To maintain the BBB endothelial cell properties, the cell lines are cultured in the presence of Wnt3a or drugs that stabilize β-catenin, or they are infected with a transcriptionally active form of β-catenin. Upon these treatments, the cell lines maintain expression of BBB-specific markers, which results in elevated transendothelial electrical resistance and reduced cell permeability. Importantly, these properties are retained for several passages in culture, and they can be reproduced and maintained in different laboratories over time. We conclude that the brain-derived endothelial cell lines that we have investigated gain their specialized characteristics upon activation of the canonical Wnt pathway. This model may thus be suitable to test BBB permeability to chemicals or large-molecular-weight proteins, transmigration of inflammatory cells, treatments with cytokines, and genetic manipulation.