944 results for Hazard-Based Models


Relevance:

80.00%

Publisher:

Abstract:

The demands of developing modern, highly dynamic applications have led to an increasing interest in dynamic programming languages and mechanisms. Not only must applications evolve over time, but the object models themselves may need to be adapted to the requirements of different run-time contexts. Class-based models and prototype-based models, for example, may need to co-exist to meet the demands of dynamically evolving applications. Multi-dimensional dispatch, fine-grained and dynamic software composition, and run-time evolution of behaviour are further examples of diverse mechanisms which may need to co-exist in a dynamically evolving run-time environment. How can we model the semantics of these highly dynamic features, yet still offer some reasonable safety guarantees? To this end we present an original calculus in which objects can adapt their behaviour at run-time to changing contexts. Both objects and environments are represented by first-class mappings between variables and values. Message sends are dynamically resolved to method calls. Variables may be dynamically bound, making it possible to model a variety of dynamic mechanisms within the same calculus. Despite the highly dynamic nature of the calculus, safety properties are assured by a type assignment system.

Relevance:

80.00%

Publisher:

Abstract:

The demands of developing modern, highly dynamic applications have led to an increasing interest in dynamic programming languages and mechanisms. Not only must applications evolve over time, but the object models themselves may need to be adapted to the requirements of different run-time contexts. Class-based models and prototype-based models, for example, may need to co-exist to meet the demands of dynamically evolving applications. Multi-dimensional dispatch, fine-grained and dynamic software composition, and run-time evolution of behaviour are further examples of diverse mechanisms which may need to co-exist in a dynamically evolving run-time environment. How can we model the semantics of these highly dynamic features, yet still offer some reasonable safety guarantees? To this end we present an original calculus in which objects can adapt their behaviour at run-time. Both objects and environments are represented by first-class mappings between variables and values. Message sends are dynamically resolved to method calls. Variables may be dynamically bound, making it possible to model a variety of dynamic mechanisms within the same calculus. Despite the highly dynamic nature of the calculus, safety properties are assured by a type assignment system.
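The central idea of the calculus, objects and environments as first-class mappings with dynamically resolved message sends, can be illustrated with a short Python analogy (this is an informal sketch, not the paper's formal calculus; the `send` helper and the `point` object are our own inventions):

```python
# Illustrative sketch: an object as a first-class mapping from names to
# values, with message sends resolved to method lookups at run-time.

def send(obj, msg, *args):
    """Resolve a message to a method by looking it up in the object's mapping."""
    method = obj[msg]  # dynamic resolution: the binding may change between sends
    return method(obj, *args)

point = {
    "x": 1,
    "y": 2,
    "magnitude": lambda self: (self["x"] ** 2 + self["y"] ** 2) ** 0.5,
}

print(send(point, "magnitude"))  # Euclidean norm, about 2.236

# Behaviour adapts at run-time by rebinding an entry in the mapping:
point["magnitude"] = lambda self: abs(self["x"]) + abs(self["y"])
print(send(point, "magnitude"))  # 3 (Manhattan norm after rebinding)
```

The rebinding step is the informal analogue of the calculus's run-time adaptation; the type-assignment safety guarantees have no counterpart in this untyped sketch.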

Relevance:

80.00%

Publisher:

Abstract:

OBJECTIVE To explore the levels and determinants of loss to follow-up (LTF) under universal lifelong antiretroviral therapy (ART) for pregnant and breastfeeding women ('Option B+') in Malawi. DESIGN, SETTING, AND PARTICIPANTS We examined retention in care, from the date of ART initiation up to 6 months, for women in the Option B+ program. We analysed nationwide facility-level data on women who started ART at 540 facilities (n = 21 939), as well as individual-level data on patients who started ART at 19 large facilities (n = 11 534). RESULTS Of the women who started ART under Option B+ (n = 21 939), 17% appeared to be lost to follow-up 6 months after ART initiation. Most losses occurred in the first 3 months of therapy. Option B+ patients who started therapy during pregnancy were five times more likely than women who started ART in WHO stage 3/4 or with a CD4 cell count of 350 cells/μl or less to never return after their initial clinic visit [odds ratio (OR) 5.0, 95% confidence interval (CI) 4.2-6.1]. Option B+ patients who started therapy while breastfeeding were twice as likely to miss their first follow-up visit (OR 2.2, 95% CI 1.8-2.8). LTF was highest in pregnant Option B+ patients who began ART at large clinics on the day they were diagnosed with HIV. LTF varied considerably between facilities, ranging from 0 to 58%. CONCLUSION Decreasing LTF will improve the effectiveness of the Option B+ approach. Tailored interventions, such as community- or family-based models of care, could improve its effectiveness.
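The odds ratios quoted above are standard 2x2-table statistics. A minimal sketch of how an OR and its Wald 95% confidence interval are computed (the counts below are invented for illustration and are not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
         a = exposed, event;    b = exposed, no event
         c = unexposed, event;  d = unexposed, no event
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Invented counts, for illustration only (not taken from the Malawi data):
or_, lo, hi = odds_ratio_ci(500, 2000, 100, 2000)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}-{hi:.2f})")
```

With larger cell counts the interval tightens, which is why the nationwide sample above yields the narrow CI reported (4.2-6.1) around an OR of 5.0.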

Relevance:

80.00%

Publisher:

Abstract:

Spike timing dependent plasticity (STDP) is a phenomenon in which the precise timing of spikes affects the sign and magnitude of changes in synaptic strength. STDP is often interpreted as the comprehensive learning rule for a synapse - the "first law" of synaptic plasticity. This interpretation is made explicit in theoretical models in which the total plasticity produced by complex spike patterns results from a superposition of the effects of all spike pairs. Although such models are appealing for their simplicity, they can fail dramatically. For example, the measured single-spike learning rule between hippocampal CA3 and CA1 pyramidal neurons does not predict the existence of long-term potentiation, one of the best-known forms of synaptic plasticity. Layers of complexity have been added to the basic STDP model to repair predictive failures, but they have been outstripped by experimental data. We propose an alternate first law: neural activity triggers changes in key biochemical intermediates, which act as a more direct trigger of plasticity mechanisms. One particularly successful model uses intracellular calcium as the intermediate and can account for many observed properties of bidirectional plasticity. In this formulation, STDP is not itself the basis for explaining other forms of plasticity, but is instead a consequence of changes in the biochemical intermediate, calcium. Eventually a mechanism-based framework for learning rules should include other messengers, discrete change at individual synapses, spread of plasticity among neighboring synapses, and priming of hidden processes that change a synapse's susceptibility to future change. Mechanism-based models provide a rich framework for the computational representation of synaptic plasticity.
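A calcium-threshold learning rule of the kind described, in which intermediate calcium levels produce depression and high levels produce potentiation, can be sketched as follows (the thresholds and gains here are illustrative placeholders of ours, not fitted parameters from any published model):

```python
def weight_change(ca, theta_d=0.35, theta_p=0.55, eta=0.1):
    """Sign and magnitude of plasticity as a function of intracellular
    calcium (arbitrary units). Thresholds are illustrative, not fitted:
      ca < theta_d            -> no change
      theta_d <= ca < theta_p -> depression (LTD)
      ca >= theta_p           -> potentiation (LTP)
    """
    if ca < theta_d:
        return 0.0
    if ca < theta_p:
        return -eta * (ca - theta_d)     # modest depression
    return eta * (ca - theta_p) * 2      # stronger potentiation per unit calcium

for ca in (0.2, 0.45, 0.8):
    print(ca, weight_change(ca))
```

In this framing, any spike-timing protocol matters only through the calcium transient it produces, which is the sense in which STDP becomes a consequence rather than the first law.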

Relevance:

80.00%

Publisher:

Abstract:

It has long been surmised that income inequality within a society negatively affects public health. However, more recent studies suggest there is no association, especially when analyzing small areas. This study aimed to evaluate the effect of income inequality on mortality in Switzerland using the Gini index at the municipality level. The study population included all individuals >30 years at the 2000 Swiss census (N = 4,689,545) living in 2,740 municipalities, with 35.5 million person-years of follow-up and 456,211 deaths over follow-up. Cox proportional hazard regression models were adjusted for age, gender, marital status, nationality, urbanization, and language region. Results were reported as hazard ratios (HR) with 95% confidence intervals. The mean Gini index across all municipalities was 0.377 (standard deviation 0.062, range 0.202-0.785). Larger cities, high-income municipalities and tourist areas had higher Gini indices. Higher income inequality was consistently associated with lower mortality risk, except for death from external causes. Adjusting for sex, marital status, nationality, urbanization and language region only slightly attenuated effects. In fully adjusted models, hazards of all-cause mortality by increasing Gini index quintile were HR = 0.99 (0.98-1.00), HR = 0.98 (0.97-0.99), HR = 0.95 (0.94-0.96), HR = 0.91 (0.90-0.92) compared to the lowest quintile. The relationship of income inequality with mortality in Switzerland contradicts what has been found in other developed high-income countries. Our results challenge current beliefs about the effect of income inequality on mortality at the small-area level. Further investigation is required to expose the underlying relationship between income inequality and population health.
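The study's exposure variable is the Gini index. A minimal sketch of computing it for a list of incomes via the relative mean absolute difference (the function name and example data are ours):

```python
def gini(incomes):
    """Gini index via the relative mean absolute difference:
    G = sum_i sum_j |x_i - x_j| / (2 * n^2 * mean(x))."""
    n = len(incomes)
    mean = sum(incomes) / n
    diff_sum = sum(abs(xi - xj) for xi in incomes for xj in incomes)
    return diff_sum / (2 * n * n * mean)

print(gini([1, 1, 1, 1]))              # 0.0 (perfect equality)
print(round(gini([0, 0, 0, 100]), 3))  # 0.75 (income concentrated in one person)
```

The O(n^2) pairwise form is fine for illustration; municipality-scale computations would normally use the sorted O(n log n) formulation instead.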

Relevance:

80.00%

Publisher:

Abstract:

Periodic comets move around the Sun on elliptical orbits. As such, comet 67P/Churyumov-Gerasimenko (hereafter 67P) spends a portion of time in the inner solar system where it is exposed to increased solar insolation. Therefore, given the change in heliocentric distance, in the case of 67P from aphelion at 5.68 AU to perihelion at ~1.24 AU, the comet’s activity—the production of neutral gas and dust—undergoes significant variations. As a consequence, during the inbound portion, the mass loading of the solar wind increases and extends to larger spatial scales. This paper investigates how this interaction changes the character of the plasma environment of the comet by means of multifluid MHD simulations. The multifluid MHD model is capable of separating the dynamics of the solar wind ions and the pick-up ions created through photoionization and electron impact ionization in the coma of the comet. We show how two of the major boundaries, the bow shock and the diamagnetic cavity, form and develop as the comet moves through the inner solar system. Likewise for 67P, although most likely shifted back in time with respect to perihelion passage, this process is reversed on the outbound portion of the orbit. The model presented herein is able to reproduce some of the key features previously only accessible to particle-based models that take full account of the ions’ gyration. The results shown herein are in decent agreement with these hybrid-type kinetic simulations.

Relevance:

80.00%

Publisher:

Abstract:

The study of operations on representations of objects is well documented in the realm of spatial engineering. However, the mathematical structure and formal proof of these operational phenomena are not thoroughly explored. Other works have often focused on query-based models that seek to order classes and instances of objects in the form of semantic hierarchies or graphs. In some models, nodes of graphs represent objects and are connected by edges that represent different types of coarsening operators. This work, however, studies how the coarsening operator "simplification" can manipulate partitions of finite sets, independent from objects and their attributes. Partitions that are "simplified" first have a collection of elements filtered (removed), and then the remaining partition is amalgamated (some sub-collections are unified). Simplification has many interesting mathematical properties. A finite composition of simplifications can also be accomplished with some single simplification. Also, if one partition is a simplification of the other, the simplified partition is defined to be less than the other partition according to the simp relation. This relation is shown to be a partial-order relation based on simplification. Collections of partitions can not only be proven to have a partial-order structure, but also have a lattice structure and are complete. In regard to a geographic information system (GIS), partitions related to subsets of attribute domains for objects are called views. Objects belong to different views based on whether or not their attribute values lie in the underlying view domain. Given a particular view, objects with their attribute n-tuple codings contained in the view are part of the actualization set on views, and objects are labeled according to the particular subset of the view in which their coding lies.
Though the scope of the work does not mainly focus on queries related directly to geographic objects, it provides verification for the existence of particular views in a system with this underlying structure. Given a finite attribute domain, one can say with mathematical certainty that different views of objects are partially ordered by simplification, and every collection of views has a greatest lower bound and least upper bound, which provides the validity for exploring queries in this regard.
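The two-step simplification operator described above, filtering elements and then amalgamating blocks, can be sketched on partitions encoded as collections of sets (the function names and this concrete encoding are our own; the work itself treats partitions abstractly):

```python
# Hypothetical sketch of "simplification" on a partition of a finite set:
# step 1 filters (removes) some elements, step 2 amalgamates (unifies)
# some of the remaining blocks.

def filter_elements(partition, removed):
    """Drop the removed elements; discard blocks that become empty."""
    out = [block - removed for block in partition]
    return [b for b in out if b]

def amalgamate(partition, groups):
    """Unify the blocks at the listed index-groups into single blocks."""
    merged = [frozenset().union(*(partition[i] for i in g)) for g in groups]
    used = {i for g in groups for i in g}
    rest = [frozenset(b) for i, b in enumerate(partition) if i not in used]
    return merged + rest

p = [frozenset({1, 2}), frozenset({3}), frozenset({4, 5})]
q = amalgamate(filter_elements(p, {5}), groups=[(0, 1)])
print(q)  # [frozenset({1, 2, 3}), frozenset({4})]
```

In the paper's terms, `q` would sit below `p` under the simp relation, since it arises from `p` by one simplification.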

Relevance:

80.00%

Publisher:

Abstract:

We investigated cross-sectional associations between intakes of zinc, magnesium, heme and non-heme iron, beta-carotene, vitamin C and vitamin E and inflammation and subclinical atherosclerosis in the Multi-Ethnic Study of Atherosclerosis (MESA). We also investigated prospective associations between those micronutrients and incident MetS, T2D and CVD. Participants between 45-84 years of age at baseline were followed between 2000 and 2007. Dietary intake was assessed at baseline using a 120-item food frequency questionnaire. Multivariable linear regression and Cox proportional hazard regression models were used to evaluate associations of interest. Dietary intakes of non-heme iron and Mg were inversely associated with tHcy concentrations (geometric means across quintiles: 9.11, 8.86, 8.74, 8.71, and 8.50 µmol/L for non-heme iron, and 9.20, 9.00, 8.65, 8.76, and 8.33 µmol/L for Mg; ptrends <0.001). Mg intake was inversely associated with high CC-IMT; odds ratio (95% CI) for extreme quintiles 0.76 (0.58, 1.01), ptrend = 0.002. Dietary Zn and heme-iron were positively associated with CRP (geometric means: 1.73, 1.75, 1.78, 1.88, and 1.96 mg/L for Zn and 1.72, 1.76, 1.83, 1.86, and 1.94 mg/L for heme-iron). In the prospective analysis, dietary vitamin E intake was inversely associated with incident MetS and with incident CVD (HR [CI] for extreme quintiles - MetS: 0.78 [0.62-0.97], ptrend = 0.01; CVD: 0.69 [0.46-1.03], ptrend = 0.04). Intake of heme-iron from red meat and Zn from red meat, but not from other sources, were each positively associated with risk of CVD (HR [CI] - heme-iron from red meat: 1.65 [1.10-2.47], ptrend = 0.01; Zn from red meat: 1.51 [1.02-2.24], ptrend = 0.01) and MetS (HR [CI] - heme-iron from red meat: 1.25 [0.99-1.56], ptrend = 0.03; Zn from red meat: 1.29 [1.03-1.61], ptrend = 0.04). All associations evaluated were similar across different strata of gender, race-ethnicity and alcohol intake.
Most of the micronutrients investigated were not associated with the outcomes of interest in this multi-ethnic cohort. These observations do not provide consistent support for the hypothesized association of individual nutrients with inflammatory markers, MetS, T2D, or CVD. However, nutrients consumed in red meat, or consumption of red meat as a whole, may increase risk of MetS and CVD.

Relevance:

80.00%

Publisher:

Abstract:

The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU" lays out the theoretical background for the project. There are several core concepts presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrests, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables. Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase. These include selecting the proper candidate features on which to base the model, and selecting the most appropriate tool to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is in defining the duration and resolution of time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature. In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design. 
The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data is represented by the standard one value per variable paradigm and is widely employed in a host of clinical models and tools. These are often represented by a number present in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements in isolation. The second two classes are unique to the time series data elements. The first of these is the raw data elements. These are represented by multiple values per variable, and constitute the measured observations that are typically available to end users when they review time series data. These are often represented as dots on a graph. The final class of data results from performing time series analysis. This class of data represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed in order to maximize the likelihood that a representation of the time series data elements is produced that is able to distinguish between two or more classes of outcomes. The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU" provides a detailed description, start to finish, of the methods required to prepare the data, build, and validate a predictive model that uses the time series data elements determined in the first paper. One of the fundamental tenets of the second paper is that manual implementations of time series based models are infeasible due to the relatively large number of data elements and the complexity of preprocessing that must occur before data can be presented to the model.
Each of the seventeen steps is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies of each of the steps, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit. Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) are issues that are not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data in order to conform to a predefined structure that was specified during the design phase; and normalizing variable families rather than individual variable instances. The final manuscript, entitled: "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit" presents the results that were obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%. The area under the Receiver Operating Characteristic curve increased from a baseline of 87% to 98% by including the trend analysis. 
In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy as compared to the baseline multivariate model, but diminished classification accuracy as compared to when just the trend analysis features were added (i.e., without adding the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Furthermore, although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve the performance beyond that which was achieved by exclusion of the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
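A "trend analysis" result used as a latent candidate feature can be as simple as a least-squares slope computed over a fixed observation window. A sketch (the function and the example window values are ours, not the papers' actual operations or data):

```python
def trend_slope(values, dt=1.0):
    """Least-squares slope of equally spaced observations: a simple
    trend-analysis result usable as a latent candidate feature."""
    n = len(values)
    xs = [i * dt for i in range(n)]
    mx = sum(xs) / n
    my = sum(values) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, values))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# A hypothetical systolic blood pressure window drifting downward:
window = [98, 96, 95, 92, 90, 87]
print(trend_slope(window))  # negative slope: the kind of deterioration
                            # signal a snapshot multivariate model misses
```

The window duration and sampling resolution are exactly the design-phase decisions the first manuscript discusses; the slope is only one of many analyses one might compute per window.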

Relevance:

80.00%

Publisher:

Abstract:

Many of the material models most frequently used for the numerical simulation of the behavior of concrete when subjected to high strain rates were originally developed for the simulation of ballistic impact. Therefore, they are plasticity-based models in which the compressive behavior is modeled in a complex way, while their tensile failure criterion is of a rather simpler nature. As concrete elements usually fail in tension when subjected to blast loading, available concrete material models for high strain rates may not represent their real behavior accurately. In this research work an experimental program of reinforced concrete flat elements subjected to blast load is presented. Altogether four detonation tests are conducted, in which 12 slabs of two different concrete types are subjected to the same blast load. The results of the experimental program are then used for the development and adjustment of numerical tools needed in the modeling of concrete elements subjected to blast.

Relevance:

80.00%

Publisher:

Abstract:

Semantic Sensor Web infrastructures use ontology-based models to represent the data that they manage; however, up to now, these ontological models have not been able to represent all the characteristics of distributed, heterogeneous, and web-accessible sensor data. This paper describes a core ontological model for Semantic Sensor Web infrastructures that covers these characteristics and that has been built with a focus on reusability. This ontological model is composed of different modules that deal, on the one hand, with infrastructure data and, on the other hand, with data from a specific domain, that is, the coastal flood emergency planning domain. The paper also presents a set of guidelines, followed during the ontological model development, to satisfy a common set of requirements related to modelling domain-specific features of interest and properties. In addition, the paper includes the results obtained after an exhaustive evaluation of the developed ontologies along different aspects (i.e., vocabulary, syntax, structure, semantics, representation, and context).

Relevance:

80.00%

Publisher:

Abstract:

The mechanical behavior of granular materials has been traditionally approached through two theoretical and computational frameworks: macromechanics and micromechanics. Macromechanics focuses on continuum-based models. In consequence, it is assumed that the matter in the granular material is homogeneous and continuously distributed over its volume, so that the smallest element cut from the body possesses the same physical properties as the body. In particular, it has some equivalent mechanical properties, represented by complex and non-linear constitutive relationships. Engineering problems are usually solved using computational methods such as FEM or FDM. On the other hand, micromechanics is the analysis of heterogeneous materials on the level of their individual constituents. In granular materials, if the properties of particles are known, a micromechanical approach can lead to a predictive response of the whole heterogeneous material. Two classes of numerical techniques can be differentiated: computational micromechanics, which consists of applying continuum mechanics to each of the phases of a representative volume element and then solving the equations numerically, and atomistic methods (DEM), which consist of applying rigid body dynamics together with interaction potentials to the particles. Statistical mechanics approaches lie between micromechanics and macromechanics. They try to determine the expected macroscopic properties of a granular system by starting from a micromechanical analysis of the features of the particles and their interactions. The main objective of this paper is to introduce this approach.

Relevance:

80.00%

Publisher:

Abstract:

Carbon (C) and nitrogen (N) process-based models are important tools for estimating and reporting greenhouse gas emissions and changes in soil C stocks. There is a need for continuous evaluation, development and adaptation of these models to improve scientific understanding, national inventories and assessment of mitigation options across the world. To date, much of the information needed to describe different processes in ecosystem models, like transpiration, photosynthesis, plant growth and maintenance, above- and below-ground carbon dynamics, decomposition and nitrogen mineralization, remains inaccessible to the wider community, being stored within model computer source code, or held internally by modelling teams. Here we describe the Global Research Alliance Modelling Platform (GRAMP), a web-based modelling platform to link researchers with appropriate datasets, models and training material. It will provide access to model source code and an interactive platform for researchers to form a consensus on existing methods, and to synthesize new ideas, which will help to advance progress in this area. The platform will eventually support a variety of models, but to trial the platform and test the architecture and functionality, it was piloted with variants of the DNDC model. The intention is to form a worldwide collaborative network (a virtual laboratory) via an interactive website with access to models and best practice guidelines; appropriate datasets for testing, calibrating and evaluating models; on-line tutorials and links to modelling and data provider research groups, and their associated publications. A graphical user interface has been designed to view the model development tree and access all of the above functions.

Relevance:

80.00%

Publisher:

Abstract:

One of the most promising areas in which probabilistic graphical models have shown an incipient activity is the field of heuristic optimization and, in particular, in Estimation of Distribution Algorithms. Due to their inherent parallelism, different research lines have been studied trying to improve Estimation of Distribution Algorithms from the point of view of execution time and/or accuracy. Among these proposals, we focus on the so-called distributed or island-based models. This approach defines several islands (algorithm instances) running independently and exchanging information with a given frequency. The information sent by the islands can be either a set of individuals or a probabilistic model. This paper presents a comparative study for a distributed univariate Estimation of Distribution Algorithm and a multivariate version, paying special attention to the comparison of two alternative methods for exchanging information, over a wide set of parameters and problems: the standard benchmark developed for the IEEE Workshop on Evolutionary Algorithms and other Metaheuristics for Continuous Optimization Problems of the ISDA 2009 Conference. Several analyses from different points of view have been conducted to analyze both the influence of the parameters and the relationships between them, including a characterization of the configurations according to their behavior on the proposed benchmark.
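An island-based univariate EDA of the kind compared in the paper can be sketched as follows: each island runs a UMDA-style loop (sample, select, re-estimate marginal probabilities) and, at a fixed migration frequency, the islands exchange their probabilistic models, here by averaging them. All parameter values, and the toy OneMax objective, are illustrative choices of ours:

```python
import random

def umda_island(prob, pop_size=40, elite=10, rng=None):
    """One generation of a univariate EDA: sample, select, re-estimate."""
    rng = rng or random
    pop = [[1 if rng.random() < p else 0 for p in prob] for _ in range(pop_size)]
    pop.sort(key=sum, reverse=True)  # OneMax fitness = number of 1-bits
    best = pop[:elite]
    return [sum(ind[i] for ind in best) / elite for i in range(len(prob))]

def run_islands(n_bits=20, generations=30, exchange_every=5, seed=1):
    rng = random.Random(seed)
    islands = [[0.5] * n_bits, [0.5] * n_bits]  # two independent models
    for g in range(1, generations + 1):
        islands = [umda_island(p, rng=rng) for p in islands]
        if g % exchange_every == 0:  # migration step: average the models
            mixed = [(a + b) / 2 for a, b in zip(*islands)]
            islands = [mixed[:], mixed[:]]
    return islands

final = run_islands()
print([round(p, 2) for p in final[0]])  # marginals drift toward 1.0
```

The alternative exchange method mentioned in the abstract, migrating a set of individuals instead of the model itself, would replace the averaging step with copying selected samples between islands.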

Relevance:

80.00%

Publisher:

Abstract:

To date, only a few initiatives have been carried out in Spain to use mathematical models (e.g. DNDC, DayCent, FASSET and SIMSNIC) to estimate nitrogen (N) and carbon (C) dynamics as well as greenhouse gases (GHG) in Spanish agrosystems. Modeling at this level may provide insight into both the complex relationships between the biological and physicochemical processes controlling GHG production and consumption in soils (e.g. nitrification, denitrification, decomposition, etc.), and the interactions between the C and N cycles within the different components of the plant-soil-environment continuum. Additionally, these models can simulate the processes behind the production, consumption and transport of GHG (e.g. nitrous oxide, N2O, and carbon dioxide, CO2) in the short and medium term and at different scales. Other sources of potential pollution from soils (e.g. NO3 and NH3) can also be identified and quantified using these process-based models.