977 results for SERIES MODELS


Relevance:

30.00%

Publisher:

Abstract:

The African great lakes are of utmost importance for the local economy (fishing), as well as being essential to the survival of the local people. During the past decades, these lakes experienced fast changes in ecosystem structure and functioning, and their future evolution is a major concern. In this study, a set of one-dimensional lake models is evaluated for the first time for Lake Kivu (2.28°S; 28.98°E), East Africa. The unique limnology of this meromictic lake, with the importance of salinity and subsurface springs in a tropical high-altitude climate, presents a worthy challenge to the seven models involved in the Lake Model Intercomparison Project (LakeMIP). Meteorological observations from two automatic weather stations are used to drive the models, whereas a unique dataset, containing over 150 temperature profiles recorded since 2002, is used to assess the models' performance. Simulations are performed over the freshwater layer only (60 m) and over the average lake depth (240 m), since salinity increases with depth below 60 m in Lake Kivu and some lake models do not account for the influence of salinity upon lake stratification. All models are able to reproduce the mixing seasonality in Lake Kivu, as well as the magnitude and seasonal cycle of the lake enthalpy change. Differences between the models can be ascribed to variations in the treatment of the radiative forcing and the computation of the turbulent heat fluxes. Fluctuations in wind velocity and solar radiation explain the inter-annual variability of the observed water-column temperatures. The good agreement between the deep simulations and the observed meromictic stratification also shows that a subset of models is able to account for the salinity- and geothermal-induced effects upon deep-water stratification. Finally, based on the strengths and weaknesses discerned in this study, an informed choice of a one-dimensional lake model for a given research purpose becomes possible.
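The abstract does not state which skill metrics were used to assess the models; as a minimal sketch (hypothetical array names, assuming simulated and observed temperatures have been interpolated to common dates and depths), one such comparison could be scored with a root-mean-square error:

```python
import numpy as np

def profile_rmse(t_sim, t_obs):
    """Root-mean-square error between simulated and observed
    water-column temperatures (2-D arrays shaped time x depth)."""
    diff = np.asarray(t_sim) - np.asarray(t_obs)
    return float(np.sqrt(np.nanmean(diff ** 2)))

# Hypothetical example: 150 profiles over the 60 m freshwater layer
rng = np.random.default_rng(0)
t_obs = 23.0 + rng.normal(0.0, 0.3, size=(150, 60))   # observed profiles
t_sim = t_obs + rng.normal(0.2, 0.4, size=(150, 60))  # one model's output
print(f"RMSE = {profile_rmse(t_sim, t_obs):.2f} °C")
```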

Relevance:

30.00%

Publisher:

Abstract:

Context. Planet formation models have been developed during the past years to try to reproduce the observed properties of both the solar system and extrasolar planets. Some of these models have partially succeeded, but they focus on massive planets and, for the sake of simplicity, exclude planets belonging to planetary systems. However, more and more planets are now found in planetary systems. This tendency, which is a result of radial velocity, transit, and direct imaging surveys, seems to be even more pronounced for low-mass planets. These new observations require improving planet formation models, including new physics and considering the formation of systems. Aims: In a recent series of papers, we have presented some improvements in the physics of our models, focusing in particular on the internal structure of forming planets and on the computation of the excitation state of planetesimals and their resulting accretion rate. In this paper, we focus on the concurrent formation of more than one planet in the same protoplanetary disc and show the effect of this multiplicity on the architecture and composition of planetary systems. Methods: We used an N-body calculation including collision detection to compute the orbital evolution of a planetary system. Moreover, we describe the effect of competition for the accretion of gas and solids, as well as the effect of gravitational interactions between planets. Results: We show that the masses and semi-major axes of planets are modified by both the effect of competition and gravitational interactions. We also present the effect of the assumed number of forming planets in the same system (a free parameter of the model), as well as the effect of inclination and eccentricity damping. We find that the fraction of ejected planets increases from nearly 0 to 8% as the number of embryos the system is seeded with is increased from 2 to 20. Moreover, our calculations show that, when considering planets more massive than ~5 M⊕, simulations with 10 or 20 planetary embryos statistically give the same results in terms of mass function and period distribution.
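The abstract does not specify the integrator used; as an illustration of the kind of experiment described (varying the number of embryos and counting ejections), here is a minimal sketch using the open-source rebound N-body package, with hypothetical masses, spacings, and integration times:

```python
import numpy as np
import rebound  # open-source N-body package, used here purely for illustration

def ejected_fraction(n_embryos, n_orbits=1000, seed=0):
    """Integrate a star plus n_embryos low-mass embryos and count ejections."""
    rng = np.random.default_rng(seed)
    sim = rebound.Simulation()                 # code units: G = 1, so a = 1 has period 2*pi
    sim.add(m=1.0)                             # central solar-mass star
    for a in np.linspace(0.5, 5.0, n_embryos): # hypothetical Earth-mass embryos
        sim.add(m=3e-6, a=a, e=rng.uniform(0, 0.02), inc=rng.uniform(0, 0.01))
    sim.move_to_com()
    sim.integrator = "mercurius"               # hybrid integrator for close encounters
    sim.dt = 0.05 * 2 * np.pi * 0.5**1.5       # ~5% of the innermost orbital period
    sim.collision = "direct"
    sim.collision_resolve = "merge"            # perfect mergers on collision
    sim.exit_max_distance = 100.0              # treat r > 100 length units as ejected
    ejected, t_end = 0, 2 * np.pi * n_orbits
    while sim.t < t_end:
        try:
            sim.integrate(t_end)
        except rebound.Escape:
            d2 = [p.x**2 + p.y**2 + p.z**2 for p in sim.particles]
            sim.remove(index=int(np.argmax(d2[1:])) + 1)  # drop the escaper, resume
            ejected += 1
    return ejected / n_embryos

print("ejected fraction (10 embryos):", ejected_fraction(10))
```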

Relevance:

30.00%

Publisher:

Abstract:

The Atlantic subpolar gyre (SPG) is one of the main drivers of decadal climate variability in the North Atlantic. Here we analyze its dynamics in pre-industrial control simulations of 19 different comprehensive coupled climate models. The analysis is based on a recently proposed description of the SPG dynamics that found the circulation to be potentially bistable due to a positive feedback mechanism involving salt transport and enhanced deep convection in the SPG center. We employ a statistical method to identify multiple equilibria in time series that are subject to strong noise and analyze composite fields to assess whether the bistability results from the hypothesized feedback mechanism. Because noise dominates the time series in most models, multiple circulation modes can be unambiguously detected in only six models. Four of these six models confirm that the intensification of the SPG circulation is caused by the hypothesized positive feedback mechanism.
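The statistical method for identifying multiple equilibria is not detailed in the abstract; one common, minimal approach is to compare one- and two-component Gaussian mixtures fitted to an SPG strength index and select by BIC, sketched here on synthetic data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def n_modes(index, max_components=2):
    """Number of Gaussian components (circulation modes) preferred by the BIC."""
    x = np.asarray(index).reshape(-1, 1)
    bics = [GaussianMixture(n_components=k, random_state=0).fit(x).bic(x)
            for k in range(1, max_components + 1)]
    return int(np.argmin(bics)) + 1

# Hypothetical annual-mean SPG strength index: two regimes plus strong noise
rng = np.random.default_rng(1)
regime = rng.choice([0.0, 1.5], size=500)
spg_index = regime + rng.normal(0.0, 0.6, size=500)
print("detected circulation modes:", n_modes(spg_index))
```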

Relevance:

30.00%

Publisher:

Abstract:

Objective: Processes occurring in the course of psychotherapy are characterized by the simple fact that they unfold in time and that the multiple factors engaged in change processes vary greatly between individuals (idiographic phenomena). Previous research, however, has neglected the temporal perspective through its traditional focus on static phenomena, which were mainly assessed at the group level (nomothetic phenomena). To support a temporal approach, the authors introduce time-series panel analysis (TSPA), a statistical methodology explicitly focusing on the quantification of temporal, session-to-session aspects of change in psychotherapy. TSPA models are initially built at the level of individuals and are subsequently aggregated at the group level, thus allowing the exploration of prototypical models. Method: TSPA is based on vector auto-regression (VAR), an extension of univariate auto-regression models to multivariate time-series data. The application of TSPA is demonstrated in a sample of 87 outpatient psychotherapy patients who were monitored with post-session questionnaires. Prototypical mechanisms of change were derived from the aggregation of individual multivariate models of the psychotherapy process. In a second step, the associations between mechanisms of change (TSPA) and pre- to post-therapy symptom change were explored. Results: TSPA allowed a prototypical process pattern to be identified, in which patients' alliance and self-efficacy were linked by a temporal feedback loop. Furthermore, therapists' stability over time in both mastery and clarification interventions was positively associated with better outcomes. Conclusions: TSPA is a statistical tool that sheds new light on temporal mechanisms of change. Through this approach, clinicians may gain insight into prototypical patterns of change in psychotherapy.
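As a minimal sketch of the VAR machinery underlying TSPA (synthetic post-session ratings; the variable names, lag order, and estimation details are assumptions for illustration, not the study's exact specification):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Synthetic post-session ratings for one patient with a built-in feedback loop
rng = np.random.default_rng(42)
n_sessions = 40
alliance = np.zeros(n_sessions)
self_efficacy = np.zeros(n_sessions)
for t in range(1, n_sessions):
    alliance[t] = 0.4 * alliance[t - 1] + 0.3 * self_efficacy[t - 1] + rng.normal(0, 0.5)
    self_efficacy[t] = 0.3 * alliance[t - 1] + 0.4 * self_efficacy[t - 1] + rng.normal(0, 0.5)

data = pd.DataFrame({"alliance": alliance, "self_efficacy": self_efficacy})
results = VAR(data).fit(1)   # lag-1 VAR: session t regressed on session t-1
print(results.params)        # cross-lagged coefficients, i.e. session-to-session effects
```

In TSPA, such individual-level coefficient matrices would then be aggregated across patients to obtain the prototypical models described above.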

Relevance:

30.00%

Publisher:

Abstract:

Within the context of exoplanetary atmospheres, we present a comprehensive linear analysis of forced, damped, magnetized shallow water systems, exploring the effects of dimensionality, geometry (Cartesian, pseudo-spherical, and spherical), rotation, magnetic tension, and hydrodynamic and magnetic sources of friction. Across a broad range of conditions, we find that the key governing equation for the atmosphere is identical to that of the quantum harmonic oscillator, even when forcing (stellar irradiation), sources of friction (molecular viscosity, Rayleigh drag, and magnetic drag), and magnetic tension are included. The global atmospheric structure is largely controlled by a single key parameter that involves the Rossby and Prandtl numbers. This near-universality breaks down when either molecular viscosity or magnetic drag acts non-uniformly across latitude, or when a poloidal magnetic field is present, suggesting that these effects will introduce qualitative changes to the familiar chevron-shaped feature witnessed in simulations of atmospheric circulation. We also find that hydrodynamic and magnetic sources of friction have dissimilar phase signatures and affect the flow in fundamentally different ways, implying that using Rayleigh drag to mimic magnetic drag is inaccurate. We lay out the full theoretical formalism (dispersion relations, governing equations, and time-dependent wave solutions) for a broad suite of models. In all situations, we derive the steady state of an atmosphere, which is relevant to interpreting infrared phase and eclipse maps of exoplanetary atmospheres. We elucidate a pinching effect that confines the atmospheric structure near the equator. Our suite of analytical models may be used to develop physical intuition and as a reference point for three-dimensional magnetohydrodynamic simulations of atmospheric circulation.
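For orientation, a minimal Cartesian, hydrodynamic member of this family of models — the linearized, forced, damped shallow water system on an equatorial beta-plane with Rayleigh drag and radiative relaxation — can be written as follows (this is the standard textbook form, not necessarily the paper's exact notation, and the magnetic terms are omitted):

\[
\begin{aligned}
\frac{\partial u}{\partial t} - \beta y\, v &= -g\,\frac{\partial h}{\partial x} - \frac{u}{\tau_{\mathrm{drag}}},\\
\frac{\partial v}{\partial t} + \beta y\, u &= -g\,\frac{\partial h}{\partial y} - \frac{v}{\tau_{\mathrm{drag}}},\\
\frac{\partial h}{\partial t} + H\!\left(\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y}\right) &= F - \frac{h}{\tau_{\mathrm{rad}}},
\end{aligned}
\]

where \(u, v\) are the velocity perturbations, \(h\) the height perturbation about the mean depth \(H\), \(F\) the (stellar) forcing, and \(\tau_{\mathrm{drag}}\), \(\tau_{\mathrm{rad}}\) the frictional and radiative damping time-scales. Seeking separable wave solutions of such a system is what leads to the quantum-harmonic-oscillator-like governing equation mentioned above.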

Relevance:

30.00%

Publisher:

Abstract:

We present a comprehensive analytical study of radiative transfer using the method of moments and include the effects of non-isotropic scattering in the coherent limit. Within this unified formalism, we derive the governing equations and solutions describing two-stream radiative transfer (which approximates the passage of radiation as a pair of outgoing and incoming fluxes), flux-limited diffusion (which describes radiative transfer in the deep interior) and solutions for the temperature-pressure profiles. Generally, the problem is mathematically under-determined unless a set of closures (Eddington coefficients) is specified. We demonstrate that the hemispheric (or hemi-isotropic) closure naturally derives from the radiative transfer equation if energy conservation is obeyed, while the Eddington closure produces spurious enhancements of both reflected light and thermal emission. We concoct recipes for implementing two-stream radiative transfer in stand-alone numerical calculations and general circulation models. We use our two-stream solutions to construct toy models of the runaway greenhouse effect. We present a new solution for temperature-pressure profiles with a non-constant optical opacity and elucidate the effects of non-isotropic scattering in the optical and infrared. We derive generalized expressions for the spherical and Bond albedos and the photon deposition depth. We demonstrate that the value of the optical depth corresponding to the photosphere is not always 2/3 (Milne's solution) and depends on a combination of stellar irradiation, internal heat and the properties of scattering both in optical and infrared. Finally, we derive generalized expressions for the total, net, outgoing and incoming fluxes in the convective regime.
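For reference, the two-stream equations in the coherent-scattering limit can be written schematically as a coupled pair for the outgoing and incoming fluxes (the coefficients below are generic placeholders whose values depend on the single-scattering albedo, the scattering asymmetry, and the adopted closure; the paper's exact coefficients may differ):

\[
\frac{\partial F_{\uparrow}}{\partial \tau} = \gamma_a F_{\uparrow} - \gamma_s F_{\downarrow} - \gamma_B \pi B,
\qquad
\frac{\partial F_{\downarrow}}{\partial \tau} = \gamma_s F_{\uparrow} - \gamma_a F_{\downarrow} + \gamma_B \pi B,
\]

where \(\tau\) is the optical depth, \(B\) the Planck function, and \(\gamma_a\), \(\gamma_s\), \(\gamma_B\) the coupling coefficients set by the closure (e.g., Eddington versus hemispheric).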

Relevance:

30.00%

Publisher:

Abstract:

Mathematical models of disease progression predict disease outcomes and are useful epidemiological tools for planners and evaluators of health interventions. The R package gems is a tool that simulates disease progression in patients and predicts the effect of different interventions on patient outcome. Disease progression is represented by a series of events (e.g., diagnosis, treatment and death), displayed in a directed acyclic graph. The vertices correspond to disease states and the directed edges represent events. The package gems allows simulations based on a generalized multistate model that can be described by a directed acyclic graph with continuous transition-specific hazard functions. The user can specify an arbitrary hazard function and its parameters. The model includes parameter uncertainty, does not need to be a Markov model, and may take the history of previous events into account. Applications are not limited to the medical field and extend to other areas where multistate simulation is of interest. We provide a technical explanation of the multistate models used by gems, explain the functions of gems and their arguments, and show a sample application.
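gems itself is an R package; as a language-agnostic illustration of the underlying idea — a directed acyclic graph with transition-specific hazard functions, sampled patient by patient — here is a minimal Python sketch with a hypothetical three-state graph and Weibull hazards (this is not the gems API):

```python
import numpy as np

# Hypothetical DAG: states 0 = diagnosed, 1 = treated, 2 = dead (absorbing).
# Each directed edge carries its own transition-specific Weibull time-to-event law.
TRANSITIONS = {
    0: [(1, dict(shape=1.2, scale=6.0)),    # diagnosis -> treatment (months)
        (2, dict(shape=0.9, scale=24.0))],  # diagnosis -> death
    1: [(2, dict(shape=1.1, scale=36.0))],  # treatment -> death
    2: [],
}

def simulate_patient(rng):
    """Walk the DAG: at each state, the earliest of the competing transitions wins."""
    state, t, history = 0, 0.0, [(0, 0.0)]
    while TRANSITIONS[state]:
        draws = [(nxt, rng.weibull(p["shape"]) * p["scale"])
                 for nxt, p in TRANSITIONS[state]]
        state, dt = min(draws, key=lambda d: d[1])
        t += dt
        history.append((state, t))
    return history

rng = np.random.default_rng(7)
cohort = [simulate_patient(rng) for _ in range(1000)]
mortality_36m = np.mean([h[-1][0] == 2 and h[-1][1] <= 36 for h in cohort])
print(f"simulated 36-month mortality: {mortality_36m:.1%}")
```

The effect of an intervention can then be explored by re-running the cohort with modified hazard parameters, which is the kind of comparison gems is designed to support.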

Relevance:

30.00%

Publisher:

Abstract:

Several lines of genetic, archeological and paleontological evidence suggest that anatomically modern humans (Homo sapiens) colonized the world in the last 60,000 years by a series of migrations originating from Africa (e.g. Liu et al., 2006; Handley et al., 2007; Prugnolle, Manica, and Balloux, 2005; Ramachandran et al., 2005; Li et al., 2008; Deshpande et al., 2009; Mellars, 2006a, b; Lahr and Foley, 1998; Gravel et al., 2011; Rasmussen et al., 2011). With the progress of ancient DNA analysis, it has been shown that archaic humans hybridized with modern humans outside Africa. Recent direct analyses of fossil nuclear DNA have revealed that 1–4 percent of the genome of Eurasians has likely been introgressed by Neanderthal genes (Green et al., 2010; Reich et al., 2010; Vernot and Akey, 2014; Sankararaman et al., 2014; Prufer et al., 2014; Wall et al., 2013), with Papua New Guineans and Australians showing even larger levels of admixture with Denisovans (Reich et al., 2010; Skoglund and Jakobsson, 2011; Reich et al., 2011; Rasmussen et al., 2011). It thus appears that the past history of our species has been more complex than previously anticipated (Alves et al., 2012), and that modern humans hybridized several times with local hominins during their expansion out of Africa, but the exact mode, time and location of these hybridizations remain to be clarified (Ibid.; Wall et al., 2013). In this context, we review here a general model of admixture during range expansion, which leads to some predictions about expected patterns of introgression that are relevant to modern human evolution.

Relevance:

30.00%

Publisher:

Abstract:

This study compares gridded European seasonal series of surface air temperature (SAT) and precipitation (PRE) reconstructions with a regional climate simulation over the period 1500–1990. The area is analysed separately for nine subareas that represent the majority of the climate diversity in the European sector. In their spatial structure, an overall good agreement is found between the reconstructed and simulated climate features across Europe, supporting consistency in both products. Systematic biases between the two data sets can be explained by a priori known deficiencies in the simulation. Simulations and reconstructions, however, largely differ in the temporal evolution of past climate for European subregions. In particular, the simulated anomalies during the Maunder and Dalton minima show a stronger response to changes in the external forcings than recorded in the reconstructions. Although this disagreement is to some extent expected given the prominent role of internal variability in the evolution of regional temperature and precipitation, a certain degree of agreement is a priori expected in variables directly affected by external forcings. In this sense, the inability of the model to reproduce a warm period similar to that recorded for the winters during the first decades of the 18th century in the reconstructions is indicative of fundamental limitations in the simulation that preclude reproducing exceptionally anomalous conditions. Despite these limitations, the simulated climate is a physically consistent data set, which can be used as a benchmark to analyse the consistency and limitations of gridded reconstructions of different variables. A comparison of the leading modes of SAT and PRE variability indicates that the reconstructions are too simplistic, especially for precipitation, which is associated with the linear statistical techniques used to generate them. The analysis of the co-variability between sea level pressure (SLP) and SAT and PRE in the simulation yields a result that resembles the canonical co-variability recorded in the observations for the 20th century. However, the same analysis for the reconstructions exhibits anomalously low correlations, which points towards a lack of dynamical consistency between independent reconstructions.
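The leading modes of variability referred to here are typically obtained from an EOF (principal component) decomposition; a minimal sketch of extracting the leading mode from a gridded seasonal anomaly field (hypothetical array shapes, anomalies assumed pre-computed and area-weighted):

```python
import numpy as np

def leading_eof(anom):
    """First EOF, principal component, and explained variance of a (time, space) anomaly field."""
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    explained = s[0] ** 2 / np.sum(s ** 2)   # variance fraction of mode 1
    pc1 = u[:, 0] * s[0]                     # principal-component time series
    eof1 = vt[0]                             # spatial pattern
    return eof1, pc1, explained

# Hypothetical: 491 winters (1500-1990) on a 20 x 30 European grid
rng = np.random.default_rng(3)
field = rng.normal(size=(491, 20 * 30))
eof1, pc1, frac = leading_eof(field - field.mean(axis=0))
print(f"mode 1 explains {frac:.1%} of the variance")
```

Comparing the patterns and time series computed separately from the reconstruction and the simulation is the kind of diagnostic summarized in the abstract.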

Relevance:

30.00%

Publisher:

Abstract:

Effects of conspecific neighbours on survival and growth of trees have been found to be related to species abundance. Both positive and negative relationships may explain observed abundance patterns. Surprisingly, it is rarely tested whether such relationships could be biased or even spurious due to transforming neighbourhood variables or influences of spatial aggregation, distance decay of neighbour effects and standardization of effect sizes. To investigate potential biases, communities of 20 identical species were simulated with log-series abundances but without species-specific interactions. No relationship of conspecific neighbour effects on survival or growth with species abundance was expected. Survival and growth of individuals was simulated in random and aggregated spatial patterns using no, linear, or squared distance decay of neighbour effects. Regression coefficients of statistical neighbourhood models were unbiased and unrelated to species abundance. However, variation in the number of conspecific neighbours was positively or negatively related to species abundance depending on transformations of neighbourhood variables, spatial pattern and distance decay. Consequently, effect sizes and standardized regression coefficients, often used in model fitting across large numbers of species, were also positively or negatively related to species abundance depending on transformation of neighbourhood variables, spatial pattern and distance decay. Tests using randomized tree positions and identities provide the best benchmarks by which to critically evaluate relationships of effect sizes or standardized regression coefficients with tree species abundance. This will better guard against potential misinterpretations.
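A minimal sketch of the benchmark advocated in the last sentence — recompute conspecific neighbour counts and a (here greatly simplified, hypothetical) effect size after randomizing tree identities, then compare the observed value against the resulting null distribution:

```python
import numpy as np

def conspecific_counts(xy, species, radius=10.0):
    """Number of conspecific neighbours within `radius` of each tree."""
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    same = species[:, None] == species[None, :]
    return ((d < radius) & same).sum(axis=1) - 1   # exclude the focal tree

def effect_size(survival, counts):
    """Toy stand-in for a neighbourhood model: slope of survival on conspecific count."""
    x = counts - counts.mean()
    return float(x @ (survival - survival.mean()) / (x @ x))

def null_effects(xy, species, survival, n_perm=199, seed=0):
    """Null distribution of the effect size under randomized tree identities."""
    rng = np.random.default_rng(seed)
    return np.array([effect_size(survival,
                                 conspecific_counts(xy, rng.permutation(species)))
                     for _ in range(n_perm)])

# Compare effect_size(survival, conspecific_counts(xy, species))
# against null_effects(xy, species, survival) before interpreting abundance relationships.
```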

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE In the present case series, the authors report on seven cases of erosively worn dentitions (98 posterior teeth) that were treated with direct resin composite. MATERIALS AND METHODS In all cases, both arches were restored using the so-called stamp technique. All patients were treated with standardized materials and protocols. Prior to treatment, a waxup was made on die-cast models to compensate for the loss of occlusion as well as to ensure the optimal future anatomy and function of the eroded teeth to be restored. During treatment, the teeth were restored using silicone templates (i.e., two "stamps," one on the vestibular and one on the oral aspect of each tooth), which were filled with resin composite in order to transfer the planned future restoration (i.e., in the shape of the waxup) from the extraoral to the intraoral situation. Baseline examinations were performed in all patients after treatment, and photographs as well as radiographs were taken. To evaluate the outcome, the modified United States Public Health Service (USPHS) criteria were used. RESULTS The patients were re-assessed after a mean observation time of 40 months (40.8 ± 7.2 months). The overall outcome of the restorations was good, and almost exclusively "Alpha" scores were given. Only the marginal integrity and the anatomical form received a "Charlie" score (10.2%) in two cases. CONCLUSION Direct resin composite restorations made with the stamp technique are a valuable treatment option for restoring erosively worn dentitions.

Relevance:

30.00%

Publisher:

Abstract:

Life expectancy has consistently increased over the last 150 years due to improvements in nutrition, medicine, and public health. Several studies found that in many developed countries, life expectancy continued to rise following a nearly linear trend, contrary to the common belief that the rate of improvement would decelerate and that life expectancy would follow an S-shaped curve. Using samples of countries that exhibited a wide range of economic development levels, we explored the change in life expectancy over time by employing both nonlinear and linear models. We then examined whether there were any significant differences in estimates between linear models with and without an auto-correlated error structure. When the data did not have a sigmoidal shape, nonlinear growth models sometimes failed to provide meaningful parameter estimates. The existence of an inflection point and asymptotes in the growth models made them inflexible with life expectancy data. In the linear models, there was no significant difference in the life expectancy growth rate or future estimates between ordinary least squares (OLS) and generalized least squares (GLS). However, the generalized least squares model was more robust because the data involved time-series variables and the residuals were positively correlated.
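A minimal sketch of the OLS-versus-GLS comparison described here, using synthetic data and an AR(1) error structure as a stand-in for the generalized least squares model (the study's actual specification may differ):

```python
import numpy as np
import statsmodels.api as sm

# Synthetic linear life-expectancy trend with AR(1) errors
rng = np.random.default_rng(0)
years = np.arange(1950, 2011)
e = np.zeros(years.size)
for t in range(1, years.size):
    e[t] = 0.7 * e[t - 1] + rng.normal(0, 0.3)
life_exp = 68.0 + 0.2 * (years - years[0]) + e

X = sm.add_constant(years - years[0])
ols = sm.OLS(life_exp, X).fit()
gls = sm.GLSAR(life_exp, X, rho=1).iterative_fit(maxiter=10)  # AR(1) error structure

print("OLS slope:", round(ols.params[1], 3), "+/-", round(ols.bse[1], 3))
print("GLS slope:", round(gls.params[1], 3), "+/-", round(gls.bse[1], 3))
```

The point estimates are typically similar, while the GLS standard errors account for the positive autocorrelation of the residuals.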

Relevance:

30.00%

Publisher:

Abstract:

The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU", lays out the theoretical background for the project. There are several core concepts presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrests, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables.

Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase. These include selecting the proper candidate features on which to base the model and selecting the most appropriate tools to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is defining the duration and resolution of the time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature.

In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design. The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data are represented by the standard one-value-per-variable paradigm and are widely employed in a host of clinical models and tools; they are often represented by a number present in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements in isolation. The second two classes are unique to the time series data elements. The first of these is the raw data elements. These are represented by multiple values per variable and constitute the measured observations that are typically available to end users when they review time series data; they are often represented as dots on a graph. The final class of data results from performing time series analysis. This class of data represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed in order to maximize the likelihood of producing a representation of the time series data elements that is able to distinguish between two or more classes of outcomes.
The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU", provides a detailed description, start to finish, of the methods required to prepare the data, build, and validate a predictive model that uses the time series data elements determined in the first paper. One of the fundamental tenets of the second paper is that manual implementations of time series-based models are infeasible due to the relatively large number of data elements and the complexity of the preprocessing that must occur before data can be presented to the model. Each of the seventeen steps in this process is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies for each of the steps, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit. Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) are issues that are not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data in order to conform to a predefined structure that was specified during the design phase; and normalizing variable families rather than individual variable instances.

The final manuscript, entitled "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit", presents the results that were obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%. The area under the receiver operating characteristic curve increased from a baseline of 87% to 98% when the trend analysis was included. In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy compared to the baseline multivariate model, but diminished classification accuracy compared to when just the trend analysis features were added (i.e., without adding the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Furthermore, although we employed several feature-reduction strategies to counteract the overfitting problem, they failed to improve the performance beyond that achieved by excluding the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
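As a minimal sketch of the kind of trend-analysis feature described above (hypothetical window length, sampling rate, and variables; the dissertation's actual preprocessing pipeline is far more extensive): compute a least-squares slope over the window preceding the reference time and hand it to a classifier alongside the point-in-time value.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def rolling_slope(values, window=20):
    """Least-squares slope of the last `window` samples: the 'trend' latent feature."""
    y = np.asarray(values[-window:], dtype=float)
    return float(np.polyfit(np.arange(y.size), y, 1)[0])

rng = np.random.default_rng(5)

def make_episode(deteriorating):
    """Hypothetical 30-minute SpO2 trace sampled once per minute."""
    drift = -0.3 if deteriorating else 0.0
    return 97.0 + drift * np.arange(30) + rng.normal(0, 0.8, 30)

labels = rng.random(200) < 0.3                       # synthetic outcome labels
episodes = [make_episode(d) for d in labels]
X = np.array([[trace[-1], rolling_slope(trace)] for trace in episodes])  # value + trend
y = labels.astype(int)

clf = LogisticRegression().fit(X, y)
print("training accuracy with value + trend features:", round(clf.score(X, y), 2))
```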

Relevance:

30.00%

Publisher:

Abstract:

Changes of glaciers and snow cover in polar regions affect a wide range of physical and ecosystem processes on land and in the adjacent marine environment. In this study, we investigate the potential of 11-day repeat high-resolution satellite image time series from the TerraSAR-X mission to derive glaciological and hydrological parameters on King George Island, Antarctica, during the period 25 October 2010 to 19 April 2011. The spatial pattern and temporal evolution of snow cover extent on ice-free areas can be monitored using multi-temporal coherence images. SAR coherence is used to map the extent of land-terminating glaciers with an average accuracy of 25 m. Multi-temporal SAR color composites identify the position of the late-summer snow line at about 220 m above sea level. Glacier surface velocities are obtained from intensity feature-tracking. Surface velocities near the calving front of Fourcade Glacier were up to 1.8 ± 0.01 m/d. Using an intercept theorem based on fundamental geometric principles together with differential GPS field measurements, the ice discharge of Fourcade Glacier was estimated at 20,700 ± 5,500 m³/d (corresponding to ~19 ± 5 kt/d). The rapidly changing surface conditions on King George Island and the lack of high-resolution digital elevation models for the region remain restrictions on the applicability of SAR data and the precision of derived products.
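As a consistency check on the quoted figures (assuming a glacier-ice density of roughly 900 kg/m³, which the abstract does not state):

\[
20\,700\ \mathrm{m^3/d} \times 900\ \mathrm{kg/m^3} \approx 1.9 \times 10^{7}\ \mathrm{kg/d} \approx 19\ \mathrm{kt/d},
\]

in agreement with the ~19 ± 5 kt/d quoted above.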

Relevance:

30.00%

Publisher:

Abstract:

Mid-Miocene pelagic sedimentary sections can be correlated using intermediate- and high-resolution oxygen and carbon isotopic records of benthic foraminifera. Precision of a few tens of thousands of years is readily achievable at sites with high sedimentation rates, for example, Deep Sea Drilling Project sites 289 and 574. The mid-Miocene carbon isotope records are characterized by an interval of high δ13C values between 17 and 13.5 Ma (the Monterey Excursion of Vincent and Berger, 1985) upon which are superimposed a series of periodic or quasi-periodic fluctuations in δ13C values. These fluctuations have a period of approximately 440 kyr, suggestive of the 413 kyr cycle predicted by Milankovitch theory. Vincent and Berger proposed that the Monterey Excursion was the result of increased organic carbon burial in continental margin sediments. The increased δ13C values (called 13C maxima) superimposed on the generally high mid-Miocene signal coincide with increases in δ18O values, suggesting that periods of cooling and/or ice buildup were associated with exceptionally rapid burial of organic carbon and lowered atmospheric CO2 levels. It is likely that during the Monterey Excursion the ocean/atmosphere system became progressively more sensitive to small changes in insolation, ultimately leading to major cooling of deep water and expansion of continental ice. We have assigned an absolute chronology, based on biostratigraphic and magneto-biostratigraphic datum levels, to the isotope stratigraphy and have used that chronology to correlate unconformities, seismic reflectors, carbonate minima, and dissolution intervals. Intervals of sediment containing 13C maxima are usually better preserved than the overlying and underlying sediments, indicating that the δ13C values of TCO2 in deep water and the corrosiveness of seawater are inversely correlated. This again suggests that the 13C maxima were associated with rapid burial of organic carbon and reduced levels of atmospheric CO2. The absolute chronology we have assigned to the isotopic record indicates that the major mid-Miocene deepwater cooling/ice volume expansion took 2 m.y. and was not abrupt, contrary to previous reports. The cooling appears abrupt at many sites because the interval is characterized by a number of dissolution intervals. The cooling was not monotonic, and the 2 m.y. interval included an episode of especially rapid cooling as well as a brief return to warmer conditions before the final phase of the cooling period. The increase in δ18O values of benthic foraminifera between 14.9 and 12.9 Ma was greatest at deeper water sites and at sites closest to Antarctica. The data suggest that the δ18O value of seawater increased by no more than about 1.1 per mil during this interval and that the remainder of the change in benthic δ18O values resulted from cooling in Antarctic regions of deepwater formation. Equatorial planktonic foraminifera from sites 237 and 289 exhibit a series of 0.4 per mil steplike increases in δ13C values. Only one of these increases in planktonic δ13C is correlated with any of the features in the mid-Miocene benthic carbon isotope record.
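The abstract does not state how the ≈440 kyr period was estimated; for unevenly sampled isotope records, a common approach is a Lomb-Scargle periodogram, sketched here on synthetic data (hypothetical sampling density and amplitudes):

```python
import numpy as np
from astropy.timeseries import LombScargle

# Synthetic benthic δ13C record: a 440 kyr cycle, unevenly sampled between 17 and 13.5 Ma
rng = np.random.default_rng(2)
age_ma = np.sort(rng.uniform(13.5, 17.0, 300))                  # ages in Ma
d13c = 1.5 + 0.3 * np.sin(2 * np.pi * age_ma / 0.44) + rng.normal(0, 0.1, 300)

frequency, power = LombScargle(age_ma, d13c).autopower(maximum_frequency=10.0)  # cycles/Myr
best_period_kyr = 1000.0 / frequency[np.argmax(power)]
print(f"dominant period: {best_period_kyr:.0f} kyr")
```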