931 results for Power series models
Abstract:
The report examines the relationship between day-care institutions, schools and so-called “parents unfamiliar to education”, as well as the relationships among the institutions themselves. Within Danish public and professional discourse, concepts like “parents unfamiliar to education” usually refer to environments, parents or families with either no experience of education beyond basic school (folkeskole) or only very restricted experience of it. The “grand old man” of Danish educational research, Prof. Em. Erik Jørgen Hansen, defines the concept as follows: parents who are distant from or not familiar with education are parents without a tradition of education, and for that reason they are unable to contribute constructively to supporting their own children during their education. Many teachers and pedagogues are not used to that term; they prefer concepts like “socially exposed” or “socially disadvantaged” parents, social classes or strata. The report does not focus only on parents who are incapable of supporting their children’s school achievements, since a low level of education is usually connected with social disadvantage. Such parents are often unable to understand and meet the demands the school makes when they send their children there: they lack the competencies, or the necessary competence for action. At present, the Ministries of Education and Social Affairs (recently renamed the Ministry of Welfare) are devoting much attention to creating equal opportunities for all children. Many kinds of expertise (directorates, councils, researchers, etc.) have been more than eager to promote recommendations aimed at achieving an ambitious goal: by 2015, 95% of all young people should complete a full education (classes 10–12). Research results point to the importance of increased participation of parents. In other words, the agenda is set for ‘parents’ education’.
It seems necessary to underline that Danish welfare policy has been changing rather radically. The classic model understood welfare as social insurance and/or social redistribution, based on social solidarity. The modern model treats welfare as social service and/or social investment. This means that citizens are changing role: from user and/or citizen to consumer and/or investor. In line with decisions taken by the government, the Danish state is investing in a national future shaped by global competition. The new models of welfare, “service” and “investment”, imply severe changes in hitherto familiar concepts of family life, the relationship between parents and children, and so on. As an example, the investment model points to a new configuration of the relationship between social rights and the rights of freedom. The service model has demonstrated the weakness that access to qualified services in the fields of health or education is becoming more and more dependent on private purchasing power. The weakness of the investment model is that it represents a sort of “the winner takes it all”, since a political majority is enabled to set agendas in societal fields formerly protected by the tripartite division of power and by citizens’ rights of freedom. The outcome of the Danish development seems to be the establishment of a politically governed public service industry which, on the one hand, is capable of competing on market conditions and, on the other, can be governed by contracts. This represents a new form of close linkage between politics, economy and professional work. Attempts to control education, pedagogy and thereby the population are not a recent invention; European history offers several such experiments. What is genuinely new is the linking of political priorities to the exercise of public activities through economic incentives.
By defining visible goals for public servants, by introducing measurement of achievements and effects, and by implementing a new wage policy dependent on achievements and/or effects, a new system of accountability is manufactured. The consequences are already perceptible: the government decides on special interventions concerning parents, children or youngsters; public servants at the municipal level are instructed to carry out their services by following a manual; and parents are no longer protected by privacy. Protection of privacy and of minorities is no longer a valid argument against further interventions in people’s lives (health, food, school, etc.). Citizens are becoming objects of investment, which also implies that people invest in their own health, education and family. This means that investments in changes of lifestyle and in the development of competences go hand in hand. The programmes mentioned below are conditioned by this shift.
Abstract:
Although interest in monopsonistic influences on labor market outcomes has revived in recent years, only a few empirical studies provide direct evidence on this topic. In this article, the authors analyze the effect of monopsony power on pay structure, using a direct measure of labor market thinness. The authors find evidence of monopsony power, as firms facing fewer local competitors offer lower wages to skilled labor and trainees, but not to unskilled labor. The findings have important implications for the economic theory of training, as most recent models assume monopsonistic pay-setting for skilled labor, but not for trainees.
Abstract:
The African great lakes are of utmost importance for the local economy (fishing), as well as being essential to the survival of the local people. During the past decades, these lakes experienced fast changes in ecosystem structure and functioning, and their future evolution is a major concern. In this study, for the first time a set of one-dimensional lake models is evaluated for Lake Kivu (2.28°S; 28.98°E), East Africa. The unique limnology of this meromictic lake, with the importance of salinity and subsurface springs in a tropical high-altitude climate, presents a worthy challenge to the seven models involved in the Lake Model Intercomparison Project (LakeMIP). Meteorological observations from two automatic weather stations are used to drive the models, whereas a unique dataset, containing over 150 temperature profiles recorded since 2002, is used to assess the models’ performance. Simulations are performed over the freshwater layer only (60 m) and over the average lake depth (240 m), since salinity increases with depth below 60 m in Lake Kivu and some lake models do not account for the influence of salinity upon lake stratification. All models are able to reproduce the mixing seasonality in Lake Kivu, as well as the magnitude and seasonal cycle of the lake enthalpy change. Differences between the models can be ascribed to variations in the treatment of the radiative forcing and the computation of the turbulent heat fluxes. Fluctuations in wind velocity and solar radiation explain inter-annual variability of observed water column temperatures. The good agreement between the deep simulations and the observed meromictic stratification also shows that a subset of models is able to account for the salinity- and geothermal-induced effects upon deep-water stratification. Finally, based on the strengths and weaknesses discerned in this study, an informed choice of a one-dimensional lake model for a given research purpose becomes possible.
Abstract:
Context. Planet formation models have been developed during the past years to try to reproduce what has been observed of both the solar system and the extrasolar planets. Some of these models have partially succeeded, but they focus on massive planets and, for the sake of simplicity, exclude planets belonging to planetary systems. However, more and more planets are now found in planetary systems. This tendency, which is a result of radial velocity, transit, and direct imaging surveys, seems to be even more pronounced for low-mass planets. These new observations require improving planet formation models, including new physics, and considering the formation of systems. Aims: In a recent series of papers, we have presented some improvements in the physics of our models, focussing in particular on the internal structure of forming planets, and on the computation of the excitation state of planetesimals and their resulting accretion rate. In this paper, we focus on the concurrent formation of more than one planet in the same protoplanetary disc and show the effects of this multiplicity on the architecture and composition of the resulting systems. Methods: We used an N-body calculation including collision detection to compute the orbital evolution of a planetary system. Moreover, we describe the effect of competition for the accretion of gas and solids, as well as the effect of gravitational interactions between planets. Results: We show that the masses and semi-major axes of planets are modified by both the effect of competition and gravitational interactions. We also present the effect of the assumed number of forming planets in the same system (a free parameter of the model), as well as the effect of inclination and eccentricity damping. We find that the fraction of ejected planets increases from nearly 0 to 8% as the number of planetary embryos seeded in the system increases from 2 to 20.
Moreover, our calculations show that, when considering planets more massive than ~5 M⊕, simulations with 10 or 20 planetary embryos statistically give the same results in terms of mass function and period distribution.
Abstract:
The Atlantic subpolar gyre (SPG) is one of the main drivers of decadal climate variability in the North Atlantic. Here we analyze its dynamics in pre-industrial control simulations of 19 different comprehensive coupled climate models. The analysis is based on a recently proposed description of the SPG dynamics that found the circulation to be potentially bistable due to a positive feedback mechanism including salt transport and enhanced deep convection in the SPG center. We employ a statistical method to identify multiple equilibria in time series that are subject to strong noise and analyze composite fields to assess whether the bistability results from the hypothesized feedback mechanism. Because noise dominates the time series in most models, multiple circulation modes can unambiguously be detected in only six models. Four of these six models confirm that the intensification is caused by the positive feedback mechanism.
Abstract:
The use of group-randomized trials is particularly widespread in the evaluation of health care, educational, and screening strategies. Group-randomized trials represent a subset of a larger class of designs often labeled nested, hierarchical, or multilevel and are characterized by the randomization of intact social units or groups, rather than individuals. The application of random effects models to group-randomized trials requires the specification of fixed and random components of the model. The underlying assumption is usually that these random components are normally distributed. This research is intended to determine whether the Type I error rate and power are affected when the assumption of normality for the random component representing the group effect is violated. In this study, simulated data are used to examine the Type I error rate, power, bias and mean squared error of the estimates of the fixed effect and the observed intraclass correlation coefficient (ICC) when the random component representing the group effect possesses a distribution with non-normal characteristics, such as heavy tails or severe skewness. The simulated data are generated with various characteristics (e.g. number of schools per condition, number of students per school, and several within-school ICCs) observed in most small, school-based, group-randomized trials. The analysis is carried out using SAS PROC MIXED, Version 6.12, with random effects specified in a random statement and restricted maximum likelihood (REML) estimation specified. The results from the non-normally distributed data are compared to the results obtained from the analysis of data with similar design characteristics but normally distributed random effects. The results suggest that the violation of the normality assumption for the group component by a skewed or heavy-tailed distribution does not appear to influence the estimation of the fixed effect, Type I error, and power.
Negative biases were detected when estimating the sample ICC, and they increased dramatically in magnitude as the true ICC increased. These biases were not as pronounced when the true ICC was within the range observed in most group-randomized trials (i.e. 0.00 to 0.05). A normally distributed group effect also resulted in biased ICC estimates when the true ICC was greater than 0.05; however, this may be a result of higher correlation within the data.
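The simulation design described above can be sketched in a few lines. The sketch below is a hypothetical, simplified stand-in for the study's SAS-based analysis: it draws group effects from a variance-matched skewed (exponential) distribution and recovers the ICC with a one-way ANOVA moment estimator rather than REML; the numbers of groups, students per group, and the true ICC are illustrative choices, not the study's.

```python
import numpy as np

def simulate_trial(n_groups, n_per_group, icc, rng, skewed=False):
    """Simulate one condition of a group-randomized trial.

    Group effects come from a normal distribution or, if `skewed`, from
    an exponential distribution with the same variance shifted to mean 0.
    Total variance is fixed at 1, so `icc` is the between-group share.
    """
    var_between = icc
    var_within = 1.0 - icc
    if skewed:
        scale = np.sqrt(var_between)      # sd of Exp(scale) equals scale
        u = rng.exponential(scale, n_groups) - scale
    else:
        u = rng.normal(0.0, np.sqrt(var_between), n_groups)
    e = rng.normal(0.0, np.sqrt(var_within), (n_groups, n_per_group))
    return u[:, None] + e

def anova_icc(y):
    """One-way ANOVA (moment) estimator of the intraclass correlation."""
    k, n = y.shape
    msb = n * ((y.mean(axis=1) - y.mean()) ** 2).sum() / (k - 1)
    msw = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (k * (n - 1))
    var_b = max((msb - msw) / n, 0.0)     # truncate negative estimates
    return var_b / (var_b + msw)

rng = np.random.default_rng(42)
# Average the estimated ICC over repeated trials with a skewed group effect.
est = np.mean([anova_icc(simulate_trial(10, 50, 0.05, rng, skewed=True))
               for _ in range(500)])
```

Comparing `est` for `skewed=True` against `skewed=False` under the same design is the kind of contrast the study reports, here with a cruder estimator.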
Abstract:
Tree-ring series were collected for radiocarbon analyses from the vicinity of the Paks nuclear power plant (NPP) and a background area (Dunaföldvár) for a 10-yr period (2000–2009). Samples of holocellulose were prepared from the wood and converted to graphite for accelerator mass spectrometry (AMS) 14C measurement using the MICADAS at ETH Zürich. The 14C concentration data from these tree rings were compared with those from the background tree rings for each year. The global decreasing trend of atmospheric 14C activity concentration was observed in the annual tree rings both in the background area and in the area of the NPP. Averaged over the past 10 yr, the excess 14C emitted to the atmosphere by the pressurized-water reactor (PWR) NPP produced only a slight systematic enrichment (~6‰) in the annual rings. The highest 14C excess was 13‰ (in 2006); however, years with the same 14C level as the background were quite frequent in the tree-ring series.
Abstract:
Objective: Processes occurring in the course of psychotherapy are characterized by the simple fact that they unfold in time and that the multiple factors engaged in change processes vary highly between individuals (idiographic phenomena). Previous research, however, has neglected the temporal perspective by its traditional focus on static phenomena, which were mainly assessed at the group level (nomothetic phenomena). To support a temporal approach, the authors introduce time-series panel analysis (TSPA), a statistical methodology explicitly focusing on the quantification of temporal, session-to-session aspects of change in psychotherapy. TSPA models are initially built at the level of individuals and are subsequently aggregated at the group level, thus allowing the exploration of prototypical models. Method: TSPA is based on vector auto-regression (VAR), an extension of univariate auto-regression models to multivariate time-series data. The application of TSPA is demonstrated in a sample of 87 outpatient psychotherapy patients who were monitored by postsession questionnaires. Prototypical mechanisms of change were derived from the aggregation of individual multivariate models of psychotherapy process. In a second step, the associations between mechanisms of change (TSPA) and pre- to postsymptom change were explored. Results: TSPA allowed a prototypical process pattern to be identified, in which patients' alliance and self-efficacy were linked by a temporal feedback loop. Furthermore, therapists' stability over time in both mastery and clarification interventions was positively associated with better outcomes. Conclusions: TSPA is a statistical tool that sheds new light on temporal mechanisms of change. Through this approach, clinicians may gain insight into prototypical patterns of change in psychotherapy.
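The VAR core of TSPA can be illustrated with a minimal sketch. Everything here is a hypothetical toy, not the authors' estimation pipeline: two process variables standing in for alliance and self-efficacy, an assumed cross-lagged coefficient matrix, and a plain least-squares fit of a first-order VAR to one simulated patient's session-to-session data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed cross-lagged coefficients: row 1 = "alliance", row 2 = "self-efficacy".
# Off-diagonal terms encode the feedback loop between the two variables.
A = np.array([[0.5, 0.3],
              [0.2, 0.4]])

# Simulate one patient's process, x_t = A x_{t-1} + noise (session-to-session).
n_sessions = 500
x = np.zeros((n_sessions, 2))
for t in range(1, n_sessions):
    x[t] = A @ x[t - 1] + rng.normal(0.0, 0.1, 2)

# Least-squares VAR(1) fit: regress X_t on X_{t-1}; lstsq solves X_lag B = X_now,
# so the coefficient matrix is recovered as A_hat = B.T.
X_now, X_lag = x[1:], x[:-1]
A_hat = np.linalg.lstsq(X_lag, X_now, rcond=None)[0].T
```

In TSPA such a model is fitted per patient and the individual coefficient matrices are then aggregated across the sample to expose prototypical cross-lagged patterns.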
Abstract:
The rank-based nonlinear predictability score was recently introduced as a test for determinism in point processes. We here adapt this measure to time series sampled from time-continuous flows. We use noisy Lorenz signals to compare this approach against a classical amplitude-based nonlinear prediction error. Both measures show an almost identical robustness against Gaussian white noise. In contrast, when the amplitude distribution of the noise has a narrower central peak and heavier tails than the normal distribution, the rank-based nonlinear predictability score outperforms the amplitude-based nonlinear prediction error. For this type of noise, the nonlinear predictability score has a higher sensitivity for deterministic structure in noisy signals. It also yields a higher statistical power in a surrogate test of the null hypothesis of linear stochastic correlated signals. We show the high relevance of this improved performance in an application to electroencephalographic (EEG) recordings from epilepsy patients. Here the nonlinear predictability score again appears of higher sensitivity to nonrandomness. Importantly, it yields an improved contrast between signals recorded from brain areas where the first ictal EEG signal changes were detected (focal EEG signals) versus signals recorded from brain areas that were not involved at seizure onset (nonfocal EEG signals).
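A minimal sketch of the classical amplitude-based nonlinear prediction error, the baseline the rank-based score is compared against, can clarify the idea; the rank-based variant replaces amplitudes by their ranks and is omitted here. The embedding dimension, neighbour count, and the chaotic logistic-map test signal are illustrative choices, not those of the study (which used Lorenz signals).

```python
import numpy as np

def prediction_error(x, dim=3, horizon=1, k=5):
    """Amplitude-based nonlinear prediction error via delay embedding.

    Each embedded point is predicted `horizon` steps ahead as the mean
    future value of its k nearest neighbours; the RMS error is
    normalised by the standard deviation of the series.
    """
    n_vec = len(x) - dim + 1 - horizon
    emb = np.column_stack([x[i:i + n_vec] for i in range(dim)])
    targets = x[dim - 1 + horizon: dim - 1 + horizon + n_vec]
    errs = np.empty(n_vec)
    for t in range(n_vec):
        d = np.abs(emb - emb[t]).max(axis=1)      # Chebyshev distance
        d[max(0, t - dim):t + dim] = np.inf       # exclude temporally close points
        nn = np.argsort(d)[:k]
        errs[t] = targets[nn].mean() - targets[t]
    return np.sqrt(np.mean(errs ** 2)) / x.std()

# Deterministic test signal (chaotic logistic map) and a shuffled surrogate
# with the same amplitude distribution but no temporal structure.
rng = np.random.default_rng(1)
x = np.empty(1000)
x[0] = 0.4
for i in range(1, 1000):
    x[i] = 3.9 * x[i - 1] * (1.0 - x[i - 1])
surrogate = rng.permutation(x)
e_det, e_sto = prediction_error(x), prediction_error(surrogate)
```

A low normalised error for the deterministic signal and an error near 1 for the surrogate is the signature of determinism that both the amplitude- and rank-based measures exploit.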
Abstract:
Within the context of exoplanetary atmospheres, we present a comprehensive linear analysis of forced, damped, magnetized shallow water systems, exploring the effects of dimensionality, geometry (Cartesian, pseudo-spherical, and spherical), rotation, magnetic tension, and hydrodynamic and magnetic sources of friction. Across a broad range of conditions, we find that the key governing equations for atmospheres and for quantum harmonic oscillators are identical, even when forcing (stellar irradiation), sources of friction (molecular viscosity, Rayleigh drag, and magnetic drag), and magnetic tension are included. The global atmospheric structure is largely controlled by a single key parameter that involves the Rossby and Prandtl numbers. This near-universality breaks down when either molecular viscosity or magnetic drag acts non-uniformly across latitude or a poloidal magnetic field is present, suggesting that these effects will introduce qualitative changes to the familiar chevron-shaped feature witnessed in simulations of atmospheric circulation. We also find that hydrodynamic and magnetic sources of friction have dissimilar phase signatures and affect the flow in fundamentally different ways, implying that using Rayleigh drag to mimic magnetic drag is inaccurate. We exhaustively lay down the theoretical formalism (dispersion relations, governing equations, and time-dependent wave solutions) for a broad suite of models. In all situations, we derive the steady state of an atmosphere, which is relevant to interpreting infrared phase and eclipse maps of exoplanetary atmospheres. We elucidate a pinching effect that confines the atmospheric structure to be near the equator. Our suite of analytical models may be used to develop physical intuition and as a reference point for three-dimensional magnetohydrodynamic simulations of atmospheric circulation.
Abstract:
We present a comprehensive analytical study of radiative transfer using the method of moments and include the effects of non-isotropic scattering in the coherent limit. Within this unified formalism, we derive the governing equations and solutions describing two-stream radiative transfer (which approximates the passage of radiation as a pair of outgoing and incoming fluxes), flux-limited diffusion (which describes radiative transfer in the deep interior) and solutions for the temperature-pressure profiles. Generally, the problem is mathematically under-determined unless a set of closures (Eddington coefficients) is specified. We demonstrate that the hemispheric (or hemi-isotropic) closure naturally derives from the radiative transfer equation if energy conservation is obeyed, while the Eddington closure produces spurious enhancements of both reflected light and thermal emission. We concoct recipes for implementing two-stream radiative transfer in stand-alone numerical calculations and general circulation models. We use our two-stream solutions to construct toy models of the runaway greenhouse effect. We present a new solution for temperature-pressure profiles with a non-constant optical opacity and elucidate the effects of non-isotropic scattering in the optical and infrared. We derive generalized expressions for the spherical and Bond albedos and the photon deposition depth. We demonstrate that the value of the optical depth corresponding to the photosphere is not always 2/3 (Milne's solution) and depends on a combination of stellar irradiation, internal heat and the properties of scattering in both the optical and the infrared. Finally, we derive generalized expressions for the total, net, outgoing and incoming fluxes in the convective regime.
Abstract:
Mathematical models of disease progression predict disease outcomes and are useful epidemiological tools for planners and evaluators of health interventions. The R package gems is a tool that simulates disease progression in patients and predicts the effect of different interventions on patient outcome. Disease progression is represented by a series of events (e.g., diagnosis, treatment and death), displayed in a directed acyclic graph. The vertices correspond to disease states and the directed edges represent events. The package gems allows simulations based on a generalized multistate model that can be described by a directed acyclic graph with continuous transition-specific hazard functions. The user can specify an arbitrary hazard function and its parameters. The model includes parameter uncertainty, does not need to be a Markov model, and may take the history of previous events into account. Applications are not limited to the medical field and extend to other areas where multistate simulation is of interest. We provide a technical explanation of the multistate models used by gems, explain the functions of gems and their arguments, and show a sample application.
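gems itself is an R package; the following Python sketch only illustrates the underlying idea it implements: states on a directed acyclic graph, transition-specific hazards, and patient paths simulated by competing event times. The constant (exponential) hazards, state names, rates and horizon are all illustrative assumptions here — gems supports arbitrary hazard functions and parameter uncertainty, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(7)

# Directed acyclic graph of disease states:
# state -> {next state: constant hazard (events per year)}. Illustrative values.
hazards = {
    "diagnosis": {"treatment": 2.0, "death": 0.1},
    "treatment": {"death": 0.05},
}

def simulate_path(start="diagnosis", horizon=20.0):
    """Simulate one patient's event history up to `horizon` years."""
    t, state = 0.0, start
    path = [(start, 0.0)]
    while state in hazards and t < horizon:
        # Competing risks: draw one exponential waiting time per outgoing edge;
        # the earliest event wins (scale of Exp is 1/rate).
        draws = {nxt: rng.exponential(1.0 / rate)
                 for nxt, rate in hazards[state].items()}
        nxt = min(draws, key=draws.get)
        t += draws[nxt]
        if t >= horizon:
            break                      # censored at the simulation horizon
        state = nxt
        path.append((state, t))
    return path

# Monte Carlo estimate of cumulative mortality under this toy model.
paths = [simulate_path() for _ in range(2000)]
died = sum(any(s == "death" for s, _ in p) for p in paths) / len(paths)
```

Swapping the constant rates for time-dependent hazard functions, and redrawing their parameters per simulation run, would move this sketch toward what gems actually provides.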
Abstract:
Developmental assembly of the renal microcirculation is a precise and coordinated process now accessible to experimental scrutiny. Although definition of the cellular and molecular determinants is incomplete, recent findings have reframed concepts and questions about the origins of vascular cells in the glomerulus and the molecules that direct cell recruitment, specialization and morphogenesis. New findings illustrate principles that may be applied to defining critical steps in microvascular repair following glomerular injury. Developmental assembly of endothelial, mesangial and epithelial cells into glomerular capillaries requires that a coordinated, temporally defined series of steps occur in an anatomically ordered sequence. Recent evidence shows that both vasculogenic and angiogenic processes participate. Local signals direct cell migration, proliferation, differentiation, cell-cell recognition, formation of intercellular connections, and morphogenesis. Growth factor receptor tyrosine kinases on vascular cells are important mediators of many of these events. Cultured cell systems have suggested that basic fibroblast growth factor (bFGF), hepatocyte growth factor (HGF), and vascular endothelial growth factor (VEGF) promote endothelial cell proliferation, migration or morphogenesis, while genetic deletion experiments have defined an important role for PDGF beta receptors and platelet-derived growth factor (PDGF) B in glomerular development. Receptor tyrosine kinases that convey non-proliferative signals also contribute in kidney and other sites. The EphB1 receptor, one of a diverse class of Eph receptors implicated in neural cell targeting, directs renal endothelial migration, cell-cell recognition and assembly, and is expressed with its ligand in developing glomeruli. 
Endothelial TIE2 receptors bind angiopoietins 1 and 2, the products of adjacent supportive cells, whose signals direct capillary maturation in a sequence that defines cooperative roles for cells of different lineages. Ultimately, definition of the cellular steps and molecular sequence that direct microvascular cell assembly promises to identify therapeutic targets for repair and adaptive remodeling of injured glomeruli.
Abstract:
Several lines of genetic, archeological and paleontological evidence suggest that anatomically modern humans (Homo sapiens) colonized the world in the last 60,000 years by a series of migrations originating from Africa (e.g. Liu et al., 2006; Handley et al., 2007; Prugnolle, Manica, and Balloux, 2005; Ramachandran et al., 2005; Li et al., 2008; Deshpande et al., 2009; Mellars, 2006a, b; Lahr and Foley, 1998; Gravel et al., 2011; Rasmussen et al., 2011). With the progress of ancient DNA analysis, it has been shown that archaic humans hybridized with modern humans outside Africa. Recent direct analyses of fossil nuclear DNA have revealed that 1–4 percent of the genome of Eurasians has likely been introgressed by Neanderthal genes (Green et al., 2010; Reich et al., 2010; Vernot and Akey, 2014; Sankararaman et al., 2014; Prufer et al., 2014; Wall et al., 2013), with Papua New Guineans and Australians showing even larger levels of admixture with Denisovans (Reich et al., 2010; Skoglund and Jakobsson, 2011; Reich et al., 2011; Rasmussen et al., 2011). It thus appears that the past history of our species has been more complex than previously anticipated (Alves et al., 2012), and that modern humans hybridized several times with local hominins during their expansion out of Africa, but the exact mode, time and location of these hybridizations remain to be clarified (Ibid.; Wall et al., 2013). In this context, we review here a general model of admixture during range expansion, which leads to predictions about the expected patterns of introgression that are relevant to modern human evolution.
Abstract:
This study compares gridded European seasonal series of surface air temperature (SAT) and precipitation (PRE) reconstructions with a regional climate simulation over the period 1500–1990. The area is analysed separately for nine subareas that represent the majority of the climate diversity in the European sector. In their spatial structure, an overall good agreement is found between the reconstructed and simulated climate features across Europe, supporting consistency in both products. Systematic biases between both data sets can be explained by a priori known deficiencies in the simulation. Simulations and reconstructions, however, largely differ in the temporal evolution of past climate for European subregions. In particular, the simulated anomalies during the Maunder and Dalton minima show a stronger response to changes in the external forcings than recorded in the reconstructions. Although this disagreement is to some extent expected given the prominent role of internal variability in the evolution of regional temperature and precipitation, a certain degree of agreement is a priori expected in variables directly affected by external forcings. In this sense, the inability of the model to reproduce a warm period similar to that recorded for the winters during the first decades of the 18th century in the reconstructions is indicative of fundamental limitations in the simulation that preclude reproducing exceptionally anomalous conditions. Despite these limitations, the simulated climate is a physically consistent data set, which can be used as a benchmark to analyse the consistency and limitations of gridded reconstructions of different variables. A comparison of the leading modes of SAT and PRE variability indicates that reconstructions are too simplistic, especially for precipitation, which is associated with the linear statistical techniques used to generate the reconstructions.
The analysis of the co-variability between sea level pressure (SLP) and SAT and PRE in the simulation yields a result which resembles the canonical co-variability recorded in the observations for the 20th century. However, the same analysis for reconstructions exhibits anomalously low correlations, which points towards a lack of dynamical consistency between independent reconstructions.