31 results for Distributed Lag Non-linear Models
Abstract:
Introduction: According to the ecological view, coordination is established by virtue of the social context. Affordances, understood as situational opportunities to interact, are assumed to represent the guiding principles underlying decisions involved in interpersonal coordination. It is generally agreed that affordances are not an objective part of the (social) environment but depend on the constructive perception of the subjects involved. Theory and empirical data hold that cognitive operations enabling domain-specific efficacy beliefs are involved in the perception of affordances. The aim of the present study was to test the effects of these cognitive concepts on the subjective construction of local affordances and their influence on decision making in football. Methods: 71 football players (M = 24.3 years, SD = 3.3, 21% women) from different divisions participated in the study. Participants were presented with scenarios of offensive game situations. They were asked to take the perspective of the player on the ball and to indicate where they would pass the ball in each situation. The participants stated their decisions in two conditions with different game scores (1:0 vs. 0:1). The playing fields of all scenarios were then divided into ten zones. For each zone, participants were asked to rate their confidence in being able to pass the ball there (self-efficacy), the likelihood of the group staying in ball possession if the ball were passed into the zone (group-efficacy I), the likelihood of the ball being covered safely by a team member (pass control / group-efficacy II), and whether a pass would establish a better initial position to attack the opponents' goal (offensive convenience). Answers were reported on visual analog scales ranging from 1 to 10. Data were analyzed with generalized linear models for binomially distributed data (Mplus). Maximum likelihood with non-normality-robust standard errors was chosen to estimate parameters. Results: Analyses showed that zone- and domain-specific efficacy beliefs significantly affected passing decisions. Because of collinearity with self-efficacy and group-efficacy I, group-efficacy II was excluded from the models to ease interpretation of the results. Generally, zones with high values in the subjective ratings had a higher probability of being chosen as the passing destination (β(self-efficacy) = 0.133, p < .001, OR = 1.142; β(group-efficacy I) = 0.128, p < .001, OR = 1.137; β(offensive convenience) = 0.057, p < .01, OR = 1.059). There were, however, characteristic differences between the two score conditions. While group-efficacy I was the only significant predictor in condition 1 (β(group-efficacy I) = 0.379, p < .001), only self-efficacy and offensive convenience contributed to passing decisions in condition 2 (β(self-efficacy) = 0.135, p < .01; β(offensive convenience) = 0.120, p < .001). Discussion: The results indicate that subjectively distinct attributes projected onto playing-field zones affect passing decisions. The study proposes a probabilistic alternative to Lewin's (1951) hodological and deterministic field theory and offers insight into how dimensions of the psychological landscape afford passing behavior. For players who are part of a team, this psychological landscape is constituted not only by probabilities referring to the potential and consequences of individual behavior, but also by those referring to the group system of which individuals are part. Hence, in regulating action decisions in group settings, the informing variables extend to aspects of the group level.
References: Lewin, K. (1951). Field theory in social science: Selected theoretical papers (D. Cartwright, Ed.). New York: Harper & Brothers.
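For readers reproducing this kind of analysis, the reported odds ratios follow directly from the logistic coefficients as OR = exp(β), e.g. exp(0.133) ≈ 1.142. A minimal sketch of such a binomial GLM in Python (simulated data and hypothetical column names; the study itself used Mplus with robust maximum likelihood):

    # Sketch of a binomial GLM for zone-choice data; the data and column
    # names are hypothetical, the original analysis was run in Mplus.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 710  # e.g. 71 players x 10 zones
    df = pd.DataFrame({
        "chosen": rng.integers(0, 2, n),         # 0/1 passing choice
        "self_efficacy": rng.uniform(1, 10, n),  # 1-10 VAS ratings
        "group_efficacy1": rng.uniform(1, 10, n),
        "offensive_convenience": rng.uniform(1, 10, n),
    })
    fit = smf.glm(
        "chosen ~ self_efficacy + group_efficacy1 + offensive_convenience",
        data=df, family=sm.families.Binomial(),
    ).fit()
    print(np.exp(fit.params))  # odds ratios: OR = exp(beta)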
Abstract:
The literature on the erosive potential of drinks and other products is summarised, and aspects of the conduct of screening tests as well as possible correlations of the erosive potential with various solution parameters are discussed. The solution parameters that have been suggested as important include pH, acid concentration (with respect to buffer capacity and concentration of undissociated acid), degree of saturation, calcium and phosphate concentrations, and inhibitors of erosion. Based on the available data, it is concluded that the dominant factor in erosion is pH. The effect of buffer capacity seems to be pH dependent. The degree of saturation probably has a non-linear relationship with erosion. While calcium at elevated concentrations is known to reduce erosion effectively, it is not known whether it is important at naturally occurring concentrations. Fluoride at naturally occurring concentrations is inversely correlated with erosive potential, but phosphate is probably not. Natural plant gums, notably pectin, do not inhibit erosion, so they are unlikely to interfere with the prediction of erosive potential. The non-linearity of some solution factors and interactions with pH need to be taken into account when developing multivariate models for predicting the erosive potential of different solutions. Finally, the erosive potential of solutions towards enamel and dentine might differ.
Abstract:
Here we present stable isotope data for vertical profiles of dissolved molybdenum in the modern euxinic water columns of the Black Sea and two deeps of the Baltic Sea. Dissolved molybdenum in all water samples is depleted in salinity-normalized concentration and enriched in the heavy isotope (δ98Mo values up to +2.9‰) compared with previously published isotope data for sedimentary molybdenum from the same range of water depths. Furthermore, δ98Mo values of all water samples from the Black Sea and the anoxic deeps of the Baltic Sea are heavier than open-ocean water. The observed isotope fractionation between sediments and the anoxic water column of the Black Sea is in line with the model of thiomolybdates being scavenged to particles under reducing conditions. An extrapolation to a theoretical pure MoS₄²⁻ solution indicates a fractionation constant between MoS₄²⁻ and authigenic solid Mo of 0.5 ± 0.3‰. Measured waters in which all thiomolybdates coexist in various proportions show larger but non-linear fractionation. The best explanation for our field observations is Mo scavenging by the thiomolybdates, dominantly (but not exclusively) present in the form of MoS₄²⁻. The Mo isotopic compositions of samples from the sediments and anoxic water column of the Baltic Sea are in overall agreement with those of the Black Sea at intermediate depths and corresponding sulphide concentrations. The more dynamic changes in redox conditions in the Baltic deeps complicate the Black Sea-derived relationship between thiomolybdates and Mo isotopic composition. In particular, the occasional flushing/mixing of the deep waters affects the corresponding water-column and sedimentary data. δ98Mo values of the upper oxic waters of both basins are higher than predicted by mixing models based on salinity variations. These results can be explained by non-conservative behaviour of Mo under suboxic to anoxic conditions in the shallow bottom parts of the basins, most pronounced on the NW shelf of the Black Sea.
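The conservative-mixing benchmark mentioned at the end is a standard two-endmember mass balance: the δ98Mo of a mixture is the concentration-weighted mean of the endmember values. A sketch with illustrative endmember numbers (not measurements from this study):

    # Two-endmember conservative mixing for Mo isotopes; endmember values
    # below are illustrative only, not data from this study.
    def mix_d98mo(f, c1, d1, c2, d2):
        """f: mass fraction of endmember 1; c: Mo concentration; d: d98Mo."""
        return (f * c1 * d1 + (1 - f) * c2 * d2) / (f * c1 + (1 - f) * c2)

    # Open-ocean water (~105 nmol/kg, ~+2.3 permil) mixed 1:1 with a low-Mo
    # freshwater endmember (~10 nmol/kg, ~+0.7 permil):
    print(mix_d98mo(0.5, 105.0, 2.3, 10.0, 0.7))  # ~ +2.16 permil

Any measured surface value above the curve traced by such a mass balance indicates non-conservative behaviour, which is the argument made above.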
Abstract:
Despite the impact of red blood cell (RBC) life-spans in disease areas such as diabetes or anemia of chronic kidney disease, there is no consensus on how best to describe the process quantitatively. Several models have been proposed to explain the elimination process of RBCs: a random destruction process, a homogeneous life-span model, or a series of four transit compartments. The aim of this work was to explore the different models that have been proposed in the literature, as well as modifications to them. The impact of choosing the right model on the prediction of future outcomes in the above-mentioned areas was also investigated. Data from both indirect (clinical data) and direct (biotin-labeled data) life-span measurement methods were analyzed using non-linear mixed-effects models. The analysis showed that: (1) predictions from non-steady-state data depend on the RBC model chosen; (2) the transit compartment model, which allows for variation in life-span across the RBC population, describes RBC survival data better than the random destruction or homogeneous life-span models; and (3) the additional incorporation of random destruction patterns, although improving the description of the RBC survival data, does not appear to provide a marked improvement when describing clinical data.
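The three competing models imply qualitatively different RBC survival functions: random destruction gives exponential survival, a homogeneous life-span gives a step function, and a chain of n transit compartments gives an Erlang life-span distribution. A sketch under these assumptions (the 120-day mean life-span is a textbook value, not a fitted estimate from this work):

    # Survival functions implied by the three RBC elimination models.
    import numpy as np
    from scipy.stats import gamma

    t = np.linspace(0.0, 240.0, 241)       # days since labeling
    mean_ls = 120.0                        # assumed mean RBC life-span

    s_random = np.exp(-t / mean_ls)        # random destruction: exponential
    s_homog = (t < mean_ls).astype(float)  # homogeneous life-span: step
    n = 4                                  # transit compartments in series
    s_transit = gamma.sf(t, a=n, scale=mean_ls / n)  # Erlang(n) life-spans

The transit-compartment curve interpolates between the two extremes, which is why it can capture the graded decline seen in biotin-labeling survival data.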
Abstract:
Despite the widespread use of species-area relationships (SARs), dispute remains over the most representative SAR model. Using data on small-scale SARs of Estonian dry grassland communities, we address three questions: (1) Which model describes these SARs best when known artifacts are excluded? (2) How do deviating sampling procedures (marginal instead of central position of the smaller plots in relation to the largest plot; single values instead of average values; randomly located subplots instead of nested subplots) influence the properties of the SARs? (3) Are those effects likely to bias the selection of the best model? Our general dataset consisted of 16 series of nested plots (1 cm²–100 m², any-part system), each of which comprised five series of subplots located in the four corners and the centre of the 100-m² plot. Data for the three pairs of compared sampling designs were generated from this dataset by subsampling. Five function types (power, quadratic power, logarithmic, Michaelis-Menten, Lomolino) were fitted with non-linear regression. In some of the communities, we found extremely high species densities (including bryophytes and lichens), namely up to eight species in 1 cm² and up to 140 species in 100 m², which appear to be the highest documented values at these scales. For SARs constructed from nested-plot average-value data, the regular power function was generally the best model, closely followed by the quadratic power function, while the logarithmic and Michaelis-Menten functions performed poorly throughout. The relative fit of the latter two models increased significantly relative to the respective best model when the single-value or random-sampling method was applied; however, the power function normally remained far superior. These results confirm the hypothesis that both single-value and random-sampling approaches cause artifacts by increasing stochasticity in the data, which can lead to the selection of inappropriate models.
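Fitting the candidate functions amounts to routine non-linear least squares; a sketch for two of the five models, the power and Michaelis-Menten forms, on invented species counts (not the Estonian dataset):

    # Non-linear least-squares fits of two SAR models; the area/species
    # values are invented for illustration.
    import numpy as np
    from scipy.optimize import curve_fit

    area = np.array([1e-4, 1e-3, 1e-2, 1e-1, 1.0, 10.0, 100.0])  # m^2
    species = np.array([3.0, 6.0, 12.0, 24.0, 45.0, 80.0, 140.0])

    def power_sar(a, c, z):    # S = c * A^z
        return c * a ** z

    def mm_sar(a, s_max, k):   # S = S_max * A / (K + A)
        return s_max * a / (k + a)

    (c, z), _ = curve_fit(power_sar, area, species, p0=[40.0, 0.3])
    (s_max, k), _ = curve_fit(mm_sar, area, species, p0=[150.0, 1.0])
    print(c, z, s_max, k)

Comparing such fits with an information criterion (rather than raw residuals) is what allows the five models to be ranked fairly despite their differing flexibility.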
Abstract:
Since 2010, the client base of online-trading service providers has grown significantly. Such companies enable small investors to access the stock market at advantageous rates. Because small investors buy and sell stocks in moderate amounts, they should consider fixed transaction costs, integral transaction units, and dividends when selecting their portfolio. In this paper, we consider the small investor’s problem of investing capital in stocks in a way that maximizes the expected portfolio return and guarantees that the portfolio risk does not exceed a prescribed risk level. Portfolio-optimization models known from the literature are in general designed for institutional investors and do not consider the specific constraints of small investors. We therefore extend four well-known portfolio-optimization models to make them applicable for small investors. We consider one nonlinear model that uses variance as a risk measure and three linear models that use the mean absolute deviation from the portfolio return, the maximum loss, and the conditional value-at-risk as risk measures. We extend all models to consider piecewise-constant transaction costs, integral transaction units, and dividends. In an out-of-sample experiment based on Swiss stock-market data and the cost structure of the online-trading service provider Swissquote, we apply both the basic models and the extended models; the former represent the perspective of an institutional investor, and the latter the perspective of a small investor. The basic models compute portfolios that yield on average a slightly higher return than the portfolios computed with the extended models. However, all generated portfolios yield on average a higher return than the Swiss performance index. There are considerable differences between the four risk measures with respect to the mean realized portfolio return and the standard deviation of the realized portfolio return.
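Of the three linear risk measures, the conditional value-at-risk is defined on return scenarios as the expected loss in the worst (1 − α) tail. A minimal sketch (simulated returns, not the Swiss stock-market data used in the paper):

    # Historical conditional value-at-risk (CVaR): the mean loss over the
    # worst (1 - alpha) fraction of scenarios.
    import numpy as np

    def historical_cvar(returns, alpha=0.95):
        losses = -np.asarray(returns)
        var = np.quantile(losses, alpha)     # value-at-risk threshold
        return losses[losses >= var].mean()  # expected loss beyond VaR

    rng = np.random.default_rng(1)
    scenario_returns = rng.normal(0.0005, 0.01, 1000)  # daily returns
    print(historical_cvar(scenario_returns))

Because this tail expectation is linear in the scenario losses, a CVaR constraint can be written with auxiliary variables in a linear program, which is what makes it attractive next to the non-linear variance model.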
Abstract:
Background: We previously found good psychometric properties of the Inventory for the Assessment of Stress Management Skills (German: Inventar zur Erfassung von Stressbewältigungsfertigkeiten; ISBF), a short questionnaire for the combined assessment of different perceived stress management skills in the general population. Here, we investigate whether stress management skills as measured by the ISBF relate to cortisol stress reactivity in two independent studies, a laboratory study (study 1) and a field study (study 2). Methods: 35 healthy non-smoking and medication-free men in study 1 (age mean ± SEM: 38.0 ± 1.6 years) and 35 male and female employees in study 2 (age mean ± SEM: 32.9 ± 1.2 years) underwent an acute standardized psychosocial stress task combining public speaking and mental arithmetic in front of an audience. We assessed stress management skills (ISBF) and measured salivary cortisol before and after stress and several times up to 60 min (study 2) and 120 min (study 1) thereafter. Potential confounders were controlled. Results: General linear models controlling for potential confounders revealed that in both studies, higher stress management skills (ISBF total score) were independently associated with lower cortisol levels before and after stress (main effects of ISBF: p's < .055) and lower cortisol stress reactivity (ISBF-by-stress interactions: p's < .029). Post-hoc testing of the ISBF subscales suggests lower cortisol stress reactivity with higher "relaxation abilities" (both studies) and higher scores on the "cognitive strategies and problem solving" scale (study 2). Conclusions: Our findings suggest blunted cortisol increases following stress with increasing stress management skills as measured by the ISBF. The ISBF thus relates not only to subjective psychological but also to objective physiological stress indicators, which may further underscore the validity of the questionnaire.
Abstract:
The mechanisms of Ar release from K-feldspar samples in laboratory experiments and during their geological history are assessed here. Modern petrology has clearly established that the chemical and isotopic record of minerals is normally dominated by aqueous recrystallization. The laboratory critique is trickier, which explains why so many conflicting approaches have been able to survive long past their expiration date. Current models are evaluated for self-consistency; in particular, Arrhenian non-linearity leads to paradoxes. The models' testable geological predictions suggest that temperature-based downslope extrapolations often substantially overestimate observed geological Ar mobility. An updated interpretation is based on the unrelatedness of geological behaviour to laboratory experiments. The isotopic record of K-feldspar in geological samples is not a unique function of temperature, as recrystallisation promoted by aqueous fluids is the predominant mechanism controlling isotope transport. K-feldspar should therefore be viewed as a hygrochronometer. Laboratory degassing proceeds from structural rearrangements and phase transitions such as are observed in situ at high temperature in Na and Pb feldspars. These effects violate the mathematics of an inert Fick's-law matrix and preclude downslope extrapolation. The similar upward-concave, non-linear shapes of the Arrhenius trajectories of many silicates, hydrous and anhydrous, are likely common manifestations of structural rearrangements in silicate structures.
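The "downslope extrapolation" under critique is the routine Arrhenius projection of laboratory diffusivities to geological temperatures. A sketch with assumed parameters (D0 and Ea below are placeholders, not values from this paper):

    # Arrhenius extrapolation of Ar diffusivity from laboratory to crustal
    # temperatures; D0 and Ea are assumed placeholder values.
    import numpy as np

    R = 8.314    # gas constant, J/(mol K)
    D0 = 1.0e-6  # pre-exponential factor, m^2/s (assumed)
    EA = 1.8e5   # activation energy, J/mol (assumed)

    def diffusivity(temp_k):
        return D0 * np.exp(-EA / (R * temp_k))

    print(diffusivity(1100.0))  # lab degassing step, ~827 degC
    print(diffusivity(500.0))   # geological storage, ~227 degC
    # If laboratory Ar release reflects structural rearrangements rather
    # than volume diffusion, this extrapolation is not meaningful.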
Abstract:
Global wetlands are believed to be climate sensitive and are the largest natural emitters of methane (CH4). Increased wetland CH4 emissions could act as a positive feedback to future warming. The Wetland and Wetland CH4 Inter-comparison of Models Project (WETCHIMP) investigated our present ability to simulate large-scale wetland characteristics and the corresponding CH4 emissions. To ensure inter-comparability, we used a common experimental protocol, driving all models with the same climate and carbon dioxide (CO2) forcing datasets. The WETCHIMP experiments were conducted for model equilibrium states as well as transient simulations covering the last century. Sensitivity experiments investigated model response to changes in selected forcing inputs (precipitation, temperature, and atmospheric CO2 concentration). Ten models participated, covering the spectrum from simple to relatively complex and including models tailored either for regional or global simulations. The models also varied in their methods of calculating wetland size and location: some simulated wetland area prognostically, while others relied on remotely sensed inundation datasets or an approach intermediate between the two. Four major conclusions emerged from the project. First, the suite of models demonstrates extensive disagreement in simulated wetland areal extent and CH4 emissions, in both space and time. Simple metrics of wetland area, such as the latitudinal gradient, show large variability, principally between models that use inundation-dataset information and those that determine wetland area independently. Agreement between the models improves for zonally summed CH4 emissions, but large variation between the models remains. For annual global CH4 emissions, the models vary by ±40% of the all-model mean (190 Tg CH4 yr−1). Second, all models show a strong positive response to increased atmospheric CO2 concentrations (857 ppm) in both CH4 emissions and wetland area. In response to increasing global temperatures (+3.4 °C, globally uniform), the models on average decreased wetland area and CH4 fluxes, primarily in the tropics, but the magnitude and sign of the response varied greatly. Models were least sensitive to increased global precipitation (+3.9%, globally uniform), with a consistent small positive response in CH4 fluxes and wetland area. Results from the 20th-century transient simulation show that interactions between climate forcings could have strong non-linear effects. Third, we presently lack wetland methane observation datasets adequate to evaluate model fluxes at a spatial scale comparable to model grid cells (commonly 0.5°). This limitation severely restricts our ability to model global wetland CH4 emissions with confidence. Our simulated wetland extents are also difficult to evaluate because of extensive disagreements between wetland mapping and remotely sensed inundation datasets. Fourth, the large range in predicted CH4 emission rates leads to the conclusion that there is both substantial parameter and structural uncertainty in large-scale CH4 emission models, even after uncertainties in wetland areas are accounted for.
Abstract:
BACKGROUND: How change comes about is hotly debated in psychotherapy research. One camp considers 'non-specific' or 'common factors', shared by different therapy approaches, as essential, whereas researchers of the other camp consider specific techniques to be the essential ingredients of change. This controversy, however, suffers from unclear terminology and logical inconsistencies. The Taxonomy Project therefore aims to contribute to the definition and conceptualization of common factors of psychotherapy by analyzing their differential associations with standard techniques. METHODS: A review identified 22 common factors discussed in the psychotherapy research literature. We conducted a survey in which 68 psychotherapy experts assessed how common factors are implemented by specific techniques. Using hierarchical linear models, we predicted each common factor from techniques and from experts' age, gender, and allegiance to a therapy orientation. RESULTS: Common factors differed greatly in their relevance for technique implementation. Patient engagement, Affective experiencing and Therapeutic alliance were judged most relevant. Common factors also differed with respect to how well they could be explained by the set of techniques. We present detailed profiles of all common factors by their (positively or negatively) associated techniques. There were indications of a biased taxonomy that does not cover the embodiment of psychotherapy (expressed by body-centred techniques such as progressive muscle relaxation, biofeedback training and hypnosis). Likewise, the common factors did not adequately represent effective psychodynamic and systemic techniques. CONCLUSION: This taxonomic endeavour is a step towards a clarification of important core constructs of psychotherapy. KEY PRACTITIONER MESSAGE: This article relates standard techniques of psychotherapy (well known to practising therapists) to the change factors/change mechanisms discussed in psychotherapy theory. It gives a short review of the current debate on the mechanisms by which psychotherapy works. We provide detailed profiles of change mechanisms and how they may be generated by practice techniques.
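A hierarchical model of the kind described, ratings nested within experts and predicted by technique scores plus expert-level covariates, can be sketched as follows (simulated data and hypothetical column names, not the Taxonomy Project's actual specification):

    # Sketch of a hierarchical (mixed) linear model: ratings nested within
    # experts, with a random intercept per expert.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n_experts, n_items = 68, 10
    df = pd.DataFrame({
        "expert_id": np.repeat(np.arange(n_experts), n_items),
        "technique_score": rng.uniform(0, 10, n_experts * n_items),
        "age": np.repeat(rng.uniform(30, 65, n_experts), n_items),
    })
    df["factor_rating"] = 0.4 * df["technique_score"] + rng.normal(0, 1, len(df))

    fit = smf.mixedlm("factor_rating ~ technique_score + age",
                      data=df, groups=df["expert_id"]).fit()
    print(fit.params)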
Abstract:
Directly imaged exoplanets are unexplored laboratories for the application of the spectral and temperature retrieval method, in which the chemistry and composition of their atmospheres are inferred from inverse modeling of the available data. As a pilot study, we focus on the extrasolar gas giant HR 8799b, for which more than 50 data points are available. We upgrade our non-linear optimal estimation retrieval method to include a phenomenological model of clouds that requires the cloud optical depth and a monodisperse particle size to be specified. Previous studies have focused on forward models with assumed values of the exoplanetary properties; there is no consensus on the best-fit values of the radius, mass, surface gravity, and effective temperature of HR 8799b. We show that cloud-free models produce reasonable fits to the data if the atmosphere has super-solar metallicity and non-solar elemental abundances. Intermediate cloudy models with moderate values of the cloud optical depth and micron-sized particles provide an equally reasonable fit to the data and require a lower mean molecular weight. We report our best-fit values for the radius, mass, surface gravity, and effective temperature of HR 8799b. The mean molecular weight is about 3.8, while the carbon-to-oxygen ratio is about unity due to the prevalence of carbon monoxide. Our study emphasizes that robust claims about the nature of an exoplanetary atmosphere need to be based on analyses involving both photometry and spectroscopy, rather than inferred from the few photometric data points typically reported for hot Jupiters.
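For orientation, non-linear optimal estimation iterates Gauss-Newton updates weighted by the prior and measurement covariances; in Rodgers' standard notation (a generic statement of the method, not necessarily the authors' exact implementation):

    x_{i+1} = x_a + \left( K_i^{T} S_e^{-1} K_i + S_a^{-1} \right)^{-1}
              K_i^{T} S_e^{-1} \left[ y - F(x_i) + K_i (x_i - x_a) \right]

where y is the data vector, F the forward model, K_i the Jacobian of F at the current state x_i, x_a the prior state, and S_e and S_a the measurement and prior covariance matrices.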
Abstract:
A quantum simulator of U(1) lattice gauge theories can be implemented with superconducting circuits. This allows the investigation of confined and deconfined phases in quantum link models, and of valence bond solid and spin liquid phases in quantum dimer models. Fractionalized confining strings and the real-time dynamics of quantum phase transitions are accessible as well. Here we show how state-of-the-art superconducting technology allows us to simulate these phenomena in relatively small circuit lattices. By exploiting the strong non-linear couplings between quantized excitations emerging when superconducting qubits are coupled, we show how to engineer gauge invariant Hamiltonians, including ring-exchange and four-body Ising interactions. We demonstrate that, despite decoherence and disorder effects, minimal circuit instances allow us to investigate properties such as the dynamics of electric flux strings, signaling confinement in gauge invariant field theories. The experimental realization of these models in larger superconducting circuits could address open questions beyond current computational capability.
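A representative gauge-invariant Hamiltonian of the kind referred to, the ring-exchange plaquette Hamiltonian of a U(1) quantum link model, is (a textbook form, not necessarily this paper's circuit implementation):

    H = -J \sum_{\square} \left( U_{\square} + U_{\square}^{\dagger} \right)
        + \lambda \sum_{\square} \left( U_{\square} + U_{\square}^{\dagger} \right)^{2},
    \qquad U_{\square} = S_{ij}^{+} S_{jk}^{-} S_{kl}^{+} S_{li}^{-}

where U_□ flips the electric flux around an elementary plaquette of links (ij, jk, kl, li) and the λ term is the four-body interaction mentioned above.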
Abstract:
Osteoporotic proximal femur fractures are caused by low-energy trauma, typically a fall on the hip from standing height. Finite element simulations, widely used to predict the fracture load of femora in a fall, usually include neither mass-related inertial effects nor the viscous part of bone's material behavior. The aim of this study was to elucidate whether quasi-static non-linear homogenized finite element analyses can predict the in vitro mechanical properties of proximal femora assessed in dynamic drop-tower experiments. The case-specific numerical models of thirteen femora predicted strength (R² = 0.84; SEE = 540 N, 16.2%), stiffness (R² = 0.82; SEE = 233 N/mm, 18.0%) and fracture energy (R² = 0.72; SEE = 3.85 J, 39.6%), and provided fair qualitative matches with the fracture patterns. The influence of material anisotropy was negligible for all predictions. These results suggest that quasi-static homogenized finite element analysis may be used to predict the mechanical properties of proximal femora in the dynamic sideways-fall situation.
Abstract:
What are the conditions under which some austerity programmes rely on substantial cuts to social spending? More specifically, do the partisan complexion and the type of government condition the extent to which austerity policies imply welfare state retrenchment? This article demonstrates that large budget consolidations tend to be associated with welfare state retrenchment. The findings support a partisan and a politico-institutionalist argument: (i) in periods of fiscal consolidation, welfare state retrenchment tends to be more pronounced under left-wing governments; (ii) since welfare state retrenchment is electorally and politically risky, it also tends to be more pronounced when pursued by a broad pro-reform coalition government. The article therefore shows that welfare state retrenchment is greatest during budget consolidations implemented by left-wing broad-coalition governments. Using long-run multipliers from autoregressive distributed lag models for 17 OECD countries over the 1982–2009 period, the analysis finds substantial support for these expectations.
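The long-run multipliers referred to are the standard ARDL quantity: for a generic autoregressive distributed lag model (a textbook ARDL(p, q) form, not the article's exact specification),

    y_t = \alpha + \sum_{i=1}^{p} \phi_i \, y_{t-i} + \sum_{j=0}^{q} \beta_j \, x_{t-j} + \varepsilon_t,
    \qquad \mathrm{LRM} = \frac{\sum_{j=0}^{q} \beta_j}{1 - \sum_{i=1}^{p} \phi_i}

the long-run multiplier gives the total effect on y of a permanent unit change in x once the lagged dynamics have fully played out.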