42 results for Internal relationship models
Abstract:
One of the earliest accounts of duration perception by Karl von Vierordt implied a common process underlying the timing of intervals in the sub-second and the second range. To date, there are two major explanatory approaches for the timing of brief intervals: the Common Timing Hypothesis and the Distinct Timing Hypothesis. While the common timing hypothesis also proceeds from a unitary timing process, the distinct timing hypothesis suggests two dissociable, independent mechanisms for the timing of intervals in the sub-second and the second range, respectively. In the present paper, we introduce confirmatory factor analysis (CFA) to elucidate the internal structure of interval timing in the sub-second and the second range. Our results indicate that the assumption of two mechanisms underlying the processing of intervals in the second and the sub-second range might be more appropriate than the assumption of a unitary timing mechanism. In contrast to the basic assumption of the distinct timing hypothesis, however, these two timing mechanisms are closely associated with each other and share 77% of common variance. This finding suggests either a strong functional relationship between the two timing mechanisms or a hierarchically organized internal structure. Findings are discussed in the light of existing psychophysical and neurophysiological data.
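Since the two timing factors are reported to share 77% of their variance, the implied correlation between the latent variables is the square root of that shared variance. A minimal sanity check of this arithmetic:

```python
import math

# Shared variance between the two latent timing factors, as reported.
shared_variance = 0.77

# For two latent variables, shared variance equals the squared
# correlation, so the implied latent correlation is its square root.
latent_correlation = math.sqrt(shared_variance)

print(f"implied latent correlation: {latent_correlation:.2f}")
```

A latent correlation of roughly .88 is what "closely associated" refers to here: far from the independence assumed by the distinct timing hypothesis, yet short of the unity implied by a single clock.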
Abstract:
The most influential theoretical account in time psychophysics assumes the existence of a unitary internal clock based on neural counting. The distinct timing hypothesis, on the other hand, suggests an automatic timing mechanism for the processing of durations in the sub-second range and a cognitively controlled timing mechanism for the processing of durations in the range of seconds. Although several psychophysical approaches can be applied to identify the internal structure of interval timing in the second and sub-second range, the existing data provide a puzzling picture of rather inconsistent results. In the present chapter, we introduce confirmatory factor analysis (CFA) to further elucidate the internal structure of interval timing performance in the sub-second and second range. More specifically, we investigated whether CFA would support the notion of a unitary timing mechanism or of distinct timing mechanisms underlying interval timing in the sub-second and second range, respectively. The assumption of two distinct timing mechanisms that are completely independent of each other was not supported by our data. The model assuming a unitary timing mechanism underlying interval timing in both the sub-second and second range fitted the empirical data much better. Finally, we also tested a third model assuming two distinct but functionally related mechanisms. The correlation between the two latent variables representing the hypothesized timing mechanisms was rather high, and comparison of fit indices indicated that the assumption of two associated timing mechanisms described the observed data better than a single latent variable did. Models are discussed in the light of the existing psychophysical and neurophysiological data.
Abstract:
Motive-oriented therapeutic relationship (MOTHER), a prescriptive concept based on an integrative form of case formulation, the Plan Analysis (PA) method (Caspar, in: Eells (ed.), Handbook of psychotherapy case formulations, 2007), has been shown to be of particular relevance for the treatment of patients presenting with personality disorders, in particular contributing to better therapeutic outcome and to a more constructive development of the therapeutic alliance over time (Kramer et al., J Nerv Ment Dis 199:244–250, 2011). Several therapy models refer to MOTHER as an intervention principle with regard to borderline and Narcissistic Personality Disorder (NPD) (Sachse et al., Clarification-oriented psychotherapy of narcissistic personality disorder, 2011; Caspar and Berger, in: Dulz et al. (eds.), Handbuch der Borderline-Störungen, 2011). The present case study discusses the case of Mark, a 40-year-old patient presenting with NPD, along with anxious, depressive and anger problems. This patient underwent a seven-session pre-therapy process, based on psychiatric and psychotherapeutic principles complemented with PA and MOTHER, in preparation for further treatment. MOTHER will be illustrated with verbatim patient–therapist exchanges from session 4, and the links between MOTHER and confrontation techniques will be discussed in the context of process-outcome hypotheses, in particular the effect of MOTHER on symptom reduction.
Abstract:
Numerous studies reported a strong link between working memory capacity (WMC) and fluid intelligence (Gf), although views differ with respect to how closely these two constructs are related to each other. In the present study, we used a WMC task with five levels of task demands to assess the relationship between WMC and Gf by means of a new methodological approach referred to as fixed-links modeling. Fixed-links models belong to the family of confirmatory factor analysis (CFA) and are of particular interest for experimental, repeated-measures designs. With this technique, processes systematically varying across task conditions can be disentangled from processes unaffected by the experimental manipulation. Proceeding from the assumption that experimental manipulation in a WMC task leads to increasing demands on WMC, the processes systematically varying across task conditions can be assumed to be WMC-specific. Processes not varying across task conditions, on the other hand, are probably independent of WMC. Fixed-links models allow for representing these two kinds of processes by two independent latent variables. In contrast to traditional CFA, where a common latent variable is derived from the different task conditions, fixed-links models facilitate a more precise or purified representation of the WMC-related processes of interest. By using fixed-links modeling to analyze data of 200 participants, we identified a non-experimental latent variable, representing processes that remained constant irrespective of the WMC task conditions, and an experimental latent variable which reflected processes that varied as a function of experimental manipulation. This latter variable represents the increasing demands on WMC and, hence, was considered a purified measure of WMC controlled for the constant processes. Fixed-links modeling showed that both the purified measure of WMC (β = .48) and the constant processes involved in the task (β = .45) were related to Gf.
Taken together, these two latent variables explained the same portion of variance of Gf as a single latent variable obtained by traditional CFA (β = .65), indicating that traditional CFA causes an overestimation of the effective relationship between WMC and Gf. Thus, fixed-links modeling provides a feasible method for a more valid investigation of the functional relationship between specific constructs.
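The claim that the two latent variables jointly explain about the same variance as the single latent variable can be checked directly: for uncorrelated latent predictors, the explained variance is the sum of the squared standardized paths. A quick check using the betas reported in the abstract:

```python
# Standardized path coefficients reported in the abstract.
beta_wmc_purified = 0.48   # experimental (WMC-specific) latent variable
beta_constant = 0.45       # non-experimental (constant) latent variable
beta_single = 0.65         # single latent variable from traditional CFA

# For independent latent predictors, R^2 is the sum of squared betas.
r2_fixed_links = beta_wmc_purified**2 + beta_constant**2
r2_traditional = beta_single**2

print(f"fixed-links R^2:     {r2_fixed_links:.3f}")
print(f"traditional CFA R^2: {r2_traditional:.3f}")
```

Both models account for roughly 42-43% of the Gf variance, but the fixed-links decomposition shows that only part of that (β = .48, i.e. about 23%) is attributable to the WMC-specific processes.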
Abstract:
BACKGROUND The success of an intervention to prevent the complications of an infection is influenced by the natural history of the infection. Assumptions about the temporal relationship between infection and the development of sequelae can affect the predicted effect size of an intervention and the sample size calculation. This study investigates how a mathematical model can be used to inform sample size calculations for a randomised controlled trial (RCT) using the example of Chlamydia trachomatis infection and pelvic inflammatory disease (PID). METHODS We used a compartmental model to imitate the structure of a published RCT. We considered three different processes for the timing of PID development, in relation to the initial C. trachomatis infection: immediate, constant throughout, or at the end of the infectious period. For each process we assumed that, of all women infected, the same fraction would develop PID in the absence of an intervention. We examined two sets of assumptions used to calculate the sample size in a published RCT that investigated the effect of chlamydia screening on PID incidence. We also investigated the influence of the natural history parameters of chlamydia on the required sample size. RESULTS The assumed event rates and effect sizes used for the sample size calculation implicitly determined the temporal relationship between chlamydia infection and PID in the model. Even small changes in the assumed PID incidence and relative risk (RR) led to considerable differences in the hypothesised mechanism of PID development. The RR and the sample size needed per group also depend on the natural history parameters of chlamydia. CONCLUSIONS Mathematical modelling helps to understand the temporal relationship between an infection and its sequelae and can show how uncertainties about natural history parameters affect sample size calculations when planning an RCT.
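To illustrate how sensitive the required sample size is to the assumed event rate and relative risk, here is a minimal sketch of the standard two-proportion sample size formula. The incidence and relative risk used in the example call are hypothetical placeholder values, not the published trial's figures:

```python
import math
from statistics import NormalDist

def sample_size_per_group(p_control, relative_risk, alpha=0.05, power=0.80):
    """Textbook two-proportion sample size (per group) for an RCT.

    p_control: assumed event rate (e.g. PID incidence) in the control arm.
    relative_risk: assumed RR of the event under the intervention.
    """
    p_treat = p_control * relative_risk
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    numerator = (z_alpha + z_beta) ** 2 * (
        p_control * (1 - p_control) + p_treat * (1 - p_treat)
    )
    return math.ceil(numerator / (p_control - p_treat) ** 2)

# Hypothetical example: 3% PID incidence in controls, RR = 0.5 under screening.
print(sample_size_per_group(0.03, 0.5))
```

Rerunning the function with slightly different assumed incidences or RRs shows the large swings in required sample size that the abstract attributes to the implicit natural-history assumptions.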
Abstract:
When a firearm projectile hits a biological target, a spray of biological material (e.g., blood and tissue fragments) can be propelled from the entrance wound back towards the firearm. This phenomenon has become known as "backspatter", and in the case of contact shots or shots from short distances, traces of backspatter may reach, consolidate on, and be recovered from the inside surfaces of the firearm. Thus, a comprehensive investigation of firearm-related crimes must comprise not only wound ballistic assessment but also backspatter analysis, and may even take into account potential correlations between these phenomena. The aim of the present study was to evaluate and expand the applicability of the "triple contrast" method by probing its compatibility with forensic analysis of nuclear and mitochondrial DNA and the simultaneous investigation of co-extracted mRNA and miRNA from backspatter collected from internal components of different types of firearms after experimental shootings. We demonstrate that "triple contrast" stained biological samples collected from the inside surfaces of firearms are amenable to forensic co-analysis of DNA and RNA and permit sequence analysis of the entire mtDNA displacement-loop, even for "low template" DNA amounts that preclude standard short tandem repeat DNA analysis. Our findings underscore the "triple contrast" method's usefulness as a research tool in experimental forensic ballistics.
Abstract:
This study compares gridded European seasonal series of surface air temperature (SAT) and precipitation (PRE) reconstructions with a regional climate simulation over the period 1500–1990. The area is analysed separately for nine subareas that represent the majority of the climate diversity in the European sector. In their spatial structure, an overall good agreement is found between the reconstructed and simulated climate features across Europe, supporting consistency in both products. Systematic biases between both data sets can be explained by a priori known deficiencies in the simulation. Simulations and reconstructions, however, largely differ in the temporal evolution of past climate for European subregions. In particular, the simulated anomalies during the Maunder and Dalton minima show a stronger response to changes in the external forcings than recorded in the reconstructions. Although this disagreement is to some extent expected given the prominent role of internal variability in the evolution of regional temperature and precipitation, a certain degree of agreement is a priori expected in variables directly affected by external forcings. In this sense, the inability of the model to reproduce a warm period similar to that recorded for the winters during the first decades of the 18th century in the reconstructions is indicative of fundamental limitations in the simulation that preclude reproducing exceptionally anomalous conditions. Despite these limitations, the simulated climate is a physically consistent data set, which can be used as a benchmark to analyse the consistency and limitations of gridded reconstructions of different variables. A comparison of the leading modes of SAT and PRE variability indicates that the reconstructions are too simplistic, especially for precipitation, a result associated with the linear statistical techniques used to generate the reconstructions.
The analysis of the co-variability between sea level pressure (SLP) and SAT and PRE in the simulation yields a result which resembles the canonical co-variability recorded in the observations for the 20th century. However, the same analysis for reconstructions exhibits anomalously low correlations, which points towards a lack of dynamical consistency between independent reconstructions.
Abstract:
Effects of conspecific neighbours on survival and growth of trees have been found to be related to species abundance. Both positive and negative relationships may explain observed abundance patterns. Surprisingly, it is rarely tested whether such relationships could be biased or even spurious due to transforming neighbourhood variables or influences of spatial aggregation, distance decay of neighbour effects and standardization of effect sizes. To investigate potential biases, communities of 20 identical species were simulated with log-series abundances but without species-specific interactions. No relationship of conspecific neighbour effects on survival or growth with species abundance was expected. Survival and growth of individuals was simulated in random and aggregated spatial patterns using no, linear, or squared distance decay of neighbour effects. Regression coefficients of statistical neighbourhood models were unbiased and unrelated to species abundance. However, variation in the number of conspecific neighbours was positively or negatively related to species abundance depending on transformations of neighbourhood variables, spatial pattern and distance decay. Consequently, effect sizes and standardized regression coefficients, often used in model fitting across large numbers of species, were also positively or negatively related to species abundance depending on transformation of neighbourhood variables, spatial pattern and distance decay. Tests using randomized tree positions and identities provide the best benchmarks by which to critically evaluate relationships of effect sizes or standardized regression coefficients with tree species abundance. This will better guard against potential misinterpretations.
Abstract:
INTRODUCTION This paper focuses exclusively on experimental models with ultra high dilutions (i.e. beyond 10(-23)) that have been submitted to replication scrutiny. It updates previous surveys, considers suggestions made by the research community and compares the state of replication in 1994 with that in 2015. METHODS Following literature research, biochemical, immunological, botanical, cell biological and zoological studies on ultra high dilutions (potencies) were included. Reports were grouped into initial studies, laboratory-internal, multicentre and external replications. Repetition could yield comparable, zero, or opposite results. The null-hypothesis was that test and control groups would not be distinguishable (zero effect). RESULTS A total of 126 studies were found. Of these, 28 were initial studies. When all 98 replicative studies were considered, 70.4% (i.e. 69) reported a result comparable to that of the initial study, 20.4% (20) zero effect and 9.2% (9) an opposite result. Both for the studies until 1994 and the studies 1995-2015 the null-hypothesis (dominance of zero results) should be rejected. Furthermore, the odds of finding a comparable result are generally higher than those of finding an opposite result. Although this is true for all three types of replication studies, the fraction of comparable studies diminishes from laboratory-internal (total 82.9%) to multicentre (total 75%) to external (total 48.3%), while the fraction of opposite results was 4.9%, 10.7% and 13.8%, respectively. Furthermore, it became obvious that the probability of an external replication producing comparable results is greater for models that had already been further scrutinized by the initial researchers. CONCLUSIONS We found 28 experimental models which underwent replication. In total, 24 models were replicated with comparable results, 12 models with zero effect, and 6 models with opposite results. Five models were externally reproduced with comparable results.
We encourage further replications of studies in order to learn more about the model systems used.
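The reported percentages follow directly from the counts of the 98 replicative studies; a short check of that arithmetic:

```python
# Outcome counts for the 98 replicative studies reported in the survey.
replications = {"comparable": 69, "zero effect": 20, "opposite": 9}
total = sum(replications.values())   # 98

for outcome, count in replications.items():
    print(f"{outcome}: {100 * count / total:.1f}%")
```

This reproduces the 70.4% / 20.4% / 9.2% split quoted in the results.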
Abstract:
OBJECTIVES The aim of the present longitudinal study was to investigate bacterial colonization of the internal implant cavity and to evaluate a possible association with peri-implant bone loss. METHODS A total of 264 paper point samples were harvested from the intra-implant cavity of 66 implants in 26 patients immediately following implant insertion and after 3, 4, and 12 months. Samples were evaluated for Aggregatibacter actinomycetemcomitans, Fusobacterium nucleatum, Porphyromonas gingivalis, Prevotella intermedia, Treponema denticola, and Tannerella forsythia as well as total bacterial counts by real-time PCR. Bone loss was evaluated on standardized radiographs up to 25 months after implant insertion. For the statistical analysis of the data, mixed effects models were fitted. RESULTS There was an increase in the frequency of detection as well as in the mean counts of the selected bacteria over time. The evaluation of the target bacteria revealed a significant association of Pr. intermedia at 4 and 12 months with peri-implant bone loss at 25 months (4 months: P = 0.009; 12 months: P = 0.021). CONCLUSIONS The present study demonstrated progressive colonization by periodontopathogenic bacteria in the internal cavities of two-piece implants. The results suggest that internal colonization with Pr. intermedia was associated with peri-implant bone loss.
Abstract:
Despite the strong increase in observational data on extrasolar planets, the processes that led to the formation of these planets are still not well understood. However, thanks to the high number of extrasolar planets that have been discovered, it is now possible to look at the planets as a population that puts statistical constraints on theoretical formation models. A method that uses these constraints is planetary population synthesis, in which synthetic planetary populations are generated and compared to the actual population. The key element of the population synthesis method is a global model of planet formation and evolution. These models directly predict observable planetary properties based on properties of the natal protoplanetary disc, linking two important classes of astrophysical objects. To do so, global models build on the simplified results of many specialized models that each address one specific physical mechanism. We thoroughly review the physics of the sub-models included in global formation models. The sub-models can be classified as models describing the protoplanetary disc (of gas and solids), those that describe one (proto)planet (its solid core, gaseous envelope and atmosphere), and finally those that describe the interactions (orbital migration and N-body interaction). We compare the approaches taken in different global models, discuss the links between specialized and global models, and identify physical processes that require improved descriptions in future work. We then briefly address important results of planetary population synthesis such as the planetary mass function or the mass-radius relationship. With these statistical results, the global effects of physical mechanisms occurring during planet formation and evolution become apparent, and specialized models describing them can be put to the observational test.
Owing to their nature as meta-models, global models depend on the results of specialized models, and therefore on the development of the field of planet formation theory as a whole. Because there are important uncertainties in this theory, it is likely that the global models will undergo significant modifications in the future. Despite these limitations, global models can already yield many testable predictions. With future global models addressing the geophysical characteristics of the synthetic planets, it should eventually become possible to make predictions about the habitability of planets based on their formation and evolution.
Abstract:
We report quantitative results from three brittle thrust wedge experiments, comparing numerical results directly with each other and with corresponding analogue results. We first test whether the participating codes reproduce predictions from analytical critical taper theory. Eleven codes pass the stable wedge test, showing negligible internal deformation and maintaining the initial surface slope upon horizontal translation over a frictional interface. Eight codes participated in the unstable wedge test that examines the evolution of a wedge by thrust formation from a subcritical state to the critical taper geometry. The critical taper is recovered, but the models show two deformation modes characterised either by mainly forward-dipping thrusts or by a series of thrust pop-ups. We speculate that the two modes are caused by differences in effective basal boundary friction related to different algorithms for modelling boundary friction. The third experiment examines stacking of forward thrusts that are translated upward along a backward thrust. The results of the seven codes that ran this experiment show variability in deformation style, number of thrusts, thrust dip angles and surface slope. Overall, our experiments show that numerical models run with different numerical techniques can successfully simulate laboratory brittle thrust wedge models at the cm-scale. In more detail, however, we find that it is challenging to reproduce sandbox-type setups numerically, because of frictional boundary conditions and velocity discontinuities. We recommend that future numerical-analogue comparisons use simple boundary conditions and that the numerical Earth Science community defines a plasticity test to resolve the variability in model shear zones.