171 results for complexity theory
Abstract:
From a theoretical perspective, an extension to the Full Range Leadership Theory (FRLT) seems needed. In this paper, we explain why instrumental leadership--a class of leadership that includes leader behaviors focusing on task and strategic aspects that are neither values- nor exchange-oriented--can fulfill this extension. Instrumental leadership is composed of four factors: environmental monitoring, strategy formulation and implementation, path-goal facilitation, and outcome monitoring; these aspects of leadership are currently not included in any of the FRLT's nine leadership scales (as measured by the MLQ--Multifactor Leadership Questionnaire). We present results from two empirical studies using very large samples from a wide array of countries (N > 3,000) to examine the factorial, discriminant, and criterion-related validity of the instrumental leadership scales. We find support for a four-factor instrumental leadership model, which explains incremental variance in leader outcomes over and above transactional and transformational leadership.
Abstract:
In the 1920s, Ronald Fisher developed the theory behind the p value, and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators will select a threshold p value below which they will reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine some critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that often are misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true, rather than the probability of obtaining the difference observed, or one that is more extreme, assuming the null is true. Another concern is the risk that an important proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
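The two notions discussed in this abstract can be illustrated with a small Monte Carlo sketch (all numbers here are illustrative, not from the paper): the p value as the probability, under the null, of a sample mean at least as extreme as the one observed, and the Neyman-Pearson rule as a decision against a pre-chosen Type I error level alpha.

```python
import random
import statistics

random.seed(0)

def p_value_two_sided(observed_mean, n, null_mean=0.0, sigma=1.0, reps=20000):
    """Monte Carlo p value: fraction of null-distributed samples whose mean
    is at least as far from null_mean as the observed mean."""
    extreme = 0
    for _ in range(reps):
        sample = [random.gauss(null_mean, sigma) for _ in range(n)]
        if abs(statistics.mean(sample) - null_mean) >= abs(observed_mean - null_mean):
            extreme += 1
    return extreme / reps

# Neyman-Pearson style decision: fix the Type I error level alpha in advance,
# then reject the null only if the test result falls in the critical region
# (here expressed as p < alpha).
alpha = 0.05
p = p_value_two_sided(observed_mean=0.5, n=30)
decision = "reject H0" if p < alpha else "fail to reject H0"
```

Note the conceptual split the abstract stresses: the p value is a continuous measure of evidence against the null, while the hypothesis-testing decision is binary and fixed by alpha before the data are seen.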
Abstract:
The aim of this study was to assess a population of patients with diabetes mellitus by means of the INTERMED, a classification system for case complexity integrating biological, psychosocial and health care related aspects of disease. The main hypothesis was that the INTERMED would identify distinct clusters of patients with different degrees of case complexity and different clinical outcomes. Patients (n=61) referred to a tertiary reference care centre were evaluated with the INTERMED and followed for 9 months for HbA1c values and for 6 months for health care utilisation. Cluster analysis revealed two clusters: cluster 1 (62%) consisting of complex patients with high INTERMED scores and cluster 2 (38%) consisting of less complex patients with lower INTERMED scores. Cluster 1 patients showed significantly higher HbA1c values and a tendency towards increased health care utilisation. Total INTERMED scores were significantly related to HbA1c and explained 21% of its variance. In conclusion, different clusters of patients with different degrees of case complexity were identified by the INTERMED, allowing the detection of highly complex patients at risk for poor diabetes control. The INTERMED therefore provides an objective basis for clinical and scientific progress in diabetes mellitus. Ongoing intervention studies will have to confirm these preliminary data and to evaluate whether management strategies based on the INTERMED profiles will improve outcomes.
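The kind of partitioning this abstract describes can be sketched with a plain 1-D k-means on hypothetical INTERMED-like total scores (the group sizes mirror the 62%/38% split of n=61, but the scores themselves are invented for illustration and are not the study's data):

```python
import random

random.seed(1)

# Hypothetical case-complexity scores: a higher-scoring "complex" group (38
# patients) and a lower-scoring group (23 patients), mimicking the 62%/38%
# split reported in the abstract.
scores = [random.gauss(32, 3) for _ in range(38)] + [random.gauss(18, 3) for _ in range(23)]

def kmeans_1d(xs, iters=50):
    """Two-cluster 1-D k-means: assign each score to the nearest center,
    then move each center to the mean of its group."""
    centers = [min(xs), max(xs)]
    for _ in range(iters):
        groups = [[], []]
        for x in xs:
            groups[0 if abs(x - centers[0]) <= abs(x - centers[1]) else 1].append(x)
        centers = [sum(g) / len(g) if g else centers[i] for i, g in enumerate(groups)]
    return centers, groups

centers, groups = kmeans_1d(scores)
```

With well-separated score distributions, the two recovered centers land near the low- and high-complexity group means, which is the pattern the INTERMED cluster analysis exploits to flag highly complex patients.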
Abstract:
This paper evaluates the reception of Léon Walras' ideas in Russia before 1920. Despite an unfavourable institutional context, Walras was read by Russian economists. On the one hand, Bortkiewicz and Winiarski, who lived outside Russia and had the opportunity to meet and correspond with Walras, were first-class readers and very good ambassadors for Walras' ideas, while on the other, the economists living in Russia were more selective in their readings. They restricted themselves to Walras' Elements of Pure Economics, in particular its theory of exchange, while ignoring its theory of production. We introduce a cultural argument to explain their selective reading. JEL classification numbers: B13, B19.
Abstract:
Depth-averaged velocities and unit discharges within a 30 km reach of one of the world's largest rivers, the Rio Parana, Argentina, were simulated using three hydrodynamic models with different process representations: a reduced complexity (RC) model that neglects most of the physics governing fluid flow, a two-dimensional model based on the shallow water equations, and a three-dimensional model based on the Reynolds-averaged Navier-Stokes equations. Flow characteristics simulated using all three models were compared with data obtained by acoustic Doppler current profiler surveys at four cross sections within the study reach. This analysis demonstrates that, surprisingly, the performance of the RC model is generally equal to, and in some instances better than, that of the physics-based models in terms of the statistical agreement between simulated and measured flow properties. In addition, in contrast to previous applications of RC models, the present study demonstrates that the RC model can successfully predict measured flow velocities. The strong performance of the RC model reflects, in part, the simplicity of the depth-averaged mean flow patterns within the study reach and the dominant role of channel-scale topographic features in controlling the flow dynamics. Moreover, the very low water surface slopes that typify large sand-bed rivers enable flow depths to be estimated reliably in the RC model using a simple fixed-lid planar water surface approximation. This approach overcomes a major problem encountered in the application of RC models in environments characterised by shallow flows and steep bed gradients. The RC model is four orders of magnitude faster than the physics-based models when performing steady-state hydrodynamic calculations. However, the iterative nature of the RC model calculations implies a reduction in computational efficiency relative to some other RC models.
A further implication of this is that, if used to simulate channel morphodynamics, the present RC model may offer only a marginal advantage in terms of computational efficiency over approaches based on the shallow water equations. These observations illustrate the trade-off between model realism and efficiency that is a key consideration in RC modelling. Moreover, this outcome highlights a need to rethink the use of RC morphodynamic models in fluvial geomorphology and to move away from existing grid-based approaches, such as the popular cellular automata (CA) models, that remain essentially reductionist in nature. In the case of the world's largest sand-bed rivers, this might be achieved by implementing the RC model outlined here as one element within a hierarchical modelling framework that would enable computationally efficient simulation of the morphodynamics of large rivers over millennial time scales.
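The fixed-lid planar water surface approximation mentioned in this abstract can be sketched in a few lines: the water surface is treated as a fixed plane with a very small downstream slope, so flow depth at each node is simply the plane's elevation minus the bed elevation. All elevations and slopes below are illustrative values, not data from the Rio Parana study.

```python
def planar_depths(bed, eta0, slope, dx):
    """Depths under a fixed-lid planar water surface: surface elevation
    decreases linearly downstream from eta0 at the given slope, and depth
    is clipped at zero where the bed sits above the plane."""
    return [max((eta0 - slope * i * dx) - z, 0.0) for i, z in enumerate(bed)]

# Hypothetical streamwise bed-elevation profile (m) at 1 km spacing, with a
# very small water-surface slope typical of large sand-bed rivers.
bed = [0.0, -2.5, -4.0, -3.0, -1.0]
depths = planar_depths(bed, eta0=1.0, slope=1e-5, dx=1000.0)
```

Because the slope is so small over the reach, the surface plane barely tilts and the depth field is dominated by bed topography, which is why the approximation works for large, low-gradient sand-bed rivers but fails in the shallow, steep settings the abstract mentions.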
Abstract:
This paper studies a risk measure inherited from ruin theory and investigates some of its properties. Specifically, we consider a value-at-risk (VaR)-type risk measure defined as the smallest initial capital needed to ensure that the ultimate ruin probability is less than a given level. This VaR-type risk measure turns out to be equivalent to the VaR of the maximal deficit of the ruin process in infinite time. A related Tail-VaR-type risk measure is also discussed.
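The risk measure described in this abstract can be made concrete in the classical Cramér-Lundberg model with exponentially distributed claims, where the ultimate ruin probability has a closed form; this model choice and the parameter values are illustrative assumptions, not taken from the paper. With Poisson claim rate lam, mean claim size mu, and premium rate c > lam*mu, the ruin probability from initial capital u is psi(u) = (lam*mu/c)*exp(-R*u) with adjustment coefficient R = 1/mu - lam/c, and the VaR-type measure is the smallest u with psi(u) <= eps:

```python
import math

def ruin_prob(u, lam, mu, c):
    """Ultimate ruin probability in the Cramer-Lundberg model with
    exponential(mean mu) claims: psi(u) = (lam*mu/c) * exp(-R*u)."""
    R = 1.0 / mu - lam / c
    return (lam * mu / c) * math.exp(-R * u)

def var_type_capital(eps, lam, mu, c):
    """Smallest initial capital u such that ruin_prob(u) <= eps,
    obtained by inverting the closed-form ruin probability."""
    R = 1.0 / mu - lam / c
    base = lam * mu / c  # psi(0)
    return 0.0 if base <= eps else math.log(base / eps) / R

# Illustrative parameters: unit claim rate and mean, 50% premium loading,
# and a 1% tolerated ruin probability.
u_star = var_type_capital(eps=0.01, lam=1.0, mu=1.0, c=1.5)
```

At u_star the ruin probability equals eps exactly, and any smaller capital leaves it above the tolerated level, matching the "smallest initial capital" definition in the abstract.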
Abstract:
Understanding the extent of genomic transcription and its functional relevance is a central goal in genomics research. However, detailed genome-wide investigations of transcriptome complexity in major mammalian organs have been scarce. Here, using extensive RNA-seq data, we show that transcription of the genome is substantially more widespread in the testis than in other organs across representative mammals. Furthermore, we reveal that meiotic spermatocytes and especially postmeiotic round spermatids have remarkably diverse transcriptomes, which explains the high transcriptome complexity of the testis as a whole. The widespread transcriptional activity in spermatocytes and spermatids encompasses protein-coding and long noncoding RNA genes but also poorly conserved intergenic sequences, suggesting that it may not be of immediate functional relevance. Rather, our analyses of genome-wide epigenetic data suggest that this prevalent transcription, which most likely promoted the birth of new genes during evolution, is facilitated by an overall permissive chromatin in these germ cells that results from extensive chromatin remodeling.
Abstract:
Schizophrenia is postulated to be the prototypical dysconnection disorder, in which hallucinations are the core symptom. Due to high heterogeneity in methodology across studies and in the clinical phenotype, it remains unclear whether the structural brain dysconnection is global or focal and whether clinical symptoms result from this dysconnection. In the present work, we attempt to clarify this issue by studying a population considered a homogeneous genetic sub-type of schizophrenia, namely the 22q11.2 deletion syndrome (22q11.2DS). Cerebral MRIs were acquired for 46 patients and 48 age- and gender-matched controls (aged 6-26; mean age 15.20 ± 4.53 and 15.28 ± 4.35 years, respectively). Using the Connectome Mapper pipeline (connectomics.org), which combines structural and diffusion MRI, we created a whole-brain network for each individual. Graph theory was used to quantify the global and local properties of the brain network organization for each participant. A global degree loss of 6% was found in patients' networks, along with an increased characteristic path length. After identifying and comparing hubs, a significant loss of degree was found in 58% of the patients' hubs. Based on Allen's brain network model for hallucinations, we explored the association between local efficiency and symptom severity. Negative correlations were found in Broca's area (p < 0.004) and Wernicke's area (p < 0.023), and a positive correlation was found in the dorsolateral prefrontal cortex (DLPFC) (p < 0.014). In line with the dysconnection findings in schizophrenia, our results provide preliminary evidence for a targeted alteration in the organization of brain network hubs in individuals with a genetic risk for schizophrenia. The study of specific disorganization in language, speech and thought regulation networks sharing similar network properties may help to understand their role in the hallucination mechanism.
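Two of the graph-theoretic quantities used in this abstract, node degree and characteristic path length (the mean shortest-path length over all node pairs), can be computed with a plain breadth-first search. The toy 5-node ring graph below is purely illustrative and has nothing to do with the connectome data.

```python
from collections import deque

def shortest_paths(adj, src):
    """BFS hop distances from src in an unweighted graph given as an
    adjacency dict {node: [neighbors]}."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def characteristic_path_length(adj):
    """Mean shortest-path length over all ordered node pairs of a
    connected graph."""
    nodes = list(adj)
    total, pairs = 0, 0
    for s in nodes:
        d = shortest_paths(adj, s)
        for t in nodes:
            if t != s:
                total += d[t]
                pairs += 1
    return total / pairs

# Toy 5-node ring: every node has degree 2.
adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
degrees = {n: len(adj[n]) for n in adj}
cpl = characteristic_path_length(adj)
```

An increased characteristic path length, as reported for the patients' networks, means that on average more hops separate node pairs, i.e. the network routes information less efficiently; hubs are simply the nodes with the highest degree.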