985 results for Sequential machine theory


Relevance:

20.00%

Publisher:

Abstract:

From a theoretical perspective, an extension to the Full Range Leadership Theory (FRLT) seems needed. In this paper, we explain why instrumental leadership, a class of leadership that includes leader behaviors focusing on task and strategic aspects that are neither values- nor exchange-oriented, can fulfill this extension. Instrumental leadership comprises four factors: environmental monitoring, strategy formulation and implementation, path-goal facilitation, and outcome monitoring; these aspects of leadership are currently not included in any of the FRLT's nine leadership scales (as measured by the MLQ, the Multifactor Leadership Questionnaire). We present results from two empirical studies using very large samples from a wide array of countries (N > 3,000) to examine the factorial, discriminant, and criterion-related validity of the instrumental leadership scales. We find support for a four-factor instrumental leadership model, which explains incremental variance in leader outcomes over and above transactional and transformational leadership.
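
The incremental-validity claim above amounts to a hierarchical regression: regress a leader-outcome criterion on the established FRLT scales, then add the four instrumental factors and test the change in explained variance. Below is a minimal sketch of that logic in Python; the variable names and the synthetic data are hypothetical stand-ins, not the study's actual measures.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Hypothetical ratings: established FRLT scales plus the four instrumental
# factors and a leader-outcome criterion (all names are illustrative).
df = pd.DataFrame({
    "transformational": rng.normal(size=n),
    "transactional": rng.normal(size=n),
    "env_monitoring": rng.normal(size=n),
    "strategy_formulation": rng.normal(size=n),
    "path_goal_facilitation": rng.normal(size=n),
    "outcome_monitoring": rng.normal(size=n),
})
df["leader_outcome"] = (0.5 * df["transformational"]
                        + 0.3 * df["env_monitoring"]
                        + rng.normal(scale=0.8, size=n))

baseline = ["transformational", "transactional"]
instrumental = ["env_monitoring", "strategy_formulation",
                "path_goal_facilitation", "outcome_monitoring"]

# Step 1: criterion regressed on the established FRLT scales only.
m1 = sm.OLS(df["leader_outcome"], sm.add_constant(df[baseline])).fit()
# Step 2: add the instrumental factors and compare explained variance.
m2 = sm.OLS(df["leader_outcome"],
            sm.add_constant(df[baseline + instrumental])).fit()

print(f"Incremental R^2: {m2.rsquared - m1.rsquared:.3f}")
f_stat, p_value, _ = m2.compare_f_test(m1)  # nested-model F-test
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```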

Relevance:

20.00%

Publisher:

Abstract:

In the 1920s, Ronald Fisher developed the theory behind the p value, and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers with important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed, presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators select a threshold p value below which they will reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine some critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that are often misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true, rather than the probability of obtaining the observed difference, or one more extreme, given that the null is true. Another concern is the risk that a sizeable proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
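
As a concrete illustration of the two frameworks, the sketch below (a minimal example assuming a two-sample comparison and SciPy) computes a Fisher-style p value for an observed difference and then applies a Neyman-Pearson decision rule with a Type I error level chosen in advance.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated measurements from two groups; under H0 the means are equal.
control = rng.normal(loc=10.0, scale=2.0, size=50)
treated = rng.normal(loc=11.0, scale=2.0, size=50)

# Fisher's perspective: the p value measures the strength of evidence
# against H0 -- the probability, assuming H0 is true, of a difference at
# least as extreme as the one observed.
t_stat, p_value = stats.ttest_ind(control, treated)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")

# Neyman-Pearson perspective: fix the Type I error level alpha in advance,
# which defines a critical region; reject H0 iff the statistic falls in it.
alpha = 0.05
if p_value < alpha:
    print("Reject H0 in favor of the alternative (some effect).")
else:
    print("Fail to reject H0.")

# Note: p is NOT the probability that H0 is true -- the most common
# misconception discussed above.
```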

Relevance:

20.00%

Publisher:

Abstract:

Radioactive soil-contamination mapping and risk assessment are vital issues for decision makers. Traditional approaches for mapping the spatial concentration of radionuclides employ various regression-based models, which usually provide a single-valued prediction, accompanied in some cases by an estimation error. Such approaches do not provide the capability for rigorous uncertainty quantification or probabilistic mapping. Machine learning is a recent and fast-developing approach based on learning patterns and information from data. Artificial neural networks for prediction mapping have been especially powerful in combination with spatial statistics. A data-driven approach provides the opportunity to integrate additional relevant information about spatial phenomena into a prediction model, yielding more accurate spatial estimates and associated uncertainty. Machine-learning algorithms can also be used for a wider spectrum of problems than before: classification, probability density estimation, and so forth. Stochastic simulations are used to model spatial variability and uncertainty; unlike regression models, they provide multiple realizations of a particular spatial pattern, which allow uncertainty and risk quantification. This paper reviews the most recent methods of spatial data analysis, prediction, and risk mapping based on machine learning and stochastic simulations, in comparison with more traditional regression models. The radioactive fallout from the Chernobyl Nuclear Power Plant accident is used to illustrate the application of the models to prediction and classification problems. This fallout is a unique case study that poses the challenging task of analyzing huge amounts of data ('hard' direct measurements, as well as supplementary information and expert estimates) and solving particular decision-oriented problems.
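
To make the regression-versus-simulation contrast concrete, the sketch below uses scikit-learn's Gaussian process as a stand-in for a geostatistical simulator: one fitted model yields both a single regression-style surface and many equally probable realizations, from which a risk map such as the probability of exceeding a contamination threshold follows. The 1-D coordinates, measurements, and threshold are all hypothetical.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Hypothetical 'hard' measurements: 1-D locations and activity levels.
x_obs = rng.uniform(0, 10, size=30).reshape(-1, 1)
y_obs = np.sin(x_obs).ravel() + 0.1 * rng.standard_normal(30) + 1.0

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
gp.fit(x_obs, y_obs)

x_grid = np.linspace(0, 10, 200).reshape(-1, 1)

# Regression view: a single predicted surface plus a standard error.
mean, std = gp.predict(x_grid, return_std=True)

# Simulation view: many realizations of the spatial pattern, from which
# probabilistic risk maps follow directly.
realizations = gp.sample_y(x_grid, n_samples=500, random_state=1)

threshold = 1.5  # hypothetical contamination threshold
prob_exceed = (realizations > threshold).mean(axis=1)
print("Max exceedance probability on grid:", prob_exceed.max().round(3))
```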

Relevance:

20.00%

Publisher:

Abstract:

Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the regional scale represents a major, and as yet largely unresolved, challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The basic objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity, which is available throughout the model space. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. This method is then applied to the stochastic integration of low-resolution, regional-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities. Finally, the overall viability of this downscaling approach is tested and verified by performing and comparing flow and transport simulations through the original and the downscaled hydraulic conductivity fields. Our results indicate that the proposed procedure does indeed yield remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of transport characteristics over relatively long distances.
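
The heart of the procedure described above is the non-parametric kernel density estimate of the joint distribution of the (log) hydraulic and electrical conductivities, from which hydraulic conductivity is drawn conditionally on the electrical conductivity available at each non-sampled cell. Below is a minimal illustration of that conditional-sampling step only, using synthetic colocated data; it is not the authors' full sequential simulation algorithm.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)

# Hypothetical colocated borehole data: log hydraulic conductivity (log_k)
# correlated with log electrical conductivity (log_s).
log_s = rng.normal(0.0, 1.0, size=200)
log_k = -0.8 * log_s + 0.3 * rng.standard_normal(200)

# Non-parametric multivariate kernel density of the joint distribution.
joint_kde = gaussian_kde(np.vstack([log_s, log_k]))

def sample_k_given_s(sigma_value, n=1, grid=np.linspace(-4, 4, 400)):
    """Draw log-K values from p(log_k | log_s = sigma_value)."""
    # Evaluate the joint density along a log-K grid at the given log_s;
    # normalizing yields the conditional density on that grid.
    pts = np.vstack([np.full_like(grid, sigma_value), grid])
    density = joint_kde(pts)
    density /= density.sum()
    return rng.choice(grid, size=n, p=density)

# At a cell where only the ERT-derived log_s is known, draw conditional
# realizations of log_k instead of a single regression value.
draws = sample_k_given_s(sigma_value=1.2, n=1000)
print(f"conditional mean {draws.mean():.2f}, std {draws.std():.2f}")
```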

Relevance:

20.00%

Publisher:

Abstract:

Avalanche forecasting is a complex process involving the assimilation of multiple data sources to make predictions over varying spatial and temporal resolutions. Numerically assisted forecasting often uses nearest neighbour (NN) methods, which are known to have limitations when dealing with high-dimensional data. We apply support vector machines (SVMs), a family of theoretically grounded machine-learning techniques designed to deal with high-dimensional data, to a dataset from Lochaber, Scotland, to assess their applicability to avalanche forecasting. Initial experiments showed that SVMs gave results comparable with NN methods for categorical and probabilistic forecasts. Experiments utilising the ability of SVMs to deal with high dimensionality in producing a spatial forecast show promise, but require further work.
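
As a rough illustration of such a comparison, the sketch below benchmarks a nearest-neighbour classifier against an RBF-kernel SVM on synthetic stand-ins for high-dimensional meteorological and snowpack features; the features and labels are hypothetical, not the Lochaber data.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for forecasting data: many correlated weather/snowpack
# features, binary label = avalanche day / non-avalanche day.
X, y = make_classification(n_samples=600, n_features=30, n_informative=10,
                           weights=[0.8, 0.2], random_state=0)

# Nearest-neighbour baseline, as in traditional numerically assisted
# forecasting; distances become less informative in high dimensions.
nn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=10))

# SVM with an RBF kernel; probability=True enables probabilistic forecasts.
svm = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", C=1.0, probability=True,
                        random_state=0))

for name, model in [("NN", nn), ("SVM", svm)]:
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean ROC AUC = {scores.mean():.3f}")
```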

Relevance:

20.00%

Publisher:

Abstract:

This paper evaluates the reception of Léon Walras' ideas in Russia before 1920. Despite an unfavourable institutional context, Walras was read by Russian economists. On the one hand, Bortkiewicz and Winiarski, who lived outside Russia and had the opportunity to meet and correspond with Walras, were first-class readers and very good ambassadors for Walras' ideas; on the other, the economists living in Russia were more selective in their reading. They restricted themselves to Walras' Elements of Pure Economics, and in particular its theory of exchange, while ignoring its theory of production. We introduce a cultural argument to explain their selective reading. JEL classification numbers: B13, B19.

Relevance:

20.00%

Publisher:

Abstract:

Although cross-sectional diffusion tensor imaging (DTI) studies have revealed significant white matter changes in mild cognitive impairment (MCI), the utility of this technique in predicting further cognitive decline is debated. Thirty-five healthy controls (HC) and 67 MCI subjects with DTI baseline data were neuropsychologically assessed at one year. Among them, 40 were stable (sMCI; 9 single-domain amnestic, 7 single-domain frontal, 24 multiple-domain) and 27 progressive (pMCI; 7 single-domain amnestic, 4 single-domain frontal, 16 multiple-domain). Fractional anisotropy (FA) and longitudinal, radial, and mean diffusivity were measured using Tract-Based Spatial Statistics. Statistical analyses included group comparisons and individual classification of MCI cases using support vector machines (SVMs). FA was significantly higher in HC compared to MCI in a distributed network including the ventral part of the corpus callosum and right temporal and frontal pathways. There were no significant group-level differences between sMCI and pMCI, or between MCI subtypes, after correction for multiple comparisons. However, SVM analysis allowed for individual classification with accuracies up to 91.4% (HC versus MCI) and 98.4% (sMCI versus pMCI). When considering the MCI subgroups separately, the minimum SVM classification accuracy for stable versus progressive cognitive decline was 97.5%, in the multiple-domain MCI group. SVM analysis of DTI data provided highly accurate individual classification of stable versus progressive MCI regardless of MCI subtype, indicating that this method may become an easily applicable tool for early individual detection of MCI subjects evolving to dementia.
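
Individual classification of this kind typically trains an SVM on subject-wise diffusion measures and estimates accuracy by cross-validation, which matters at these sample sizes. The sketch below illustrates the approach with synthetic features standing in for the tract-wise FA values; the group effect, feature count, and linear kernel are assumptions for illustration only.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Synthetic stand-in: 67 MCI subjects x 500 tract-wise FA features,
# label 0 = stable (sMCI), 1 = progressive (pMCI).
n_stable, n_prog, n_feat = 40, 27, 500
X = rng.standard_normal((n_stable + n_prog, n_feat))
X[n_stable:, :20] += 0.8  # weak group effect in a few tracts
y = np.r_[np.zeros(n_stable), np.ones(n_prog)]

# Linear SVM with feature standardization; leave-one-out cross-validation
# estimates individual classification accuracy, appropriate for small
# samples like this one.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"Leave-one-out accuracy (sMCI vs pMCI): {acc:.1%}")
```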

Relevance:

20.00%

Publisher:

Abstract:

Scientific interest in the concept of alliance has been maintained and stimulated by repeated findings that a strong alliance is associated with a facilitative treatment process and favourable treatment outcome. However, because the alliance is not in itself a therapeutic technique, these findings were unsuccessful in bringing about significant improvements in clinical practice. An essential issue in modern psychotherapy research concerns the relation between common factors, which are known to explain great variance in empirical results, and the specific therapeutic techniques that are the primary basis of clinical training and practice. This pilot study explored sequences in therapist interventions over four sessions of brief psychodynamic investigation. It aims to determine whether patterns of interventions can be found during brief psychodynamic investigation and whether these patterns can be associated with differences in the therapeutic alliance. Therapist interventions were coded using the Psychodynamic Intervention Rating Scale (PIRS), which classifies each therapist utterance into one of nine categories of interpretive interventions (defence interpretation, transference interpretation), supportive interventions (question, clarification, association, reflection, supportive strategy), or interventions about the therapeutic frame (work-enhancing statement, contractual arrangement). Data analysis was done using lag sequential analysis, a statistical procedure that identifies contingent relationships in time among a large number of behaviours. The sample includes N = 20 therapist-patient dyads assigned to three groups: (1) a high and stable alliance profile, (2) a low and stable alliance profile, and (3) an improving alliance profile. Results suggest that therapists most often have one single intention when interacting with patients. Large sequences of questions, associations, and clarifications were found, indicating that if a therapist asks a question, clarifies, or associates, there is a significant probability that he or she will continue doing so. A single-theme sequence involving frame interventions was also observed. These sequences were found in all three alliance groups. One exception was found for mixed sequences of interpretations and supportive interventions. The simultaneous use of these two interventions was associated with a high or an improving alliance over the course of treatment, but not with a low and stable alliance, where only single-theme sequences of interpretations were found. In other words, in this last group, therapists were either supportive or interpretive, whereas with a high or improving alliance, interpretations were always given along with supportive interventions. This finding provides evidence that examining therapist interpretations in isolation can only yield incomplete findings: how interpretations are given is important for alliance building. It also suggests that therapists should carefully dose their interpretations and be supportive when necessary in order to build a strong therapeutic alliance. From a research point of view, studying technical interventions requires attention to dynamic variables such as dosage, the supportive quality of an intervention, and timing.
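
At lag 1, lag sequential analysis reduces to estimating transition probabilities between coded categories and testing whether particular transitions occur more often than chance. The sketch below illustrates that computation on a hypothetical PIRS-style coded sequence; the category labels and the toy sequence are illustrative, not study data.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical PIRS-coded sequence of therapist utterances in one session.
codes = ["question", "clarification", "question", "association",
         "interpretation", "supportive", "interpretation", "question",
         "clarification", "clarification", "supportive", "interpretation"]

# Lag-1 transition counts: rows = current code, columns = following code.
transitions = pd.crosstab(pd.Series(codes[:-1], name="given"),
                          pd.Series(codes[1:], name="next"))

# Conditional probabilities p(next | given); large diagonal entries are
# the single-theme sequences (e.g. question -> question) reported above.
probs = transitions.div(transitions.sum(axis=1), axis=0)
print(probs.round(2))

# An omnibus chi-square test asks whether transitions depart from chance;
# real analyses would also inspect adjusted residuals per cell and use far
# longer sequences than this toy example.
chi2, p, dof, _ = chi2_contingency(transitions)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```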