986 results for Stochastic Approach


Relevance:

100.00%

Publisher:

Abstract:

We present a stochastic approach for solving the quantum-kinetic equation introduced in Part I. A Monte Carlo method based on backward time evolution of the numerical trajectories is developed. The computational complexity and the stochastic error are investigated numerically. Variance reduction techniques are applied and demonstrate a clear advantage over approaches based on symmetry transformation. A parallel implementation is realized on a GRID infrastructure.
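
The abstract does not reproduce the quantum-kinetic equation or the backward-trajectory estimator, so the following is only a minimal sketch of the variance-reduction idea on a toy Monte Carlo integral, using antithetic variates as a stand-in technique; the integrand and sample size are illustrative assumptions.

```python
# Minimal sketch (not the paper's algorithm): antithetic-variates variance
# reduction for a toy Monte Carlo estimate of I = E[f(U)], U ~ Uniform(0, 1).
import numpy as np

def f(u):
    return np.exp(-u**2)          # toy integrand; the paper's kernel is not given

rng = np.random.default_rng(0)
n = 50_000
u = rng.random(n)

plain = f(u)                       # crude estimator
anti = 0.5 * (f(u) + f(1.0 - u))   # antithetic pairing reduces variance

print("plain     :", plain.mean(), "+/-", plain.std(ddof=1) / np.sqrt(n))
print("antithetic:", anti.mean(),  "+/-", anti.std(ddof=1) / np.sqrt(n))
```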

Relevance:

100.00%

Publisher:

Abstract:

Introduction: According to the ecological view, coordination is established by virtue of the social context. Affordances, thought of as situational opportunities to interact, are assumed to represent the guiding principles underlying decisions involved in interpersonal coordination. It is generally agreed that affordances are not an objective part of the (social) environment but depend on the constructive perception of the subjects involved. Theory and empirical data hold that cognitive operations enabling domain-specific efficacy beliefs are involved in the perception of affordances. The aim of the present study was to test the effects of these cognitive concepts on the subjective construction of local affordances and their influence on decision making in football. Methods: 71 football players (M = 24.3 years, SD = 3.3, 21% women) from different divisions participated in the study. Participants were presented with scenarios of offensive game situations. They were asked to take the perspective of the player on the ball and to indicate where they would pass the ball from within each situation. The participants stated their decisions in two conditions with different game scores (1:0 vs. 0:1). The playing fields of all scenarios were then divided into ten zones. For each zone, participants were asked to rate their confidence in being able to pass the ball there (self-efficacy), the likelihood of the group retaining ball possession if the ball were passed into the zone (group-efficacy I), the likelihood of the ball being controlled safely by a team member (pass control / group-efficacy II), and whether a pass would establish a better initial position to attack the opponents' goal (offensive convenience). Answers were reported on visual analog scales ranging from 1 to 10. Data were analyzed with generalized linear models for binomially distributed data (Mplus). Maximum likelihood estimation with standard errors robust to non-normality was used to estimate parameters. Results: Analyses showed that zone- and domain-specific efficacy beliefs significantly affected passing decisions. Because of collinearity with self-efficacy and group-efficacy I, group-efficacy II was excluded from the models to ease interpretation of the results. In general, zones with high values in the subjective ratings had a higher probability of being chosen as the passing destination (βself-efficacy = 0.133, p < .001, OR = 1.142; βgroup-efficacy I = 0.128, p < .001, OR = 1.137; βoffensive convenience = 0.057, p < .01, OR = 1.059). There were, however, characteristic differences between the two score conditions. While group-efficacy I was the only significant predictor in condition 1 (βgroup-efficacy I = 0.379, p < .001), only self-efficacy and offensive convenience contributed to passing decisions in condition 2 (βself-efficacy = 0.135, p < .01; βoffensive convenience = 0.120, p < .001). Discussion: The results indicate that subjectively distinct attributes projected onto playfield zones affect passing decisions. The study proposes a probabilistic alternative to Lewin's (1951) hodological and deterministic field theory and provides insight into how dimensions of the psychological landscape afford passing behavior. For members of a team, this psychological landscape is constituted not only by probabilities referring to the potential and consequences of individual behavior, but also by those referring to the group system of which the individuals are part. Hence, in regulating action decisions in group settings, the informational sources extend to aspects of the group level.
References: Lewin, K. (1951). In D. Cartwright (Ed.), Field theory in social sciences: Selected theoretical papers by Kurt Lewin. New York: Harper & Brothers.
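
The reported analysis was run in Mplus; as a rough illustration of the same kind of binomial (logistic) model of zone choice, the sketch below fits a logistic regression on synthetic ratings. All data, sample sizes and coefficient values here are invented for illustration and are not the study's data.

```python
# Hedged sketch: a binomial (logistic) model of zone choice, analogous in spirit
# to the Mplus analysis described above. Data are synthetic, not the study's.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 710                                  # e.g. 71 players x 10 zones (illustrative)
self_eff  = rng.uniform(1, 10, n)        # ratings on the 1-10 visual analog scale
group_eff = rng.uniform(1, 10, n)
offensive = rng.uniform(1, 10, n)

# Synthetic choice outcome loosely following the reported effect directions.
logit = -4 + 0.13 * self_eff + 0.13 * group_eff + 0.06 * offensive
chosen = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([self_eff, group_eff, offensive]))
res = sm.Logit(chosen, X).fit(disp=False)
print(res.params)            # coefficients (log-odds)
print(np.exp(res.params))    # odds ratios, comparable in form to the ORs above
```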

Relevance:

100.00%

Publisher:

Abstract:

2002 Mathematics Subject Classification: 65C05

Relevance:

100.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: 62P99, 68T50

Relevance:

70.00%

Publisher:

Abstract:

It is a well-known result that Feynman's path integral (FPI) approach to quantum mechanics is equivalent to Schrödinger's equation when the Wiener-Lebesgue measure is used as the integration measure. This has little practical applicability due to the great algebraic complexity involved, and in fact almost all applications of the FPI - "practical calculations" - are done using a Riemann measure. In this paper we present an expansion of the FPI to all orders in time, in a quest for a representation of the latter solely in terms of differentiable trajectories and the Riemann measure. We show that this expansion agrees with a similar expansion obtained from Schrödinger's equation only up to first order in a Riemann-integral context, although by chance both expansions agree for the free-particle and harmonic-oscillator cases. From the mathematical point of view, our results make it possible to estimate the errors made in the "practical" calculations of the FPI appearing in the literature; from the physical point of view, they support the stochastic approach to the problem.
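
For reference, the "Riemann measure" discretization the abstract refers to is the standard time-sliced form of the one-dimensional propagator; this is the textbook expression, not the paper's all-orders expansion.

```latex
% Standard time-sliced form of the Feynman propagator, with
% \varepsilon = (t_b - t_a)/N, x_0 = x_a and x_N = x_b:
K(x_b, t_b; x_a, t_a) \;=\; \lim_{N \to \infty}
  \left( \frac{m}{2\pi i \hbar \varepsilon} \right)^{N/2}
  \int \prod_{j=1}^{N-1} dx_j \,
  \exp\!\left[ \frac{i}{\hbar} \sum_{j=0}^{N-1} \varepsilon
    \left( \frac{m}{2} \left( \frac{x_{j+1} - x_j}{\varepsilon} \right)^{2}
           - V(x_j) \right) \right].
```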

Relevance:

70.00%

Publisher:

Abstract:

Over the past few years, the field of global optimization has been very active, producing different kinds of deterministic and stochastic algorithms for optimization in the continuous domain. The use of evolutionary algorithms (EAs) to solve optimization problems is now common practice due to their competitive performance on complex search spaces, and EAs are well known for their ability to deal with nonlinear and complex optimization problems. Differential evolution (DE) algorithms are a family of evolutionary optimization techniques that use a rather greedy and less stochastic approach to problem solving than classical evolutionary algorithms. The main idea is to construct, at each generation and for each element of the population, a mutant vector through a specific mutation operation that adds the difference between randomly selected elements of the population to another element. Owing to its simple implementation, minimal mathematical processing and good optimization capability, DE has attracted attention. This paper proposes a new approach for solving electromagnetic design problems that combines the DE algorithm with a generator of chaotic sequences. The approach is tested on the design of a loudspeaker model with 17 degrees of freedom to show its applicability to electromagnetic problems. The results show that the DE algorithm with chaotic sequences yields better, or at least similar, results compared with the standard DE algorithm and other evolutionary algorithms available in the literature.
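
The sketch below shows a minimal DE/rand/1/bin loop in which the mutation and crossover random numbers come from a chaotic logistic map, which is one common way to combine DE with chaotic sequences; it is not the paper's exact variant. The loudspeaker model is not available, so a sphere function stands in as the 17-variable objective, and index selection stays pseudo-random for brevity.

```python
# Sketch of differential evolution (DE/rand/1/bin) driven by a chaotic logistic map.
import numpy as np

def objective(x):
    return np.sum(x**2)                 # placeholder for the 17-DOF loudspeaker design

def logistic_map(x):
    return 4.0 * x * (1.0 - x)          # fully chaotic regime of the logistic map

dim, pop_size, gens, F, CR = 17, 40, 200, 0.8, 0.9
chaos = 0.7                             # chaotic seed (avoid fixed points 0, 0.25, 0.5, 0.75, 1)

def crand():
    """Next value of the chaotic sequence, used in place of a PRNG draw."""
    global chaos
    chaos = logistic_map(chaos)
    return chaos

pop = np.array([[crand() * 10 - 5 for _ in range(dim)] for _ in range(pop_size)])
fit = np.array([objective(p) for p in pop])

for _ in range(gens):
    for i in range(pop_size):
        # pick three distinct population members (pseudo-random here for brevity)
        a, b, c = np.random.choice([j for j in range(pop_size) if j != i], 3, replace=False)
        mutant = pop[a] + F * (pop[b] - pop[c])              # difference-based mutation
        trial = np.where([crand() < CR for _ in range(dim)], mutant, pop[i])  # binomial crossover
        f_trial = objective(trial)
        if f_trial < fit[i]:                                 # greedy selection
            pop[i], fit[i] = trial, f_trial

print("best objective value:", fit.min())
```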

Relevance:

70.00%

Publisher:

Abstract:

This report presents the development of a Stochastic Knock Detection (SKD) method for combustion knock detection in a spark-ignition engine using a model-based design approach. A Knock Signal Simulator (KSS) was developed as the plant model for the engine. The KSS generates cycle-to-cycle accelerometer knock intensities following a stochastic approach: intensities are drawn by a Monte Carlo method from a lognormal distribution whose parameters were predetermined from engine tests and depend on spark timing, engine speed and load. The lognormal distribution has been shown in previous studies to be a good approximation to the distribution of measured knock intensities over a range of engine conditions and spark timings for multiple engines. The SKD method is implemented in a Knock Detection Module (KDM), which processes the knock intensities generated by the KSS with a stochastic distribution estimation algorithm and outputs estimates of the high and low knock intensity levels, which characterize the knock and reference levels, respectively. These estimates are then used to determine a knock factor, which provides a quantitative measure of the knock level and can be used as a feedback signal to control engine knock. The knock factor is analyzed and compared with a traditional knock detection method under various engine operating conditions. To verify the effectiveness of the SKD method, a knock controller was also developed and tested in a model-in-the-loop (MIL) system. The objective of the knock controller is to allow the engine to operate as close as possible to its borderline spark timing without significant engine knock. The controller parameters were tuned to minimize the cycle-to-cycle variation in spark timing and the settling time of the controller in responding to a step increase in spark advance resulting in the onset of engine knock. The simulation results showed that the combined system can adequately model engine knock and evaluate knock control strategies for a wide range of engine operating conditions.
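
A hedged sketch of the KSS idea follows: knock intensities are drawn from a lognormal distribution by Monte Carlo and simple low/high levels are estimated from the samples. The distribution parameters and the percentile-based level estimates are illustrative assumptions, not the report's calibrated values or its exact estimation algorithm.

```python
# Sketch: lognormal cycle-to-cycle knock intensities and a simple knock factor.
import numpy as np

rng = np.random.default_rng(42)

def knock_intensities(n_cycles, mu, sigma):
    """Monte Carlo generation of accelerometer knock intensities (lognormal)."""
    return rng.lognormal(mean=mu, sigma=sigma, size=n_cycles)

samples = knock_intensities(n_cycles=2000, mu=0.0, sigma=0.8)

low_level  = np.percentile(samples, 50)   # stand-in for the reference (low) level
high_level = np.percentile(samples, 95)   # stand-in for the knock (high) level
knock_factor = high_level / low_level     # simple quantitative measure of knock

print(f"low={low_level:.3f}  high={high_level:.3f}  knock factor={knock_factor:.2f}")
```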

Relevance:

70.00%

Publisher:

Abstract:

With the observation that stochasticity is important in biological systems, stochastic chemical kinetics has begun to receive wider interest. While Monte Carlo discrete-event simulations most accurately capture the variability of molecular species, they become computationally costly for complex reaction-diffusion systems with large populations of molecules. Continuous-time models, on the other hand, are computationally efficient but fail to capture any variability in the molecular species. In this study a hybrid stochastic approach is introduced for simulating reaction-diffusion systems. We developed an adaptive partitioning strategy in which processes with high frequency are simulated with deterministic rate-based equations, and those with low frequency with the exact stochastic algorithm of Gillespie. The stochastic behavior of cellular pathways is therefore preserved while the method remains applicable to large populations of molecules. We describe the method and demonstrate its accuracy and efficiency compared with the Gillespie algorithm for two different systems: first, a model of intracellular viral kinetics with two steady states, and second, a compartmental model of the postsynaptic spine head for studying the dynamics of Ca2+ and NMDA receptors.
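
As a point of reference for the "low-frequency" branch of the hybrid scheme, here is a minimal sketch of Gillespie's exact stochastic simulation algorithm for a toy reversible reaction A <-> B; the adaptive partitioning itself and the diffusion part are omitted, and the rate constants are arbitrary.

```python
# Gillespie SSA sketch for A <-> B with rate constants k1 (A->B) and k2 (B->A).
import numpy as np

rng = np.random.default_rng(7)
k1, k2 = 1.0, 0.5
A, B = 100, 0
t, t_end = 0.0, 10.0

while t < t_end:
    a1, a2 = k1 * A, k2 * B          # propensities of A -> B and B -> A
    a0 = a1 + a2
    if a0 == 0:
        break
    t += rng.exponential(1.0 / a0)   # exponentially distributed time to next event
    if rng.random() < a1 / a0:       # choose which reaction fires
        A, B = A - 1, B + 1
    else:
        A, B = A + 1, B - 1

print(f"t={t:.2f}  A={A}  B={B}")
```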

Relevance:

70.00%

Publisher:

Abstract:

For analysing financial time series, two main opposing viewpoints exist: either capital markets are completely stochastic and prices therefore follow a random walk, or they are deterministic and consequently predictable. For each of these views a great variety of tools exists with which one can attempt to confirm the corresponding hypothesis. Unfortunately, these methods are not well suited to data characterised in part by both paradigms. This thesis investigates the two approaches in order to model the behaviour of financial time series. In the deterministic framework, methods are used to characterise the dimensionality of embedded financial data. The stochastic approach comprises an estimation of the unconditional and conditional return distributions using parametric, non-parametric and semi-parametric density estimation techniques. Finally, it is shown how elements from the two approaches can be combined to achieve a more realistic model for financial time series.
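
To make the density-estimation side concrete, the following sketch fits an unconditional return distribution both parametrically (Gaussian) and non-parametrically (kernel density). The returns are simulated heavy-tailed data, not market data, and the thesis' semi-parametric and conditional estimators are not reproduced.

```python
# Parametric vs non-parametric estimation of a return distribution (illustrative).
import numpy as np
from scipy.stats import norm, gaussian_kde

rng = np.random.default_rng(3)
returns = rng.standard_t(df=4, size=2500) * 0.01     # heavy-tailed toy returns

mu, sigma = returns.mean(), returns.std(ddof=1)      # parametric (Gaussian) fit
kde = gaussian_kde(returns)                          # non-parametric (kernel) fit

x = np.linspace(returns.min(), returns.max(), 5)
print("Gaussian density:", norm.pdf(x, mu, sigma))
print("KDE density     :", kde(x))
```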

Relevance:

70.00%

Publisher:

Abstract:

Integrated supplier selection and order allocation is an important decision for both designing and operating supply chains. This decision is often influenced by the concerned stakeholders, suppliers, plant operators and customers in different tiers. As firms continue to seek competitive advantage through supply chain design and operations, they aim to create optimized supply chains. This calls for, on the one hand, consideration of multiple conflicting criteria and, on the other, consideration of uncertainties in demand and supply. Although there are studies on supplier selection using advanced mathematical models covering a stochastic approach, multiple-criteria decision-making techniques and multiple stakeholder requirements separately, to the authors' knowledge no work integrates these three aspects in a common framework. This paper proposes an integrated method for dealing with such problems using a combined Analytic Hierarchy Process-Quality Function Deployment (AHP-QFD) and chance-constrained optimization approach that selects appropriate suppliers and allocates orders optimally between them. The effectiveness of the proposed decision support system has been demonstrated through application and validation in the bioenergy industry.
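
As a toy illustration of the chance-constrained allocation step only (the AHP-QFD scoring of stakeholder requirements is omitted), the sketch below allocates orders to two suppliers at minimum cost while meeting a normally distributed demand with 95% probability, using the standard deterministic equivalent of the chance constraint. All costs, capacities and demand parameters are hypothetical.

```python
# Chance-constrained order allocation via its deterministic equivalent.
import numpy as np
from scipy.optimize import linprog
from scipy.stats import norm

cost = np.array([4.0, 5.0])            # unit cost of supplier 1 and supplier 2
capacity = np.array([600.0, 800.0])    # supplier capacities
mu_d, sigma_d = 900.0, 80.0            # demand ~ N(mu_d, sigma_d^2)
service = 0.95

# P(x1 + x2 >= D) >= 0.95  <=>  x1 + x2 >= mu_d + z_0.95 * sigma_d
demand_req = mu_d + norm.ppf(service) * sigma_d

res = linprog(c=cost,
              A_ub=[[-1.0, -1.0]], b_ub=[-demand_req],   # encodes x1 + x2 >= demand_req
              bounds=[(0, capacity[0]), (0, capacity[1])])
print("allocation:", res.x, "total cost:", res.fun)
```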

Relevance:

60.00%

Publisher:

Abstract:

Applied econometricians often fail to impose economic regularity constraints in the exact form economic theory prescribes. We show how the Singular Value Decomposition (SVD) theorem and Markov Chain Monte Carlo (MCMC) methods can be used to rigorously impose time- and firm-varying equality and inequality constraints. To illustrate the technique we estimate a system of translog input demand functions subject to all the constraints implied by economic theory, including observation-varying symmetry and concavity constraints. Results are presented as characteristics of the estimated posterior distributions of functions of the parameters. Copyright (C) 2001 John Wiley & Sons, Ltd.
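
The following sketch only illustrates the general mechanism of imposing an inequality constraint inside MCMC: a random-walk Metropolis sampler for a single regression slope restricted to be non-negative, with constraint-violating proposals rejected through the prior. The paper works with a full translog system and an SVD reparametrization, which are not reproduced; the data here are synthetic.

```python
# Random-walk Metropolis with an inequality constraint imposed via the prior support.
import numpy as np

rng = np.random.default_rng(11)
n = 200
x = rng.normal(size=n)
y = 0.6 * x + rng.normal(scale=1.0, size=n)    # synthetic data, true slope 0.6

def log_post(beta):
    if beta < 0:                                # inequality constraint: beta >= 0
        return -np.inf
    resid = y - beta * x
    return -0.5 * np.sum(resid**2)              # Gaussian likelihood, flat prior on [0, inf)

beta, lp, draws = 0.1, None, []
lp = log_post(beta)
for _ in range(20_000):
    prop = beta + rng.normal(scale=0.1)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:     # Metropolis accept/reject step
        beta, lp = prop, lp_prop
    draws.append(beta)

print("posterior mean of constrained slope:", np.mean(draws[5000:]))
```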

Relevance:

60.00%

Publisher:

Abstract:

There are two principal chemical concepts that are important for studying the natural environment. The first is thermodynamics, which describes whether a system is at equilibrium or can spontaneously change by chemical reactions. The second is how fast chemical reactions take place (kinetics, or the rate of chemical change) once they start. In this work we examine a natural system in which both thermodynamic and kinetic factors are important in determining the abundance of NH4+, NO2− and NO3− in superficial waters. Samples were collected in the Arno Basin (Tuscany, Italy), a system in which both natural and anthropic effects contribute to strongly modify the chemical composition of the water. Thermodynamic modelling based on the reduction-oxidation reactions involved in the passage NH4+ → NO2− → NO3− under equilibrium conditions has allowed us to determine the Eh redox potential values able to characterise the state of each sample and, consequently, of the fluid environment from which it was drawn. Just as pH expresses the concentration of H+ in solution, the redox potential expresses the tendency of an environment to receive or supply electrons. In this context, oxic environments, such as those of river systems, are said to have a high redox potential because O2 is available as an electron acceptor. Principles of thermodynamics and chemical kinetics yield a model that often does not completely describe the reality of natural systems. Chemical reactions may indeed fail to achieve equilibrium because the products escape from the site of the reaction, or because the reactions involved in the transformation are very slow, so that non-equilibrium conditions persist for long periods. Moreover, reaction rates can be sensitive to poorly understood catalytic effects or to surface effects, while variables such as concentration (a large number of chemical species can coexist and interact concurrently), temperature and pressure can have large gradients in natural systems. Taking this into account, data from 91 water samples were modelled using statistical methodologies for compositional data. The application of log-contrast analysis has allowed us to obtain statistical parameters to be correlated with the calculated Eh values. In this way, natural conditions in which chemical equilibrium is hypothesised, as well as the underlying fast reactions, are compared with those described by a stochastic approach.
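
A minimal sketch of a log-contrast (centered log-ratio) transformation of compositional data, the kind of preprocessing referred to above, is given below. The three-part composition and the sample values are illustrative only; they are not the Arno Basin measurements.

```python
# Centered log-ratio (clr) transform of a toy three-part water-chemistry composition.
import numpy as np

# Each row: relative abundances of (NH4+, NO2-, NO3-) in one sample, summing to 1.
comp = np.array([
    [0.10, 0.05, 0.85],
    [0.30, 0.10, 0.60],
    [0.05, 0.02, 0.93],
])

def clr(x):
    """Log of each part minus the log of the row-wise geometric mean."""
    logx = np.log(x)
    return logx - logx.mean(axis=1, keepdims=True)

print(clr(comp))   # clr coordinates can then be correlated with calculated Eh values
```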

Relevance:

60.00%

Publisher:

Abstract:

There appear to be unmet reciprocal expectations in initial training for mathematics teaching. Seeking to understand the genesis of these expectations, we focused on the vision that students hold of teaching phenomena. Having postulated that students hold a deterministic vision of these phenomena, and considering that their anticipation shapes their training project, we addressed the problem of the encounter between the projects of students and those of their trainers. Two general objectives were formulated: the first concerns the description of students' training projects, while the second concerns the experimentation of a sequence of situations likely to make those projects evolve. The research was carried out with 58 students in the baccalauréat en enseignement en adaptation scolaire et sociale program of a single university who were beginning their initial training in mathematics teaching. To explore the projects they hold a priori, all students completed an individual questionnaire on their vision of mathematics and its teaching and took part in a first group discussion on the subject. A sequence of probabilistic situations was then presented to them in order to induce a complexification of their projects. Finally, this experimentation was followed by a second group discussion and complemented by eight individual interviews. It was shown that the majority of the students interviewed wish above all to develop as teachers, by building their capacity to teach and to help students learn or understand mathematics. Although some aims fall within a transmissive perspective, these do not appear representative of the whole set of "aim" projects. Moreover, even if most of the students interviewed plan to develop knowledge of teaching techniques and methods, the sensitivity to complexity shown by some projects no longer allows students' expectations of their training to be reduced to the mere constitution of a repertoire of teaching techniques reputed to be effective. As regards the modes of anticipation identified a priori, our results highlight anticipations attached first to an adaptive mode and then to a forecasting mode; no anticipation attached to a prospective mode was recorded a priori. The sequence allowed the students to engage in a dialectic of action, formulation and validation; it prompted them to resort to a stochastic approach and to make probability judgments that take the complexity of the situation into account. A posteriori, we observed that the "aim" projects of some students became more complex. We also noted a broadening of the majority of projects, which now take the other vertices of the didactic triangle into consideration. Finally, anticipations attached to all modes of anticipation were identified. Anticipations made through a prospective mode make it possible to identify zones of uncertainty and freedom on which it is possible to act in order to increase sensitivity to the complexity of the professional situations in which future teachers will have to position themselves.

Relevance:

60.00%

Publisher:

Abstract:

The thesis begins with a review of the basic elements of the general theory of relativity (GTR), which forms the basis for the theoretical interpretation of observations in cosmology. The first chapter also discusses the standard model in cosmology, namely the Friedmann model, its predictions and its problems, together with a brief discussion of fractals and inflation in the early universe. The second chapter presents the formulation of a new, stochastic approach to cosmology. In this model, the dynamics of the early universe is described by a set of non-deterministic, Langevin-type equations, and we derive the solutions using the Fokker-Planck formalism. Here we demonstrate how the problems of the standard model can be eliminated by introducing stochastic fluctuations in the early universe. Many recent observations indicate that the present universe may be approximated by a multi-component fluid, and we assume that only the total energy density is conserved. This, in turn, leads to energy transfer between different components of the cosmic fluid, and fluctuations in such energy transfer can induce fluctuations in the mean value of the w factor in the equation of state p = wρ, resulting in a fluctuating expansion rate for the universe. The third chapter discusses the stochastic evolution of the cosmological parameters in the early universe using the new approach. The penultimate chapter concerns refinements to be made to the present model by means of a new deterministic model. The concluding chapter discusses other problems with conventional cosmology, such as the fractal correlation of the galactic distribution, and the author attempts an explanation of this problem using the stochastic approach.
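
As a purely illustrative sketch of the Langevin idea (the thesis' actual stochastic equations are not reproduced), the code below integrates an Ornstein-Uhlenbeck-type Langevin equation for a fluctuating equation-of-state parameter w(t) with the Euler-Maruyama scheme; the relaxation rate, mean value and noise amplitude are assumptions.

```python
# Euler-Maruyama integration of a Langevin equation for a fluctuating w(t).
import numpy as np

rng = np.random.default_rng(5)
theta, w_bar, noise = 1.0, 1.0 / 3.0, 0.05   # relaxation rate, mean w, noise amplitude
dt, steps = 1e-3, 10_000

w = np.empty(steps)
w[0] = w_bar
for i in range(1, steps):
    drift = -theta * (w[i - 1] - w_bar)                     # deterministic pull toward w_bar
    w[i] = w[i - 1] + drift * dt + noise * np.sqrt(dt) * rng.normal()

print("mean w:", w.mean(), "std of w:", w.std())
```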

Relevance:

60.00%

Publisher:

Abstract:

A part-of-speech (POS) tagger for Malayalam that uses a stochastic approach is proposed. The tagger makes use of word frequencies and bigram statistics from a corpus. A morphological analyzer is used to generate a tagged corpus, owing to the unavailability of an annotated corpus for Malayalam. Although the experiments have been performed on a very small corpus, the results show that the statistical approach works well with a highly agglutinative language like Malayalam.
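
A hedged sketch of this kind of bigram tagger follows: word emission frequencies plus tag-bigram statistics decoded with Viterbi. The toy counts, tag set and words are invented for illustration and are not the Malayalam corpus statistics used in the paper.

```python
# Minimal bigram (HMM-style) POS tagger with add-k smoothing and Viterbi decoding.
import math

# Toy corpus statistics (counts), normally estimated from a tagged corpus.
tag_bigram = {("<s>", "N"): 8, ("<s>", "V"): 2, ("N", "V"): 6, ("N", "N"): 4,
              ("V", "N"): 5, ("V", "V"): 1}
emission = {("N", "kutti"): 5, ("N", "pustakam"): 4, ("V", "vayikkunnu"): 6}
tags = ["N", "V"]

def log_prob(table, key, smooth=0.5, vocab=10):
    """Smoothed log-probability of a (context, event) count in a count table."""
    return math.log((table.get(key, 0) + smooth) / (sum(table.values()) + smooth * vocab))

def viterbi(words):
    best = {t: (log_prob(tag_bigram, ("<s>", t)) + log_prob(emission, (t, words[0])), [t])
            for t in tags}
    for w in words[1:]:
        new = {}
        for t in tags:
            score, path = max(
                (best[p][0] + log_prob(tag_bigram, (p, t)) + log_prob(emission, (t, w)),
                 best[p][1]) for p in tags)
            new[t] = (score, path + [t])
        best = new
    return max(best.values())[1]

print(viterbi(["kutti", "pustakam", "vayikkunnu"]))   # expected: ['N', 'N', 'V']
```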