995 results for Statistical decision



Abstract:

The Everett interpretation of quantum mechanics is an increasingly popular alternative to the traditional Copenhagen interpretation, but a few major issues prevent its widespread adoption. One of these is the origin of probabilities in the Everett interpretation, which this thesis will attempt to survey. The most successful resolution of the probability problem thus far is the decision-theoretic program, which attempts to frame probabilities as outcomes of rational decision making. This marks a departure from orthodox interpretations of probability in the physical sciences, where probabilities are thought to be objective, stemming from symmetry considerations. This thesis will attempt to evaluate the decision-theoretic program.


Abstract:

A review is presented of the statistical bootstrap model of Hagedorn and Frautschi. This model is an attempt to apply the methods of statistical mechanics in high-energy physics, while treating all hadron states (stable or unstable) on an equal footing. A statistical calculation of the resonance spectrum on this basis leads to an exponentially rising level density ρ(m) ~ c m⁻³ e^(β₀m) at high masses.

In the present work, explicit formulae are given for the asymptotic dependence of the level density on quantum numbers, in various cases. Hamer and Frautschi's model for a realistic hadron spectrum is described.

A statistical model for hadron reactions is then put forward, analogous to the Bohr compound nucleus model in nuclear physics, which makes use of this level density. Some general features of resonance decay are predicted. The model is applied to the process of NN̄ annihilation at rest with overall success, and explains the high final-state pion multiplicity, together with the low individual branching ratios into two-body final states, which are characteristic of the process. For more general reactions, the model needs modification to take account of correlation effects. Nevertheless it is capable of explaining the phenomenon of limited transverse momenta, and the exponential decrease in the production frequency of heavy particles with their mass, as shown by Hagedorn. Frautschi's results on "Ericson fluctuations" in hadron physics are outlined briefly. The value of β₀ required in all these applications is consistently around (120 MeV)⁻¹, corresponding to a "resonance volume" whose radius is very close to ƛπ. The construction of a "multiperipheral cluster model" for high-energy collisions is advocated.
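The dominance of the exponential factor in the level density can be illustrated numerically. The sketch below (plain Python; the constant c = 1 and the sample masses are illustrative, with β₀ = 1/(120 MeV) as quoted above) shows how e^(β₀m) overwhelms the m⁻³ prefactor at high mass:

```python
import math

def level_density(m, c=1.0, beta0=1.0 / 120.0):
    """Asymptotic hadron level density rho(m) ~ c * m**-3 * exp(beta0 * m).

    m is a mass in MeV; beta0 = 1/(120 MeV) as in the text; c is an
    overall constant set to 1 purely for illustration.
    """
    return c * m ** -3 * math.exp(beta0 * m)

# Doubling the mass multiplies the exponential factor far more than the
# m**-3 prefactor suppresses it, so the density still rises steeply.
growth = level_density(2000.0) / level_density(1000.0)
```

For the masses chosen here the ratio is of order several hundred, which is the "exponentially rising level density" behaviour the model rests on.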


Abstract:

Parallel trials form a most important part of the technique of scientific experimentation. Such trials may be divided into two categories. In the first, the results are comparable measurements of one kind or another. In the second, the data consist of records of the number of times a certain 'event' has occurred in the two sets of trials compared. Only trials of the second category are dealt with here. This paper fully describes all the reliable methods of testing the results of such parallel trials for significance, with special reference to fishery research. Some sections relate to exact tests, others to approximate ones. The only advantage of the latter is that they are often the more expeditious; apart from this, it is always preferable to use exact methods.


Abstract:

Migrating to cloud computing is one of the current enterprise challenges. This technology provides a new paradigm based on "on-demand payment" for information and communication technologies. In this sense, small and medium enterprises are expected to be the most interested, since initial investments are avoided and the technology allows gradual implementation. However, even though its characteristics and capacities have been widely discussed, entry into the cloud still lacks practical, real-world frameworks. This paper aims to fill this gap by presenting a real tool, already implemented and tested, that can be used as a cloud computing adoption decision tool. The tool uses a diagnosis based on specific questions to gather the required information and subsequently provides the user with valuable information for deploying the business within the cloud, specifically in the form of Software as a Service (SaaS) solutions. This information allows decision makers to generate their particular Cloud Road. A pilot study has been carried out with enterprises at the local level, with a two-fold objective: to ascertain the degree of knowledge of cloud computing, and to identify the most interesting business areas and their related tools for this technology. As expected, the results show high interest in and low knowledge of this subject, and the tool presented aims to redress this mismatch insofar as possible. Copyright: © 2015 Bildosola et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
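The diagnosis-by-questionnaire idea can be sketched as a weighted score. Everything below, the questions, the weights, and the decision threshold, is a hypothetical illustration of the general approach, not the actual tool described in the paper:

```python
# Hypothetical readiness questionnaire: weighted yes/no questions
# produce a score that maps to advice on moving a business area to
# SaaS. Questions, weights, and threshold are invented for this sketch.
QUESTIONS = [
    ("Is your customer data already stored digitally?", 3),
    ("Do employees need access from outside the office?", 2),
    ("Can the process tolerate occasional connectivity outages?", 1),
    ("Is the budget for upfront IT investment limited?", 2),
]

def saas_readiness(answers):
    """answers: list of booleans, one per question, in order.
    Returns the fraction of the maximum attainable score."""
    score = sum(w for (_, w), yes in zip(QUESTIONS, answers) if yes)
    max_score = sum(w for _, w in QUESTIONS)
    return score / max_score

readiness = saas_readiness([True, True, False, True])
advice = "consider SaaS adoption" if readiness >= 0.6 else "reassess later"
```

A real diagnosis tool would weight questions per business area and map scores to a staged migration plan (the paper's "Cloud Road"); the scalar threshold here only stands in for that richer output.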


Abstract:

This work presents the basic elements of the analysis of decision under uncertainty: Expected Utility Theory and its criticisms, and risk aversion and its measurement. The concepts of certainty equivalent, risk premium, absolute risk aversion, relative risk aversion, and the "more risk averse than" relation are discussed. The work concludes with several applications of decision making under uncertainty to different economic problems: investment in risky assets and portfolio selection, risk sharing, investment to reduce risk, insurance, taxes and income underreporting, deposit insurance, and the value of information.
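The certainty equivalent and risk premium can be computed directly once a concrete utility function is fixed. The sketch below uses CRRA utility, for which relative risk aversion is the constant γ; the lottery and the choice γ = 2 are illustrative, not taken from the work:

```python
import math

def crra_utility(w, gamma):
    """CRRA utility u(w) = w**(1-gamma) / (1-gamma), log utility at
    gamma = 1. Relative risk aversion -w u''(w)/u'(w) equals gamma."""
    if gamma == 1:
        return math.log(w)
    return w ** (1 - gamma) / (1 - gamma)

def certainty_equivalent(outcomes, probs, gamma):
    """Wealth level whose utility equals the lottery's expected utility."""
    eu = sum(p * crra_utility(w, gamma) for w, p in zip(outcomes, probs))
    if gamma == 1:
        return math.exp(eu)                     # invert log utility
    return ((1 - gamma) * eu) ** (1 / (1 - gamma))  # invert CRRA utility

# Fair coin over wealth 50 or 150: expected value 100.
outcomes, probs = [50.0, 150.0], [0.5, 0.5]
expected_value = sum(p * w for w, p in zip(outcomes, probs))
ce = certainty_equivalent(outcomes, probs, gamma=2.0)
risk_premium = expected_value - ce  # positive for a risk-averse agent
```

Here the agent would accept 75 for certain in place of a lottery worth 100 on average, so the risk premium is 25; a more risk-averse agent (larger γ) has a lower certainty equivalent and a larger premium.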


Abstract:

We describe an age-structured statistical catch-at-length analysis (A-SCALA) based on the MULTIFAN-CL model of Fournier et al. (1998). The analysis is applied independently to both the yellowfin and the bigeye tuna populations of the eastern Pacific Ocean (EPO). We model the populations from 1975 to 1999, based on quarterly time steps. A single stock is assumed for each species in each analysis, but multiple spatially separate fisheries are modeled to allow for spatial differences in catchability and selectivity. The analysis allows for error in the effort-fishing mortality relationship, temporal trends in catchability, temporal variation in recruitment, relationships between the environment and recruitment and between the environment and catchability, and differences in selectivity and catchability among fisheries. The model is fit to total catch data and proportional catch-at-length data conditioned on effort. The A-SCALA method is a statistical approach, and therefore recognizes that the data collected from the fishery do not perfectly represent the population. Also, there is uncertainty in our knowledge about the dynamics of the system and uncertainty about how the observed data relate to the real population. The use of likelihood functions allows us to model the uncertainty in the data collected from the population, and the inclusion of estimable process error allows us to model the uncertainties in the dynamics of the system. The statistical approach allows for the calculation of confidence intervals and the testing of hypotheses. We use a Bayesian version of the maximum likelihood framework that includes distributional constraints on temporal variation in recruitment, the effort-fishing mortality relationship, and catchability. Curvature penalties for selectivity parameters and penalties on extreme fishing mortality rates are also included in the objective function.
The mode of the joint posterior distribution is used as an estimate of the model parameters. Confidence intervals are calculated using the normal approximation method. It should be noted that the estimation method includes constraints and priors and therefore the confidence intervals are different from traditionally calculated confidence intervals. Management reference points are calculated, and forward projections are carried out to provide advice for making management decisions for the yellowfin and bigeye populations.
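The estimation strategy described above, the mode of a penalized posterior plus normal-approximation intervals, can be illustrated on a toy problem. The sketch below estimates a single log-catchability parameter from Poisson catch-and-effort data; the data, the prior, and the one-parameter model are illustrative stand-ins for the full A-SCALA model:

```python
import math

# Toy version of the estimation strategy: catches C_i ~ Poisson(q * E_i)
# (catchability times effort). We estimate theta = log q with a normal
# prior penalty, take the posterior mode, and build a normal-
# approximation interval from the curvature at that mode.
# All numbers are illustrative, not data from the report.
effort = [10.0, 20.0, 15.0, 30.0]
catches = [12, 25, 14, 33]
prior_mean, prior_sd = 0.0, 1.0

def neg_log_posterior(theta):
    q = math.exp(theta)
    # Poisson negative log-likelihood (constant log C_i! terms dropped)
    nll = sum(q * e - c * (theta + math.log(e))
              for e, c in zip(effort, catches))
    penalty = 0.5 * ((theta - prior_mean) / prior_sd) ** 2  # normal prior
    return nll + penalty

def derivatives(theta, h=1e-5):
    """Central-difference gradient and curvature of the objective."""
    f0, fp, fm = (neg_log_posterior(t) for t in (theta, theta + h, theta - h))
    return (fp - fm) / (2 * h), (fp - 2 * f0 + fm) / h ** 2

# Posterior mode by Newton's method (the objective is convex in theta).
theta = 0.0
for _ in range(50):
    g, hess = derivatives(theta)
    theta -= g / hess

# Normal-approximation 95% interval from the curvature at the mode.
_, hess = derivatives(theta)
se = 1.0 / math.sqrt(hess)
ci = (theta - 1.96 * se, theta + 1.96 * se)
```

Because the prior pulls the mode slightly away from the raw maximum-likelihood estimate (total catch over total effort), the resulting interval is not a classical confidence interval, which is exactly the caveat the abstract makes about its own intervals.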


Abstract:

This thesis centers on the strategies voters employ to process political information, in the context of the 2006 Brazilian presidential campaign. We propose a statistical model of political information processing, built on contributions from the social sciences, economics, cognitive psychology, and communication studies, and above all on the evidence drawn from our research design. That design combined qualitative and quantitative methods with an analysis of the rhetorical strategies employed by candidates and political parties in free electoral broadcasting time (Horário Gratuito de Propaganda Eleitoral, HGPE), the dynamic element of our study, as it synthesizes the information flows in the campaign environment. This set of methodological approaches was applied to a case study of voters in Belo Horizonte, embedded in the complex informational environment of the presidential campaigns. With incomplete information, voters had to choose whom to believe, dealing with uncertainty both about the election's outcome and about the actors' future behavior, aware that campaign rhetoric was oriented toward persuasion. Our work sought to map the strategies voters employed in selecting debate topics for attention and in processing the new political information acquired through multiple interactions over the course of the campaign, a complex task that amounted to choosing by whom to be persuaded. Drawing on the empirical evidence, we sought to answer several open questions in this field, among them: 1) Among the many topics raised in the contest between parties and candidates, which ones does the individual choose to attend to and believe, and why? 2) Which variables mediate this interaction with new information, and what is their weight in explaining the voting decision? 3) Do the priorities of the voter's political agenda change over the course of the campaign? 4) Do voters broaden their general repertoire of political information? 5) Do perceptions of government performance, and of the priority topics on the voter's agenda, change over the course of the campaign?