957 results for Standard model
Abstract:
Interpersonal interaction in public goods contexts is very different in character from its depiction in economic theory, despite the fact that the standard model rests on a small number of apparently plausible assumptions. Approaches to the problem are reviewed both from within and outside economics. It is argued that quick fixes such as a "taste for giving" do not provide a way forward. An improved understanding of why people contribute to such goods seems to require a different picture of the relationships between individuals than the one that obtains in standard microeconomic theory, where they are usually depicted as asocial. No single economic model at present is consistent with all the relevant field and laboratory data. It is argued that there are defensible ideas from outside the discipline which ought to be explored, relying on different conceptions of rationality and/or more radically social agents. Three such suggestions are considered: one concerning the expressive/communicative aspect of behaviour, a second the possibility of a part-whole relationship between interacting agents, and a third a version of conformism.
Abstract:
In this paper the origin and evolution of the Sun’s open magnetic flux is considered by conducting magnetic flux transport simulations over many solar cycles. The simulations include the effects of differential rotation, meridional flow and supergranular diffusion on the radial magnetic field at the surface of the Sun as new magnetic bipoles emerge and are transported poleward. In each cycle the emergence of roughly 2100 bipoles is considered. The net open flux produced by the surface distribution is calculated by constructing potential coronal fields with a source surface from the surface distribution at regular intervals. In the simulations the net open magnetic flux closely follows the total dipole component at the source surface and evolves independently from the surface flux. The behaviour of the open flux is highly dependent on meridional flow and many observed features are reproduced by the model. However, when meridional flow is present at observed values the maximum value of the open flux occurs at cycle minimum when the polar caps it helps produce are the strongest. This is inconsistent with observations by Lockwood, Stamper and Wild (1999) and Wang, Sheeley, and Lean (2000) who find the open flux peaking 1–2 years after cycle maximum. Only in unrealistic simulations where meridional flow is much smaller than diffusion does a maximum in open flux consistent with observations occur. It is therefore deduced that there is no realistic parameter range of the flux transport variables that can produce the correct magnitude variation in open flux under the present approximations. As a result the present standard model does not contain the correct physics to describe the evolution of the Sun’s open magnetic flux over an entire solar cycle. Future possible improvements in modeling are suggested.
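The kind of simulation described above is usually built on the standard surface flux-transport equation; the sketch below follows the common formulation in the literature (the symbols are generic, not taken from this abstract): the radial field B is advected by differential rotation Ω(θ) and meridional flow v(θ), diffused with supergranular diffusivity D, and fed by a bipole-emergence source S.

```latex
\frac{\partial B}{\partial t}
= -\,\Omega(\theta)\,\frac{\partial B}{\partial \phi}
  - \frac{1}{R_{\odot}\sin\theta}\,\frac{\partial}{\partial \theta}\!\left[v(\theta)\,B\,\sin\theta\right]
  + \frac{D}{R_{\odot}^{2}}\!\left[\frac{1}{\sin\theta}\,\frac{\partial}{\partial \theta}\!\left(\sin\theta\,\frac{\partial B}{\partial \theta}\right)
  + \frac{1}{\sin^{2}\theta}\,\frac{\partial^{2} B}{\partial \phi^{2}}\right]
  + S(\theta,\phi,t)
```

The open flux is then obtained by extrapolating this surface distribution with a potential-field source-surface model at regular intervals.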
Abstract:
In general, particle filters need large numbers of model runs in order to avoid filter degeneracy in high-dimensional systems. The recently proposed, fully nonlinear equivalent-weights particle filter overcomes this requirement by replacing the standard model transition density with two different proposal transition densities. The first proposal density is used to relax all particles towards the high-probability regions of state space as defined by the observations. The crucial second proposal density is then used to ensure that the majority of particles have equivalent weights at observation time. Here, the performance of the scheme is explored in a high-dimensional (65,500-dimensional) simplified ocean model. The success of the equivalent-weights particle filter in matching the true model state is shown using the mean of just 32 particles in twin experiments. It is of particular significance that this remains true even as the number and spatial variability of the observations are changed. The results from rank histograms are less easy to interpret and can be influenced considerably by the parameter values used. This article also explores the sensitivity of the performance of the scheme to the chosen parameter values and the effect of using different model error parameters in the truth compared with the ensemble model runs.
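To place the degeneracy problem in context, here is a minimal bootstrap (SIR) particle filter for a one-dimensional random walk in a twin experiment. It propagates particles with the standard model transition density, which is exactly the proposal the equivalent-weights filter replaces; the model, noise levels and particle count are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(obs, n_particles=500, sigma_model=0.5, sigma_obs=1.0):
    """Bootstrap (SIR) particle filter for x_t = x_{t-1} + noise, y_t = x_t + noise.

    Uses the model transition density as the proposal; the equivalent-weights
    filter replaces this step to keep the weights from degenerating.
    """
    particles = rng.normal(0.0, 1.0, n_particles)
    means = []
    for y in obs:
        # propagate with the model transition density (the standard proposal)
        particles = particles + rng.normal(0.0, sigma_model, n_particles)
        # weight particles by the observation likelihood
        w = np.exp(-0.5 * ((y - particles) / sigma_obs) ** 2)
        w /= w.sum()
        # resample to combat weight degeneracy
        particles = rng.choice(particles, size=n_particles, p=w)
        means.append(particles.mean())
    return np.array(means)

# twin experiment: generate a "true" random walk, observe it with noise,
# then try to recover it from the observations alone
truth = np.cumsum(rng.normal(0.0, 0.5, 50))
obs = truth + rng.normal(0.0, 1.0, 50)
est = particle_filter(obs)
print(np.mean(np.abs(est - truth)))  # mean absolute error of the filter mean
```

In high-dimensional systems this naive scheme collapses onto a single particle after a few analysis steps, which is the failure mode the two tailored proposal densities are designed to avoid.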
Abstract:
We analyze the potential of the CERN Large Hadron Collider running at 7 TeV to search for deviations from the Standard Model predictions for the triple gauge boson coupling ZW⁺W⁻, assuming an integrated luminosity of 1 fb⁻¹. We show that the study of W⁺W⁻ and W±Z production, followed by the leptonic decay of the weak gauge bosons, can improve the present sensitivity on the anomalous couplings Δg₁^Z, Δκ^Z, λ^Z, g₄^Z, and λ̄^Z at the 2σ level.
Abstract:
Topological interactions will be generated in theories with compact extra dimensions where fermionic chiral zero modes have different localizations. This is the case in many warped extra dimension models where the right-handed top quark is typically localized away from the left-handed one. Using deconstruction techniques, we study the topological interactions in these models. These interactions appear as trilinear and quadrilinear gauge boson couplings in low energy effective theories with three or more sites, as well as in the continuum limit. We derive the form of these interactions for various cases, including examples of Abelian, non-Abelian and product gauge groups of phenomenological interest. The topological interactions provide a window into the more fundamental aspects of these theories and could result in unique signatures at the Large Hadron Collider, some of which we explore.
Abstract:
We show that the S parameter is not finite in theories of electroweak symmetry breaking in a slice of anti-de Sitter five-dimensional space, with the light fermions localized in the ultraviolet. We compute the one-loop contributions to S from the Higgs sector and show that they are logarithmically dependent on the cutoff of the theory. We discuss the renormalization of S, as well as the implications for bounds from electroweak precision measurements on these models. We argue that, although in principle the choice of renormalization condition could eliminate the S parameter constraint, a more consistent condition would still result in a large and positive S. On the other hand, we show that the dependence on the Higgs mass in S can be entirely eliminated by the renormalization procedure, making it impossible in these theories to extract a Higgs mass bound from electroweak precision constraints.
Abstract:
We study the collider phenomenology of bilinear R-parity violating supergravity, the simplest effective model for supersymmetric neutrino masses accounting for the current neutrino oscillation data. At the CERN Large Hadron Collider the center-of-mass energy will be high enough to probe these models directly through the search for the superpartners of the Standard Model (SM) particles. We analyze the impact of R-parity violation on the canonical supersymmetry searches; that is, we examine how the decay of the lightest supersymmetric particle (LSP) via bilinear R-parity violating interactions degrades the average expected missing momentum of the reactions, and show how this diminishes the reach in the usual channels for supersymmetry searches. However, the R-parity violating interactions lead to an enhancement of the final states containing isolated same-sign di-leptons and trileptons, compensating the reach loss in the fully inclusive channel. We show how searches for displaced vertices associated with LSP decay substantially increase the coverage of the supergravity parameter space, giving the corresponding reaches for two reference luminosities of 10 and 100 fb⁻¹ and comparing them with those of the R-parity conserving minimal supergravity model.
Abstract:
The aim of this paper is to develop a flexible model for the analysis of quantitative trait loci (QTL) in outbred line crosses that includes both additive and dominance effects. Our flexible intercross analysis (FIA) model accounts for QTL that are not fixed within founder lines and is based on the variance component framework. Genome scans with FIA are performed using a score statistic, which does not require variance component estimation.
RESULTS: Simulations of a pedigree with 800 F2 individuals showed that the power of FIA including both additive and dominance effects was almost 50% for a QTL with equal allele frequencies in both lines, complete dominance and a moderate effect, whereas the power of a traditional regression model equalled the chosen significance level of 5%. The power of FIA without dominance effects in the model was close to that obtained for FIA with dominance in all simulated cases except QTL with overdominant effects. A genome-wide linkage analysis of experimental data from an F2 intercross between Red Jungle Fowl and White Leghorn was performed with both additive and dominance effects included in FIA. The score values for chicken body weight at 200 days of age were similar to those obtained in the FIA analysis without dominance.
CONCLUSION: We have extended FIA to include QTL dominance effects. The power of FIA was superior, or similar, to standard regression methods for QTL effects with dominance. The difference in power for FIA with or without dominance is expected to be small as long as the QTL effects are not overdominant. We suggest that FIA with only additive effects should be the standard model to be used, especially since it is more computationally efficient.
Abstract:
Recent investigations of various quantum-gravity theories have revealed a variety of possible mechanisms that lead to Lorentz violation. One of the more elegant of these mechanisms is known as Spontaneous Lorentz Symmetry Breaking (SLSB), where a vector or tensor field acquires a nonzero vacuum expectation value. As a consequence of this symmetry breaking, massless Nambu-Goldstone modes appear with properties similar to the photon in Electromagnetism. This thesis considers the most general class of vector field theories that exhibit spontaneous Lorentz violation, known as bumblebee models, and examines their candidacy as potential alternative explanations of E&M, offering the possibility that Einstein-Maxwell theory could emerge as a result of SLSB rather than of local U(1) gauge invariance. With this aim we employ Dirac's Hamiltonian constraint analysis procedure to examine the constraint structures and degrees of freedom inherent in three candidate bumblebee models, each with a different potential function, and compare these results to those of Electromagnetism. We find that none of these models shares a constraint structure similar to that of E&M, and that the number of degrees of freedom for each model exceeds that of Electromagnetism by at least two, pointing to the potential existence of massive modes or propagating ghost modes in the bumblebee theories.
Abstract:
The aim of this article is to examine the influence of political variables on the determination of the exchange rate in four Latin American countries that experienced high inflation and current-account deficits during the 1970s and 1980s. Empirical studies had already demonstrated the influence of elections; none, however, had incorporated the decision-making structure of the Executive and the Legislature into this process. Only by using panel-data techniques was it possible to incorporate the political regime (Authoritarian/Democratic) and the division of power within the Legislature of all the countries into a standard exchange-rate model. We obtained the following results: countries classified as Authoritarian exhibited a more appreciated exchange rate, and more fragmented Legislatures were associated with a more depreciated exchange rate. We viewed the latter result with suspicion, since among the sampled countries the Authoritarian regime was in some cases a military dictatorship in which the Legislature had little say in decisions. Interacting the political regime with fragmentation, we found that the effect of the regime classification predominates: under an Authoritarian regime, the exchange rate resulting from the interaction is still appreciated, and the division of power in the Legislature merely reduces the magnitude of the appreciation.
Abstract:
This doctoral thesis is devoted to the study of financial instability and dynamics in Monetary Theory. It is shown that bank runs are eliminated at no cost in the standard model of banking theory when the population is not small. An extension is proposed in which aggregate uncertainty is more severe and the cost of financial stability is relevant. Finally, the optimality of transitions in the distribution of money is established for economies in which trade opportunities are scarce and heterogeneous; in particular, the optimality of inflation depends on the dynamic incentives provided by such transitions. Chapter 1 establishes the costless-stability result for large economies by studying the effects of population size in the Peck & Shell analysis of bank runs. In Chapter 2, the optimality of dynamics is studied in the Kiyotaki & Wright monetary model when society is able to implement an inflationary policy. Although it adopts a mechanism-design approach, this chapter parallels the analysis of Sargent & Wallace (1981) by highlighting the effects of dynamic incentives on the interaction between monetary and fiscal policy. Chapter 3 returns to the theme of financial stability by quantifying the costs involved in the optimal design of a run-proof banking sector and by proposing an alternative informational structure that allows for insolvent banks. The first analysis shows that the optimal stability scheme exhibits high long-term interest rates; the second shows that imperfect monitoring can lead to bank runs with insolvency.
Abstract:
The rational model of decision making has been a constant object of study in academia in several countries, contributing to the view of the rational agent as an important decision maker. The evolution of these studies has raised questions about how much rationality we actually have as decision makers, giving rise to several new theories that investigate these limits on deciding. Applied especially to economic theory, fields such as artificial intelligence, mental accounting, prospect theory and game theory stand out in this landscape of behavioural finance research. Accounting, as a tool to support financial decisions, occupies a prominent position. Its scope of work includes norms (what ought to be done) that regulate professional practice; in some cases this regulation is imprecise in its specifications, leaving gaps that lead professionals into errors of interpretation. Accounting imprecision can bias classifications, and professionals faced with this legacy may resort to heuristics to interpret the recorded events as best they can. This study analyses some points we consider important when accounting imprecision is present, answering the following questions: does the imprecision of accounting standards bias decisions? Do professionals faced with accounting imprecision use heuristics to decide? What are the most common errors of interpretation under accounting uncertainty? To approach the subject impartially and accurately capture the experience of professionals working in accounting, a questionnaire was designed around a plausible scenario that places the respondent in a decision-making environment involving accounting practice.
The questionnaire was divided into two main parts, aimed at identifying through the responses whether accounting imprecision exists (in light of the prudence principle) and which heuristics respondents use most frequently; it was administered to professionals working in accounting with experience in preparing, auditing or analysing financial statements. The responses showed that, according to the professionals, different interpretations exist for the same data, characterizing a grey area in the sense of Penno (2008), that is, interpretations that may be more aggressive or more conservative depending on the professional. As for the simplifying strategies, or heuristics, that bias the decision process, several were identified: presumed associations, misinterpretation of chance, regression to the mean, and disjunctive and conjunctive events, reinforcing the evidence that respondents may be making biased decisions. However, the study did not identify decisions biased by retrievability or by insensitivity to sample size. We conclude that respondents interpret the same matter differently, even in light of the accounting principle of prudence, and that they resort to simplifying strategies to settle everyday matters.
Abstract:
The model that is considered the standard model of theory change was presented in [AGM85] and is known as the AGM model. In particular, that paper introduced the class of partial meet contractions. In subsequent works, several alternative constructive models for that same class of functions were presented, e.g. safe/kernel contractions ([AM85, Han94]), system-of-spheres-based contractions ([Gro88]) and epistemic entrenchment-based contractions ([Gär88, GM88]). Besides, several generalizations of that model were investigated. In that regard we emphasise the presentation of models which account for contractions by sets of sentences rather than only by a single sentence, i.e. multiple contractions. However, until now, only two of the above-mentioned models have been generalized in the sense of addressing the case of contractions by sets of sentences: the partial meet multiple contractions were presented in [Han89, FH94], while the kernel multiple contractions were introduced in [FSS03]. In this thesis we propose two new constructive models of multiple contraction functions, namely the system-of-spheres-based and the epistemic entrenchment-based multiple contractions, which generalize the models of system-of-spheres-based and epistemic entrenchment-based contractions, respectively, to the case of contractions (of theories) by sets of sentences. Furthermore, analogously to what is the case for the corresponding classes of contraction functions by a single sentence, those two classes are identical and constitute a subclass of the class of partial meet multiple contractions. Additionally, as the first step of the procedure followed here to obtain an adequate definition of the system-of-spheres-based multiple contractions, we present a possible worlds semantics for the partial meet multiple contractions, analogous to the one proposed in [Gro88] for the partial meet contractions (by a single sentence).
Finally, we present an axiomatic characterization for the new class(es) of multiple contraction functions introduced here.
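As a toy illustration of the construction being generalized, the sketch below computes a partial meet contraction over a tiny propositional belief base by brute-force enumeration of valuations. The belief base, the atoms and the selection function are hypothetical; the AGM model proper operates on logically closed theories, which this finite-base sketch only approximates.

```python
from itertools import product, combinations

ATOMS = ["p", "q", "r"]

def holds(sentence, val):
    # sentences are Python boolean expressions over the atoms, e.g. "p and q"
    return eval(sentence, {}, val)

def entails(sentences, goal):
    """A entails goal iff every valuation satisfying all of A satisfies goal."""
    for bits in product([False, True], repeat=len(ATOMS)):
        val = dict(zip(ATOMS, bits))
        if all(holds(s, val) for s in sentences) and not holds(goal, val):
            return False
    return True

def remainders(base, goal):
    """All maximal subsets of `base` that do not entail `goal` (the set A ⊥ goal)."""
    base = list(base)
    found = []
    for k in range(len(base), -1, -1):  # largest subsets first, so maximality is easy
        for subset in combinations(base, k):
            s = set(subset)
            if not entails(s, goal) and not any(s < f for f in found):
                found.append(s)
    return found

def partial_meet_contraction(base, goal, select=lambda rems: rems):
    """Intersect the remainders picked by the selection function.

    The default selection keeps all remainders, i.e. full meet contraction.
    """
    chosen = select(remainders(base, goal))
    return set.intersection(*chosen) if chosen else set(base)

# contracting {p, q, p or q} by p discards p but keeps q and the disjunction
print(sorted(partial_meet_contraction({"p", "q", "p or q"}, "p")))
```

Varying the selection function between "one remainder" (maxichoice) and "all remainders" (full meet) spans the whole partial meet family, which is exactly the class the multiple contractions above generalize.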
Abstract:
Our research analyses primary-education institutions in Natal/RN during the so-called First Republic, when they were regarded as exemplary institutions for the formation, dissemination and creation of national identity and republican traditions. We therefore sought to understand the creation of the "new man" and the invention of new traditions intended to confirm the status of republican modernity in two schools in Natal: the Colegio Americano, a private school, and Augusto Severo, a public school held up as a standard model. Our analysis draws on the history of institutions, paying close attention to the use of imitation of the cultural heritage as well as to strategies of distinction. The concept of appropriation, for present purposes, focuses on the diverse and contrasting uses of cultural objects, texts, readings and ideas in the institutions studied. To analyse what takes place within the school environment in each period of its history, we used the concept of school culture as a set of norms and practices that define the knowledge to be taught and the conduct to be instilled; it is a culture the school incorporates while remaining connected to the religious, political and popular cultures of its time and place. In this sense, the institutions we studied, by developing their own cultures, codes, practices and specific subjects, were important sites for modern cultural appropriation as a strategy of educational innovation and as a factor of rationality and efficiency that could be observed and controlled; gradually, modern schooling was organized so as to produce its own society.
Facing the challenge of affirming itself and incorporating diverse social experiences in order to produce the modern, civilized man of the republican era, the school was part of social life and singular in its practices: not merely a set of reforms, decrees, laws and projects, but also an expression of conceptions of life and society, in material, symbolic and cultural terms, within a modernizing social context. We focused on these two schools because, within the broad cultural and material setting of the city, they were the first republican schools whose goal was to bring men and women together culturally, with a view to adapting them to the modern movement and making them civilized, educated and rational. From this viewpoint, the republican affirmation required reinvention: a new way of being expressed in what the schools produced, in new spaces, practices and rites, and in what the school came to represent, making and expressing a new, modern identity distinct from the old symbols of the Empire. For this, nothing served better than the organization of schooling, with its emphasis on educating individuals in their responsibilities toward order and progress. We must understand this past as the outcome of conflicts, strengths and limitations within its historical and social context, and the invention of tradition as a process of formalizing and ritualizing acts meant to be perpetuated as references for a group identity. These are educative social practices and representations that support the understanding of pedagogical and educational ideas at this historical moment, shaping a new way of being and acting in the republican universe.
Abstract:
To estimate genetic parameters for test-day milk yield (TDMY), the first 2,440 lactations of dairy Gyr cows, with calvings recorded between 1990 and 2005, were used. Test-day yields were grouped into ten monthly classes and analysed with a random regression model (RRM) that included the additive genetic, permanent environmental and residual random effects and, as fixed effects, the contemporary group (CG), the covariate age of cow at calving (linear and quadratic effects) and the mean lactation curve of the population. The additive genetic and permanent environmental effects were modelled using the Wilmink (WIL) and the Ali and Schaeffer (AS) functions. Residual variances were modelled with 1, 4, 6 or 10 classes. Contemporary groups were defined as herd-year-season of the test day, with a minimum of three animals. The tests indicated that the model with four residual variance classes using the AS parametric function best fitted the data. Heritability estimates ranged from 0.21 to 0.33 for the AS function and from 0.17 to 0.30 for WIL, and were higher in the first half of lactation. Genetic correlations between test-day yields were positive and high between adjacent tests and decreased as the interval between tests increased. For the best model, breeding values were estimated for cumulative 305-day milk yield and, for partial periods of lactation, were obtained as the means of the breeding values predicted within each period. These breeding values were compared, by rank correlation, with the breeding value predicted for cumulative 305-day yield by the traditional method. The correlations between breeding values indicated that the ranking of animals may diverge between the criteria studied.
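The Wilmink function mentioned above is linear in its coefficients once the exponential rate is fixed, so a single cow's test-day curve can be fitted by ordinary least squares. The sketch below illustrates this; the records, the rate k = 0.05 and the yields are illustrative assumptions, not data from the study.

```python
import numpy as np

def wilmink_design(days, k=0.05):
    """Design matrix for the Wilmink curve y(t) = a + b*t + c*exp(-k*t).

    With k fixed (0.05 is a commonly assumed value), the fit is linear
    in the coefficients (a, b, c).
    """
    days = np.asarray(days, dtype=float)
    return np.column_stack([np.ones_like(days), days, np.exp(-k * days)])

# hypothetical monthly test-day records: days in milk and kg of milk
dim = np.array([15, 45, 75, 105, 135, 165, 195, 225, 255, 285])
milk = np.array([18.0, 21.5, 20.8, 19.9, 18.7, 17.6, 16.4, 15.1, 13.9, 12.8])

X = wilmink_design(dim)
coef, *_ = np.linalg.lstsq(X, milk, rcond=None)
a, b, c = coef
fitted = X @ coef
print(a, b, c)                               # level, decline rate, early-lactation term
print(float(np.abs(milk - fitted).mean()))   # mean absolute fit error
```

In the random regression models of the study, such functions describe not a single curve but the additive genetic and permanent environmental deviations of each animal along the lactation trajectory.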