939 results for Extreme Value Theory


Relevance:

30.00%

Publisher:

Abstract:

Graduate Program in Education - FFC


In the framework of gauged flavour symmetries, new fermions in parity-symmetric representations of the standard model are generically needed to compensate mixed anomalies. The key point is that their masses are also protected by the flavour symmetries, and some of them are expected to lie well below the flavour symmetry breaking scale(s), which must occur many orders of magnitude above the electroweak scale to be compatible with the available data from flavour-changing neutral currents and CP-violation experiments. We argue that some of these fermions would plausibly acquire masses within the LHC range. If they are taken to be heavy quarks and leptons, in (bi-)fundamental representations of the standard model symmetries, their mixings with the light ones are strongly constrained to be very small by electroweak precision data. The alternative chosen here is to forbid such mixings exactly, by breaking the flavour symmetries down to an exact discrete symmetry, the so-called proton-hexality, originally proposed to avoid proton decay. As a consequence of the large value needed for the flavour breaking scale, these heavy particles are long-lived and well suited to the current and future LHC searches for quasi-stable hadrons and leptons. In fact, the LHC experiments have already started to look for them.


The studies in the present thesis focus on post-decision processes using the theoretical framework of Differentiation and Consolidation Theory. This thesis consists of three studies. In all of them, pre-decision evaluations are compared with post-decision evaluations in order to explore differences in evaluations of decision alternatives before and after a decision. The main aim of the studies was to describe and gain a clearer understanding of how people re-evaluate information after they have experienced both a decision and its outcome. The studies examine how the attractiveness evaluations of important attributes are restructured from the pre-decision to the post-decision phase, particularly the restructuring processes of value conflicts. Value conflict attributes are those in which information speaks against the chosen alternative in a decision. The first study investigates an important real-life decision and illustrates different post-decision (consolidation) processes following the decision. The second study tests whether decisions with value conflicts follow the same consolidation (post-decision restructuring) processes when the conflict is controlled experimentally, as in earlier studies of less controlled real-life decisions. The third study investigates consolidation and value conflicts in decisions in which the consequences are controlled and of different magnitudes. The studies in the present thesis have shown how attractiveness restructuring of attributes in conflict occurs in the post-decision phase. Results from the three studies indicated that attractiveness restructuring of attributes in conflict was stronger for important real-life decisions (Study 1) and in situations in which real consequences followed a decision (Study 3) than in more controlled, hypothetical decision situations (Study 2).
Finally, some directions for future research are proposed, including studies of the effects of outcomes and consequences on the consolidation of prior decisions, and of how a decision maker's involvement affects his or her pre- and post-decision processes.


This work deals with some classes of linear second-order partial differential operators with non-negative characteristic form and underlying non-Euclidean structures. These structures are determined by families of locally Lipschitz-continuous vector fields in R^N, generating metric spaces of Carnot-Carathéodory type. The Carnot-Carathéodory metric related to a family {X_j}_{j=1,...,m} is the control distance obtained by minimizing the time needed to travel between two points along piecewise trajectories of the vector fields. We are mainly interested in the cases in which a Sobolev-type inequality holds with respect to the X-gradient, and/or the X-control distance is doubling with respect to the Lebesgue measure in R^N. This study is divided into three parts (each corresponding to a chapter), and the subject of each one is a class of operators that includes the class of the subsequent one. In the first chapter, after recalling "X-ellipticity" and related concepts introduced by Kogoj and Lanconelli in [KL00], we show a maximum principle for linear second-order differential operators for which we only assume a Sobolev-type inequality together with the summability of the lower-order terms. Adding some crucial hypotheses on the measure and on the vector fields (doubling property and Poincaré inequality), we are able to obtain some Liouville-type results. This chapter is based on the paper [GL03] by Gutiérrez and Lanconelli. In the second chapter we treat some ultraparabolic equations on Lie groups. In this case R^N is the support of a Lie group, and moreover we require that the vector fields be left-invariant. After recalling some results of Cinti [Cin07] about this class of operators and the associated potential theory, we prove a scalar convexity for mean-value operators of L-subharmonic functions, where L is our differential operator.
In the third chapter we prove a necessary and sufficient condition for the regularity of boundary points for the Dirichlet problem on an open subset of R^N related to a sub-Laplacian. On a Carnot group we give the essential background for this type of operator and introduce the notion of "quasi-boundedness". Then we show the strict relationship between this notion, the fundamental solution of the given operator, and the regularity of the boundary points.


The present work is motivated by biological questions concerning the behaviour of membrane potentials in neurons. A widely studied model for spiking neurons is the following. Between spikes, the membrane potential behaves like a diffusion process X given by the SDE dX_t = beta(X_t) dt + sigma(X_t) dB_t, where (B_t) denotes a standard Brownian motion. Spikes are explained as follows: as soon as the potential X exceeds a certain excitation threshold S, a spike occurs, after which the potential is reset to a fixed value x_0. In applications it is sometimes possible to observe a diffusion process X between spikes and to estimate the coefficients beta(.) and sigma(.) of the SDE. Nevertheless, the thresholds x_0 and S must be determined in order to specify the model. One way to approach this problem is to regard x_0 and S as parameters of a statistical model and to estimate them. The present work discusses four different cases, in which we assume that the membrane potential X between spikes is a Brownian motion with drift, a geometric Brownian motion, an Ornstein-Uhlenbeck process, or a Cox-Ingersoll-Ross process, respectively. Moreover, we observe the times between consecutive spikes, which we interpret as i.i.d. hitting times of the threshold S by X started at x_0. The first two cases are very similar, and in each the maximum likelihood estimator can be given explicitly; using LAN theory, the optimality of these estimators is shown. For the OU and CIR processes we choose a minimum-distance method based on comparing the empirical and true Laplace transforms with respect to a Hilbert-space norm. We prove that all estimators are strongly consistent and asymptotically normal.
In the last chapter we assess the efficiency of the minimum-distance estimators on simulated data. Furthermore, applications to real data sets and their results are discussed in detail.
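For the first of these cases, Brownian motion with drift, the hitting-time distribution is available in closed form: the first-passage time of a level S > x_0 is inverse-Gaussian with mean (S - x_0)/beta. The following sketch is illustrative only (it is not the thesis' estimator, and all parameter values are invented); it simulates hitting times by Euler discretisation and recovers that mean empirically.

```python
import math
import random

# Illustrative sketch: first-passage times of X_t = x0 + beta*t + sigma*B_t
# through a level S > x0, simulated with an Euler scheme. Their mean should
# be close to (S - x0)/beta, the mean of the inverse-Gaussian law.
# All parameter values below are invented for illustration.

def hitting_time(x0, S, beta, sigma, dt=1e-3, rng=random):
    """Simulate one first-passage time of the level S, started at x0."""
    x, t = x0, 0.0
    while x < S:
        x += beta * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return t

random.seed(0)
x0, S, beta, sigma = 0.0, 1.0, 2.0, 0.5
times = [hitting_time(x0, S, beta, sigma) for _ in range(2000)]
mean_t = sum(times) / len(times)
print(round(mean_t, 2))   # should be close to (S - x0)/beta = 0.5
```

In the thesis' setting one would instead fit the inverse-Gaussian (or the corresponding hitting-time law of the other three processes) to the observed inter-spike intervals; the simulation above only checks the mean relation.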


This work contains several applications of the mode-coupling theory (MCT) and is separated into three parts. In the first part we investigate the liquid-glass transition of hard spheres for dimensions d→∞ analytically and numerically up to d=800 in the framework of MCT. We find that the critical packing fraction ϕ_c(d) scales as d²2^(-d), which is larger than the Kauzmann packing fraction ϕ_K(d) found by a small-cage expansion by Parisi and Zamponi [J. Stat. Mech.: Theory Exp. 2006, P03017 (2006)]. The scaling of the critical packing fraction is different from the relation ϕ_c(d)∼d2^(-d) found earlier by Kirkpatrick and Wolynes [Phys. Rev. A 35, 3072 (1987)]. This is due to the fact that the k dependence of the critical collective and self nonergodicity parameters f_c(k;d) and f_c^s(k;d) was assumed to be Gaussian in the previous theories. We show that in MCT this is not the case. Instead f_c(k;d) and f_c^s(k;d), which become identical in the limit d→∞, converge to a non-Gaussian master function on the scale k∼d^(3/2). We find that the numerically determined value for the exponent parameter λ, and therefore also the critical exponents a and b, depend on the dimension d, even at the largest evaluated dimension d=800. In the second part we compare the results of a molecular-dynamics simulation of liquid Lennard-Jones argon far away from the glass transition [D. Levesque, L. Verlet, and J. Kurkijärvi, Phys. Rev. A 7, 1690 (1973)] with MCT. We show that the agreement between theory and computer simulation can be improved by taking binary collisions into account [L. Sjögren, Phys. Rev. A 22, 2866 (1980)]. We find that an empirical prefactor of the memory function of the original MCT equations leads to similar results. In the third part we derive the equations for a mode-coupling theory for the spherical components of the stress tensor. Unfortunately it turns out that they are too complex to be solved numerically.
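The gap between the two quoted scalings is just a factor of d, so the predictions diverge as the dimension grows. A trivial sketch (prefactors omitted, purely illustrative):

```python
# Compare the d-dependence of the two critical-packing-fraction scalings
# quoted above (prefactors omitted, so only the ratio is meaningful).

def phi_c_mct(d: int) -> float:
    """MCT result: phi_c(d) ~ d^2 * 2^(-d)."""
    return d ** 2 * 2.0 ** (-d)

def phi_c_kw(d: int) -> float:
    """Kirkpatrick-Wolynes scaling: phi_c(d) ~ d * 2^(-d)."""
    return d * 2.0 ** (-d)

# The MCT estimate exceeds the Kirkpatrick-Wolynes one by a factor d:
for d in (10, 100, 800):
    print(d, phi_c_mct(d) / phi_c_kw(d))   # ratio is exactly d
```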


The main task of this work is to present a concise survey on the theory of certain function spaces in the contexts of Hörmander vector fields and Carnot Groups, and to discuss briefly an application to some polyharmonic boundary value problems on Carnot Groups of step 2.


This thesis describes the ultra-precise determination of the g-factor of the electron bound in hydrogen-like 28Si13+. The experiment is based on the simultaneous determination of the cyclotron and Larmor frequencies of a single ion stored in a triple Penning-trap setup. The continuous Stern-Gerlach effect is used to couple the spin of the bound electron to the motional frequencies of the ion via a magnetic bottle, which allows the non-destructive determination of the spin state. To this end, a highly sensitive cryogenic detection system was developed, which allowed the direct, non-destructive detection of the eigenfrequencies with the required precision. The development of a novel, phase-sensitive detection technique finally allowed the determination of the g-factor with a relative accuracy of 40 ppt, which was previously inconceivable. The comparison of the value determined here with the value predicted by quantum electrodynamics (QED) allows the verification of the validity of this fundamental theory under the extreme conditions of the strong binding potential of a highly charged ion. The exact agreement of theory and experiment is an impressive demonstration of the exactness of QED. The experimental possibilities created in this work will, in the near future, allow not only further tests of the theory, but also the determination of the mass of the electron with a precision that exceeds the current literature value by more than an order of magnitude.
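The relation underlying such a measurement is standard for Penning-trap g-factor experiments (it is not spelled out in the abstract): the magnetic field drops out of the ratio of the two measured frequencies.

```latex
% Larmor frequency of the bound electron and cyclotron frequency of an
% ion of charge q and mass M in a magnetic field B:
\nu_L = \frac{g}{2}\,\frac{e B}{2\pi m_e}, \qquad
\nu_c = \frac{1}{2\pi}\,\frac{q B}{M}
\quad\Longrightarrow\quad
g \;=\; 2\,\frac{\nu_L}{\nu_c}\,\frac{q}{e}\,\frac{m_e}{M}.
```

For ^28Si^13+ one has q = 13e, so the g-factor follows from the measured frequency ratio together with the electron-to-ion mass ratio.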


This thesis aims at investigating a new approach to document analysis based on the idea of structural patterns in XML vocabularies. My work is founded on the belief that authors do naturally converge to a reasonable use of markup languages and that extreme, yet valid instances are rare and limited. Actual documents, therefore, may be used to derive classes of elements (patterns) persisting across documents and distilling the conceptualization of the documents and their components, and may give ground for automatic tools and services that rely on no background information (such as schemas) at all. The central part of my work consists in introducing from the ground up a formal theory of eight structural patterns (with three sub-patterns) that are able to express the logical organization of any XML document, and verifying their identifiability in a number of different vocabularies. This model is characterized by and validated against three main dimensions: terseness (i.e. the ability to represent the structure of a document with a small number of objects and composition rules), coverage (i.e. the ability to capture any possible situation in any document) and expressiveness (i.e. the ability to make explicit the semantics of structures, relations and dependencies). An algorithm for the automatic recognition of structural patterns is then presented, together with an evaluation of the results of a test performed on a set of more than 1100 documents from eight very different vocabularies. This language-independent analysis confirms the ability of patterns to capture and summarize the guidelines used by the authors in their everyday practice. Finally, I present some systems that work directly on the pattern-based representation of documents. The ability of these tools to cover very different situations and contexts confirms the effectiveness of the model.
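A crude illustration of this kind of structural classification (not the thesis' eight-pattern algorithm; the four coarse classes and the sample document below are invented) can be written against an element's content model:

```python
import xml.etree.ElementTree as ET

# Illustrative sketch only: classify each element of an XML document into
# four coarse structural classes based on whether it carries text content
# and/or child elements. The thesis defines a richer set of eight patterns.

def classify(el: ET.Element) -> str:
    has_text = bool((el.text or "").strip()) or any(
        (c.tail or "").strip() for c in el)   # text between children counts
    has_children = len(el) > 0
    if has_text and has_children:
        return "mixed"
    if has_children:
        return "container"
    if has_text:
        return "text"
    return "empty"

doc = ET.fromstring(
    "<article><title>On patterns</title>"
    "<p>Some <em>mixed</em> content.</p><hr/></article>")
print({el.tag: classify(el) for el in doc.iter()})
```

Running such a classifier over many instances of a vocabulary, and checking that each element name falls consistently into one class, is the spirit of the pattern-identifiability test the abstract describes.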


We present an analysis of daily extreme precipitation events for the extended winter season (October–March) at 20 Mediterranean coastal sites covering the period 1950–2006. The heavy-tailed behaviour of precipitation extremes and estimated return levels, including associated uncertainties, are derived by applying a procedure based on the Generalized Pareto Distribution, in combination with recently developed methods. Precipitation extremes make an important contribution to seasonal totals (approximately 60% for all series). Three stations (one in the western Mediterranean and the others in the eastern basin) have a 5-year return level above 100 mm, while the lowest value (estimated for two Italian series) is equal to 58 mm. As for the 50-year return level, an Italian station (Genoa) has the highest value of 264 mm, while the other values range from 82 to 200 mm. Furthermore, six series (from stations located in France, Italy, Greece, and Cyprus) show a significant negative tendency in the probability of observing an extreme event. The relationship between extreme precipitation events and the large-scale atmospheric circulation at the upper, mid and low troposphere is investigated using NCEP/NCAR reanalysis data. A 2-step classification procedure identifies three significant anomaly patterns for both the western-central and the eastern part of the Mediterranean basin. In the western Mediterranean, the anomalous southwesterly surface to mid-tropospheric flow is connected with enhanced moisture transport from the Atlantic. During ≥5-year return level events, the subtropical jet stream axis is aligned with the African coastline and interacts with the eddy-driven jet stream. This is connected with enhanced large-scale ascending motion and instability, and leads to the development of severe precipitation events.
For the eastern Mediterranean extreme precipitation events, the identified anomaly patterns suggest warm air advection connected with anomalous ascending motion and an increase of the low- to mid-tropospheric moisture. Furthermore, the jet stream position (during ≥5-year return level events) places the eastern basin in a divergence area, where ascending motion is favoured. Our results contribute to an improved understanding of daily precipitation extremes in the cold season and the associated large-scale atmospheric features.
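A return-level calculation of the kind described above can be sketched from the standard Generalized Pareto threshold-exceedance formula. The numbers below are invented for illustration, not the paper's estimates for any station:

```python
import math

# Hedged sketch of an m-year return level from a fitted Generalized Pareto
# Distribution over threshold u, with shape xi and scale sigma.
# `rate` is the probability that a daily value exceeds u and `n_per_year`
# the number of observations per (extended winter) season.

def gpd_return_level(u, sigma, xi, rate, n_per_year, m_years):
    """Level exceeded on average once every m_years."""
    m = m_years * n_per_year * rate   # expected exceedances in m years
    if abs(xi) < 1e-9:                # exponential-tail limit as xi -> 0
        return u + sigma * math.log(m)
    return u + (sigma / xi) * (m ** xi - 1.0)

# e.g. a 5-year and a 50-year daily-precipitation return level (mm),
# with invented parameter values:
for years in (5, 50):
    print(years, round(gpd_return_level(u=30.0, sigma=12.0, xi=0.15,
                                        rate=0.05, n_per_year=182,
                                        m_years=years), 1))
```

A positive shape parameter xi, as used here, corresponds to the heavy-tailed behaviour the abstract reports.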


Fully engaging in a new culture means translating oneself into a different set of cultural values, and many of those values can be foreign to the individual. The individual may face conflicting tensions between the psychological need to be a part of the new society and feelings of guilt or betrayal towards the former society, culture or self. Many international students from Myanmar, most of whom have little international experience, undergo this value and cultural translation during their time in American colleges. It is commonly assumed that something will be lost in the process of translation and that the students become more Westernized or never fit into both Myanmar and US cultures. However, the study of the narratives of the Myanmar students studying in the US reveals a more complex reality. Because individuals have multifaceted identities and many cultures and subcultures are fluctuating and intertwined with one another, the students' cross-cultural interactions can also help them acquire new ways of seeing things. Through their struggle to engage in the US college culture, many students exemplify the theory of "cosmopolitanism" in their transformative identity-formation process and thus define and identify themselves beyond one set of cultural norms.


Mr. Pechersky set out to examine a specific feature of the employer-employee relationship in Russian business organisations. He wanted to study to what extent the so-called "moral hazard" problem is being solved (if it is being solved at all), whether there is a relationship between pay and performance, and whether there is a correlation between economic theory and Russian reality. Finally, he set out to construct a model of the Russian economy that better reflects the way it actually functions than do certain other well-known models (for example models of incentive compensation, the Shapiro-Stiglitz model etc.). His report was presented to the RSS in the form of a series of manuscripts in English and Russian, and on disc, with many tables and graphs. He begins by pointing out the different examples of randomness that exist in the relationship between employee and employer. Firstly, results are frequently affected by circumstances outside the employee's control that have nothing to do with how intelligently, honestly, and diligently the employee has worked. When rewards are based on results, uncontrollable randomness in the employee's output induces randomness in his or her income. A second source of randomness involves outside events, beyond the control of the employee, that may affect his or her ability to perform as contracted. A third source of randomness arises when the performance itself (rather than the result) is measured, and the performance evaluation procedures include random or subjective elements. Mr. Pechersky's study shows that in Russia the third source of randomness plays an important role. Moreover, he points out that employer-employee relationships in Russia are sometimes opposite to those in the West. Drawing on game theory, he characterises the Western system as follows. The two players are the principal and the agent, who are usually representative individuals.
The principal hires an agent to perform a task, and the agent acquires an information advantage concerning his actions or the outside world at some point in the game, i.e. it is assumed that the employee is better informed. In Russia, on the other hand, incentive contracts are typically negotiated in situations in which the employer has the information advantage concerning the outcome. Mr. Pechersky schematises it thus. Compensation (the wage) is W and consists of a base amount, plus a portion that varies with the outcome, x. So W = a + bx, where b is used to measure the intensity of the incentives provided to the employee. This means that one contract will be said to provide stronger incentives than another if it specifies a higher value for b. This is the incentive contract as it operates in the West. The key feature distinguishing the Russian example is that x is observed by the employer but is not observed by the employee. So the employer promises to pay in accordance with an incentive scheme, but since the outcome is not observable by the employee the contract cannot be enforced, and the question arises: is there any incentive for the employer to fulfil his or her promises? Mr. Pechersky considers two simple models of employer-employee relationships displaying the above type of information asymmetry. In a static framework the obtained result is somewhat surprising: at the Nash equilibrium the employer pays nothing, even though his objective function contains a quadratic term reflecting negative consequences for the employer if the actual level of compensation deviates from the expectations of the employee. This can lead, for example, to labour turnover, or the expenses resulting from a bad reputation. In a dynamic framework, the conclusion can be formulated as follows: the higher the discount factor, the higher the incentive for the employer to be honest in his/her relationships with the employee.
If the discount factor is taken to be a parameter reflecting the degree of (un)certainty (the higher the degree of uncertainty, the lower the discount factor), we can conclude that the answer to the formulated question depends on the stability of the political, social and economic situation in a country. Mr. Pechersky believes that the strength of a market system with private property lies not just in its providing the information needed to compute an efficient allocation of resources in an efficient manner. At least equally important is the manner in which it accepts individually self-interested behaviour, but then channels this behaviour in desired directions. People do not have to be cajoled, artificially induced, or forced to do their parts in a well-functioning market system. Instead, they are simply left to pursue their own objectives as they see fit. Under the right circumstances, people are led by Adam Smith's "invisible hand" of impersonal market forces to take the actions needed to achieve an efficient, co-ordinated pattern of choices. The problem is that, as Mr. Pechersky sees it, there is no reason to believe that the circumstances in Russia are right, and the invisible hand is doing its work properly. Political instability, social tension and other circumstances prevent it from doing so. Mr. Pechersky believes that the discount factor plays a crucial role in employer-employee relationships. Such relationships can be considered satisfactory from a normative point of view only in those cases where the discount factor is sufficiently large. Unfortunately, in modern Russia the evidence points to the typical discount factor being relatively small. This fact can be explained as a manifestation of the risk aversion of economic agents. Mr. Pechersky hopes that when political stabilisation occurs, the discount factors of economic agents will increase, and the agents' behaviour will be explicable in terms of more traditional models.
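The role of the discount factor in the dynamic framework can be made concrete with a toy repeated-game calculation (this is an illustration in the spirit of the report, not Mr. Pechersky's model; the payoff numbers are invented). Suppose reneging on a promised bonus b saves b once but forfeits a per-period relationship surplus v from the next period on; honesty is then sustainable whenever b <= delta*v/(1-delta), i.e. delta >= b/(b+v).

```python
# Toy illustration: smallest discount factor delta at which an employer
# prefers paying a promised bonus b over a one-off gain of b that costs
# the per-period relationship surplus v forever after.
# Condition: b <= delta * v / (1 - delta)  <=>  delta >= b / (b + v).

def min_discount_factor(b: float, v: float) -> float:
    """Honesty threshold: delta* = b / (b + v)."""
    return b / (b + v)

# The larger the promised bonus relative to the relationship surplus,
# the more patience (stability) honesty requires:
for b, v in [(1.0, 4.0), (1.0, 1.0), (4.0, 1.0)]:
    print(b, v, min_discount_factor(b, v))
```

This captures the abstract's conclusion: where instability drives the typical discount factor below the threshold, the employer's promise is not credible.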


High altitude periodic breathing (PB) shares some common pathophysiologic aspects with sleep apnea, Cheyne-Stokes respiration and PB in heart failure patients. Methods that allow quantifying instabilities of respiratory control provide valuable insights into physiologic mechanisms and help to identify therapeutic targets. Under the hypothesis that high altitude PB appears even during physical activity and can be identified, in comparison with visual analysis, even in conditions of low SNR, this study aims to identify PB by characterizing the respiratory pattern through the respiratory volume signal. A number of spectral parameters are extracted from the power spectral density (PSD) of the volume signal, derived from respiratory inductive plethysmography and evaluated through a linear discriminant analysis. A dataset of 34 healthy mountaineers ascending to Mt. Muztagh Ata, China (7,546 m), visually labeled as PB and non-periodic breathing (nPB), is analyzed. All climbing periods within all the ascents are considered (total climbing periods: 371 nPB and 40 PB). The best cross-validated result classifying PB and nPB is obtained with Pm (power of the modulation frequency band) and R (ratio between modulation and respiration power), with an accuracy of 80.3% and an area under the receiver operating characteristic curve of 84.5%. Comparing the subjects from the first and second ascents (at the same altitudes, but the latter more acclimatized), the effect of acclimatization is evaluated. SaO2 and periodic breathing cycles significantly increased with acclimatization (p-value < 0.05). Higher Pm and higher respiratory frequencies are observed at lower SaO2, through a significant negative correlation (p-value < 0.01). Higher Pm is observed at climbing periods visually labeled as PB with > 5 periodic breathing cycles, through a significant positive correlation (p-value < 0.01).
Our data demonstrate that quantification of the respiratory volume signal using spectral analysis is suitable for identifying the effects of hypobaric hypoxia on the control of breathing.
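The two discriminant features can be sketched as band powers of the volume signal's spectrum. The following is a hedged illustration on a synthetic signal, not the paper's pipeline: the band edges, sampling rate and signal model (a respiration component plus an additive low-frequency oscillation mimicking PB) are all invented, and a plain periodogram stands in for whatever PSD estimator was actually used.

```python
import math

# Synthetic "volume" signal: respiration at 0.30 Hz plus a smaller
# low-frequency oscillation at 0.03 Hz (periodic-breathing-like).
fs = 5.0            # sampling rate in Hz (assumed)
n = 512
x = [math.sin(2 * math.pi * 0.30 * t / fs)
     + 0.5 * math.sin(2 * math.pi * 0.03 * t / fs) for t in range(n)]

def band_power(x, fs, f_lo, f_hi):
    """Periodogram power summed over bins with f_lo <= f < f_hi (crude DFT)."""
    n = len(x)
    total = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f < f_hi:
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            total += (re * re + im * im) / n
    return total

Pm = band_power(x, fs, 0.01, 0.10)   # modulation band (assumed edges)
Pr = band_power(x, fs, 0.15, 0.50)   # respiration band (assumed edges)
R = Pm / Pr
print(round(R, 2))                   # modulation-to-respiration power ratio
```

On a real recording, periods with pronounced PB would show elevated Pm and R relative to nPB periods, which is what the classifier in the abstract exploits.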


One of the most influential statements in the anomie theory tradition has been Merton's argument that the volume of instrumental property crime should be higher where there is a greater imbalance between the degree of commitment to monetary success goals and the degree of commitment to legitimate means of pursuing such goals. Contemporary anomie theories stimulated by Merton's perspective, most notably Messner and Rosenfeld's institutional anomie theory, have expanded the scope conditions by emphasizing lethal criminal violence as an outcome to which anomie theory is highly relevant, and virtually all contemporary empirical studies have focused on applying the perspective to explaining spatial variation in homicide rates. In the present paper, we argue that current explications of Merton's theory and IAT have not adequately conveyed the relevance of the core features of the anomie perspective to lethal violence. We propose an expanded anomie model in which an unbalanced pecuniary value system – the core causal variable in Merton's theory and IAT – translates into higher levels of homicide primarily in indirect ways by increasing levels of firearm prevalence, drug market activity, and property crime, and by enhancing the degree to which these factors stimulate lethal outcomes. Using aggregate-level data collected during the mid-to-late 1970s for a sample of relatively large social aggregates within the U.S., we find a significant effect on homicide rates of an interaction term reflecting high levels of commitment to monetary success goals and low levels of commitment to legitimate means. Virtually all of this effect is accounted for by higher levels of property crime and drug market activity that occur in areas with an unbalanced pecuniary value system. Our analysis also reveals that property crime is more apt to lead to homicide under conditions of high levels of structural disadvantage.
These and other findings underscore the potential value of elaborating the anomie perspective to explicitly account for lethal violence.


We examine the impact of identity preferences on the interrelation between incentives and performance measurement. In our model, a manager identifies with an organization and loses utility to the extent that his actions conflict with effort-standards issued by the principal. Contrary to prior arguments in the literature, we find conditions under which a manager who identifies strongly with the organization receives stronger incentives and faces more performance evaluation reports than a manager who does not identify with the organization. Our theory predicts that managers who experience events that boost their identification with the firm can decrease their effort in short-term value creation. We also find that firms are more likely to employ less precise but more congruent performance measures, such as stock prices, when contracting with managers who identify little with the organization. In contrast, they use more precise but less congruent measures, such as accounting earnings, when contracting with managers who identify strongly with the firm.