977 results for Fixed-point theorem


Relevance: 80.00%

Abstract:

In this work the fundamental ideas for studying properties of QFTs with the functional Renormalization Group are presented and illustrated with examples. First, the Wetterich equation for the effective average action is derived, together with its flow in the local potential approximation (LPA) for a single scalar field. This case is used to illustrate some techniques for solving the RG fixed-point equation and studying the properties of critical theories in D dimensions: in particular, the shooting method for the ODE satisfied by the fixed-point potential, as well as polynomial truncations with a finite number of couplings, which are convenient for studying the critical exponents. We then study novel cases related to multi-field scalar theories, deriving the flow equations in the LPA truncation, both without assuming any global symmetry and specialising to cases with a given symmetry, using truncations based on polynomials in the symmetry invariants. This is used to study possible non-perturbative solutions for critical theories which extend known perturbative results obtained in the epsilon expansion below the upper critical dimension.
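The shooting method mentioned here can be sketched concretely. Below is a minimal sketch, assuming the standard LPA fixed-point equation for a single scalar with an optimized (Litim) regulator, D u(φ) = (D-2)/2 φ u'(φ) + 1/(1 + u''(φ)), with the threshold constant absorbed into a rescaling of u and φ; the Z2-even boundary condition u'(0) = 0 leaves u(0) as the single shooting parameter. Generic initial values hit a singularity at finite φ, while fixed-point solutions (the Gaussian solution u ≡ 1/D, and for 2 < D < 4 the Wilson-Fisher one) show up as spikes in how far the integration survives.

```python
import numpy as np
from scipy.integrate import solve_ivp

D = 3.0  # dimension (assumed; the Wilson-Fisher solution exists for 2 < D < 4)

def rhs(phi, y):
    # fixed-point condition D*u = (D-2)/2 * phi * u' + 1/(1 + u''),
    # solved for u'': u'' = 1/(D*u - (D-2)/2 * phi * u') - 1
    u, up = y
    denom = D * u - 0.5 * (D - 2.0) * phi * up
    return [up, 1.0 / denom - 1.0]

def near_singular(phi, y):
    # crude blow-up detection: stop where the denominator of u'' vanishes
    u, up = y
    return D * u - 0.5 * (D - 2.0) * phi * up - 1e-6
near_singular.terminal = True

def survival(u0, phi_end=20.0):
    """How far the solution with u(0) = u0, u'(0) = 0 extends before blowing up."""
    sol = solve_ivp(rhs, (0.0, phi_end), [u0, 0.0],
                    events=near_singular, rtol=1e-10, atol=1e-12)
    return sol.t[-1]

# scan the shooting parameter; fixed-point solutions appear as spikes
for u0 in np.linspace(0.05, 1.0, 96):
    print(f"u(0) = {u0:.4f}   survives to phi = {survival(u0):.3f}")
```

A bisection on u(0) around a spike then refines the fixed-point potential, and linearizing the flow around it yields the critical exponents.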

Relevance: 30.00%

Abstract:

We prove a Goldstone theorem in thermal relativistic quantum field theory, which relates spontaneous symmetry breaking to the rate of spacelike decay of the two-point function. The critical rate of fall-off coincides with that of the massless free scalar field theory. Related results and open problems are briefly discussed. (C) 2011 American Institute of Physics. [doi:10.1063/1.3526961]

Relevance: 30.00%

Abstract:

Consider a discrete locally finite subset Γ of ℝ^d and the complete graph (Γ, E), with vertices Γ and edges E. We consider Gibbs measures on the set of subgraphs with vertices Γ and edges E′ ⊆ E. The Gibbs interaction acts between open edges having a vertex in common. We study percolation properties of the Gibbs distribution of the graph ensemble. The main results concern percolation properties of the open edges in two cases: (a) when Γ is sampled from a homogeneous Poisson process; and (b) for a fixed Γ with sufficiently sparse points. (C) 2010 American Institute of Physics. [doi:10.1063/1.3514605]
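As a rough illustration of case (a), the sketch below samples Γ from a homogeneous Poisson process in a box and runs a Metropolis chain on the open/closed edges of the complete graph, with a toy Hamiltonian in which pairs of open edges sharing a vertex interact; the interaction strength J, the activity h, and this specific Hamiltonian are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# (a) sample Gamma from a homogeneous Poisson process on the box [0, L]^2
lam, L = 2.0, 5.0
n = rng.poisson(lam * L * L)
pts = rng.uniform(0.0, L, size=(n, 2))

# edges of the complete graph on Gamma
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
is_open = np.zeros(len(edges), dtype=bool)
deg = np.zeros(n, dtype=int)  # open-edge degree of each vertex

# toy Hamiltonian: H = J * #(open edge pairs sharing a vertex) - h * #(open edges)
J, h = 0.5, 1.0
for _ in range(200_000):
    e = rng.integers(len(edges))
    i, j = edges[e]
    if is_open[e]:
        dH = -(J * (deg[i] - 1 + deg[j] - 1) - h)  # energy change from closing e
    else:
        dH = J * (deg[i] + deg[j]) - h             # energy change from opening e
    if rng.random() < min(1.0, np.exp(-dH)):       # Metropolis acceptance
        is_open[e] = not is_open[e]
        d = 1 if is_open[e] else -1
        deg[i] += d
        deg[j] += d

print("fraction of open edges:", is_open.mean())
```

Percolation can then be probed by extracting the connected components of the open subgraph as J and h are varied.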

Relevance: 30.00%

Abstract:

Let P be a linear partial differential operator with analytic coefficients. We assume that P is of the form "sum of squares", satisfying Hörmander's bracket condition. Let q be a characteristic point for P. We assume that q lies on a symplectic Poisson stratum of codimension two. General results of Okaji show that P is analytic hypoelliptic at q. Hence Okaji has established the validity of Treves' conjecture in the codimension-two case. Our goal here is to give a simple, self-contained proof of this fact.

Relevance: 30.00%

Abstract:

A Work Project, presented as part of the requirements for the Award of a Master's Degree in Management from the NOVA – School of Business and Economics

Relevance: 30.00%

Abstract:

The main aim of this short paper is to advertise the Koosis theorem in the mathematical community, especially among those who study orthogonal polynomials. We (try to) do this by proving a new theorem about asymptotics of orthogonal polynomials for which the Koosis theorem seems to be the most natural tool. Namely, we consider the case when a Szegő measure on the unit circle is perturbed by an arbitrary measure inside the unit disk and an arbitrary Blaschke sequence of point masses outside the unit disk.

Relevance: 30.00%

Abstract:

It has recently been found that a number of systems displaying crackling noise also show remarkable behavior regarding the temporal occurrence of successive events versus their size: a scaling law for the probability distributions of waiting times as a function of a minimum size is fulfilled, signaling the existence in those systems of self-similarity in time and size. This property is also present in some non-crackling systems. Here, the uncommon character of the scaling law is illustrated with simple marked renewal processes, built by definition with no correlations. Whereas processes with a finite mean waiting time do not fulfill a scaling law in general and tend towards a Poisson process in the limit of very large sizes, processes without a finite mean tend to another class of distributions, characterized by double power-law waiting-time densities. This is somewhat reminiscent of the generalized central limit theorem. A model with short-range correlations is not able to escape the attraction of those limit distributions. A discussion of open problems in the modeling of these properties is provided.
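The drift towards a Poisson process for finite-mean processes can be checked numerically. A minimal sketch under assumed distributions: i.i.d. gamma waiting times (finite mean, non-exponential) and i.i.d. Pareto sizes, mutually independent as in the uncorrelated marked renewal processes described above. Discarding events below a minimum size thins the process, and the rescaled waiting times between surviving events approach a standard exponential (mean 1, standard deviation 1) as the threshold grows.

```python
import numpy as np

rng = np.random.default_rng(1)

# uncorrelated marked renewal process: i.i.d. waits, i.i.d. sizes
n = 1_000_000
waits = rng.gamma(3.0, 1.0, n)     # finite-mean, non-exponential waits (assumed)
sizes = rng.pareto(1.5, n) + 1.0   # power-law sizes, tail exponent 1.5 (assumed)
times = np.cumsum(waits)

for s_min in (1.0, 10.0, 100.0):
    kept = times[sizes >= s_min]
    w = np.diff(kept)              # waiting times above the size threshold
    w = w / w.mean()               # rescale by the mean waiting time
    # a Poisson limit means std -> 1 for the rescaled waits
    print(f"s_min={s_min:6.1f}  events={w.size:7d}  std={w.std():.3f}")
```

Replacing the gamma by a waiting-time distribution without a finite mean moves the thinned process towards the double power-law limit class described in the abstract instead.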

Relevance: 30.00%

Abstract:

We present an envelope theorem for establishing first-order conditions in decision problems involving continuous and discrete choices. Our theorem accommodates general dynamic programming problems, even with unbounded marginal utilities. And, unlike classical envelope theorems that focus only on differentiating value functions, we accommodate other endogenous functions such as default probabilities and interest rates. Our main technical ingredient is how we establish the differentiability of a function at a point: we sandwich the function between two differentiable functions from above and below. Our theory is widely applicable. In unsecured credit models, neither interest rates nor continuation values are globally differentiable. Nevertheless, we establish an Euler equation involving marginal prices and values. In adjustment cost models, we show that first-order conditions apply universally, even if optimal policies are not (S,s). Finally, we incorporate indivisible choices into a classic dynamic insurance analysis.
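The sandwiching step can be stated as a standard squeeze lemma for derivatives; the following generic formulation (a paraphrase, not the paper's exact hypotheses) captures the idea:

```latex
% Squeeze lemma for differentiability (standard fact; the paper's
% precise hypotheses may differ).
\[
  g(x) \le f(x) \le h(x) \ \text{near } x_0, \qquad
  g(x_0) = f(x_0) = h(x_0), \qquad
  g'(x_0) = h'(x_0) = m
  \;\Longrightarrow\;
  f \text{ is differentiable at } x_0 \text{ with } f'(x_0) = m .
\]
```

For x > x_0 the difference quotients of f are trapped between those of g and h, both of which tend to m; for x < x_0 the inequalities reverse but the bounds are unchanged, so the two one-sided limits agree.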

Relevance: 30.00%

Abstract:

Mitochondrial (M) and lipid droplet (L) volume densities (vd) are often used in exercise research; vd is the fraction of muscle volume occupied by M or L. These percentages are calculated by applying a grid to a 2D image taken with transmission electron microscopy; however, it is not known which grid best estimates these values. PURPOSE: To determine the grid with the least variability for Mvd and Lvd in human skeletal muscle. METHODS: Muscle biopsies were taken from the vastus lateralis of 10 healthy adults, trained (N=6) and untrained (N=4). Samples of 5-10 mg were fixed in 2.5% glutaraldehyde and embedded in EPON. Longitudinal sections of 60 nm were cut and 20 images were taken at random at 33,000x magnification. Vd was calculated as the number of times M or L touched two intersecting grid lines (called a point) divided by the total number of points, using 3 different grid sizes with squares of 1000x1000 nm sides (corresponding to 1 µm²), 500x500 nm (0.25 µm²) and 250x250 nm (0.0625 µm²). Statistics included the coefficient of variation (CV), 1-way BS ANOVA and Spearman correlations. RESULTS: Mean age was 67 ± 4 years, mean VO2peak 2.29 ± 0.70 L/min and mean BMI 25.1 ± 3.7 kg/m². Mean Mvd was 6.39% ± 0.71 for the 1000 nm squares, 6.01% ± 0.70 for the 500 nm and 6.37% ± 0.80 for the 250 nm. Lvd was 1.28% ± 0.03 for the 1000 nm, 1.41% ± 0.02 for the 500 nm and 1.38% ± 0.02 for the 250 nm. The mean CV of the three grids was 6.65% ± 1.15 for Mvd, with no significant differences between grids (P>0.05). Mean CV for Lvd was 13.83% ± 3.51, with a significant difference between the 1000 nm squares and the two other grids (P<0.05). The 500 nm squares grid showed the least variability between subjects. Mvd showed a positive correlation with VO2peak (r = 0.89, p < 0.05) but not with weight, height, or age. No correlations were found with Lvd. CONCLUSION: Different grid sizes have different variability in assessing skeletal muscle Mvd and Lvd. The 500x500 nm grid (240 points) was more reliable than the 1000x1000 nm grid (56 points); the 250x250 nm grid (1023 points) did not show better reliability than the 500x500 nm grid but was more time consuming. Thus, a grid with a square size of 500x500 nm seems the best option. This is particularly relevant as most grids used in the literature have either 100 or 400 points, without clear information on their square size.
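The point-counting rule in the METHODS (vd = points hitting the structure divided by total points) is straightforward to implement. A minimal sketch, assuming a hypothetical binary mask (e.g., a segmentation of mitochondria in one TEM image) and a known pixel scale; the function name and parameters are illustrative:

```python
import numpy as np

def volume_density(mask, grid_nm, nm_per_px):
    """Stereological point count: overlay a square grid of side grid_nm and
    estimate volume density as (grid points on the structure) / (all points).

    mask      -- 2D boolean array, True where the structure (M or L) is
    grid_nm   -- grid square side in nm (e.g. 1000, 500 or 250)
    nm_per_px -- image scale in nm per pixel
    """
    step = max(1, int(round(grid_nm / nm_per_px)))  # grid spacing in pixels
    ys = np.arange(step // 2, mask.shape[0], step)  # grid-line intersections
    xs = np.arange(step // 2, mask.shape[1], step)
    hits = mask[np.ix_(ys, xs)]
    return hits.mean(), hits.size  # (volume density, number of grid points)

# hypothetical usage on one 2048x2048 image at 1 nm per pixel
mask = np.zeros((2048, 2048), dtype=bool)
mask[300:500, 800:1000] = True  # stand-in for a segmented mitochondrion
for grid in (1000, 500, 250):
    vd, n_pts = volume_density(mask, grid, 1.0)
    print(f"{grid}x{grid} nm grid: vd = {vd:.4f} from {n_pts} points")
```

Averaging vd over the 20 images per biopsy then gives the per-subject estimate whose CV is compared across grid sizes.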

Relevance: 30.00%

Abstract:

We study the singular Bott-Chern classes introduced by Bismut, Gillet and Soulé. Singular Bott-Chern classes are the main ingredient to define direct images for closed immersions in arithmetic K-theory. In this paper we give an axiomatic definition of a theory of singular Bott-Chern classes, study their properties, and classify all possible theories of this kind. We identify the theory defined by Bismut, Gillet and Soulé as the only one that satisfies the additional condition of being homogeneous. We include a proof of the arithmetic Grothendieck-Riemann-Roch theorem for closed immersions that generalizes a result of Bismut, Gillet and Soulé and was already proved by Zha. This result can be combined with the arithmetic Grothendieck-Riemann-Roch theorem for submersions to extend this theorem to arbitrary projective morphisms. As a byproduct of this study we obtain two results of independent interest. First, we prove a Poincaré lemma for the complex of currents with fixed wave front set, and second we prove that certain direct images of Bott-Chern classes are closed.

Relevance: 30.00%

Abstract:

In the forensic examination of DNA mixtures, the question of how to set the total number of contributors (N) is a topic of ongoing interest. Part of the discussion gravitates around issues of bias, in particular when assessments of the number of contributors are not made prior to considering the genotypic configuration of potential donors. A further complication stems from the observation that, in some cases, there may be numbers of contributors that are incompatible with the set of alleles seen in the profile of a mixed crime stain, given the genotype of a potential contributor. In such situations, procedures that output a single, fixed number of contributors can lead to inferential impasses. Assessing the number of contributors within a probabilistic framework can help avoid such complications. Using elements of decision theory, this paper analyses two strategies for inference on the number of contributors. One procedure is deterministic and focuses on the minimum number of contributors required to 'explain' an observed set of alleles. The other procedure is probabilistic, using Bayes' theorem, and provides a probability distribution over numbers of contributors, based on the set of observed alleles as well as their respective rates of occurrence. The discussion concentrates on mixed stains of varying quality (i.e., different numbers of loci for which genotyping information is available). A so-called qualitative interpretation is pursued, since quantitative information such as peak area and height data is not taken into account. The competing procedures are compared using a standard scoring rule that penalizes the degree of divergence between a given agreed value for N, that is the number of contributors, and the actual value taken by N. Using only modest assumptions and a discussion with reference to a casework example, this paper reports on analyses using simulation techniques and graphical models (i.e., Bayesian networks) to point out that setting the number of contributors to a mixed crime stain in probabilistic terms is, for the conditions assumed in this study, preferable to a decision policy that uses categorical assumptions about N.
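The probabilistic procedure can be illustrated with a toy single-locus computation. A minimal sketch, with the simplifying assumption that the 2N alleles in the mixture are i.i.d. draws from the population allele frequencies (a real model would respect the pairing of alleles within genotypes and combine all loci); the frequencies and the flat prior on N are hypothetical. The probability that exactly the observed allele set is seen follows by inclusion-exclusion:

```python
from itertools import combinations

def p_exact_alleles(freqs_seen, n_contrib):
    """P(2N i.i.d. allele draws show exactly the observed allele set),
    by inclusion-exclusion over subsets of the observed alleles."""
    k = len(freqs_seen)
    total = 0.0
    for r in range(k + 1):
        for sub in combinations(freqs_seen, r):
            total += (-1) ** (k - r) * sum(sub) ** (2 * n_contrib)
    return total

# hypothetical frequencies of the four alleles seen at one locus
freqs = [0.12, 0.08, 0.20, 0.05]
prior = {n: 1.0 / 5 for n in range(1, 6)}  # flat prior on N = 1..5
post = {n: prior[n] * p_exact_alleles(freqs, n) for n in prior}
z = sum(post.values())
post = {n: p / z for n, p in post.items()}  # Bayes' theorem, normalized
print(post)
```

The deterministic rule would instead report the minimum N compatible with the alleles (here N = 2 for four alleles), whereas the posterior keeps the remaining uncertainty, which is what the scoring-rule comparison in the paper rewards.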

Relevance: 30.00%

Abstract:

Background: Various outcome measures (Constant score, Simple Shoulder Test [SST]) have been used to assess outcome after shoulder treatment, although none has been accepted as a universal standard. Physicians lack an objective method to reliably assess the activity of their patients in dynamic conditions. Our purpose was to clinically validate the shoulder kinematic scores given by a portable movement-analysis device, using the activities of daily living described in the SST as a reference. The secondary objective was to determine whether this device could be used to document the effectiveness of shoulder treatments (for glenohumeral osteoarthritis and rotator cuff disease) and detect early failures.

Methods: A clinical trial including 34 patients and a control group of 31 subjects over an observation period of 1 year was set up. Evaluations were made at baseline and 3, 6, and 12 months after surgery by 2 independent observers. Miniature sensors (3-dimensional gyroscopes and accelerometers) allowed kinematic scores to be computed. They were compared with the regular outcome scores: SST; Disabilities of the Arm, Shoulder and Hand; American Shoulder and Elbow Surgeons; and Constant.

Results: Good to excellent correlations (0.61-0.80) were found between kinematic and clinical scores. Significant differences were found at each follow-up in comparison with the baseline status for all the kinematic scores (P < .015). The kinematic scores were able to point out abnormal patient outcomes at the first postoperative follow-up.

Conclusion: Kinematic scores add information to the regular outcome tools. They offer an effective way to measure the functional performance of patients with shoulder pathology and have the potential to detect early treatment failures.

Level of evidence: Level II, Development of Diagnostic Criteria, Diagnostic Study. (C) 2011 Journal of Shoulder and Elbow Surgery Board of Trustees.

Relevance: 30.00%

Abstract:

The front form and the point form of dynamics are studied in the framework of predictive relativistic mechanics. The non-interaction theorem is proved when a Poincaré-invariant Hamiltonian formulation with canonical position coordinates is required.

Relevance: 30.00%

Abstract:

NifA is the transcriptional activator of the nif genes in Proteobacteria. It is usually regulated by nitrogen and oxygen, allowing biological nitrogen fixation to occur under appropriate conditions. NifA proteins have a typical three-domain structure, including a regulatory N-terminal GAF domain, which is involved in control by fixed nitrogen and not strictly required for activity; a catalytic AAA+ central domain, which catalyzes open complex formation; and a C-terminal DNA-binding domain. In Herbaspirillum seropedicae, a β-proteobacterium capable of colonizing Gramineae of agricultural importance, NifA regulation by ammonium involves its N-terminal GAF domain and the signal transduction protein GlnK. When the GAF domain is removed, the protein can still activate nif gene transcription; however, ammonium regulation is lost. In this work, we generated eight constructs carrying point mutations in H. seropedicae NifA and analyzed their effect on nifH transcription in Escherichia coli and H. seropedicae. Mutations K22V, T160E, M161V, L172R, and A215D resulted in inactive proteins. Mutations Q216I and S220I produced partially active proteins with activity control similar to wild-type NifA. However, mutation G25E, located in the GAF domain, resulted in an active protein that did not require GlnK for activity and was partially sensitive to ammonium. This suggests that G25E may affect the negative interaction between the N-terminal GAF domain and the catalytic central domain under high ammonium concentrations, thus rendering the protein constitutively active, or that G25E could lead to a conformational change comparable to that produced when GlnK interacts with the GAF domain.

Relevance: 30.00%

Abstract:

The literature on agency suggests different implications for the use of export intermediaries; however, only a few studies provide a view on import intermediaries. This thesis attempts to fill this research gap by studying import intermediaries in EU–Russia trade from a Russian industrial company's point of view. The aim is to describe import intermediation and explain the need for import intermediary companies in EU–Russia trade. The theoretical framework of this thesis originates from an article by Peng and York (2001), in which they study the performance of export intermediaries. Following their idea, this thesis applies resource-based theory, transaction cost theory and agency cost theory. The resource-based approach is used to describe an ideal import intermediary company; transaction cost theory provides a basis for understanding the benefits of using the services of import intermediary companies; and agency cost theory is applied to understand the risks the Russian industrial company faces when it decides to use the services of import intermediaries. The study is performed as a case interview with a representative of a major Russian metallurgy company. The results suggest that an ideal intermediary has the skills required specifically for the import process, saving the principal company time and money. The intermediary reduces the time the managers and staff of the principal company spend making imports possible, thus reducing salary costs and allowing the principal to concentrate on its core competencies. The benefits of using import intermediary companies are reduced transaction costs, especially salary costs, which are minimised by the effectiveness and specialisation of import intermediaries. Intermediaries are specialised in the import process and thus need less time and fewer resources to organise imports. They also help reduce fixed salary costs, because their services can be used only when needed. The risk of being misled by an intermediary is limited by competition on the import intermediary market: if an intermediary attempts fraud, it is replaced by a rival.