933 results for Calvo type model
Abstract:
We construct static soliton solutions with non-zero Hopf topological charges in a theory that extends the Skyrme-Faddeev model with an additional quartic term in derivatives. We use an axially symmetric ansatz based on toroidal coordinates and solve the resulting two coupled nonlinear partial differential equations in two variables by a successive over-relaxation method. We construct numerical solutions with Hopf charge up to 4. The solutions exhibit interesting behavior under changes of a special combination of the coupling constants of the quartic terms.
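To make the numerical method concrete, here is a minimal sketch of the successive over-relaxation (SOR) iteration applied to a toy problem, the 2D Laplace equation on a unit square, rather than to the paper's coupled Skyrme-Faddeev equations; the grid size, boundary values, relaxation parameter omega, and tolerance are illustrative assumptions.

```python
import numpy as np

def sor_laplace(n=50, omega=1.8, tol=1e-8, max_iter=20000):
    """Solve the 2D Laplace equation on the unit square by SOR.

    Stand-in model problem: u = 1 on the top edge, u = 0 on the other
    edges; n, omega, and tol are illustrative choices.
    """
    u = np.zeros((n, n))
    u[0, :] = 1.0  # top boundary condition
    for it in range(max_iter):
        max_change = 0.0
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                # Gauss-Seidel update of the 5-point stencil ...
                gs = 0.25 * (u[i+1, j] + u[i-1, j] + u[i, j+1] + u[i, j-1])
                # ... over-relaxed by the factor omega
                new = (1 - omega) * u[i, j] + omega * gs
                max_change = max(max_change, abs(new - u[i, j]))
                u[i, j] = new
        if max_change < tol:
            return u, it
    return u, max_iter

u, sweeps = sor_laplace()
print(f"converged after {sweeps} sweeps")
```

Choosing omega between 1 and 2 over-corrects each Gauss-Seidel update and typically cuts the number of sweeps substantially; omega = 1 recovers plain Gauss-Seidel.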
Abstract:
We consider a four-dimensional field theory with target space CP(N), which generalizes the usual Skyrme-Faddeev model defined on CP(1). We show that it possesses an integrable sector presenting an infinite number of local conservation laws, which are associated with the hidden symmetries of the zero curvature representation of the theory in loop space. We construct an infinite class of exact solutions for that integrable submodel in which the fields are meromorphic functions of the combinations (x1 + i x2) and (x3 + x0) of the Cartesian coordinates of four-dimensional Minkowski space-time. Among those solutions are static vortices and also vortices with waves traveling along them at the speed of light. The energy per unit length of the vortices reveals an interesting and intricate interaction between the vortices and the waves.
Abstract:
An exactly solvable quantum field theory (QFT) model of Lee type is constructed to study how neutrino flavor eigenstates are created through interactions and how the localization properties of neutrinos follow from the parent particle that decays. The two-particle states formed by the neutrino and the accompanying charged lepton can be calculated exactly, as can their creation probabilities. We show that the coherent creation of neutrino flavor eigenstates follows from the negligible contribution of neutrino masses to their creation probabilities. On the other hand, it is shown that it is not possible to associate a well-defined flavor to coherent superpositions of charged leptons.
Abstract:
We introduce a Skyrme-type, four-dimensional Euclidean field theory made of a triplet of scalar fields n⃗, taking values on the sphere S2, and an additional real scalar field φ, which is dynamical only on a three-dimensional surface embedded in R4. Using a special ansatz we reduce the 4d non-linear equations of motion to linear ordinary differential equations, which lead to the construction of an infinite number of exact soliton solutions with vanishing Euclidean action. The theory possesses a mass scale which fixes the size of the solitons in a way that differs from Derrick's scaling arguments. The model may be relevant to the study of the low-energy limit of pure SU(2) Yang-Mills theory.
Abstract:
The unconditional expectation of social welfare is often used to assess alternative macroeconomic policy rules in applied quantitative research. It is shown that it is generally possible to derive a linear-quadratic problem that approximates the exact non-linear problem in which the unconditional expectation of the objective is maximised and the steady state is distorted. Thus, the measure of policy performance is a linear combination of second moments of economic variables, which is relatively easy to compute numerically and can be used to rank alternative policy rules. The approach is applied to a simple Calvo-type model under various monetary policy rules.
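As a hedged illustration of how such a second-moment welfare measure can be computed, the sketch below evaluates the unconditional expected quadratic loss for a hypothetical linearized law of motion x_{t+1} = A x_t + B eps_t by solving a discrete Lyapunov equation; the matrices A and B and the loss weights W are invented for the example, not taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Hypothetical linearized law of motion x_{t+1} = A x_t + B eps_t,
# with x = (inflation, output gap); all numbers are illustrative only.
A = np.array([[0.7, 0.1],
              [0.0, 0.5]])
B = np.array([[0.3],
              [1.0]])
# Quadratic loss x' W x, e.g. weights on inflation and output variance.
W = np.diag([1.0, 0.25])

# The unconditional covariance solves the discrete Lyapunov equation
# Sigma = A Sigma A' + B B' (A must be stable, as it is here).
Sigma = solve_discrete_lyapunov(A, B @ B.T)

# The unconditional expected loss is a linear combination of second
# moments: E[x' W x] = trace(W Sigma).
loss = np.trace(W @ Sigma)
print(f"unconditional expected loss: {loss:.4f}")
```

Ranking alternative policy rules then amounts to recomputing trace(W Sigma) under each rule's implied law of motion.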
Abstract:
This paper aims to contribute to the research agenda on the sources of price stickiness, showing that the adoption of nominal price rigidity may be an optimal reaction by firms to consumers' behavior, even if firms have no adjustment costs. Under standard, broadly accepted assumptions about the behavior of economic agents, we show that competition among firms can lead to the adoption of sticky prices as a (sub-game perfect) equilibrium strategy. We introduce the concept of a consumption-centers model economy in which there are several complete markets. Moreover, we weaken some traditional assumptions used in standard monetary policy models by assuming that households have imperfect information about the inefficient time-varying cost shocks faced by the firms, e.g. those regarding inefficient equilibrium output levels under flexible prices. The timing of events is assumed to be such that, at every period, consumers have access to the actual prices prevailing in the market only after choosing a particular consumption center. Since such choices under uncertainty may decrease the expected utilities of risk-averse consumers, competitive firms adopt some degree of price stickiness in order to minimize price uncertainty and "attract more customers".
Abstract:
Less is known about social welfare objectives when it is costly to change prices, as in Rotemberg (1982), than for Calvo-type models. We derive a quadratic approximate welfare function around a distorted steady state for the costly price adjustment model. We highlight the similarities to and differences from the Calvo setup. Both models imply inflation and output stabilization goals. It is explained why the degree of distortion in the economy influences inflation aversion in the Rotemberg framework in a way that differs from the Calvo setup.
Abstract:
This paper analyses the effects of tariffs on an international economy with a monopolistic sector with two firms, located in two countries, each producing a homogeneous good both for home consumption and for export to the other, identical country. We consider a game among governments and firms. First, each government imposes a tariff on imports; then we consider two types of move by the firms: simultaneous (Cournot-type model) and sequential (Stackelberg-type model) decisions. We also compare the results obtained in each model.
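A minimal sketch of the two timing assumptions, under invented linear-demand parameters: inverse demand P = a - bQ, a common marginal cost c, and a tariff t raising the foreign firm's effective cost in the home market. The closed-form quantities below are the textbook Cournot and Stackelberg solutions, not the paper's specific model.

```python
# Illustrative linear-demand duopoly with an import tariff t: the home
# firm produces at marginal cost c, the foreign firm sells in the home
# market at effective cost c + t. All parameter values are hypothetical.

def cournot(a, b, c_home, c_foreign):
    """Simultaneous-move (Cournot) equilibrium quantities."""
    q_home = (a - 2 * c_home + c_foreign) / (3 * b)
    q_foreign = (a - 2 * c_foreign + c_home) / (3 * b)
    return q_home, q_foreign

def stackelberg(a, b, c_leader, c_follower):
    """Sequential-move (Stackelberg) equilibrium, home firm leading."""
    q_leader = (a + c_follower - 2 * c_leader) / (2 * b)
    # Follower best-responds to the leader's committed quantity.
    q_follower = (a - c_follower - b * q_leader) / (2 * b)
    return q_leader, q_follower

a, b, c, t = 10.0, 1.0, 2.0, 1.0   # demand P = a - b*Q; tariff t
print("Cournot:    ", cournot(a, b, c, c + t))
print("Stackelberg:", stackelberg(a, b, c, c + t))
```

Comparing the two outputs for the same tariff shows the usual first-mover advantage: the leader expands output relative to the Cournot benchmark while the follower contracts.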
Abstract:
Computed tomography (CT) is an imaging technique in which interest has grown steadily since it came into use in the early 1970s. Today it has become an extensively used modality because of its ability to produce accurate diagnostic images. However, even if a direct benefit to patient healthcare is attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionising radiation on the population. Among those negative effects, one of the major remaining risks is the development of cancers associated with exposure to diagnostic X-ray procedures. In order to ensure that the benefit-risk ratio remains in favour of the patient, it is necessary to make sure that the delivered dose leads to the proper diagnosis without producing unnecessarily high-quality images. This optimisation scheme is already an important concern for adult patients, but it must become an even greater priority when examinations are performed on children or young adults, in particular in follow-up studies requiring several CT procedures over the patient's life. Children and young adults are indeed more sensitive to radiation because of their faster metabolism, and harmful consequences are more likely to occur because of a younger patient's longer life expectancy. The recent introduction of iterative reconstruction algorithms, designed to substantially reduce dose, is certainly a major achievement in CT evolution, but it has also created difficulties in assessing the quality of the images produced with those algorithms. The goal of the present work was to propose a strategy for investigating the potential of iterative reconstructions to reduce dose without compromising the ability to answer the diagnostic questions. The major difficulty lies in having a clinically relevant way to estimate image quality. To ensure the choice of pertinent image quality criteria, this work was performed in close collaboration with radiologists. The work began by tackling the characterisation of image quality in musculoskeletal examinations. We focused, in particular, on the behaviour of image noise and spatial resolution when iterative image reconstruction was used. The analysis of these physical parameters allowed radiologists to adapt their acquisition and reconstruction protocols while knowing what loss of image quality to expect. This work also dealt with the loss of low-contrast detectability associated with dose reduction, a major concern in abdominal investigations. Knowing that alternatives to classical Fourier-space metrics had to be used to assess image quality, we focused on the use of mathematical model observers. Our experimental parameters determined the type of model to use. Ideal model observers were applied to characterise image quality when purely objective results about signal detectability were sought, whereas anthropomorphic model observers were used in a more clinical context, when the results had to be compared with the eye of a radiologist, thus taking advantage of their incorporation of elements of the human visual system. This work confirmed that the use of model observers makes it possible to assess image quality with a task-based approach, which in turn establishes a bridge between medical physicists and radiologists. It also demonstrated that statistical iterative reconstructions have the potential to reduce the delivered dose without impairing the quality of the diagnosis. Among the different types of iterative reconstructions, model-based ones offer the greatest potential, since images produced with this modality can still lead to an accurate diagnosis even when acquired at very low dose. This work has also clarified the role of medical physicists in CT imaging: the standard metrics remain important for assessing a unit's compliance with legal requirements, but model observers are the way forward for optimising imaging protocols.
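As one concrete, hedged example of an anthropomorphic model observer, the sketch below implements a channelized Hotelling observer with difference-of-Gaussians channels and estimates a detectability index d' on synthetic low-contrast images; the channel profiles, white-noise background, signal shape, and all numbers are assumptions for illustration, not the observers or CT data used in this work.

```python
import numpy as np

def dog_channels(n, sigmas=(2.0, 4.0, 8.0, 16.0)):
    """Difference-of-Gaussians channels, a common anthropomorphic choice.

    Returns an (n*n, n_channels) matrix; sigma values are illustrative.
    """
    y, x = np.mgrid[:n, :n] - (n - 1) / 2.0
    r2 = x**2 + y**2
    chans = []
    for s1, s2 in zip(sigmas[:-1], sigmas[1:]):
        g = np.exp(-r2 / (2 * s1**2)) - np.exp(-r2 / (2 * s2**2))
        chans.append((g / np.linalg.norm(g)).ravel())
    return np.stack(chans, axis=1)

def cho_dprime(imgs_absent, imgs_present, U):
    """Channelized Hotelling observer detectability index d'."""
    v0 = imgs_absent.reshape(len(imgs_absent), -1) @ U   # channel outputs
    v1 = imgs_present.reshape(len(imgs_present), -1) @ U
    dv = v1.mean(axis=0) - v0.mean(axis=0)
    # Pooled intra-class channel covariance.
    S = 0.5 * (np.cov(v0, rowvar=False) + np.cov(v1, rowvar=False))
    return float(np.sqrt(dv @ np.linalg.solve(S, dv)))

# Synthetic task: low-contrast Gaussian disc in white noise.
rng = np.random.default_rng(0)
n, n_img = 64, 200
y, x = np.mgrid[:n, :n] - (n - 1) / 2.0
signal = 0.5 * np.exp(-(x**2 + y**2) / (2 * 3.0**2))
absent = rng.normal(0.0, 1.0, (n_img, n, n))
present = rng.normal(0.0, 1.0, (n_img, n, n)) + signal
U = dog_channels(n)
print(f"d' = {cho_dprime(absent, present, U):.2f}")
```

Re-running the estimate on image sets reconstructed at different dose levels is the kind of task-based comparison the abstract describes.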
Abstract:
We study the problem of measuring the uncertainty of CGE (or RBC)-type model simulations associated with parameter uncertainty. We describe two approaches for building confidence sets on model endogenous variables. The first uses a standard Wald-type statistic. The second assumes that a confidence set (sampling or Bayesian) is available for the free parameters, from which confidence sets for the endogenous variables are derived by a projection technique. The latter has two advantages: first, the validity of the confidence set is not affected by model nonlinearities; second, we can easily build simultaneous confidence intervals for an unlimited number of variables. We study conditions under which these confidence sets take the form of intervals and show that they can be implemented using standard methods for solving CGE models. We present an application to a CGE model of the Moroccan economy to study the effects of policy-induced increases in transfers from Moroccan expatriates.
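A minimal sketch of the projection technique under invented numbers: given a 95% Wald ellipsoid for two free parameters, the confidence interval for an endogenous variable y = g(theta) is obtained by minimising and maximising g over the ellipsoid. Here g, theta_hat, and V are hypothetical stand-ins for solving the CGE model.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

# Hypothetical point estimate and covariance of two free parameters.
theta_hat = np.array([1.0, 0.5])
V = np.array([[0.04, 0.01],
              [0.01, 0.09]])
V_inv = np.linalg.inv(V)

def g(theta):
    """Illustrative nonlinear mapping from parameters to an outcome."""
    return theta[0] * np.exp(theta[1])

# 95% Wald ellipsoid: (theta - theta_hat)' V^-1 (theta - theta_hat) <= c.
c = chi2.ppf(0.95, df=len(theta_hat))
ellipsoid = {"type": "ineq",
             "fun": lambda th: c - (th - theta_hat) @ V_inv @ (th - theta_hat)}

# Projection: the CI for g(theta) is [min g, max g] over the ellipsoid.
lo = minimize(g, theta_hat, constraints=[ellipsoid], method="SLSQP")
hi = minimize(lambda th: -g(th), theta_hat, constraints=[ellipsoid],
              method="SLSQP")
print(f"projected 95% CI: [{lo.fun:.3f}, {-hi.fun:.3f}]")
```

Because the interval is the image of the whole parameter set, its coverage is inherited from the parameter confidence set however nonlinear g is, and projecting several g's from the same set yields simultaneous intervals.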
Abstract:
A two-sector Ramsey-type model of growth is developed to investigate the relationship between agricultural productivity and economy-wide growth. The framework takes into account the peculiarities of agriculture both in production (reliance on a fixed natural resource base) and in consumption (the life-sustaining role and low income elasticity of food demand). The transitional dynamics of the model establish that when preferences respect Engel's law, the level and growth rate of agricultural productivity influence the speed of capital accumulation. A calibration exercise shows that a small difference in agricultural productivity has drastic implications for the rate and pattern of growth of the economy. Hence, low agricultural productivity can form a bottleneck limiting growth, because high food prices result in a low saving rate.
Abstract:
Topological frustration in an energetically unfrustrated off-lattice model of the helical protein fragment B of protein A from Staphylococcus aureus was investigated. This Gō-type model exhibited thermodynamic and kinetic signatures of a well-designed two-state folder, with concurrent collapse and folding transitions and single-exponential kinetics at the transition temperature. Topological frustration is determined, in the absence of energetic frustration, by the distribution of Fersht φ values. Topologically unfrustrated systems present a unimodal distribution sharply peaked at intermediate φ, whereas highly frustrated systems display a bimodal distribution peaked at low and high φ values. The distribution of φ values in protein A was determined both thermodynamically and kinetically. Both methods yielded a unimodal distribution centered at φ = 0.3 with tails extending to low and high φ values, indicating the presence of a small amount of topological frustration. The contacts with high φ values were located in the turn regions between helices I and II and between helices II and III, suggesting that these hairpins are in large part required in the transition state. Our results are in good agreement with all-atom simulations of protein A, as well as with lattice simulations of a three-letter-code 27-mer (which can be compared with a 60-residue helical protein). The relatively broad unimodal distributions of φ values obtained from the all-atom simulations and from the minimalist model for the same native fold suggest that the structure of the transition-state ensemble is determined mostly by the protein topology and not by energetic frustration.
Abstract:
Zero-valent iron (ZVI) has been extensively used as a reactive medium for the reduction of Cr(VI) to Cr(III) in permeable reactive barriers. The kinetic rate depends strongly on the superficial oxidation of the iron particles used, and preliminary washing of the ZVI increases the rate. The reaction has previously been modelled using pseudo-first-order kinetics, which is inappropriate for a heterogeneous reaction. We assumed a shrinking-particle-type model in which the kinetic rate is proportional to the available iron surface area, to the initial volume of solution, and to the chromium concentration raised to a power α, which is the order of the chemical reaction occurring at the surface. We assumed α = 2/3 based on the analogy with shrinking-particle models with spherical symmetry. Kinetic studies were performed in order to evaluate the suitability of this approach. The influence of the following parameters was studied experimentally: initial available surface area, chromium concentration, temperature, and pH. The assumed order of the reaction was confirmed. In addition, the rate constant was calculated from data obtained under different operating conditions. Digital pictures of the iron balls were taken periodically, and image processing allowed us to establish the time evolution of their size distribution.
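A hedged sketch of the shrinking-particle rate law described above: the Cr(VI) concentration decays at a rate proportional to the remaining iron surface area and to C^(2/3), with the area tied to the unreacted iron through uniform spherical shrinking. The rate constant, volumes, and the assumed 1:1 Fe:Cr stoichiometry are illustrative, not the paper's fitted values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative shrinking-particle kinetics for Cr(VI) reduction by ZVI.
# All parameter values below are assumptions for the sketch.
k = 5e-6        # rate constant (illustrative units)
V = 1.0e-3      # solution volume, m^3 (1 L)
C0 = 0.5        # initial Cr(VI) concentration, mol/m^3
n_fe0 = 0.05    # initial moles of iron
S0 = 0.02       # initial total particle surface area, m^2

def rhs(t, y):
    C = max(y[0], 0.0)
    # Iron consumed tracks chromium reduced (assumed 1:1 stoichiometry).
    n_fe = max(n_fe0 - (C0 - C) * V, 0.0)
    # Uniformly shrinking spheres: S scales as (remaining volume)^(2/3).
    S = S0 * (n_fe / n_fe0) ** (2.0 / 3.0)
    # Rate proportional to surface area and to C^alpha with alpha = 2/3.
    return [-(k / V) * S * C ** (2.0 / 3.0)]

sol = solve_ivp(rhs, (0.0, 3600.0), [C0])
print(f"Cr(VI) after 1 h: {sol.y[0, -1]:.3f} mol/m^3")
```

Fitting k to concentration-time data recorded under different surface areas, temperatures, and pH values is the kind of check the kinetic studies describe.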
Abstract:
In this note we investigate the dynamics of a one-dimensional Keller-Segel-type model on the half-line. In contrast to the classical configuration, the chemical production term is located on the boundary. We prove, under suitable assumptions, the following dichotomy, which is reminiscent of the two-dimensional Keller-Segel system: solutions are global if the mass is below the critical mass, they blow up in finite time above the critical mass, and they converge to some equilibrium at the critical mass. Entropy techniques are presented which aim at providing quantitative convergence results for the subcritical case. The note is completed with a brief introduction to a more realistic (still one-dimensional) model.
Abstract:
We present an experimental study of the premartensitic and martensitic phase transitions in a Ni2MnGa single crystal using ultrasonic techniques. The effects of an applied magnetic field and of uniaxial compressive stress have been investigated; both are found to substantially modify the elastic and magnetic behavior of the alloy. These experimental findings are a consequence of magnetoelastic effects. The measured magnetic and vibrational behavior agrees with the predictions of a recently proposed Landau-type model [A. Planes et al., Phys. Rev. Lett. 79, 3926 (1997)] that incorporates magnetoelastic coupling as a key ingredient.