Resumo:
We consider the two Higgs doublet model extension of the standard model in the limit where all physical scalar particles are very heavy, too heavy, in fact, to be experimentally produced in forthcoming experiments. The symmetry-breaking sector can thus be described by an effective chiral Lagrangian. We obtain the values of the coefficients of the O(p4) operators relevant to the oblique corrections and investigate to what extent some nondecoupling effects may remain at low energies. A comparison with recent CERN LEP data shows that this model is indistinguishable from the standard model with one doublet and with a heavy Higgs boson, unless the scalar mass splittings are large.
Resumo:
We study the exact ground state of the two-dimensional random-field Ising model as a function of both the external applied field B and the standard deviation σ of the Gaussian random-field distribution. The equilibrium evolution of the magnetization consists of a sequence of discrete jumps. These are very similar to the avalanche behavior found in the out-of-equilibrium version of the same model with local relaxation dynamics. We compare the statistical distributions of magnetization jumps and find that both exhibit power-law behavior for the same value of σ. The corresponding exponents are compared.
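As a toy illustration of the jump-size analysis described above, the sketch below generates a synthetic magnetization curve with heavy-tailed jumps and estimates a power-law exponent from the jump sizes with a maximum-likelihood (Hill-type) estimator. The curve and all parameters are invented for illustration and are unrelated to the actual random-field Ising ground-state data.

```python
import math
import random

def jump_sizes(magnetization):
    """Return the sizes of the discrete upward jumps in a magnetization curve."""
    return [b - a for a, b in zip(magnetization, magnetization[1:]) if b > a]

# Toy magnetization curve: heavy-tailed jump sizes (illustrative only).
random.seed(0)
m, curve = 0.0, [0.0]
while m < 1.0:
    m += min(random.paretovariate(1.5) * 1e-3, 1.0 - m)
    curve.append(m)

sizes = jump_sizes(curve)
# Crude maximum-likelihood (Hill-type) power-law exponent estimate.
s_min = min(sizes)
alpha = 1.0 + len(sizes) / sum(math.log(s / s_min) for s in sizes)
print(f"{len(sizes)} jumps, estimated exponent ~ {alpha:.2f}")
```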
Resumo:
Using Monte Carlo simulations we study the dynamics of three-dimensional Ising models with nearest-neighbour, next-nearest-neighbour, and four-spin (plaquette) interactions. During coarsening, such models develop growing energy barriers, which leads to very slow dynamics at low temperature. As already reported, the model with only the plaquette interaction exhibits some of the features characteristic of ordinary glasses: strong metastability of the supercooled liquid, a weak increase of the characteristic length under cooling, stretched-exponential relaxation, and aging. The addition of two-spin interactions, in general, destroys this behavior: the liquid phase loses metastability and the slow-dynamics regime terminates well below the melting transition, which is presumably related to a certain corner-rounding transition. However, for a particular choice of interaction constants, for which the ground state is strongly degenerate, our simulations suggest that the slow-dynamics regime extends up to the melting transition. The analysis of these models leads us to conjecture that in the four-spin Ising model domain walls lose their tension at the glassy transition and are essentially tensionless in the glassy phase.
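The kind of simulation summarised above can be sketched in miniature. The following is a minimal Metropolis sampler for a two-dimensional toy version of a model with pair and plaquette interactions (the paper studies three dimensions); the couplings, lattice size and temperature are illustrative choices, not the paper's values.

```python
import math
import random

L, J2, J4, T = 8, 0.0, 1.0, 0.5  # pure plaquette model; set J2 != 0 for pair coupling
random.seed(1)
spin = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

def local_energy(i, j):
    """Energy of every pair and plaquette term containing site (i, j), periodic b.c."""
    s, e = spin[i][j], 0.0
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        e -= J2 * s * spin[(i + di) % L][(j + dj) % L]
    for ci, cj in ((i, j), (i - 1, j), (i, j - 1), (i - 1, j - 1)):  # 4 plaquettes
        e -= J4 * (spin[ci % L][cj % L] * spin[(ci + 1) % L][cj % L]
                   * spin[ci % L][(cj + 1) % L] * spin[(ci + 1) % L][(cj + 1) % L])
    return e

def total_energy():
    e = 0.0
    for i in range(L):
        for j in range(L):
            e -= J2 * spin[i][j] * (spin[(i + 1) % L][j] + spin[i][(j + 1) % L])
            e -= J4 * (spin[i][j] * spin[(i + 1) % L][j]
                       * spin[i][(j + 1) % L] * spin[(i + 1) % L][(j + 1) % L])
    return e

def metropolis_sweep():
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        dE = -2.0 * local_energy(i, j)  # flipping the spin negates every term containing it
        if dE <= 0.0 or random.random() < math.exp(-dE / T):
            spin[i][j] *= -1

e0 = total_energy()
for _ in range(200):
    metropolis_sweep()
e1 = total_energy()
print(e0, "->", e1)
```

At this low temperature the quench relaxes the random start toward a low-energy configuration, which is the regime where the growing energy barriers discussed above set in.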
Resumo:
The genomic era has revealed that the large repertoire of observed animal phenotypes is dependent on changes in the expression patterns of a finite number of genes, which are mediated by a plethora of transcription factors (TFs) with distinct specificities. The dimerization of TFs can also increase the complexity of a genetic regulatory network manifold, by combining a small number of monomers into dimers with distinct functions. Therefore, studying the evolution of these dimerizing TFs is vital for understanding how complexity increased during animal evolution. We focus on the second largest family of dimerizing TFs, the basic-region leucine zipper (bZIP), and infer when it expanded and how bZIP DNA-binding and dimerization functions evolved during the major phases of animal evolution. Specifically, we classify the metazoan bZIPs into 19 families and confirm the ancient nature of at least 13 of these families, predating the split of the cnidaria. We observe fixation of a core dimerization network in the last common ancestor of protostomes-deuterostomes. This was followed by an expansion of the number of proteins in the network, but no major dimerization changes in interaction partners, during the emergence of vertebrates. In conclusion, the bZIPs are an excellent model with which to understand how DNA binding and protein interactions of TFs evolved during animal evolution.
Resumo:
OBJECTIVE: This study aimed to assess the impact of individual comorbid conditions, as well as the weight assignment, predictive properties and discriminating power of the Charlson Comorbidity Index (CCI), on outcome in patients with acute coronary syndrome (ACS). METHODS: A prospective multicentre observational study (AMIS Plus Registry) from 69 Swiss hospitals with 29 620 ACS patients enrolled from 2002 to 2012. The main outcome measures were in-hospital and 1-year follow-up mortality. RESULTS: Of the patients, 27% were female (age 72.1 ± 12.6 years) and 73% were male (64.2 ± 12.9 years). 46.8% had comorbidities, and these patients were less likely to receive guideline-recommended drug therapy and reperfusion. Heart failure (adjusted OR 1.88; 95% CI 1.57 to 2.25), metastatic tumours (OR 2.25; 95% CI 1.60 to 3.19), renal disease (OR 1.84; 95% CI 1.60 to 2.11) and diabetes (OR 1.35; 95% CI 1.19 to 1.54) were strong predictors of in-hospital mortality. In this population, the CCI weighted a history of prior myocardial infarction higher (1 instead of -0.4, 95% CI -1.2 to 0.3 points) but heart failure (1 instead of 3.7, 95% CI 2.6 to 4.7) and renal disease (2 instead of 3.5, 95% CI 2.7 to 4.4) lower than the benchmark, in which all comorbidities, age and gender were used as predictors. However, the model with CCI and age had discrimination identical to this benchmark (areas under the receiver operating characteristic curves were both 0.76). CONCLUSIONS: Comorbidities greatly influenced clinical presentation, therapies received and the outcome of patients admitted with ACS. Heart failure, diabetes, renal disease and metastatic tumours had a major impact on mortality. The CCI seems to be an appropriate prognostic indicator for in-hospital and 1-year outcomes in ACS patients. ClinicalTrials.gov Identifier: NCT01305785.
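The discrimination comparison reported above rests on the area under the ROC curve. A minimal sketch of how an AUC can be computed from risk scores, via the Mann-Whitney rank statistic, is shown below; the scores and outcome labels are invented toy numbers, not registry data.

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank statistic:
    the fraction of (positive, negative) pairs ranked correctly, ties counting half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy risk scores and in-hospital mortality labels (illustrative numbers only).
labels = [1, 0, 0, 1, 0, 1, 0, 0]
cci_age_score = [0.8, 0.2, 0.6, 0.4, 0.1, 0.9, 0.3, 0.5]
print(f"AUC = {auc(cci_age_score, labels):.2f}")  # -> AUC = 0.87
```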
Resumo:
General Summary Although the chapters of this thesis address a variety of issues, the principal aim is common: to test economic ideas in an international context. The intention has been to supply empirical findings using the largest suitable data sets and the most appropriate empirical techniques. This thesis can roughly be divided into two parts: the first, corresponding to the first two chapters, investigates the link between trade and the environment; the second, the last three chapters, deals with economic geography. Environmental problems are omnipresent in the daily press nowadays, and one argument put forward is that globalisation causes severe environmental damage by reallocating investment and production to countries with less stringent environmental regulations. A measure of the amplitude of this undesirable effect is provided in the first part. The third and fourth chapters explore the productivity effects of agglomeration. The computed spillover effects between different sectors indicate how cluster formation might enhance productivity. The last chapter is not about how to better understand the world but how to measure it, and it was just a great pleasure to work on it. "The Economist" writes every week about the impressive population and economic growth observed in China and India, and everybody agrees that the world's center of gravity has shifted. But by how much, and how fast, did it shift? An answer is given in the last part, which proposes a global measure for the location of world production and allows us to visualize the results in Google Earth. A short summary of each of the five chapters is provided below.
The first chapter, entitled "Unraveling the World-Wide Pollution-Haven Effect", investigates the relative strength of the pollution haven effect (PH: comparative advantage in dirty products due to differences in environmental regulation) and the factor endowment effect (FE: comparative advantage in dirty, capital-intensive products due to differences in endowments). We compute the pollution content of imports using the IPPS coefficients (for three pollutants, namely biological oxygen demand, sulphur dioxide and toxic pollution intensity, for all manufacturing sectors) provided by the World Bank and use a gravity-type framework to isolate the two above-mentioned effects. Our study covers 48 countries, classified into 29 Southern and 19 Northern countries, and uses the lead content of gasoline as a proxy for environmental stringency. For North-South trade we find significant PH and FE effects going in the expected, opposite directions and of similar magnitude. However, when looking at world trade, the effects become very small because of the high share of North-North trade, for which we have no a priori expectations about the signs of these effects. Popular fears about the trade effects of differences in environmental regulations might therefore be exaggerated. The second chapter is entitled "Is Trade Bad for the Environment? Decomposing Worldwide SO2 Emissions, 1990-2000". First we construct a novel and large database containing reasonable estimates of SO2 emission intensities per unit of labour that vary across countries, periods and manufacturing sectors. We then use these original data (covering 31 developed and 31 developing countries) to decompose worldwide SO2 emissions into the three well-known dynamic effects (scale, technique and composition). We find that the positive scale effect (+9.5%) and the negative technique effect (-12.5%) are the main driving forces of emission changes.
Composition effects between countries and sectors are smaller, both negative and of similar magnitude (-3.5% each). Given that trade matters via the composition effects, this means that trade reduces total emissions. In a first experiment, we next construct a hypothetical world where no trade happens, i.e. each country produces its imports at home and no longer produces its exports. The difference between the actual world and this no-trade world allows us (neglecting price effects) to compute a static first-order trade effect. This effect increases total world emissions because it allows, on average, dirty countries to specialize in dirty products. However, it is smaller in 2000 (3.5%) than in 1990 (10%), in line with the negative dynamic composition effect identified in the previous exercise. We then propose a second experiment, comparing effective emissions with the maximum or minimum possible level of SO2 emissions. These hypothetical levels are obtained by reallocating labour across sectors within each country (under country-employment and world industry-production constraints). Using linear programming techniques, we show that emissions are 90% lower than in the worst case, but that they could still be reduced by another 80% if emissions were minimized. The findings of this chapter accord with those of chapter one, in the sense that trade-induced composition effects do not seem to be the main source of pollution, at least in the recent past. Turning to the economic geography part of this thesis, the third chapter, entitled "A Dynamic Model with Sectoral Agglomeration Effects", is a short note that derives the theoretical model estimated in the fourth chapter. The derivation is directly based on the multi-regional framework of Ciccone (2002) but extends it to include sectoral disaggregation and a temporal dimension.
This allows us to formally write present productivity as a function of past productivity and other contemporaneous and past control variables. The fourth chapter, entitled "Sectoral Agglomeration Effects in a Panel of European Regions", takes the final equation derived in chapter three to the data. We investigate the empirical link between density and labour productivity based on regional data (245 NUTS-2 regions over the period 1980-2003). Using dynamic panel techniques allows us to control for the possible endogeneity of density and for region-specific effects. We find a positive long-run elasticity of labour productivity with respect to density of about 13%. When using data at the sectoral level, it seems that positive cross-sector and negative own-sector externalities are present in manufacturing, while financial services display strong positive own-sector effects. The fifth and last chapter, entitled "Is the World's Economic Center of Gravity Already in Asia?", computes the world's economic, demographic and geographic centers of gravity for 1975-2004 and compares them. Based on data for the largest cities in the world and using the physical concept of center of mass, we find that the world's economic center of gravity is still located in Europe, even though there is a clear shift towards Asia. To sum up, this thesis makes three main contributions. First, it provides new estimates of orders of magnitude for the role of trade in the globalisation-and-environment debate. Second, it computes reliable and disaggregated elasticities for the effect of density on labour productivity in European regions. Third, it allows us, in a geometrically rigorous way, to track the path of the world's economic center of gravity.
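The center-of-mass construction used in the last chapter can be sketched as follows: each city is treated as a point mass on the sphere, the weighted mean is taken in three-dimensional Cartesian coordinates, and the result is projected back to the surface. The city coordinates and weights below are purely illustrative, not the thesis data.

```python
import math

def center_of_gravity(cities):
    """Weighted centre of mass of points on a unit sphere, projected back to the
    surface and returned as (latitude, longitude) in degrees."""
    x = y = z = w = 0.0
    for lat, lon, weight in cities:
        la, lo = math.radians(lat), math.radians(lon)
        x += weight * math.cos(la) * math.cos(lo)
        y += weight * math.cos(la) * math.sin(lo)
        z += weight * math.sin(la)
        w += weight
    x, y, z = x / w, y / w, z / w
    return (math.degrees(math.atan2(z, math.hypot(x, y))),
            math.degrees(math.atan2(y, x)))

# Illustrative only: (lat, lon, economic weight) for three stylised "cities".
cities = [(51.5, -0.1, 3.0), (40.7, -74.0, 3.0), (35.7, 139.7, 2.0)]
print(center_of_gravity(cities))
```

Note that when the weighted points are spread over a wide range of longitudes, the chord centroid lies near the Earth's axis and projects to a high latitude, which is why such centers of gravity tend to sit far north of every contributing city.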
Resumo:
We consider a Potts model diluted by fully frustrated Ising spins. The model corresponds to a fully frustrated Potts model with variables having an integer absolute value and a sign. This model presents precursor phenomena of a glass transition in the high-temperature region. We show that the onset of these phenomena can be related to a thermodynamic transition. Furthermore, this transition can be mapped onto a percolation transition. We numerically study the phase diagram in two dimensions (2D) for this model with frustration and without disorder and we compare it to the phase diagram of (i) the model with frustration and disorder and (ii) the ferromagnetic model. Introducing a parameter that connects the three models, we generalize the exact expression of the ferromagnetic Potts transition temperature in 2D to the other cases. Finally, we estimate the dynamic critical exponents related to the Potts order parameter and to the energy.
Resumo:
All derivations of the one-dimensional telegrapher's equation based on the persistent random walk model assume a constant speed of signal propagation. We generalize the model here to allow for a variable propagation speed and study several limiting cases in detail. We also show the connections of this model with anomalous diffusion behavior and with inertial dichotomous processes.
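A persistent random walk of the kind underlying the telegrapher's equation can be simulated in a few lines; the position-dependent speed below is one hypothetical example of a variable propagation speed, not a case analysed in the paper.

```python
import random

def persistent_walk(steps, dt, rate, speed, seed=0):
    """Persistent random walk: the velocity keeps its sign, flipping with
    probability rate*dt per step; `speed` is a function of position, so a
    variable propagation speed is allowed."""
    rng = random.Random(seed)
    x, sign = 0.0, 1
    for _ in range(steps):
        if rng.random() < rate * dt:
            sign = -sign
        x += sign * speed(x) * dt
    return x

# A constant speed reproduces the classical telegraph process; a
# position-dependent speed is the generalised (illustrative) case.
const = persistent_walk(1000, 0.01, 2.0, lambda x: 1.0)
variable = persistent_walk(1000, 0.01, 2.0, lambda x: 1.0 / (1.0 + abs(x)))
print(const, variable)
```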
Resumo:
BACKGROUND: Mantle cell lymphoma accounts for 6% of all B-cell lymphomas and is generally incurable. It is characterized by the translocation t(11;14) leading to cyclin D1 over-expression. Cyclin D1 is downstream of the mammalian target of rapamycin threonine kinase and can be effectively blocked by mammalian target of rapamycin inhibitors. We set out to examine the single agent activity of the orally available mammalian target of rapamycin inhibitor everolimus in a prospective, multicenter trial in patients with relapsed or refractory mantle cell lymphoma (NCT00516412). DESIGN AND METHODS: Eligible patients who had received a maximum of three prior lines of chemotherapy were given everolimus 10 mg for 28 days (one cycle) for a total of six cycles or until disease progression. The primary endpoint was the best objective response. Adverse reactions, progression-free survival and molecular response were secondary endpoints. RESULTS: Thirty-six patients (35 evaluable) were enrolled and treatment was generally well tolerated with Common Terminology Criteria grade ≥ 3 adverse events (>5%) including anemia (11%), thrombocytopenia (11%) and neutropenia (8%). The overall response rate was 20% (95% CI: 8-37%) with two complete remissions and five partial responses; 49% of the patients had stable disease. At a median follow-up of 6 months, the median progression-free survival was 5.5 months (95% CI: 2.8-8.2) overall and 17.0 (6.4-23.3) months for 18 patients who received six or more cycles of treatment. Three patients achieved a lasting complete molecular response, as assessed by polymerase chain reaction analysis of peripheral blood. CONCLUSIONS: Everolimus as a single agent is well tolerated and has anti-lymphoma activity in relapsed or refractory mantle cell lymphoma. Further studies of everolimus in combination with chemotherapy or as a single agent for maintenance treatment are warranted.
Resumo:
PURPOSE: The objective of this experiment was to establish a continuous postmortem circulation in the vascular system of porcine lungs and to evaluate the pulmonary distribution of the perfusate. This research forms part of a broader project on the revascularization of Thiel-embalmed specimens, a technique that would enable teaching anatomy, practicing surgical procedures and doing research under lifelike circumstances. METHODS: After cannulation of the pulmonary trunk and the left atrium, the vascular system was flushed with paraffinum perliquidum (PP) through a heart-lung machine. A continuous circulation was then established using red PP, during which perfusion parameters were measured. The distribution of contrast-containing PP in the pulmonary circulation was visualized on computed tomography. Finally, the amount of leakage from the vascular system was calculated. RESULTS: Reperfusion of the vascular system was maintained for 37 min. The flow rate ranged between 80 and 130 ml/min throughout the experiment, with acceptable perfusion pressures (range: 37-78 mm Hg). Computed tomography imaging and 3D reconstruction revealed a diffuse vascular distribution of PP and a vascularization ratio decreasing in the cranial direction. A self-limiting leak (66.8% of the circulating volume) into the tracheobronchial tree due to vessel rupture was also measured. CONCLUSIONS: PP enables circulation in an isolated porcine lung model with an acceptable pressure-flow relationship, resulting in excellent recruitment of the vascular system. Despite these promising results, rupture of vessel walls may cause leaks. Further exploration of the perfusion capacities of PP in other organs is necessary. Eventually, this could lead to the development of reperfused Thiel-embalmed human bodies, which would have several applications.
Resumo:
We study the static properties of the Little model with asymmetric couplings. We show that the thermodynamics of this model coincides with that of the Sherrington-Kirkpatrick model, and we compute the main finite-size corrections to the difference of the free energy between these two models and to some clarifying order parameters. Our results agree with numerical simulations. Numerical results are presented for the symmetric Little model, which show that the same conclusions are also valid in this case.
Resumo:
How do trade and financial openness affect macroeconomic volatility? The existing literature, both empirical and theoretical, has not yet reached a consensus. This article studies the question using a micro-founded model of two symmetric countries with endogenous firm entry. The analysis is carried out for three economic regimes with different levels of international integration: a closed economy, financial autarky and full integration. Several degrees of trade openness, in the form of home bias in demand, are considered, and the economy can be hit by shocks to labour productivity and to innovation. The model concludes that macroeconomic uncertainty, represented mainly by the volatility of consumption, output and the international terms of trade, depends on the degree of openness and on the type of shock.
Resumo:
This paper analyzes the interiority of the optimal population growth rate in a two-period overlapping generations model with endogenous fertility. Using Cobb-Douglas utility and production functions, we show that introducing a cost of raising children allows for the existence of an interior global maximum in the planner's problem, contrary to the exogenous fertility case.
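The interiority argument can be illustrated numerically: with a logarithmic objective, a Cobb-Douglas-style output term and a linear child-rearing cost, welfare diverges to minus infinity at both ends of the feasible range of the growth rate, so the maximum must be interior. The functional forms and parameter values below are a stylised stand-in, not the paper's exact planner's problem.

```python
import math

# Stylised planner objective (illustrative, not the paper's model): utility from
# consumption plus utility from children, where each child costs a fraction phi
# of output and faster population growth n dilutes steady-state capital.
A, alpha, s, delta, phi, beta = 1.0, 0.3, 0.2, 0.05, 0.15, 0.5

def welfare(n):
    k = (s * A / (n + delta)) ** (1.0 / (1.0 - alpha))  # steady-state capital per head
    y = A * k ** alpha                                   # Cobb-Douglas output
    c = (1.0 - phi * n) * y                              # child-rearing cost
    return math.log(c) + beta * math.log(n)

# Welfare -> -inf as n -> 0 (no children) and as n -> 1/phi (no consumption),
# so the grid-search maximum lies strictly inside the feasible interval.
grid = [i / 1000.0 for i in range(1, int(1000 / phi))]
best = max(grid, key=welfare)
print(f"interior optimum at n ~ {best:.3f}")
```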
Resumo:
This article introduces the Dyadic Coping Inventory (DCI; Bodenmann, 2008) and aims (1) to investigate the reliability and aspects of the validity of the Italian and French versions of the DCI, and (2) to replicate its factor structure and reliabilities using a new Swiss German sample. Based on 216 German-, 378 Italian- and 198 French-speaking participants, the factor structure of the original German inventory could be replicated by principal components analysis in all three groups after excluding two items from the Italian and French versions. The latter proved as reliable as the German version, with the exception of the low reliabilities of negative dyadic coping in the French group. Confirmatory factor analyses provided additional support for delegated dyadic coping and evaluation of dyadic coping. Intercorrelations among scales were similar across all three language groups, with a few exceptions. Previous findings could be replicated in all three groups, showing that aspects of dyadic coping were more strongly related to marital quality than to dyadic communication. The use of the dyadic coping scales in the actor-partner interdependence model, the common fate model, and the mutual influence model is discussed.
Resumo:
AIM: To improve the usefulness of biological monitoring, the specific factors responsible for interindividual variability should be identified and their contributions quantified. Among these, age is an easily identifiable determinant that could have an important impact on biological variability. MATERIALS AND METHODS: A compartmental toxicokinetic model, developed in previous studies for a series of metallic and organic compounds, was applied to the description of age differences. Physiological and metabolic parameters for a young male, based on Reference Man data and taken from preceding studies, were modified to account for age using the available information on age differences. RESULTS: Numerical simulation using the kinetic model with the modified parameters indicates in some cases important differences due to age. The expected changes are mostly of the order of 10-20%, but differences of up to 50% were observed in some cases. CONCLUSION: These differences appear to depend on the chemical and on the biological entity considered. Further work should be done to improve our estimates of these parameters, for example by considering their uncertainty and variability.
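The order of magnitude quoted above can be illustrated with a one-compartment kinetic sketch: constant intake with first-order elimination, where an age-related reduction in clearance shifts the body burden by a double-digit percentage. The parameter values are hypothetical and are not taken from the study.

```python
import math

def body_burden(intake, clearance, volume, t):
    """One-compartment model with constant intake I and first-order elimination
    k = CL/V: concentration C(t) = (I / CL) * (1 - exp(-k t))."""
    k = clearance / volume
    return (intake / clearance) * (1.0 - math.exp(-k * t))

# Hypothetical parameters: an older subject with ~30% lower clearance.
young = body_burden(intake=1.0, clearance=2.0, volume=40.0, t=24.0)
older = body_burden(intake=1.0, clearance=1.4, volume=40.0, t=24.0)
print(f"relative difference: {(older - young) / young:.0%}")  # -> 16%
```

With these (invented) numbers a 30% clearance reduction translates into a roughly 16% higher concentration at 24 h, i.e. exactly the 10-20% order of change the abstract reports for most cases.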