927 results for Poisson model with common shocks


Relevance:

100.00%

Abstract:

We develop and quantitatively implement a dynamic general equilibrium model with labor market matching and endogenous determination of the job destruction rate. The model produces a close match with data on job creation and destruction. Cyclical fluctuations in the job destruction rate serve to magnify the effects of productivity shocks on output, as well as making the effects much more persistent. Interactions between the labor and capital markets, mediated by the rental rate of capital, play the central role in propagating shocks.

Relevance:

100.00%

Abstract:

This paper presents a small open economy model with capital accumulation and without commitment to repay debt. The optimal debt contract specifies debt relief following bad shocks and debt increase following good shocks, and brings first-order benefits if the country's borrowing constraint is binding. Countries with less capital (and thus higher marginal productivity of capital) have a higher debt-GDP ratio, are more likely to default on uncontingent bonds, require higher debt relief after bad shocks, and pay a higher spread over Treasury. The debt relief prescribed by the optimal contract following the interest rate hikes of 1980-81 is more than half of the debt forgiveness obtained by the main Latin American countries through the Brady agreements.

Relevance:

100.00%

Abstract:

The main properties of realistic models for manganites are studied using analytic mean-field approximations and computational numerical methods, focusing on the two-orbital model with electrons interacting through Jahn-Teller (JT) phonons and/or Coulombic repulsions. Analyzing the model including both interactions by the combination of the mean-field approximation and the exact diagonalization method, it is argued that the spin-charge-orbital structure in the insulating phase of the purely JT-phononic model with a large Hund coupling J(H) is not qualitatively changed by the inclusion of the Coulomb interactions. As an important application of the present mean-field approximation, the CE-type antiferromagnetic state, the charge-stacked structure along the z axis, and (3x^2-r^2)/(3y^2-r^2)-type orbital ordering are successfully reproduced based on the JT-phononic model with large J(H) for the half-doped manganite, in agreement with recent Monte Carlo simulation results. Topological arguments and the relevance of the Heisenberg exchange among localized t(2g) spins explain why the inclusion of the nearest-neighbor Coulomb interaction does not destroy the charge-stacking structure. It is also verified that the phase-separation tendency is observed in both the purely JT-phononic (large J(H)) and purely Coulombic models in the vicinity of the hole-undoped region, as long as realistic hopping matrices are used. This highlights the qualitative similarities of both approaches and the relevance of mixed-phase tendencies in the context of manganites. In addition, the rich and complex phase diagram of the two-orbital Coulombic model in one dimension is presented. Our results provide robust evidence that Coulombic and JT-phononic approaches to manganites are not qualitatively different ways to carry out theoretical calculations; rather, they share a variety of common features.

Relevance:

100.00%

Abstract:

We analyse the production of multileptons in the simplest supergravity model with bilinear violation of R parity at the Fermilab Tevatron. Despite the small R-parity violating couplings needed to generate the neutrino masses indicated by current atmospheric neutrino data, the lightest supersymmetric particle is unstable and can decay inside the detector. This leads to a phenomenology quite distinct from that of the R-parity conserving scenario. We quantify by how much the supersymmetric multilepton signals differ from the R-parity conserving expectations, displaying our results in the m0 ⊗ m1/2 plane. We show that the presence of bilinear R-parity violating interactions enhances the supersymmetric multilepton signals over most of the parameter space, especially at moderate and large m0. © SISSA/ISAS 2003.

Relevance:

100.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

100.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

100.00%

Abstract:

The Caribbean region remains highly vulnerable to the impacts of climate change. To assess the social and economic consequences of climate change for the region, the Economic Commission for Latin America and the Caribbean (ECLAC) has developed the Climate Impact Assessment Model (ECLAC-CIAM), a tool that can simultaneously assess multiple sectoral climate impacts for the Caribbean as a whole and for individual countries. To achieve this goal, an Integrated Assessment Model (IAM) with a Computable General Equilibrium core was developed, comprising three modules executed sequentially. The first module defines the type and magnitude of economic shocks on the basis of a climate change scenario; the second is a global Computable General Equilibrium (CGE) model with a special regional and industrial classification; and the third processes the output of the CGE model to produce more disaggregated results. The model can produce several economic estimates, but the current default results report the percentage change in real national income for individual Caribbean states, which provides a simple measure of welfare impacts. With some modifications, the model can also be used to consider the effects of single sectoral shocks (land, labour, capital, and tourism) on the percentage change in real national income. Ultimately, the model is envisioned as an evolving tool for assessing the impact of climate change in the Caribbean and as a guide to policy responses with respect to adaptation strategies.
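The three sequential modules described above can be sketched as a simple pipeline. This is a hypothetical illustration of the architecture only: the function names, shock magnitudes, and weights are invented for the example and are not taken from ECLAC-CIAM.

```python
# Hypothetical sketch of a three-stage pipeline like the one described above.
# All names and numbers here are illustrative assumptions, not the actual model.

def build_shocks(scenario):
    """Module 1: map a climate scenario onto sectoral economic shocks (%)."""
    severity = scenario["temperature_rise_c"]
    return {"tourism": -2.0 * severity, "land": -1.5 * severity,
            "labour": -0.5 * severity, "capital": -0.8 * severity}

def run_cge(shocks):
    """Module 2: stand-in for the global CGE core; returns % change in real
    national income as a crude weighted aggregation of factor shocks."""
    weights = {"tourism": 0.4, "land": 0.2, "labour": 0.25, "capital": 0.15}
    return sum(weights[k] * v for k, v in shocks.items())

def disaggregate(income_change, regions):
    """Module 3: post-process CGE output into per-country results."""
    return {r: income_change for r in regions}

scenario = {"temperature_rise_c": 1.5}
result = disaggregate(run_cge(build_shocks(scenario)), ["Jamaica", "Barbados"])
```

The point of the sequential design is that each stage can be re-run independently, e.g. swapping in a different climate scenario without touching the CGE core.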

Relevance:

100.00%

Abstract:

PURPOSE. To evaluate electrically evoked phosphene thresholds (EPTs) in healthy subjects and in patients with retinal disease and to assess repeatability and possible correlations with common ophthalmologic tests. METHODS. In all, 117 individuals participated: healthy subjects (n = 20) and patients with retinitis pigmentosa (RP, n = 30), Stargardt's disease (STG, n = 14), retinal artery occlusion (RAO, n = 20), nonarteritic anterior ischemic optic neuropathy (NAION, n = 16), and primary open-angle glaucoma (POAG, n = 17). EPTs were determined at 3, 6, 9, 20, 40, 60, and 80 Hz with 5+5-ms biphasic current pulses using DTL electrodes. Subjects were examined twice (test-retest range: 1-6 weeks). An empirical model was developed to describe the current-frequency relationship of EPTs. Visual acuity, visual field (kinetic + static), electrophysiology (RP, RAO, STG: Ganzfeld-electroretinography [ERG]/multifocal-ERG; POAG: pattern-ERG; NAION: VEP), slit-lamp biomicroscopy, fundus examination, and tonometry were assessed. RESULTS. EPTs varied between disease groups (20 Hz: healthy subjects: 0.062 +/- 0.038 mA; STG: 0.102 +/- 0.097 mA; POAG: 0.127 +/- 0.09 mA; NAION: 0.244 +/- 0.126 mA; RP: 0.371 +/- 0.223 mA; RAO: 0.988 +/- 1.142 mA). In all groups EPTs were lowest at 20 Hz. In patients with retinal diseases and across all frequencies EPTs were significantly higher than those in healthy subjects, except in STG at 20 Hz (P = 0.09) and 40 Hz (P = 0.17). Test-retest difference at 20 Hz was 0.006 mA in the healthy group and 0.003-0.04 mA in disease groups. CONCLUSIONS. Considering the fast, safe, and reliable practicability of EPT testing, this test might be used more often under clinical circumstances. Determination of EPTs could be potentially useful in elucidation of the progress of ophthalmologic diseases, either in addition to standard clinical assessment or under conditions in which these standard tests cannot be used meaningfully. 
(ClinicalTrials.gov number, NCT00804102.) (Invest Ophthalmol Vis Sci. 2012; 53: 7440-7448) DOI:10.1167/iovs.12-9612

Relevance:

100.00%

Abstract:

A detailed numerical simulation of ethanol turbulent spray combustion on a rounded jet flame is presented in this article. The focus is to propose a robust mathematical model with relatively low-complexity submodels to reproduce the main characteristics of the coupling between both phases, such as the turbulence modulation, turbulent droplet dissipation, and the evaporative cooling effect. A RANS turbulence model is implemented. Special features of the model include an Eulerian–Lagrangian procedure under fully two-way coupling and a modified flame-sheet model with a joint mixture fraction–enthalpy β-PDF. Reasonable agreement between measured and computed mean profiles of gas-phase temperature and droplet size distributions is achieved. Deviations found between measured and predicted mean velocity profiles are attributed to the turbulent combustion modeling adopted.

Relevance:

100.00%

Abstract:

The ability to represent the transport and fate of an oil slick at the sea surface is a formidable task. An accurate numerical representation of oil evolution and movement in seawater can greatly improve the ability to assess and reduce oil-spill pollution risk. Wind blowing on the sea surface generates ocean waves, which give rise to transport of pollutants by wave-induced velocities known as Stokes drift velocities. The Stokes drift transport associated with a random gravity wave field is a function of the wave energy spectrum that statistically describes the field, and this spectrum can be provided by a wave numerical model. Therefore, an accurate numerical simulation of oil motion in seawater requires coupling the oil-spill model with a wave forecasting model. In this thesis work, the coupling of the MEDSLIK-II oil-spill numerical model with the SWAN wind-wave numerical model has been performed and tested. To improve the knowledge of the wind-wave model and its numerical performance, a preliminary sensitivity study of different SWAN model configurations has been carried out. The SWAN model results have been compared with the ISPRA directional buoys located at Venezia, Ancona, and Monopoli, and the best model settings have been identified. Then, high resolution currents provided by a relocatable model (SURF) have been used to force both the wave and the oil-spill models, and the coupling with the SWAN model has been tested. The trajectories of four drifters have been simulated using JONSWAP parametric spectra or SWAN directional-frequency energy output spectra, and the results have been compared with the real paths traveled by the drifters.
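To illustrate the wave-induced transport mechanism mentioned above, the sketch below evaluates the standard linear-theory surface Stokes drift for a single deep-water wave component. This formula is a textbook result, not taken from MEDSLIK-II or SWAN; the function name and parameters are illustrative.

```python
import math

# Illustrative sketch: surface Stokes drift of one deep-water wave component,
# using the standard linear-theory result u_s(z) = a^2 * omega * k * exp(2*k*z)
# with the deep-water dispersion relation omega^2 = g * k (z <= 0 below surface).

G = 9.81  # gravitational acceleration, m/s^2

def stokes_drift(amplitude, period, depth_below_surface=0.0):
    """Stokes drift speed (m/s) at a given depth for a deep-water wave."""
    omega = 2.0 * math.pi / period   # angular frequency, rad/s
    k = omega ** 2 / G               # deep-water wavenumber, rad/m
    return amplitude ** 2 * omega * k * math.exp(-2.0 * k * depth_below_surface)

# A 1 m amplitude, 8 s swell drifts surface particles at a few cm/s:
u_surface = stokes_drift(amplitude=1.0, period=8.0)
```

For a random sea, the total drift follows from integrating the analogous per-component contribution over the directional-frequency energy spectrum, which is precisely the quantity a spectral model such as SWAN supplies to the oil-spill model.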

Relevance:

100.00%

Abstract:

BACKGROUND: Loss-of-function mutations in SCN5A, the gene encoding the Na(v)1.5 Na+ channel, are associated with inherited cardiac conduction defects and Brugada syndrome, both of which exhibit variable phenotypic penetrance of conduction defects. We investigated the mechanisms of this heterogeneity in a mouse model with heterozygous targeted disruption of Scn5a (Scn5a(+/-) mice) and compared our results to those obtained in patients with loss-of-function mutations in SCN5A. METHODOLOGY/PRINCIPAL FINDINGS: Based on ECG, 10-week-old Scn5a(+/-) mice were divided into 2 subgroups, one displaying severe ventricular conduction defects (QRS interval > 18 ms) and one a mild phenotype (QRS ≤ 18 ms; QRS in wild-type littermates: 10-18 ms). The phenotypic difference persisted with aging. At 10 weeks, the Na+ channel blocker ajmaline prolonged the QRS interval similarly in both groups of Scn5a(+/-) mice. In contrast, in old mice (>53 weeks), the ajmaline effect was larger in the severely affected subgroup. These data matched the clinical observations on patients with SCN5A loss-of-function mutations with either severe or mild conduction defects. Ventricular tachycardia developed in 5/10 old severely affected Scn5a(+/-) mice but not in mildly affected ones. Correspondingly, symptomatic SCN5A-mutated Brugada patients had more severe conduction defects than asymptomatic patients. Old severely affected Scn5a(+/-) mice, but not mildly affected ones, showed extensive cardiac fibrosis. Mildly affected Scn5a(+/-) mice had similar Na(v)1.5 mRNA but higher Na(v)1.5 protein expression, and moderately larger I(Na) current, than severely affected Scn5a(+/-) mice. As a consequence, action potential upstroke velocity was more decreased in severely affected Scn5a(+/-) mice than in mildly affected ones. CONCLUSIONS: Scn5a(+/-) mice show similar phenotypic heterogeneity as SCN5A-mutated patients. In Scn5a(+/-) mice, phenotype severity correlates with wild-type Na(v)1.5 protein expression.

Relevance:

100.00%

Abstract:

Primate multisensory object perception involves distributed brain regions. To investigate the network character of these regions of the human brain, we applied data-driven group spatial independent component analysis (ICA) to a functional magnetic resonance imaging (fMRI) data set acquired during a passive audio-visual (AV) experiment with common object stimuli. We labeled three group-level independent component (IC) maps as auditory (A), visual (V), and AV, based on their spatial layouts and activation time courses. The overlap between these IC maps served as definition of a distributed network of multisensory candidate regions including superior temporal, ventral occipito-temporal, posterior parietal and prefrontal regions. During an independent second fMRI experiment, we explicitly tested their involvement in AV integration. Activations in nine out of these twelve regions met the max-criterion (A < AV > V) for multisensory integration. Comparison of this approach with a general linear model-based region-of-interest definition revealed its complementary value for multisensory neuroimaging. In conclusion, we estimated functional networks of uni- and multisensory functional connectivity from one dataset and validated their functional roles in an independent dataset. These findings demonstrate the particular value of ICA for multisensory neuroimaging research and using independent datasets to test hypotheses generated from a data-driven analysis.
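The core operation behind the group analysis above, recovering statistically independent components from linear mixtures, can be sketched on synthetic signals. This is a minimal illustration using scikit-learn's FastICA on two toy sources standing in for "auditory" and "visual" time courses; the real fMRI group ICA pipeline involves far more preprocessing and back-reconstruction steps.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Minimal sketch of ICA: two independent toy sources are linearly mixed into
# two observed signals, then unmixed. Source shapes are arbitrary stand-ins.
rng = np.random.RandomState(0)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)                       # smooth oscillatory source
s2 = np.sign(np.sin(3 * t))              # square-wave source
S = np.c_[s1, s2] + 0.05 * rng.standard_normal((2000, 2))

A = np.array([[1.0, 0.5], [0.5, 1.0]])   # mixing matrix
X = S @ A.T                              # observed mixtures ("voxel" signals)

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(X)         # estimated independent components
```

Note that ICA recovers components only up to permutation, sign, and scale, which is why component maps in the study had to be labeled post hoc (as A, V, and AV) from their spatial layouts and time courses.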

Relevance:

100.00%

Abstract:

This paper proposes Poisson log-linear multilevel models to investigate population variability in sleep state transition rates. We specifically propose a Bayesian Poisson regression model that is more flexible, more scalable to larger studies, and more easily fit than other approaches in the literature. We further use hierarchical random effects to account for pairings of individuals and repeated measures within those individuals, as comparing diseased to non-diseased subjects while minimizing bias is of epidemiologic importance. We estimate essentially non-parametric piecewise constant hazards and smooth them, and allow for time-varying covariates and segment-of-the-night comparisons. The Bayesian Poisson regression is justified through a re-derivation of a classical algebraic likelihood equivalence between Poisson regression with a log(time) offset and survival regression assuming piecewise constant hazards. This relationship allows us to synthesize two methods currently used to analyze sleep transition phenomena: stratified multi-state proportional hazards models and log-linear models with GEE for transition counts. An example data set from the Sleep Heart Health Study is analyzed.
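The classical likelihood equivalence invoked above can be checked numerically. The sketch below (illustrative toy data, not the paper's model) shows that the Poisson log-likelihood with a log(time) offset and the piecewise-constant-hazard survival log-likelihood differ only by terms free of the rate parameter, so both are maximized at the same hazard estimate.

```python
import numpy as np
from math import lgamma

# Toy check of the equivalence: counts d_i observed over times at risk t_i.
d = np.array([2, 0, 1, 3])           # transition counts per person-interval
t = np.array([4.0, 2.5, 3.0, 5.0])   # time at risk in each interval

log_fact_d = np.array([lgamma(di + 1.0) for di in d])  # log(d_i!)

def poisson_loglik(log_rate):
    """Poisson log-likelihood with offset log(t): mean mu_i = lambda * t_i."""
    mu = np.exp(log_rate) * t
    return float(np.sum(d * np.log(mu) - mu - log_fact_d))

def survival_loglik(log_rate):
    """Piecewise-exponential survival log-likelihood with constant hazard."""
    lam = np.exp(log_rate)
    return float(np.sum(d * np.log(lam) - lam * t))

# Both are maximized by the same hazard: total events / total time at risk.
lam_hat = d.sum() / t.sum()
```

Evaluating `poisson_loglik(b) - survival_loglik(b)` at several values of `b` returns the same constant, since the offset and factorial terms do not involve the rate.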

Relevance:

100.00%

Abstract:

PURPOSE: To compare two techniques used to create a large animal model of venous valve incompetence. MATERIALS AND METHODS: To achieve vein dilatation as the primary cause of valve incompetence, common carotid artery-jugular vein (JV) fistulas were created and optional filters were placed into the JV of sheep. Altogether, nine inferior vena cava filters were placed in three sheep in two stages. Six filters were placed caudal to the most caudal JV valve in three sheep and removed 6 weeks later. Then, three filters were placed across the most caudal valve in two sheep with competent valves and removed 3 weeks later. A common carotid artery-JV fistula was created in three sheep and followed up for 1-3 weeks. Ascending and descending venograms were obtained to determine the JV sizes and the function of their valves. The JVs removed at necropsy were studied with venoscopy. RESULTS: Only one of the six JVs with filters caudal to the most caudal valve had incompetent valves after filter removal at 6 weeks. In addition, only one of three JVs with the filter across the valve had incompetent valves after filter removal at 3 weeks. At 1-3-week follow-up of the group with common carotid artery-JV fistula, all three JVs had incompetent valves in the cephalad vein portion, but only one JV had an incompetent valve in its caudal portion. At venoscopy, the incompetent valves showed various degrees of damage ranging from shortening to the destruction of valve leaflets. CONCLUSION: Dilation of the valve annulus with a removable vena cava filter failed to produce valve incompetence. The promising results with the common carotid artery-JV fistula justify further detailed research.

Relevance:

100.00%

Abstract:

In numerous intervention studies and education field trials, random assignment to treatment occurs in clusters rather than at the level of observation. This departure from random assignment of individual units may be due to logistics, political feasibility, or ecological validity. Data within the same cluster or grouping are often correlated. Applying traditional regression techniques, which assume independence between observations, to clustered data produces consistent parameter estimates; however, such estimators are often inefficient compared to methods that incorporate the clustered nature of the data into the estimation procedure (Neuhaus 1993). Multilevel models, also known as random effects or random components models, can be used to account for the clustering of data by estimating higher level (group) as well as lower level (individual) variation. Designing a study in which the unit of observation is nested within higher level groupings requires determining sample sizes at each level. This study investigates the design and analysis of various sampling strategies for a 3-level repeated measures design on the parameter estimates when the outcome variable of interest follows a Poisson distribution.

Results of the study suggest that second order PQL estimation produces the least biased estimates in the 3-level multilevel Poisson model, followed by first order PQL and then second and first order MQL. The MQL estimates of both fixed and random parameters are generally satisfactory when the level 2 and level 3 variation is less than 0.10. However, as the higher level error variance increases, the MQL estimates become increasingly biased. If convergence of the estimation algorithm is not obtained by the PQL procedure and the higher level error variance is large, the estimates may be significantly biased; in this case, bias correction techniques such as bootstrapping should be considered as an alternative procedure.

For larger sample sizes, structures with 20 or more units sampled at levels with normally distributed random errors produced more stable estimates with less sampling variance than structures with an increased number of level 1 units. For small sample sizes, sampling fewer units at the level with Poisson variation produces less sampling variation; however, this criterion is no longer important when sample sizes are large.

Neuhaus J (1993). "Estimation Efficiency and Tests of Covariate Effects with Clustered Binary Data." Biometrics, 49, 989-996.
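Why cluster structure must enter the estimation procedure can be seen in a short simulation. The sketch below (illustrative values, not the study's PQL/MQL fitting) generates counts with a cluster-level random intercept and shows the resulting overdispersion: the marginal variance substantially exceeds the marginal mean, violating the plain Poisson assumption that the two are equal.

```python
import numpy as np

# Simulate a 2-level Poisson model: lambda_ij = exp(beta0 + u_j), with
# cluster random intercepts u_j ~ N(0, sigma^2). Values are illustrative.
rng = np.random.RandomState(42)
n_clusters, per_cluster = 200, 50
beta0, sigma = 1.0, 0.5

u = rng.normal(0.0, sigma, size=n_clusters)        # level-2 random intercepts
rates = np.exp(beta0 + np.repeat(u, per_cluster))  # one rate per observation
y = rng.poisson(rates)                             # clustered Poisson counts

mean_y, var_y = y.mean(), y.var()
# Under a plain Poisson model var == mean; the shared cluster effect
# inflates the variance well above the mean (overdispersion).
```

Ignoring this structure, e.g. fitting an ordinary Poisson regression to `y`, would understate standard errors, which is the motivation for the multilevel estimators (PQL, MQL) compared in the study.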