996 results for yield simulation
Abstract:
An affine asset pricing model in which traders have rational but heterogeneous expectations about future asset prices is developed. We use the framework to analyze the term structure of interest rates and to perform a novel three-way decomposition of bond yields into (i) average expectations about short rates, (ii) common risk premia, and (iii) a speculative component due to heterogeneous expectations about the resale value of a bond. The speculative term is orthogonal to public information in real time and therefore statistically distinct from common risk premia. Empirically, we find that the speculative component is quantitatively important, accounting for up to a percentage point of yields, even in the low-yield environment of the last decade. Furthermore, allowing for a speculative component in bond yields results in estimates of historical risk premia that are more volatile than suggested by standard affine Gaussian term structure models, which our framework nests.
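Schematically, the three-way decomposition described above can be written as follows (a sketch in our own notation, not necessarily the paper's; the bar denotes the average expectation across traders):

```latex
% Schematic three-way decomposition of the n-period yield
% (illustrative notation; the paper's exact operators may differ).
y_t^{(n)} \;=\;
\underbrace{\tfrac{1}{n}\textstyle\sum_{j=0}^{n-1}\bar{E}_t\!\left[r_{t+j}\right]}_{\text{(i) average short-rate expectations}}
\;+\; \underbrace{RP_t^{(n)}}_{\text{(ii) common risk premium}}
\;+\; \underbrace{S_t^{(n)}}_{\text{(iii) speculative component}}
```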
Abstract:
Many researchers have suggested simulation as a powerful tool for transforming the normal classroom into an authentic setting where language skills can be practised under more realistic conditions. This paper outlines the benefits of simulation in the classroom, proposes additional topics for the Third Cycle English Language National Syllabus to be discussed or simulated in class, and provides two simulation lesson plans with samples for Cape Verdean Third Cycle English Language students.
Abstract:
We present a novel numerical approach for the comprehensive, flexible, and accurate simulation of poro-elastic wave propagation in 2D polar coordinates. An important application of this method and its extensions will be the modeling of complex seismic wave phenomena in fluid-filled boreholes, which represents a major and as yet largely unresolved computational problem in exploration geophysics. In view of this, we consider a numerical mesh, which can be arbitrarily heterogeneous, consisting of two or more concentric rings representing the fluid in the center and the surrounding porous medium. The spatial discretization is based on a Chebyshev expansion in the radial direction and a Fourier expansion in the azimuthal direction, combined with a Runge-Kutta integration scheme for the time evolution. A domain decomposition method, based on the method of characteristics, is used to match the fluid-solid boundary conditions. This multi-domain approach allows for significant reductions of the number of grid points in the azimuthal direction for the inner grid domain, and thus for corresponding increases of the time step and enhancements of computational efficiency. The viability and accuracy of the proposed method have been rigorously tested and verified through comparisons with analytical solutions, as well as with results obtained with a corresponding, previously published, and independently benchmarked solution for 2D Cartesian coordinates. Finally, the proposed numerical solution also satisfies the reciprocity theorem, which indicates that the inherent singularity associated with the origin of the polar coordinate system is adequately handled.
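As a minimal sketch of the spectral building blocks mentioned above (a Chebyshev differentiation matrix for the radial direction and FFT-based differentiation for the azimuthal direction; this follows the standard Trefethen construction and is not the paper's implementation):

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix on n+1 Gauss-Lobatto points
    (Trefethen's construction)."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)           # collocation points in [-1, 1]
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))    # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                        # diagonal: rows sum to zero
    return D, x

def fourier_dtheta(f):
    """Azimuthal derivative of a periodic field sampled on [0, 2*pi):
    diagonal in wavenumber space, i.e. multiply the FFT by i*k."""
    k = np.fft.fftfreq(f.size, d=1.0 / f.size)         # integer wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))
```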
Abstract:
BACKGROUND: The quality of colon cleansing is a major determinant of the quality of colonoscopy. To our knowledge, the impact of bowel preparation on the quality of colonoscopy has not been assessed prospectively in a large multicenter study. Therefore, this study assessed the factors that determine colon-cleansing quality and the impact of cleansing quality on the technical performance and diagnostic yield of colonoscopy. METHODS: Twenty-one centers from 11 countries participated in this prospective observational study. Colon-cleansing quality was assessed on a 5-point scale and was categorized on 3 levels. The clinical indication for colonoscopy, diagnoses, and technical parameters related to colonoscopy were recorded. RESULTS: A total of 5832 patients were included in the study (48.7% men; mean age 57.6 [SD 15.9] years). Cleansing quality was lower in elderly patients and in hospitalized patients. Procedures in poorly prepared patients were longer, more difficult, and more often incomplete. The detection of polyps of any size depended on cleansing quality: odds ratio (OR) 1.73 (95% confidence interval [CI] 1.28-2.36) for intermediate-quality compared with low-quality preparation, and OR 1.46 (95% CI 1.11-1.93) for high-quality compared with low-quality preparation. For polyps >10 mm in size, the corresponding ORs were 1.0 for low-quality cleansing, 1.83 (95% CI 1.11-3.05) for intermediate-quality cleansing, and 1.72 (95% CI 1.11-2.67) for high-quality cleansing. Cancers were not detected less frequently in the case of poor preparation. CONCLUSIONS: Cleansing quality critically determines the quality, difficulty, speed, and completeness of colonoscopy, and is lower in hospitalized patients and patients with higher levels of comorbid conditions. The proportion of patients who undergo polypectomy increases with higher cleansing quality, whereas colon cancer detection does not seem to depend critically on the quality of bowel preparation.
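For illustration, an odds ratio with a Wald-type 95% CI can be computed from a 2x2 table as below (hypothetical counts; the study's ORs come from its own models, which may adjust for covariates):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, (lo, hi)

# Hypothetical example: polyp detection vs. preparation quality.
print(odds_ratio_ci(a=120, b=480, c=80, d=520))
```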
Abstract:
Nowadays, genome-wide association studies (GWAS) and genomic selection (GS) methods, which use genome-wide marker data for phenotype prediction, are of much potential interest in plant breeding. However, to our knowledge, no studies have yet been performed on the predictive ability of these methods for structured traits when using training populations with high levels of genetic diversity. One such example of a highly heterozygous, perennial species is grapevine. The present study compares the accuracy of models based on GWAS or GS alone, or in combination, for predicting simple or complex traits, linked or not with population structure. In order to explore the relevance of these methods in this context, we performed simulations using approximately 90,000 SNPs on a population of 3,000 individuals structured into three groups and corresponding to published grapevine diversity data. To estimate the parameters of the prediction models, we defined four training populations of 1,000 individuals, corresponding to these three groups and a core collection. Finally, to estimate the accuracy of the models, we also simulated four breeding populations of 200 individuals. Although prediction accuracy was low when breeding populations were too distant from the training populations, high accuracy levels were obtained using only the core collection as the training population. The highest prediction accuracy (up to 0.9) was obtained using the combined GWAS-GS model. We thus recommend using the combined prediction model and a core collection as the training population for grapevine breeding, or for other economically important crops with the same characteristics.
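A minimal genomic-selection sketch on simulated data conveys the idea (hypothetical sizes, a plain ridge-regression GS step, and a marginal-correlation scan standing in for the GWAS step; this is not the paper's combined model):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n, p = 1200, 5000
X = rng.integers(0, 3, size=(n, p)).astype(float)    # SNPs coded 0/1/2
beta = np.zeros(p)
qtl = rng.choice(p, 20, replace=False)               # 20 true QTL
beta[qtl] = rng.normal(0, 1, 20)
y = X @ beta + rng.normal(0, 2, n)                   # phenotype = genetics + noise

train, test = slice(0, 1000), slice(1000, None)

# "GWAS" step: marginal association scan to flag large-effect markers.
Xc = X[train] - X[train].mean(0)
yc = y[train] - y[train].mean()
r = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
print("true QTL among top 20 hits:", np.intersect1d(np.argsort(r)[-20:], qtl).size)

# "GS" step: ridge regression over all markers (rrBLUP-like shrinkage).
model = Ridge(alpha=1000.0).fit(X[train], y[train])
acc = np.corrcoef(model.predict(X[test]), y[test])[0, 1]
print(f"prediction accuracy: {acc:.2f}")
```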
Abstract:
We develop a general error analysis framework for the Monte Carlo simulation of densities for functionals in Wiener space. We also study variance reduction methods with the help of Malliavin derivatives. For this, we give some general heuristic principles which are applied to diffusion processes. A comparison with kernel density estimates is made.
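The Malliavin-based estimators themselves are beyond a short sketch, but the kernel-density baseline the abstract compares against looks roughly like this (a toy SDE simulated by Euler-Maruyama, followed by a Gaussian KDE of the terminal density):

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, T = 20_000, 200, 1.0
dt = T / n_steps
x = np.full(n_paths, 1.0)                    # X_0 = 1
for _ in range(n_steps):                     # toy SDE: dX = -0.5 X dt + 0.4 X dW
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)
    x += -0.5 * x * dt + 0.4 * x * dw

h = 1.06 * x.std() * n_paths ** (-1 / 5)     # Silverman's rule-of-thumb bandwidth
grid = np.linspace(x.min(), x.max(), 200)
kernels = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
density = kernels.mean(axis=1) / (h * np.sqrt(2 * np.pi))   # KDE of X_T's density
```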
Abstract:
Age data frequently display excess frequencies at round or attractive ages, such as even numbers and multiples of five. This phenomenon of age heaping has been viewed as a problem in previous research, especially in demography and epidemiology. We see it as an opportunity and propose its use as a measure of human capital that can yield comparable estimates across a wide range of historical contexts. A simulation study yields methodological guidelines for measuring and interpreting differences in age heaping, while analysis of contemporary and historical datasets demonstrates the existence of a robust correlation between age heaping and literacy at both the individual and aggregate level. To illustrate the method, we generate estimates of human capital in Europe over the very long run, which support the hypothesis of a major increase in human capital preceding the industrial revolution.
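A standard measure of age heaping in this literature is Whipple's index, computed on ages 23-62 (the paper's own measure may be a transform of it, such as the ABCC index):

```python
import numpy as np

def whipple_index(ages):
    """Whipple's index of age heaping on ages 23-62: the share of reported
    ages that are multiples of five, scaled so that 100 means no heaping
    (one fifth of ages naturally end in 0 or 5) and 500 means everyone
    reports a multiple of five."""
    a = np.asarray(ages)
    a = a[(a >= 23) & (a <= 62)]
    return 500.0 * np.mean(a % 5 == 0)

# Hypothetical sample with heavy heaping on 30, 35, 40, ...
sample = [30, 35, 35, 40, 40, 40, 37, 50, 55, 60]
print(whipple_index(sample))   # well above 100 -> strong heaping
```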
Abstract:
This paper describes a simulation package designed to estimate the annual income taxes paid by respondents of the Swiss Household Panel (SHP). In Switzerland, the 26 cantons each have their own tax system. Additionally, tax levels vary between the over 2000 municipalities and over time. The simulation package takes account of this complexity by building on existing tables of tax levels provided by the Swiss Federal Tax Administration Office. Because these are limited to a few types of households and to only 812 municipalities, they have to be extended to cover all households and municipalities. A further drawback of these tables is that they neglect several deductions. The tax simulation package fills this gap by additionally taking account of deductions for children, double-earner couples, the third pillar, and support for dependent persons, according to cantonal legislation. The resulting variable on direct taxes not only serves to calculate household income net of taxes, but can also be used as an analysis variable in its own right.
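The logic of such a package can be sketched as follows (entirely hypothetical deduction amounts and bracket table; the real package follows cantonal legislation and the Federal Tax Administration's tables):

```python
# Hypothetical deduction parameters -- not actual Swiss figures.
DEDUCTIONS = {"per_child": 6500, "double_earner": 13400, "third_pillar_max": 7000}

def taxable_income(gross, n_children=0, double_earner=False, third_pillar=0.0):
    """Apply child, double-earner, and third-pillar deductions."""
    d = n_children * DEDUCTIONS["per_child"]
    d += DEDUCTIONS["double_earner"] if double_earner else 0
    d += min(third_pillar, DEDUCTIONS["third_pillar_max"])
    return max(gross - d, 0.0)

def tax_due(income, bracket_table):
    """bracket_table: list of (upper_bound, marginal_rate), ascending."""
    tax, lower = 0.0, 0.0
    for upper, rate in bracket_table:
        tax += max(min(income, upper) - lower, 0.0) * rate
        lower = upper
    return tax

# Hypothetical cantonal bracket table (upper bound, marginal rate).
table = [(30_000, 0.02), (80_000, 0.06), (float("inf"), 0.11)]
print(tax_due(taxable_income(95_000, n_children=2, double_earner=True), table))
```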
Abstract:
The discovery of genes implicated in familial forms of Parkinson's disease (PD) has provided new insights into the molecular events leading to neurodegeneration. Clinically, patients with genetically determined PD can be difficult to distinguish from those with sporadic PD. Monogenic causes include autosomal dominantly (SNCA, LRRK2, VPS35, EIF4G1) as well as recessively (PARK2, PINK1, DJ-1) inherited mutations. Additional recessive forms of parkinsonism present with atypical signs, including very early disease onset, dystonia, dementia, and pyramidal signs. New techniques in the search for phenotype-associated genes (next-generation sequencing, genome-wide association studies) have expanded the spectrum of both monogenic PD and variants that alter the risk of developing PD. Examples of risk genes include the two lysosomal enzyme-coding genes GBA and SMPD1, which are associated with a 5-fold and 9-fold increased risk of PD, respectively. It is hoped that further knowledge of the genetic makeup of PD will allow the design of treatments that alter the course of the disease.
Abstract:
Producers continually strive for high-yielding soybeans. The state-wide average yield for Iowa is now more than 50 bu./acre. The "yield plateau" reported by many producers does not exist; it is a perception largely brought on by misuse of an oversimplified management system. High-yielding soybeans are achieved through improved and targeted management decisions. Improved agronomic decisions are critical, since soybean is very sensitive to stresses that influence its growth, development, and yield.
Abstract:
The computer code system PENELOPE (version 2008) performs Monte Carlo simulation of coupled electron-photon transport in arbitrary materials for a wide energy range, from a few hundred eV to about 1 GeV. Photon transport is simulated by means of the standard, detailed simulation scheme. Electron and positron histories are generated on the basis of a mixed procedure, which combines detailed simulation of hard events with condensed simulation of soft interactions. A geometry package called PENGEOM permits the generation of random electron-photon showers in material systems consisting of homogeneous bodies limited by quadric surfaces, i.e., planes, spheres, cylinders, etc. This report is intended not only to serve as a manual of the PENELOPE code system, but also to provide the user with the necessary information to understand the details of the Monte Carlo algorithm.
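As a schematic of what "detailed" photon simulation means, sample a free path from an exponential law and then choose the interaction type by its relative probability (toy constants and a deliberately simplified scatter step; PENELOPE itself uses tabulated, energy-dependent cross-sections and full angular distributions):

```python
import numpy as np

rng = np.random.default_rng(2)
MU_TOTAL = 0.2          # total attenuation coefficient, 1/cm (hypothetical)
P_ABSORB = 0.3          # probability an interaction is photoelectric absorption
SLAB = 10.0             # slab thickness, cm

def transmitted(n_photons=50_000):
    """Fraction of photons that traverse a homogeneous slab."""
    count = 0
    for _ in range(n_photons):
        x = 0.0
        while True:
            x += rng.exponential(1.0 / MU_TOTAL)    # free path ~ Exp(mu)
            if x >= SLAB:
                count += 1                          # escaped the slab
                break
            if rng.random() < P_ABSORB:
                break                               # absorbed: history ends
            # else: scattered; a real code would sample a new direction
            # and energy here -- we keep forward flight for brevity.
    return count / n_photons

print(transmitted())
```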
Abstract:
We perform direct numerical simulations of drainage by solving the Navier-Stokes equations in the pore space and employing the Volume of Fluid (VOF) method to track the evolution of the fluid-fluid interface. After demonstrating that the method is able to deal with large viscosity contrasts and to model the transition from stable flow to viscous fingering, we focus on the definition of macroscopic capillary pressure. When the fluids are at rest, the difference between inlet and outlet pressures and the difference between the intrinsic phase-averaged pressures both coincide with the capillary pressure. However, when the fluids are in motion, these quantities are dominated by viscous forces. In this case, only a definition based on the variation of the interfacial energy provides an accurate measure of the macroscopic capillary pressure and allows the viscous and capillary pressure components to be separated.
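A common energy-based definition of this kind can be written as follows (a schematic in our notation, neglecting changes in solid-fluid interfacial energy; the paper's exact formulation may differ):

```latex
% Quasi-static work balance: the work done displacing the non-wetting
% phase equals the change in fluid-fluid interfacial energy.
% sigma: interfacial tension, A_nw: fluid-fluid interfacial area,
% V_nw: non-wetting phase volume.
p_c \;\approx\; \sigma \,\frac{\mathrm{d}A_{nw}}{\mathrm{d}V_{nw}}
```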
Abstract:
Background: EEG is the cornerstone of epilepsy diagnostics and is mandatory for determining the underlying epilepsy syndrome (e.g., focal vs. idiopathic generalized). However, its potential as an imaging tool is still underrecognized. In the present study, we aim to determine the prerequisites for maximal benefit of electric source imaging (ESI) in localizing the irritative zone in patients with focal epilepsy. Methods: 150 patients suffering from focal epilepsy and with a minimum of 1 year of post-operative follow-up were studied prospectively by reviewers blinded to the underlying diagnosis and outcome. We evaluated the influence of two important factors on the sensitivity and specificity of ESI: the number of electrodes (low resolution, LR-ESI: <30 vs. high resolution, HR-ESI: 128-256 electrodes) and the use of the individual MRI (i-MRI) vs. a template MRI (t-MRI) as head model. Results: ESI had a sensitivity of 85% and a specificity of 87% when HR-ESI with i-MRI was used. Using LR-ESI, sensitivity decreased to 68%, or even to 57% when only t-MRI was available. The sensitivity of HR-ESI/i-MRI compared favorably with those of MRI (76%), PET (69%), and ictal/interictal SPECT (64%). Interpretation: This study of a large patient group shows excellent sensitivity and specificity of ESI if 128 EEG channels or more are used and if the results are co-registered to the patient's individual MRI. Localization precision is as high as or even higher than that of established brain imaging techniques, providing excellent cost-effectiveness in epilepsy evaluation. HR-ESI appears to be a valuable additional imaging tool, given that larger electrode arrays are easily and rapidly applied with modern EEG equipment and that structural MRI is nearly always available for these patients.
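Sensitivity and specificity follow directly from a confusion matrix; the sketch below reproduces the headline HR-ESI/i-MRI figures from hypothetical counts (not the study's actual table):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts chosen to match the reported 85% / 87%.
sens, spec = sensitivity_specificity(tp=85, fn=15, tn=87, fp=13)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")
```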