878 results for Random telegraph noise (RTN)


Relevance: 20.00%

Abstract:

Includes: Sweet bird that shunn'st the noise of folly / Haendel, comp.; Mme Melba, S; with orchestra and flute; O lovely night / Ronald, comp.; Mme Melba, S; with orchestra

Relevance: 20.00%

Abstract:

In this paper we propose a general technique to develop first and second order closed-form approximation formulas for short-time options with random strikes. Our method is based on Malliavin calculus techniques and allows us to obtain simple closed-form approximation formulas depending on the derivative operator. The numerical analysis shows that these formulas are extremely accurate and improve on some previous approaches to two-asset and three-asset spread options, such as Kirk's formula or the decomposition method presented in Alòs, Eydeland and Laurence (2011).
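For reference, Kirk's formula (one of the baselines mentioned above) prices a European call on the spread S1 - S2 with strike K by treating F1/(F2 + K) as approximately lognormal. The Python sketch below is a minimal illustration of that baseline under flat rates, not of the paper's Malliavin-based expansions; all input values are invented.

# Minimal sketch of Kirk's approximation for a European call spread option,
# payoff max(S1(T) - S2(T) - K, 0). Illustrative only; inputs are made up.
from math import exp, log, sqrt
from statistics import NormalDist

def kirk_spread_call(F1, F2, K, sigma1, sigma2, rho, T, r):
    """Kirk's closed-form approximation using forward prices F1, F2."""
    N = NormalDist().cdf
    w = F2 / (F2 + K)                      # weight of the second forward in the shifted denominator
    sigma = sqrt(sigma1**2 - 2 * rho * sigma1 * sigma2 * w + (sigma2 * w)**2)
    d1 = (log(F1 / (F2 + K)) + 0.5 * sigma**2 * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return exp(-r * T) * (F1 * N(d1) - (F2 + K) * N(d2))

# Example with arbitrary inputs:
print(kirk_spread_call(F1=110.0, F2=100.0, K=5.0,
                       sigma1=0.3, sigma2=0.25, rho=0.6, T=0.5, r=0.02))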

Relevance: 20.00%

Abstract:

In this paper I analyze the effects of insider trading on real investment and the insurance role of financial markets. There is a single entrepreneur who, at a first stage, chooses the level of investment in a risky business. At the second stage, an asset with random payoff is issued and then the entrepreneur receives some privileged information on the likely realization of the production return. At the third stage, trading occurs on the asset market, where the entrepreneur faces the aggregate demand coming from a continuum of rational uninformed traders and some noise traders. I compare the equilibrium with insider trading (when the entrepreneur trades on her inside information in the asset market) with the equilibrium in the same market without insider trading. I find that permitting insider trading tends to decrease the level of real investment. Moreover, the asset market is thinner and the entrepreneur's net supply of the asset and the hedge ratio are lower, although the asset price is more informative and volatile.

Relevance: 20.00%

Abstract:

This paper presents 3-D brain tissue classification schemes using three recent promising energy minimization methods for Markov random fields: graph cuts, loopy belief propagation and tree-reweighted message passing. The classification is performed using the well-known finite Gaussian mixture Markov random field model. Results from the above methods are compared with the widely used iterative conditional modes algorithm. The evaluation is performed on a dataset containing simulated T1-weighted MR brain volumes with varying noise and intensity non-uniformities. The comparisons are performed in terms of energies as well as based on ground truth segmentations, using various quantitative metrics.
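As a rough illustration of the comparison baseline, the sketch below runs a simplified iterative conditional modes (ICM) pass on a Gaussian-likelihood plus Potts MRF energy: each pixel's unary cost is the negative Gaussian log-likelihood of its class, and neighbouring pixels with different labels pay a smoothness penalty beta. The toy 2-D image, the class means and variances, and the synchronous update are all simplifications for illustration; the graph-cut, belief-propagation and tree-reweighted solvers compared in the paper are not reproduced here.

# Minimal sketch: ICM-style segmentation with a Gaussian-likelihood + Potts MRF energy.
# Synchronous (all-pixels-at-once) update, a simplification of classical sequential ICM.
import numpy as np

def icm_segment(img, means, variances, beta=1.0, n_iter=10):
    K = len(means)
    # Unary term: negative Gaussian log-likelihood per class, per pixel.
    unary = np.stack([
        0.5 * np.log(2 * np.pi * variances[k]) + (img - means[k])**2 / (2 * variances[k])
        for k in range(K)
    ], axis=-1)
    labels = unary.argmin(axis=-1)            # start from the maximum-likelihood labeling
    for _ in range(n_iter):
        for k in range(K):
            # Potts pairwise term: count 4-neighbours disagreeing with class k
            # (np.roll wraps at the borders; acceptable for a toy example).
            disagree = np.zeros_like(img)
            for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
                disagree += (np.roll(labels, shift, axis=axis) != k)
            cost_k = unary[..., k] + beta * disagree
            if k == 0:
                best_cost, best_label = cost_k, np.zeros_like(labels)
            else:
                better = cost_k < best_cost
                best_cost = np.where(better, cost_k, best_cost)
                best_label = np.where(better, k, best_label)
        labels = best_label
    return labels

# Toy example: noisy two-class image.
rng = np.random.default_rng(0)
truth = (np.arange(64)[:, None] + np.arange(64)[None, :] > 64).astype(int)
img = truth * 1.0 + rng.normal(0, 0.5, truth.shape)
seg = icm_segment(img, means=[0.0, 1.0], variances=[0.25, 0.25], beta=0.8)
print((seg == truth).mean())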

Relevance: 20.00%

Abstract:

We study a novel class of noisy rational expectations equilibria in markets with a large number of agents. We show that, as long as noise increases with the number of agents in the economy, the limiting competitive equilibrium is well-defined and leads to non-trivial information acquisition, perfect information aggregation, and partially revealing prices, even if per capita noise tends to zero. We find that in such an equilibrium risk sharing and price revelation play different roles than in the standard limiting economy in which per capita noise is not negligible. We apply our model to study information sales by a monopolist, information acquisition in multi-asset markets, and derivatives trading. The limiting equilibria are shown to be perfectly competitive, even when a strategic solution concept is used.

Relevance: 20.00%

Abstract:

The aim was to propose a strategy for finding reasonable compromises between image noise and dose as a function of patient weight. Weighted CT dose index (CTDIw) was measured on a multidetector-row CT unit using CTDI test objects of 16, 24 and 32 cm in diameter at 80, 100, 120 and 140 kV. These test objects were then scanned in helical mode using a wide range of tube currents and voltages with a reconstructed slice thickness of 5 mm. For each set of acquisition parameters image noise was measured and the Rose model observer was used to test two strategies for proposing a reasonable compromise between dose and low-contrast detection performance: (1) the use of a unique noise level for all test object diameters, and (2) the use of a unique dose efficacy level defined as the noise reduction per unit dose. Published data were used to define four weight classes and an acquisition protocol was proposed for each class. The protocols have been applied in clinical routine for more than one year. CTDIvol values of 6.7, 9.4, 15.9 and 24.5 mGy were proposed for the following weight classes: 2.5-5, 5-15, 15-30 and 30-50 kg, with image noise levels in the range of 10-15 HU. The proposed method allows patient dose and image noise to be controlled in such a way that dose reduction does not impair the detection of low-contrast lesions. The proposed values correspond to high-quality images and can be reduced if only high-contrast organs are assessed.
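As a back-of-the-envelope illustration of strategy (1): in the quantum-noise-limited regime, image noise scales roughly as one over the square root of dose, so a single calibration measurement per phantom size can be rescaled to the CTDIvol that reaches a common target noise. The calibration numbers below are invented and the scaling ignores electronic noise; this is not the Rose-model analysis used in the study.

# Rough sketch of strategy (1): pick, for each phantom size, the CTDIvol that
# reaches a common target noise, assuming quantum-limited noise ~ 1/sqrt(dose).
# Calibration values below are invented for illustration.
calibration = {
    # phantom diameter (cm): (measured noise in HU, at CTDIvol in mGy)
    16: (8.0, 10.0),
    24: (14.0, 10.0),
    32: (22.0, 10.0),
}

target_noise = 12.5  # HU, middle of the 10-15 HU range quoted above

for diameter, (noise_ref, dose_ref) in calibration.items():
    # noise = noise_ref * sqrt(dose_ref / dose)  =>  dose = dose_ref * (noise_ref / target)**2
    dose_needed = dose_ref * (noise_ref / target_noise) ** 2
    print(f"{diameter} cm phantom: ~{dose_needed:.1f} mGy for {target_noise} HU noise")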

Relevance: 20.00%

Abstract:

Random coefficient regression models have been applied in different fields and they constitute a unifying setup for many statistical problems. The nonparametric study of this model started with Beran and Hall (1992) and it has become a fruitful framework. In this paper we propose and study statistics for testing a basic hypothesis concerning this model: the constancy of coefficients. The asymptotic behavior of the statistics is investigated and bootstrap approximations are used in order to determine the critical values of the test statistics. A simulation study illustrates the performance of the proposals.
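To make the testing problem concrete, here is a generic sketch (not the statistics studied in the paper): in the random coefficient model Y = alpha + beta*X, a non-constant beta makes the conditional variance of Y grow with X squared, so one simple statistic is the X-squared coefficient in a regression of squared OLS residuals on (1, X, X squared), with its null distribution approximated by a residual bootstrap. The simulated data and the particular statistic are illustrative assumptions.

# Sketch of a bootstrap test for coefficient constancy in a random coefficient
# regression Y = alpha + beta*X. Generic illustration only.
import numpy as np

def constancy_stat(x, y):
    b, a = np.polyfit(x, y, 1)                 # OLS fit y ~ a + b*x
    resid = y - (a + b * x)
    c2 = np.polyfit(x, resid**2, 2)[0]         # coefficient on x**2 in the squared-residual fit
    return c2, a, b, resid

def bootstrap_pvalue(x, y, n_boot=999, seed=0):
    rng = np.random.default_rng(seed)
    stat, a, b, resid = constancy_stat(x, y)
    count = 0
    for _ in range(n_boot):
        # Residual bootstrap under the null of constant coefficients.
        y_star = a + b * x + rng.choice(resid, size=x.size, replace=True)
        count += constancy_stat(x, y_star)[0] >= stat
    return (1 + count) / (1 + n_boot)

# Invented data with a genuinely random slope: the test should tend to reject.
rng = np.random.default_rng(7)
x = rng.uniform(-2, 2, 300)
beta_i = 1.0 + rng.normal(0, 0.5, 300)
y = 0.5 + beta_i * x + rng.normal(0, 0.3, 300)
print(bootstrap_pvalue(x, y))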

Relevance: 20.00%

Abstract:

This paper generalizes the original random matching model of money by Kiyotaki and Wright (1989) (KW) in two aspects: first, the economy is characterized by an arbitrary distribution of agents who specialize in producing a particular consumption good; and second, these agents have preferences such that they want to consume any good with some probability. The results depend crucially on the size of the fraction of producers of each good and the probability with which different agents want to consume each good. KW and other related models are shown to be parameterizations of this more general one.

Relevance: 20.00%

Abstract:

Confidence in decision making is an important dimension of managerial behavior. However, what is the relation between confidence, on the one hand, and the fact of receiving or expecting to receive feedback on decisions taken, on the other hand? To explore this and related issues in the context of everyday decision making, use was made of the ESM (Experience Sampling Method) to sample decisions taken by undergraduates and business executives. For several days, participants received 4 or 5 SMS messages daily (on their mobile telephones) at random moments, at which point they completed brief questionnaires about their current decision making activities. Issues considered here include differences between the types of decisions faced by the two groups, their structure, feedback (received and expected), and confidence in decisions taken as well as in the validity of feedback. No relation was found between confidence in decisions and whether participants received or expected to receive feedback on those decisions. In addition, although participants are clearly aware that feedback can provide both confirming and disconfirming evidence, their ability to specify appropriate feedback is imperfect. Finally, difficulties experienced in using the ESM are discussed, as are possibilities for further research using this methodology.
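For readers unfamiliar with signal-contingent experience sampling, the sketch below shows one way to schedule 4 or 5 random prompt times per day within a waking window; the window, counts and dates are illustrative assumptions, not the protocol actually used in the study.

# Minimal sketch of signal-contingent ESM scheduling: 4-5 random prompt times
# per day within a waking window. Window and counts are illustrative only.
import random
from datetime import date, datetime, time, timedelta

def daily_prompts(day, n_min=4, n_max=5, start=time(9, 0), end=time(21, 0)):
    window_start = datetime.combine(day, start)
    window_seconds = (datetime.combine(day, end) - window_start).total_seconds()
    n = random.randint(n_min, n_max)
    offsets = sorted(random.uniform(0, window_seconds) for _ in range(n))
    return [window_start + timedelta(seconds=s) for s in offsets]

for t in daily_prompts(date(2024, 3, 4)):
    print(t.strftime("%Y-%m-%d %H:%M"))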

Relevance: 20.00%

Abstract:

Most methods for small-area estimation are based on composite estimators derived from design- or model-based methods. A composite estimator is a linear combination of a direct and an indirect estimator with weights that usually depend on unknown parameters which need to be estimated. Although model-based small-area estimators are usually based on random-effects models, the assumption of fixed effects is at face value more appropriate. Model-based estimators are justified by the assumption of random (interchangeable) area effects; in practice, however, areas are not interchangeable. In the present paper we empirically assess the quality of several small-area estimators in the setting in which the area effects are treated as fixed. We consider two settings: one that draws samples from a theoretical population, and another that draws samples from an empirical population of a labor force register maintained by the National Institute of Social Security (NISS) of Catalonia. We distinguish two types of composite estimators: (a) those that use weights that involve area-specific estimates of bias and variance; and (b) those that use weights that involve a common variance and a common squared bias estimate for all the areas. We assess their precision and discuss alternatives to optimizing composite estimation in applications.
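To make the definition concrete, the sketch below forms a composite estimate as a weighted combination of a direct and an indirect estimate, using the MSE-minimising weight w = mse_indirect / (mse_direct + mse_indirect) that holds under the simplifying assumption of independent errors; the numbers are invented and this is not the specific weighting scheme evaluated in the paper.

# Minimal sketch of a composite small-area estimator: a weighted combination of a
# direct and an indirect estimate, with the MSE-minimising weight under the
# (strong) assumption of independent errors. All numbers are invented.

def composite_estimate(direct, indirect, mse_direct, mse_indirect):
    """theta_hat = w * direct + (1 - w) * indirect, with w = mse_indirect / (mse_direct + mse_indirect)."""
    w = mse_indirect / (mse_direct + mse_indirect)
    return w * direct + (1 - w) * indirect, w

# Example: a small area with an unstable direct estimate and a stable but
# possibly biased synthetic (indirect) estimate.
estimate, weight = composite_estimate(direct=0.42, indirect=0.35,
                                      mse_direct=0.010, mse_indirect=0.004)
print(f"composite = {estimate:.3f}, weight on direct = {weight:.2f}")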

Relevance: 20.00%

Abstract:

This paper proposes a common and tractable framework for analyzing different definitions of fixed and random effects in a constant-slope, variable-intercept model. It is shown that, regardless of whether effects (i) are treated as parameters or as an error term, (ii) are estimated in different stages of a hierarchical model, or (iii) are allowed to be correlated with the regressors, when the same information on effects is introduced into all estimation methods, the resulting slope estimator is also the same across methods. If different methods produce different results, it is ultimately because different information is being used for each method.
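One of the equivalences at stake can be checked numerically: in a constant-slope, variable-intercept panel model, treating the intercepts as parameters (dummy variables, LSDV) and sweeping them out with the within transformation use the same information and therefore give the same slope. The simulated data below is invented for illustration and is not from the paper.

# Sketch: LSDV (effects as parameters) and the within transformation give the
# same slope estimate. Simulated balanced panel, invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_groups, n_per = 20, 8
g = np.repeat(np.arange(n_groups), n_per)
alpha = rng.normal(0, 2, n_groups)                  # group intercepts
x = rng.normal(0, 1, n_groups * n_per) + alpha[g]   # regressor correlated with the effects
y = alpha[g] + 1.5 * x + rng.normal(0, 1, x.size)

# (1) LSDV: effects treated as parameters via group dummy variables.
D = (g[:, None] == np.arange(n_groups)).astype(float)
X_lsdv = np.column_stack([x, D])
beta_lsdv = np.linalg.lstsq(X_lsdv, y, rcond=None)[0][0]

# (2) Within transformation: subtract group means, then regress.
x_w = x - np.bincount(g, weights=x)[g] / n_per
y_w = y - np.bincount(g, weights=y)[g] / n_per
beta_within = (x_w @ y_w) / (x_w @ x_w)

print(beta_lsdv, beta_within)   # identical up to numerical precision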

Relevance: 20.00%

Abstract:

Summary points:
- The bias introduced by random measurement error will be different depending on whether the error is in an exposure variable (risk factor) or an outcome variable (disease).
- Random measurement error in an exposure variable will bias the estimates of regression slope coefficients towards the null.
- Random measurement error in an outcome variable will instead increase the standard error of the estimates and widen the corresponding confidence intervals, making results less likely to be statistically significant.
- Increasing sample size will help minimise the impact of measurement error in an outcome variable, but will only make estimates more precisely wrong when the error is in an exposure variable.
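A quick simulation makes the first two points concrete: classical measurement error added to the exposure attenuates the estimated slope towards the null, while the same error added to the outcome leaves the slope roughly unbiased but increases its sampling variability. The true slope, error sizes and sample sizes below are invented.

# Simulation sketch: classical measurement error in the exposure vs. the outcome.
# True model: y = 2*x + noise. Error magnitudes and sample sizes are invented.
import numpy as np

rng = np.random.default_rng(42)

def fit_slope(x, y):
    return np.polyfit(x, y, 1)[0]

slopes_err_in_x, slopes_err_in_y = [], []
for _ in range(2000):
    x = rng.normal(0, 1, 200)
    y = 2.0 * x + rng.normal(0, 1, 200)
    x_obs = x + rng.normal(0, 1, 200)       # measurement error in the exposure
    y_obs = y + rng.normal(0, 1, 200)       # measurement error in the outcome
    slopes_err_in_x.append(fit_slope(x_obs, y))
    slopes_err_in_y.append(fit_slope(x, y_obs))

print("error in exposure: mean slope %.2f (biased toward 0), SD %.2f"
      % (np.mean(slopes_err_in_x), np.std(slopes_err_in_x)))
print("error in outcome:  mean slope %.2f (roughly unbiased), SD %.2f"
      % (np.mean(slopes_err_in_y), np.std(slopes_err_in_y)))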

Relevance: 20.00%

Abstract:

Given the adverse impact of image noise on the perception of important clinical details in digital mammography, routine quality control measurements should include an evaluation of noise. The European Guidelines, for example, employ a second-order polynomial fit of pixel variance as a function of detector air kerma (DAK) to decompose noise into quantum, electronic and fixed pattern (FP) components and assess the DAK range where quantum noise dominates. This work examines the robustness of the polynomial method against an explicit noise decomposition method. The two methods were applied to variance and noise power spectrum (NPS) data from six digital mammography units. Twenty homogeneously exposed images were acquired with PMMA blocks for target DAKs ranging from 6.25 to 1600 µGy. Both methods were explored for the effects of data weighting and squared fit coefficients during the curve fitting, the influence of the additional filter material (2 mm Al versus 40 mm PMMA) and noise de-trending. Finally, spatial stationarity of noise was assessed. Data weighting improved noise model fitting over large DAK ranges, especially at low detector exposures. The polynomial and explicit decompositions generally agreed for quantum and electronic noise but FP noise fraction was consistently underestimated by the polynomial method. Noise decomposition as a function of position in the image showed limited noise stationarity, especially for FP noise; thus the position of the region of interest (ROI) used for noise decomposition may influence fractional noise composition. The ROI area and position used in the Guidelines offer an acceptable estimation of noise components. While there are limitations to the polynomial model, when used with care and with appropriate data weighting, the method offers a simple and robust means of examining the detector noise components as a function of detector exposure.
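To illustrate the polynomial approach in general terms, pixel variance on a linearised detector is commonly modelled as variance(DAK) = a*DAK^2 + b*DAK + c, with the quadratic term tracking fixed pattern noise, the linear term quantum noise and the constant term electronic noise; fitting this polynomial and evaluating each term gives the fractional noise composition at any DAK. The synthetic variance data below is invented, and the sketch shows only the general idea, not the Guidelines' exact procedure.

# Sketch of the second-order polynomial noise decomposition:
#   variance(DAK) = a*DAK**2 + b*DAK + c
# with a -> fixed-pattern, b -> quantum, c -> electronic noise contributions.
# Synthetic data below is invented; real input would be ROI variances from
# homogeneously exposed images at several target DAK levels.
import numpy as np

dak = np.array([6.25, 12.5, 25, 50, 100, 200, 400, 800, 1600])   # uGy
rng = np.random.default_rng(3)
true_a, true_b, true_c = 2e-5, 0.8, 5.0
variance = true_a * dak**2 + true_b * dak + true_c
variance *= rng.normal(1.0, 0.02, dak.size)                      # measurement scatter

# Weighted fit (weights ~ 1/variance) helps the low-exposure points, as noted above.
a, b, c = np.polyfit(dak, variance, 2, w=1.0 / variance)

for d in (12.5, 100, 800):
    total = a * d**2 + b * d + c
    print(f"DAK {d:>6} uGy: fixed-pattern {a*d**2/total:.0%}, "
          f"quantum {b*d/total:.0%}, electronic {c/total:.0%}")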