900 results for Generalized Gaussian-noise


Relevance: 20.00%

Abstract:

Agents use their knowledge of the history of the economy in order to choose the optimal action to take at any given moment of time, but each individual observes history with some noise. This paper shows that the amount of information available on the past evolution of the economy is an endogenous variable, and that this leads to overconcentration of investment, which can be interpreted as underinvestment in research. It presents a model in which agents have to invest at each period in one of $K$ sectors, each of them paying an exogenous return that follows a well-defined stochastic path. At any moment of time each agent receives an unbiased noisy signal on the payoff of each sector. The signals differ across agents, but all of them have the same variance, which depends on the aggregate investment in that particular sector (so that if almost everybody invests in it the perceptions of everybody will be very accurate, but if almost nobody does the perceptions of everybody will be very noisy). The degree of heterogeneity across agents is then an endogenous variable, evolving across time while determining, and being determined by, the amount of information disclosed. As long as both the level of social interaction and the underlying precision of the observations are relatively large, agents behave in a very precise way. This behavior is unchanged over a huge range of informational parameters, and it is characterized by an excessive concentration of investment in a few sectors. Additionally, the model shows that generalized improvements in the quality of the information that each agent receives may lead to a worse outcome for all agents, due to the overconcentration of investment that this produces.
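
As a rough illustration of the signal structure described above (not taken from the paper), the following Python sketch draws one period of noisy signals whose variance falls with a sector's investment share; the specific variance function and all numbers are assumptions made for the example.

    import numpy as np

    # Minimal sketch (not from the paper): K sectors, N agents, one period.
    # Assumption: signal noise variance falls with the share of agents already
    # invested in a sector, e.g. var_k = sigma2 / (eps + share_k).
    rng = np.random.default_rng(0)
    K, N = 5, 1000
    true_payoff = rng.normal(1.0, 0.2, size=K)        # exogenous sector returns
    share = np.full(K, 1.0 / K)                        # last period's investment shares

    sigma2, eps = 0.5, 0.01
    signal_var = sigma2 / (eps + share)                # more crowded sector -> sharper signals
    signals = true_payoff + rng.normal(0.0, np.sqrt(signal_var), size=(N, K))

    choice = signals.argmax(axis=1)                    # each agent picks its perceived best sector
    new_share = np.bincount(choice, minlength=K) / N   # feeds back into next period's precision
    print(new_share)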

Relevance: 20.00%

Abstract:

We study a novel class of noisy rational expectations equilibria in markets with a large number of agents. We show that, as long as noise increases with the number of agents in the economy, the limiting competitive equilibrium is well defined and leads to non-trivial information acquisition, perfect information aggregation, and partially revealing prices, even if per capita noise tends to zero. We find that in such an equilibrium risk sharing and price revelation play different roles than in the standard limiting economy in which per capita noise is not negligible. We apply our model to study information sales by a monopolist, information acquisition in multi-asset markets, and derivatives trading. The limiting equilibria are shown to be perfectly competitive, even when a strategic solution concept is used.

Relevance: 20.00%

Abstract:

We report the case of a child with short absences and occasional myoclonias since infancy who was first diagnosed with an idiopathic generalized epilepsy, but was documented at follow-up to have a mild phenotype of glucose transporter type 1 deficiency syndrome (Glut-1 DS). Unlike other reported cases of Glut-1 DS and epilepsy, this child had normal development as well as normal head growth and a normal neurological examination. Early onset of seizures, later-recognized episodes of mild confusion before meals, persistent atypical EEG features and unexpected learning difficulties led to the diagnosis. Seizure control and neuropsychological improvement were obtained with a ketogenic diet.

Relevance: 20.00%

Abstract:

The aim was to propose a strategy for finding reasonable compromises between image noise and dose as a function of patient weight. The weighted CT dose index (CTDIw) was measured on a multidetector-row CT unit using CTDI test objects of 16, 24 and 32 cm in diameter at 80, 100, 120 and 140 kV. These test objects were then scanned in helical mode using a wide range of tube currents and voltages, with a reconstructed slice thickness of 5 mm. For each set of acquisition parameters, image noise was measured and the Rose model observer was used to test two strategies for proposing a reasonable compromise between dose and low-contrast detection performance: (1) the use of a unique noise level for all test object diameters, and (2) the use of a unique dose efficacy level, defined as the noise reduction per unit dose. Published data were used to define four weight classes, and an acquisition protocol was proposed for each class. The protocols have been applied in clinical routine for more than one year. CTDIvol values of 6.7, 9.4, 15.9 and 24.5 mGy were proposed for the weight classes 2.5-5, 5-15, 15-30 and 30-50 kg, with image noise levels in the range of 10-15 HU. The proposed method allows patient dose and image noise to be controlled in such a way that dose reduction does not impair the detection of low-contrast lesions. The proposed values correspond to high-quality images and can be reduced if only high-contrast organs are assessed.
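
A minimal Python sketch of the dose-noise trade-off discussed above, assuming quantum-noise dominance so that image noise scales roughly as the inverse square root of dose; the function name and the numbers are illustrative assumptions, not the paper's protocol values.

    import numpy as np

    # Minimal sketch (not from the paper): under quantum-noise dominance,
    # image noise scales roughly as 1/sqrt(dose), so the dose needed to reach
    # a target noise can be extrapolated from one measured (dose, noise) pair.
    def dose_for_target_noise(measured_dose_mGy, measured_noise_HU, target_noise_HU):
        # noise ~ k / sqrt(dose)  =>  dose_target = dose_meas * (noise_meas / noise_target)^2
        return measured_dose_mGy * (measured_noise_HU / target_noise_HU) ** 2

    # Hypothetical numbers: 12 HU measured at 10 mGy, aiming for 15 HU.
    print(dose_for_target_noise(10.0, 12.0, 15.0))   # ~6.4 mGy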

Relevance: 20.00%

Abstract:

Aim: This study used data from temperate forest communities to assess: (1) five different stepwise selection methods with generalized additive models, (2) the effect of weighting absences to ensure a prevalence of 0.5, (3) the effect of limiting absences beyond the environmental envelope defined by presences, (4) four different methods for incorporating spatial autocorrelation, and (5) the effect of integrating an interaction factor defined by a regression tree on the residuals of an initial environmental model.

Location: State of Vaud, western Switzerland.

Methods: Generalized additive models (GAMs) were fitted using the grasp package (generalized regression analysis and spatial predictions, http://www.cscf.ch/grasp).

Results: Model selection based on cross-validation appeared to be the best compromise between model stability and performance (parsimony) among the five methods tested. Weighting absences returned models that perform better than models fitted with the original sample prevalence. This appeared to be mainly due to the impact of very low prevalence values on evaluation statistics. Removing zeroes beyond the range of presences on the main environmental gradients changed the set of selected predictors, and potentially their response curve shape. Moreover, removing zeroes slightly improved model performance and stability when compared with the baseline model on the same data set. Incorporating a spatial trend predictor improved model performance and stability significantly. Even better models were obtained when including local spatial autocorrelation. A novel approach to include interactions proved to be an efficient way to account for interactions between all predictors at once.

Main conclusions: Models and spatial predictions of 18 forest communities were significantly improved by using either: (1) cross-validation as a model selection method, (2) weighted absences, (3) limited absences, (4) predictors accounting for spatial autocorrelation, or (5) a factor variable accounting for interactions between all predictors. The final choice of modelling strategy should depend on the nature of the available data and the specific study aims. Statistical evaluation is useful in searching for the best modelling practice. However, one should not neglect to consider the shapes and interpretability of response curves, as well as the resulting spatial predictions, in the final assessment.
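
A minimal sketch (plain NumPy, not the grasp package) of the absence-weighting strategy mentioned above, in which absences are re-weighted so that the weighted prevalence of presences equals 0.5:

    import numpy as np

    # Minimal sketch: weight absences so that their total weight matches the
    # total weight of presences, giving a weighted prevalence of 0.5.
    def prevalence_weights(y):
        """y: 0/1 array of absences/presences; returns per-observation weights."""
        y = np.asarray(y)
        n_pres, n_abs = y.sum(), (1 - y).sum()
        return np.where(y == 1, 1.0, n_pres / n_abs)   # down/up-weight absences

    y = np.array([1, 0, 0, 0, 0, 0, 0, 1])             # raw prevalence 0.25
    w = prevalence_weights(y)
    print(w[y == 1].sum(), w[y == 0].sum())             # equal totals -> weighted prevalence 0.5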

Relevance: 20.00%

Abstract:

The Generalized Assignment Problem consists in assigning a set of tasks to a set of agents with minimum cost. Each agent has a limited amount of a single resource, and each task must be assigned to one and only one agent, requiring a certain amount of the agent's resource. We present new metaheuristics for the generalized assignment problem based on hybrid approaches. One metaheuristic is a MAX-MIN Ant System (MMAS), an improved version of the Ant System recently proposed by Stützle and Hoos for combinatorial optimization problems; it can be seen as an adaptive sampling algorithm that takes into consideration the experience gathered in earlier iterations of the algorithm. Moreover, this heuristic is combined with local search and tabu search heuristics to improve the search. A greedy randomized adaptive search procedure (GRASP) is also proposed. Several neighborhoods are studied, including one based on ejection chains that produces good moves without increasing the computational effort. We present computational results comparing their performance, followed by concluding remarks and ideas on future research in generalized assignment-related problems.
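
As a hedged illustration of the GRASP idea mentioned above (not the paper's implementation), the following Python sketch shows a greedy randomized construction phase for the generalized assignment problem; the cost, requirement and capacity data are made-up examples.

    import random

    # Minimal sketch: greedy randomized construction for the generalized
    # assignment problem. cost[i][j], req[i][j]: cost / resource needed if task j
    # goes to agent i; cap[i]: agent capacity; alpha: greediness (0 = pure greedy).
    def grasp_construct(cost, req, cap, alpha=0.3, seed=0):
        rng = random.Random(seed)
        n_agents, n_tasks = len(cost), len(cost[0])
        load = [0.0] * n_agents
        assign = [None] * n_tasks
        for j in range(n_tasks):
            feasible = [i for i in range(n_agents) if load[i] + req[i][j] <= cap[i]]
            if not feasible:
                return None                      # in practice, restart with another seed
            cmin = min(cost[i][j] for i in feasible)
            cmax = max(cost[i][j] for i in feasible)
            # restricted candidate list: agents whose cost is close to the best
            rcl = [i for i in feasible if cost[i][j] <= cmin + alpha * (cmax - cmin)]
            i = rng.choice(rcl)
            assign[j] = i
            load[i] += req[i][j]
        return assign

    cost = [[4, 1, 3], [2, 5, 2]]
    req  = [[3, 2, 2], [2, 3, 3]]
    print(grasp_construct(cost, req, cap=[5, 5]))    # e.g. [1, 0, 1]

In a full GRASP, this construction step would be followed by local search (for example over the ejection-chain neighborhood mentioned above) and repeated over many iterations, keeping the best solution found.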

Relevance: 20.00%

Abstract:

A method is offered that makes it possible to apply generalized canonical correlation analysis (CANCOR) to two or more matrices of different row and column order. The new method optimizes the generalized canonical correlation analysis objective by considering only the observed values. This is achieved by employing selection matrices. We present and discuss fit measures to assess the quality of the solutions. In a simulation study we assess the performance of our new method and compare it to an existing procedure called GENCOM, proposed by Green and Carroll. We find that our new method outperforms the GENCOM algorithm both with respect to model fit and recovery of the true structure. Moreover, as our new method does not require any type of iteration, it is easier to implement and requires less computation. We illustrate the method by means of an example concerning the relative positions of the political parties in the Netherlands, based on provincial data.
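
A minimal NumPy sketch of the selection-matrix device mentioned above (illustrative only, not the paper's algorithm): a binary matrix S picks out the observed rows of a data matrix, so an objective evaluated on S @ X uses only the observed values. The data and the observed-row indices are hypothetical.

    import numpy as np

    X = np.arange(12, dtype=float).reshape(4, 3)     # 4 objects, 3 variables
    observed = [0, 2, 3]                              # rows actually observed for this matrix

    S = np.zeros((len(observed), X.shape[0]))
    S[np.arange(len(observed)), observed] = 1.0       # binary selection matrix

    print(S @ X)                                      # same as X[observed, :]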

Relevance: 20.00%

Abstract:

In this paper I explore the issue of nonlinearity (both in the data generation process and in the functional form that establishes the relationship between the parameters and the data) regarding the poor performance of the Generalized Method of Moments (GMM) in small samples. To this purpose I build a sequence of models, starting with a simple linear model and enlarging it progressively until I approximate a standard (nonlinear) neoclassical growth model. I then use simulation techniques to find the small-sample distribution of the GMM estimators in each of the models.
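
A hedged toy example of the exercise described above (not the paper's models): a Monte Carlo simulation of the small-sample distribution of a GMM estimator for a simple over-identified problem, with all moment conditions and numbers chosen only for illustration.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Minimal sketch: x ~ N(theta, 1); moment conditions E[x - theta] = 0 and
    # E[x^2 - theta^2 - 1] = 0, estimated with an identity weighting matrix.
    def gmm_estimate(x):
        def objective(theta):
            g = np.array([np.mean(x) - theta,
                          np.mean(x**2) - theta**2 - 1.0])
            return g @ g
        return minimize_scalar(objective, bounds=(-10, 10), method="bounded").x

    rng = np.random.default_rng(0)
    theta_true, n, reps = 1.0, 25, 2000              # small sample of 25 observations
    estimates = np.array([gmm_estimate(rng.normal(theta_true, 1.0, n)) for _ in range(reps)])
    print(estimates.mean(), estimates.std())         # small-sample bias and spread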

Relevance: 20.00%

Abstract:

Behavioral and brain responses to identical stimuli can vary with experimental and task parameters, including the context of stimulus presentation or attention. More surprisingly, computational models suggest that noise-related random fluctuations in brain responses to stimuli would alone be sufficient to engender perceptual differences between physically identical stimuli. In two experiments combining psychophysics and EEG in healthy humans, we investigated brain mechanisms whereby identical stimuli are (erroneously) perceived as different (higher vs lower in pitch or longer vs shorter in duration) in the absence of any change in the experimental context. Even though, as expected, participants' percepts to identical stimuli varied randomly, a classification algorithm based on a mixture of Gaussians model (GMM) showed that there was sufficient information in single-trial EEG to reliably predict participants' judgments of the stimulus dimension. By contrasting electrical neuroimaging analyses of auditory evoked potentials (AEPs) to the identical stimuli as a function of participants' percepts, we identified the precise timing and neural correlates (strength vs topographic modulations) as well as intracranial sources of these erroneous perceptions. In both experiments, AEP differences first occurred ∼100 ms after stimulus onset and were the result of topographic modulations following from changes in the configuration of active brain networks. Source estimations localized the origin of variations in perceived pitch of identical stimuli within right temporal and left frontal areas and of variations in perceived duration within right temporoparietal areas. We discuss our results in terms of providing neurophysiologic evidence for the contribution of random fluctuations in brain activity to conscious perception.
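
A minimal sketch of the kind of classifier mentioned above, using scikit-learn's GaussianMixture; this is illustrative and not the study's actual pipeline, and the feature matrix and percept labels below are synthetic placeholders.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Minimal sketch: fit one mixture of Gaussians per percept class on
    # single-trial EEG features, then predict the percept of held-out trials
    # by comparing class likelihoods.
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(200, 10))          # hypothetical trial x feature matrix
    y_train = rng.integers(0, 2, size=200)        # reported percept per trial (0/1)
    X_test = rng.normal(size=(50, 10))

    models = {c: GaussianMixture(n_components=2, covariance_type="full", random_state=0)
                 .fit(X_train[y_train == c]) for c in (0, 1)}
    log_lik = np.column_stack([models[c].score_samples(X_test) for c in (0, 1)])
    predicted = log_lik.argmax(axis=1)            # class with the higher likelihood
    print(predicted[:10])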

Relevance: 20.00%

Abstract:

This paper presents a general equilibrium model of money demand where the velocity of money changes in response to endogenous fluctuations in the interest rate. The parameter space can be divided into two subsets: one where velocity is constant and equal to one, as in cash-in-advance models, and another where velocity fluctuates as in Baumol (1952). Despite its simplicity in terms of the parameters to calibrate, the model performs surprisingly well; in particular, it approximates the variability of money velocity observed in the U.S. over the post-war period. The model is then used to analyze the welfare costs of inflation under uncertainty. This application calculates the errors that arise from computing the costs of inflation with deterministic models. It turns out that the size of this difference is small, at least for the levels of uncertainty estimated for the U.S. economy.

Relevance: 20.00%

Abstract:

Given the adverse impact of image noise on the perception of important clinical details in digital mammography, routine quality control measurements should include an evaluation of noise. The European Guidelines, for example, employ a second-order polynomial fit of pixel variance as a function of detector air kerma (DAK) to decompose noise into quantum, electronic and fixed pattern (FP) components and assess the DAK range where quantum noise dominates. This work examines the robustness of the polynomial method against an explicit noise decomposition method. The two methods were applied to variance and noise power spectrum (NPS) data from six digital mammography units. Twenty homogeneously exposed images were acquired with PMMA blocks for target DAKs ranging from 6.25 to 1600 µGy. Both methods were explored for the effects of data weighting and squared fit coefficients during the curve fitting, the influence of the additional filter material (2 mm Al versus 40 mm PMMA) and noise de-trending. Finally, spatial stationarity of noise was assessed.

Data weighting improved noise model fitting over large DAK ranges, especially at low detector exposures. The polynomial and explicit decompositions generally agreed for quantum and electronic noise, but the FP noise fraction was consistently underestimated by the polynomial method. Noise decomposition as a function of position in the image showed limited noise stationarity, especially for FP noise; thus the position of the region of interest (ROI) used for noise decomposition may influence fractional noise composition. The ROI area and position used in the Guidelines offer an acceptable estimation of noise components. While there are limitations to the polynomial model, when used with care and with appropriate data weighting, the method offers a simple and robust means of examining the detector noise components as a function of detector exposure.
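
A minimal NumPy sketch of the second-order polynomial decomposition described above, fitting var(K) = e + q·K + s·K² with data weighting; the DAK grid and variance values are synthetic, not measurements from the paper.

    import numpy as np

    # Minimal sketch: decompose pixel variance into electronic (e), quantum (q*K)
    # and fixed-pattern (s*K^2) terms from variance measured at several DAKs K.
    K = np.array([6.25, 12.5, 25, 50, 100, 200, 400, 800, 1600])   # hypothetical DAK values, uGy
    var = 4.0 + 0.8 * K + 2e-4 * K**2                               # synthetic variance data
    var *= 1 + 0.02 * np.random.default_rng(0).normal(size=K.size)  # small measurement noise

    s, q, e = np.polyfit(K, var, deg=2, w=1.0 / var)                # weighted second-order fit
    quantum_fraction = q * K / (e + q * K + s * K**2)               # variance fraction per DAK
    print(e, q, s)
    print(np.round(quantum_fraction, 2))                            # where quantum noise dominates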

Relevance: 20.00%

Abstract:

In this study, we assessed the mixed exposure of highway maintenance workers to airborne particles, noise, and gaseous co-pollutants. The aim was to provide a better understanding of the workers' exposure to facilitate the evaluation of short-term effects on cardiovascular health endpoints. To quantify the workers' exposure, we monitored 18 subjects during 50 non-consecutive work shifts. Exposure assessment was based on personal and work site measurements and included fine particulate matter (PM2.5), particle number concentration (PNC), noise (Leq), and the gaseous co-pollutants carbon monoxide, nitrogen dioxide, and ozone. Mean work shift PM2.5 concentrations (gravimetric measurements) ranged from 20.3 to 321 µg m⁻³ (mean 62 µg m⁻³) and PNC was between 1.6×10⁴ and 4.1×10⁵ particles cm⁻³ (8.9×10⁴ particles cm⁻³). Noise levels were generally high, with work shift Leq values from 73.3 to 96.0 dB(A); the averaged Leq over all work shifts was 87.2 dB(A). The highest exposure to fine and ultrafine particles was measured during grass mowing and lumbering, when motorized brush cutters and chain saws were used. The highest noise levels, caused by pneumatic hammers, were measured during paving and guardrail repair. We found moderate Spearman correlations between PNC and PM2.5 (r = 0.56), between PNC and PM2.5 and CO (r = 0.60 and r = 0.50), and between PNC and noise (r = 0.50). Variability and correlation of parameters were influenced by work activities that involved equipment causing combined air pollutant and noise emissions (e.g. brush cutters and chain saws). We conclude that highway maintenance workers are frequently exposed to elevated airborne particle and noise levels compared with the general population. This elevated exposure is a consequence of the permanent proximity to highway traffic, with additional peak exposures caused by emissions from work-related equipment.
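
As a side note on the noise metric used above, equivalent continuous sound levels (Leq) are combined on an energy basis rather than arithmetically, which is why a few loud shifts pull the overall average up; a minimal sketch with hypothetical per-shift values (not the study's data):

    import numpy as np

    # Minimal sketch: energy-average several Leq values expressed in dB(A).
    def average_leq(leq_dba):
        leq_dba = np.asarray(leq_dba, dtype=float)
        return 10.0 * np.log10(np.mean(10.0 ** (leq_dba / 10.0)))

    shifts = [73.3, 80.0, 85.0, 90.0, 96.0]       # hypothetical per-shift Leq values, dB(A)
    print(average_leq(shifts))                     # noticeably higher than the arithmetic mean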

Relevance: 20.00%

Abstract:

In a previous paper, a novel Generalized Multiobjective Multitree model (GMM-model) was proposed. This model considers for the first time multitree-multicast load balancing with splitting in a multiobjective context, whose mathematical solution is a whole Pareto-optimal set that can include more solutions than it has been possible to find in the publications surveyed. To solve the GMM-model, this paper proposes a multi-objective evolutionary algorithm (MOEA) inspired by the Strength Pareto Evolutionary Algorithm (SPEA). Experimental results considering up to 11 different objectives are presented for the well-known NSF network, with two simultaneous data flows.
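
A minimal Python sketch of the Pareto-dominance test that underlies SPEA-style algorithms such as the MOEA mentioned above (illustrative only; the objective vectors are hypothetical and all objectives are taken to be minimized):

    # Minimal sketch: Pareto dominance and the resulting non-dominated front.
    def dominates(a, b):
        """True if objective vector a dominates b (no worse in all, better in at least one)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(points):
        return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

    # Hypothetical 2-objective values (e.g. cost vs. maximal link utilization).
    points = [(3, 7), (4, 4), (6, 2), (5, 5), (8, 1)]
    print(pareto_front(points))   # [(3, 7), (4, 4), (6, 2), (8, 1)]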