64 results for Classical measurement error model


Relevance: 30.00%

Abstract:

A very simple model of a classical particle in a heat bath under the influence of external noise is studied. By means of a suitable hypothesis, the heat bath is reduced to an internal colored noise (Ornstein-Uhlenbeck noise). In a second step, an external noise is coupled to the bath. The steady-state probability distributions are obtained.
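The internal colored noise described above can be sketched numerically. The snippet below integrates an Ornstein-Uhlenbeck process with the standard Euler-Maruyama scheme and checks its stationary variance against the theoretical value D/tau; all parameter values are illustrative, not taken from the paper.

```python
import math
import random

def simulate_ou(tau, D, dt=1e-3, n_steps=200_000, seed=1):
    """Euler-Maruyama integration of Ornstein-Uhlenbeck noise
    d(eta) = -(eta/tau) dt + (sqrt(2*D)/tau) dW,
    whose stationary variance is D/tau."""
    rng = random.Random(seed)
    eta, samples = 0.0, []
    amp = (math.sqrt(2.0 * D) / tau) * math.sqrt(dt)
    for _ in range(n_steps):
        eta += -(eta / tau) * dt + amp * rng.gauss(0.0, 1.0)
        samples.append(eta)
    return samples

samples = simulate_ou(tau=0.5, D=1.0)
tail = samples[50_000:]                      # discard the initial transient
var = sum(s * s for s in tail) / len(tail)   # theoretical stationary variance: D/tau = 2
```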

Relevance: 30.00%

Abstract:

Since the classical referent "Oedipus in search of his identity" has always been acknowledged in Suddenly Last Summer, the author of this article, through a careful analysis of the American playwright's text, proposes in this case to read Cat on a Hot Tin Roof through the model of Sophocles' Oedipus Rex, and to discover in it the same traditional classical irony, both from the spectator's point of view and from that of the main characters themselves, Brick and his father, both in search of their truth, a truth, of course, contrary to the one they expected.

Relevance: 30.00%

Abstract:

In the simplest model of open inflation there are two inflaton fields decoupled from each other. One of them, the tunneling field, produces a first stage of inflation which prepares the ground for the nucleation of a highly symmetric bubble. The other, a free field, drives a second period of slow-roll inflation inside the bubble. However, the second field also evolves during the first stage of inflation, which to some extent breaks the needed symmetry. We show that this generates large supercurvature anisotropies which, together with the results of Tanaka and Sasaki, rule out this class of simple models (unless, of course, Omega0 is sufficiently close to 1). The problem does not arise in modified models where the second field does not evolve in the first stage of inflation.

Relevance: 30.00%

Abstract:

A general formulation of boundary conditions for semiconductor-metal contacts follows from the phenomenological procedure sketched here. The resulting boundary conditions, which incorporate only physically well-defined parameters, are used to study the classical unipolar drift-diffusion model for the Gunn effect. The analysis of its stationary solutions reveals the presence of bistability and hysteresis for a certain range of contact parameters. When no stable stationary solution exists, several types of Gunn effect are predicted to occur in the model, depending on the values of the injecting-contact parameters appearing in the boundary condition. In this way, the critical role played by contacts in the Gunn effect is clearly established.

Relevance: 30.00%

Abstract:

A common way to model multiclass classification problems is by means of Error-Correcting Output Codes (ECOC). Given a multiclass problem, the ECOC technique designs a code word for each class, where each position of the code identifies the membership of the class in a given binary problem. A classification decision is obtained by assigning the label of the class with the closest code word. One of the main requirements of the ECOC design is that the base classifier be capable of splitting each subgroup of classes in each binary problem. However, we cannot guarantee that a linear classifier can model convex regions, and nonlinear classifiers also fail to handle some types of surfaces. In this paper, we present a novel strategy to model multiclass classification problems using subclass information in the ECOC framework. Complex problems are solved by splitting the original set of classes into subclasses and embedding the binary problems in a problem-dependent ECOC design. Experimental results show that the proposed splitting procedure yields better performance when class overlap or the distribution of the training objects conceals the decision boundaries for the base classifier. The results are even more significant when the training set is sufficiently large.
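The code-word decoding step described above is easy to sketch: each class gets a row of a coding matrix, each binary classifier votes on one column, and the predicted class is the row at minimum Hamming distance. The toy one-vs-all codebook below is purely illustrative, not the problem-dependent subclass design proposed in the paper.

```python
import numpy as np

# Toy one-vs-all ECOC matrix for 3 classes over 3 binary problems
# (rows: classes, columns: binary dichotomies; entries in {-1, +1}).
codebook = np.array([
    [+1, -1, -1],
    [-1, +1, -1],
    [-1, -1, +1],
])

def ecoc_decode(binary_outputs, codebook):
    """Assign the class whose code word is closest (Hamming distance)
    to the vector of binary classifier outputs."""
    dists = np.sum(binary_outputs != codebook, axis=1)
    return int(np.argmin(dists))

# One binary classifier flips its vote; the closest code word still wins.
pred = ecoc_decode(np.array([+1, -1, +1]), codebook)
```

The redundancy in the code words is what lets the ensemble tolerate individual binary-classifier errors, which is the error-correcting property the ECOC framework exploits.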

Relevance: 30.00%

Abstract:

This paper gives a practical illustration of three tools that allow the actuary to define tariff classes and estimate risk premiums in the ratemaking process for non-life insurance. The first is segmentation analysis (CHAID and XAID), first used in 1997 by UNESPA on its common car portfolio. The second is a stepwise selection process based on distance-based regression models. The third uses the well-known generalized linear models (GLM), the most modern technique in the actuarial literature. With the latter, combining different link functions and error distributions yields the classical additive and multiplicative models.
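For the GLM mentioned last, the link function determines whether rating factors combine additively or multiplicatively: an identity link gives the classical additive model, a log link the multiplicative one. The coefficients and factor names below are hypothetical, purely to illustrate the mechanics.

```python
import math

# Hypothetical fitted GLM coefficients on the linear-predictor scale.
beta = {"intercept": 5.0, "young_driver": 0.30, "urban": 0.15}

def linear_predictor(young_driver, urban):
    return (beta["intercept"]
            + beta["young_driver"] * young_driver
            + beta["urban"] * urban)

def premium_multiplicative(young_driver, urban):
    # Log link: exponentiating turns added coefficients into multiplied factors.
    return math.exp(linear_predictor(young_driver, urban))

base = premium_multiplicative(0, 0)
loaded = premium_multiplicative(1, 1)
# loaded == base * exp(0.30) * exp(0.15): each rating factor acts as a multiplier
```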

Relevance: 30.00%

Abstract:

In this paper we analyze the time of ruin in a risk process with Erlang(n) distributed interclaim times and a constant dividend barrier. We obtain an integro-differential equation for the Laplace transform of the time of ruin. Explicit solutions for the moments of the time of ruin are presented when the individual claim amounts have a distribution with rational Laplace transform. Finally, some numerical results and a comparison with the classical risk model, with interclaim times following an exponential distribution, are given.
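The classical benchmark model used in the comparison (exponential interclaim times) is easy to simulate by Monte Carlo; an Erlang(n) variant would only change the interarrival sampler. Parameters below are illustrative, with exponential claim sizes so the simulated ruin probability can be sanity-checked against the known closed form psi(u) = exp(-theta*u/((1+theta)*mu)) / (1+theta).

```python
import random

def ruin_time(u, c, lam, claim_mean, horizon, rng):
    """One path of the Cramer-Lundberg surplus process U(t) = u + c*t - S(t);
    returns the time of ruin, or None if no ruin occurs before `horizon`."""
    t, claims = 0.0, 0.0
    while True:
        t += rng.expovariate(lam)              # exponential interclaim time
        if t > horizon:
            return None                        # survived the (finite) horizon
        claims += rng.expovariate(1.0 / claim_mean)
        if u + c * t - claims < 0.0:
            return t

rng = random.Random(0)
ruins = [ruin_time(u=10.0, c=1.2, lam=1.0, claim_mean=1.0, horizon=1_000.0, rng=rng)
         for _ in range(2000)]
ruin_prob = sum(r is not None for r in ruins) / len(ruins)
# Here theta = c/(lam*claim_mean) - 1 = 0.2, so psi(10) is roughly 0.157.
```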

Relevance: 30.00%

Abstract:

Background: MLPA is a potentially useful semi-quantitative method to detect copy number alterations in targeted regions. In this paper, we propose a normalization procedure based on a non-linear mixed model, as well as a new approach for determining the statistical significance of altered probes based on a linear mixed model. This method establishes a threshold by using different tolerance intervals that accommodate the specific random error variability observed in each test sample.
Results: Through simulation studies we have shown that our proposed method outperforms two existing methods based on simple threshold rules or iterative regression. We have illustrated the method using a controlled MLPA assay in which targeted regions vary in copy number in individuals suffering from disorders such as Prader-Willi, DiGeorge or autism, showing the best performance.
Conclusion: Using the proposed mixed model, we are able to determine thresholds to decide whether a region is altered. These thresholds are specific to each individual and incorporate experimental variability, resulting in improved sensitivity and specificity, as the examples with real data have revealed.
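The thresholding idea can be caricatured with a plain tolerance interval on reference probe ratios; the paper's mixed-model intervals additionally absorb sample-specific error variability, which this sketch deliberately ignores. The ratio values and the factor k are illustrative only.

```python
import statistics

def tolerance_threshold(ratios, k=3.0):
    """Flag a probe as altered when its normalized ratio falls outside
    mean +/- k*sd of the reference probes (a simple stand-in for the
    paper's mixed-model tolerance intervals; k is illustrative)."""
    m = statistics.mean(ratios)
    s = statistics.stdev(ratios)
    return m - k * s, m + k * s

# Normalized ratios of reference (copy-number-neutral) probes.
reference = [1.02, 0.98, 1.01, 0.99, 1.00, 1.03, 0.97]
lo, hi = tolerance_threshold(reference)
altered = not (lo <= 1.45 <= hi)   # a ratio near 1.5 suggests a duplication
```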

Relevance: 30.00%

Abstract:

Terrestrial laser scanning (TLS) is one of the most promising surveying techniques for rock-slope characterization and monitoring. Landslide and rockfall movements can be detected by comparison of sequential scans. One of the most pressing challenges of natural hazards is the combined temporal and spatial prediction of rockfall. An outdoor experiment was performed to ascertain whether the TLS instrumental error is small enough to enable detection of precursory displacements of millimetric magnitude. It consisted of a known displacement of three objects relative to a stable surface. Results show that millimetric changes cannot be detected by analysis of the unprocessed datasets. Displacement measurements are improved considerably by applying Nearest Neighbour (NN) averaging, which reduces the error (1σ) by up to a factor of 6. This technique was applied to displacements prior to the April 2007 rockfall event at Castellfollit de la Roca, Spain. The maximum precursory displacement measured was 45 mm, approximately 2.5 times the standard deviation of the model comparison, hampering the distinction between actual displacement and instrumental error using conventional methodologies. Encouragingly, the precursory displacement was clearly detected by applying the NN averaging method. These results show that millimetric displacements prior to failure can be detected using TLS.
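The NN averaging step works by replacing each point's measured value with the mean over its k nearest neighbours, trading spatial resolution for noise reduction. A brute-force sketch follows, with an illustrative noise level and k rather than the paper's settings:

```python
import math
import random

def nn_average(points, values, k=24):
    """Replace each measured value by the mean over its k nearest
    neighbours (plus itself); uncorrelated error shrinks roughly
    as 1/sqrt(k+1). Brute force, O(n^2 log n)."""
    out = []
    for xi, yi in points:
        order = sorted(range(len(points)),
                       key=lambda j: (points[j][0] - xi) ** 2 + (points[j][1] - yi) ** 2)
        idx = order[:k + 1]
        out.append(sum(values[j] for j in idx) / len(idx))
    return out

rng = random.Random(42)
points = [(rng.random(), rng.random()) for _ in range(400)]
noisy = [0.0 + rng.gauss(0.0, 5.0) for _ in points]   # 5 mm instrument noise, true change = 0

smooth = nn_average(points, noisy)

def std(v):
    m = sum(v) / len(v)
    return math.sqrt(sum((x - m) ** 2 for x in v) / len(v))
```

Averaging over 25 points should cut the per-point scatter several-fold, which is the mechanism behind the factor-of-6 error reduction reported above.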

Relevance: 30.00%

Abstract:

This paper presents a probabilistic approach to modelling power supply voltage fluctuations. Error probability calculations are shown for some 90-nm technology digital circuits. The analysis treats the timing-violation error probability as a new design quality factor, in contrast to conventional techniques that assume a fully reliable circuit. The evaluation of the error bound can be useful for new design paradigms where retry and self-recovering techniques are applied to the design of high-performance processors. The method described here allows one to evaluate the performance of these techniques by calculating the expected error probability in terms of power supply distribution quality.
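In the simplest version of such a probabilistic model, where path delay varies linearly with a zero-mean Gaussian supply deviation, the timing-violation probability reduces to a Gaussian tail. This closed form is an assumption made for this sketch, not the paper's circuit-level model, and all numbers are arbitrary.

```python
import math

def timing_error_probability(t_nominal, sensitivity, v_sigma, t_clk):
    """P(path delay > clock period) when delay = t_nominal + sensitivity*dV
    and dV ~ N(0, v_sigma^2). Gaussian delay model assumed for the sketch."""
    sigma = abs(sensitivity) * v_sigma
    z = (t_clk - t_nominal) / sigma
    # Gaussian tail Q(z) = 1 - Phi(z), computed via the complementary error function.
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Nominal delay 0.9 ns, 2 ns/V supply sensitivity, 20 mV supply noise, 1 ns clock.
p = timing_error_probability(t_nominal=0.9, sensitivity=2.0, v_sigma=0.02, t_clk=1.0)
```

Here the 100 ps slack corresponds to z = 2.5 standard deviations, so the expected error probability is small but non-zero, which is exactly the quality factor the paper proposes to track.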

Relevance: 30.00%

Abstract:

This paper is concerned with the derivation of new estimators and performance bounds for the problem of timing estimation of (linearly) digitally modulated signals. The conditional maximum likelihood (CML) method is adopted, in contrast to the classical low-SNR unconditional ML (UML) formulation that is systematically applied in the literature for the derivation of non-data-aided (NDA) timing-error detectors (TEDs). A new CML TED is derived and proved to be self-noise free, in contrast to the conventional low-SNR-UML TED. In addition, the paper provides a derivation of the conditional Cramér–Rao bound (CRB), which is higher (less optimistic) than the modified CRB (MCRB) [which is only reached by decision-directed (DD) methods]. It is shown that the CRB is a lower bound on the asymptotic statistical accuracy of the set of consistent estimators that are quadratic with respect to the received signal. Although the obtained bound is not general, it applies to most NDA synchronizers proposed in the literature. A closed-form expression of the conditional CRB is obtained, and numerical results confirm that the CML TED attains the new bound for moderate to high Eg/No.

Relevance: 30.00%

Abstract:

This work provides a general framework for the design of second-order blind estimators without adopting any approximation about the observation statistics or the a priori distribution of the parameters. The proposed solution is obtained by minimizing the estimator variance subject to some constraints on the estimator bias. The resulting optimal estimator is found to depend on the observation fourth-order moments, which can be calculated analytically from the known signal model. Unfortunately, in most cases, the performance of this estimator is severely limited by the residual bias inherent to nonlinear estimation problems. To overcome this limitation, the second-order minimum variance unbiased estimator is deduced from the general solution by assuming accurate prior information on the vector of parameters. This small-error approximation is adopted to design iterative estimators or trackers. It is shown that the associated variance constitutes the lower bound for the variance of any unbiased estimator based on the sample covariance matrix. The paper's formulation is then applied to track the angle-of-arrival (AoA) of multiple digitally-modulated sources by means of a uniform linear array. The optimal second-order tracker is compared with the classical maximum likelihood (ML) blind methods, which are shown to be quadratic in the observed data as well. Simulations have confirmed that the discrete nature of the transmitted symbols can be exploited to improve considerably the discrimination of near sources in medium-to-high SNR scenarios.

Relevance: 30.00%

Abstract:

This paper describes a measurement system designed to register the displacement of the legs using a two-dimensional laser range sensor with a scanning plane parallel to the ground, and to extract gait parameters. In the proposed methodology, the position of the legs is estimated by fitting two circles to the laser points that define their contour, and the gait parameters are extracted by applying a step-line model to the estimated displacement of the legs to reduce uncertainty in the determination of the stance and swing phases of the gait. Results obtained in a range of up to 8 m show that the systematic error in the location of one static leg is lower than 10 mm, with a standard deviation lower than 8 mm; this deviation increases to 11 mm in the case of a moving leg. The proposed measurement system has been applied to estimate the gait parameters of six volunteers in a preliminary walking experiment.
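Fitting a circle to the laser points on a leg contour can be done with the algebraic (Kasa) least-squares fit sketched below. The leg position and radius are illustrative, and the paper's exact fitting procedure may differ from this particular estimator.

```python
import numpy as np

def fit_circle(xs, ys):
    """Kasa algebraic circle fit: solve the linear least-squares system
    x^2 + y^2 = c1*x + c2*y + c3, then centre = (c1/2, c2/2) and
    radius = sqrt(c3 + cx^2 + cy^2)."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    b = xs ** 2 + ys ** 2
    (c1, c2, c3), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = c1 / 2.0, c2 / 2.0
    r = np.sqrt(c3 + cx ** 2 + cy ** 2)
    return cx, cy, r

# Synthetic half-contour: only one side of the leg is visible to the scanner.
theta = np.linspace(0.0, np.pi, 30)
xs = 1.0 + 0.06 * np.cos(theta)    # leg of radius 60 mm centred at (1.0, 2.0) m
ys = 2.0 + 0.06 * np.sin(theta)
cx, cy, r = fit_circle(xs, ys)
```

Because the scanner sees only an arc of each leg, a fit that remains stable on partial contours is essential; the algebraic fit above handles that case since the linear system stays well conditioned.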

Relevance: 30.00%

Abstract:

Mechanistic soil-crop models have become indispensable tools to investigate the effect of management practices on the productivity or environmental impacts of arable crops. Ideally, these models may claim to be universally applicable because they simulate the major processes governing the fate of inputs such as fertiliser nitrogen or pesticides. However, because they deal with complex systems and uncertain phenomena, site-specific calibration is usually a prerequisite to ensure their predictions are realistic. This implies that some experimental knowledge of the system to be simulated should be available prior to any modelling attempt, which places a tremendous limitation on practical applications of models. Because the demand for more general simulation results is high, modellers have nevertheless taken the bold step of extrapolating a model tested within a limited sample of real conditions to a much larger domain. While methodological questions are often disregarded in this extrapolation process, they are specifically addressed in this paper, in particular the issue of models' a priori parameterisation. We thus implemented and tested a standard procedure to parameterise the soil components of a modified version of the CERES models. The procedure converts routinely-available soil properties into functional characteristics by means of pedo-transfer functions. The resulting predictions of soil water and nitrogen dynamics, as well as crop biomass, nitrogen content and leaf area index, were compared to observations from trials conducted in five locations across Europe (southern Italy, northern Spain, northern France and northern Germany). In three cases, the model's performance was judged acceptable when compared to experimental errors on the measurements, based on a test of the model's root mean squared error (RMSE). Significant deviations between observations and model outputs were however noted in all sites, and could be ascribed to various model routines. In decreasing importance, these were: the water balance, the turnover of soil organic matter, and crop N uptake. A better match to field observations could therefore be achieved by visually adjusting related parameters, such as the field-capacity water content or the size of the soil microbial biomass. As a result, model predictions fell within the measurement errors at all sites for most variables, and the model's RMSE was within the range of published values for similar tests. We conclude that the proposed a priori method yields acceptable simulations with only a 50% probability, a figure which may be greatly increased through a posteriori calibration. Modellers should thus exercise caution when extrapolating their models to a large sample of pedo-climatic conditions for which they have only limited information.
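The performance test referred to above compares the model's RMSE against the experimental error of the measurements. For reference, the statistic itself is straightforward; the soil-water contents below are hypothetical, purely to show the computation.

```python
import math

def rmse(observed, simulated):
    """Root mean squared error between observed and simulated values,
    as used to judge model performance against measurement error."""
    n = len(observed)
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(observed, simulated)) / n)

# Hypothetical volumetric soil-water contents (m3/m3): observed vs. simulated.
obs = [0.31, 0.28, 0.25, 0.22]
sim = [0.30, 0.29, 0.23, 0.24]
err = rmse(obs, sim)
```

The model is then deemed acceptable at a site when `err` does not exceed the estimated measurement error for that variable.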

Relevance: 30.00%

Abstract:

We present experiments in which the laterally confined flow of a surfactant film driven by controlled surface tension gradients causes the subtended liquid layer to self-organize into an inner upstream microduct surrounded by the downstream flow. The anomalous interfacial flow profiles and the concomitant backflow are a result of the feedback between two-dimensional and three-dimensional microfluidics realized during flow in open microchannels. Bulk and surface particle image velocimetry data combined with an interfacial hydrodynamics model explain the dependence of the observed phenomena on channel geometry.