886 results for Statistic nonparametric


Relevance: 20.00%

Abstract:

A group of statistical algorithms is proposed for the inversion of the three major components of Case-II waters in the coastal area of the Huanghai Sea and the East China Sea. The algorithms are based on in situ data collected in the spring of 2003 under strict quality assurance according to NASA ocean bio-optical protocols, and are the first for the area with quantitative confidence. The average relative errors between the inverted and in situ measured concentrations are about 37% for Chl-a and about 25% for total suspended matter (TSM). This preliminary result is quite satisfactory for Case-II waters, although some aspects of the model need further study. A sensitivity analysis with a 5% input error in remote sensing reflectance (Rrs) shows that the algorithms are quite stable. Except for the Chl-a algorithm, the algorithms differ substantially from Tassan's local SeaWiFS algorithms for different waters.
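Empirical inversion algorithms of this kind are commonly built as polynomial regressions of log concentration on a log Rrs band ratio. The sketch below illustrates that general form only; the bands, coefficients, and synthetic data are hypothetical, not the paper's fitted algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical in situ pairs: log10 blue/green Rrs band ratio vs log10 Chl-a.
log_ratio = rng.uniform(-0.4, 0.6, size=60)
log_chl = 0.3 - 1.5 * log_ratio + 0.05 * rng.normal(size=60)  # synthetic truth

# Fit a polynomial inversion model log10(Chl) = sum_k a_k * log10(ratio)^k,
# the usual form of empirical ocean-colour algorithms.
coeffs = np.polyfit(log_ratio, log_chl, deg=2)

def invert_chl(rrs_blue, rrs_green):
    """Estimate Chl-a from a remote-sensing reflectance band ratio."""
    x = np.log10(rrs_blue / rrs_green)
    return 10 ** np.polyval(coeffs, x)

# Average relative error on the (synthetic) calibration data itself.
predicted = 10 ** np.polyval(coeffs, log_ratio)
rel_err = np.mean(np.abs(predicted - 10 ** log_chl) / 10 ** log_chl)
print(rel_err)
```

Evaluating the average relative error on in situ data, as above, is how figures like the paper's 37% (Chl-a) and 25% (TSM) are obtained.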

Relevance: 20.00%

Abstract:

In this paper we present Statistical Rate Monotonic Scheduling (SRMS), a generalization of the classical RMS results of Liu and Layland that allows scheduling of periodic tasks with highly variable execution times and statistical QoS requirements. Like RMS, SRMS has two components: a feasibility test and a scheduling algorithm. The feasibility test ensures that, under the SRMS scheduling algorithm, a given periodic task set can share a given resource (e.g. a processor, communication medium, or switching device) without violating any task's QoS constraints. The SRMS scheduling algorithm incorporates a number of unique features. First, it allows fixed-priority scheduling that keeps a task's value (or importance) independent of its period. Second, it supports job admission control, rejecting jobs that are not guaranteed to finish by their deadlines as soon as they are released, which lets the system take compensating actions; admission control also conserves resources, since no time is spent on jobs that would miss their deadlines anyway. Third, SRMS seamlessly integrates reservation-based and best-effort resource scheduling: reservation-based scheduling ensures delivery of the minimal requested QoS, while best-effort scheduling ensures that unused reserved bandwidth is not wasted but used to improve QoS further. Fourth, SRMS lets a system deal gracefully with overload by ensuring a fair deterioration in QoS across all tasks, as opposed to penalizing tasks with longer periods, for example. Finally, SRMS has the added advantage that its schedulability test is simple and its scheduling algorithm has constant overhead, in the sense that the scheduler's complexity does not depend on the number of tasks in the system.
We have evaluated SRMS against a number of alternative scheduling algorithms suggested in the literature (e.g. RMS and slack stealing), as well as refinements thereof, which we describe in this paper. Consistently throughout our experiments, SRMS provided the best performance. In addition, to evaluate the optimality of SRMS, we have compared it to an inefficient, yet optimal scheduler for task sets with harmonic periods.
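As a point of reference for the classical results that SRMS generalizes, the Liu–Layland utilization-bound feasibility test for plain RMS can be sketched as follows. This is the classical test only, not the SRMS feasibility test; the task parameters are hypothetical.

```python
# Classical Liu-Layland feasibility test for Rate Monotonic Scheduling:
# a periodic task set is schedulable under RMS if its total utilization
# U = sum(C_i / T_i) does not exceed n * (2**(1/n) - 1).

def rms_feasible(tasks):
    """tasks: list of (execution_time, period) pairs."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization <= bound

# Example: three tasks with U = 0.25 + 0.20 + 0.20 = 0.65,
# below the n = 3 bound of about 0.7798.
tasks = [(1, 4), (1, 5), (2, 10)]
print(rms_feasible(tasks))  # True
```

The bound is sufficient but not necessary, which is one reason refinements such as slack stealing, and generalizations such as SRMS, are of interest.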

Relevance: 20.00%

Abstract:

An incremental, nonparametric probability estimation procedure using the fuzzy ARTMAP neural network is introduced. In slow-learning mode, fuzzy ARTMAP searches for patterns of data on which to build ever more accurate estimates. In max-nodes mode, the network initially learns a fixed number of categories, and weights are then adjusted gradually.

Relevance: 20.00%

Abstract:

Empirical modeling of high-frequency currency market data reveals substantial evidence for nonnormality, stochastic volatility, and other nonlinearities. This paper investigates whether an equilibrium monetary model can account for nonlinearities in weekly data. The model incorporates time-nonseparable preferences and a transaction cost technology. Simulated sample paths are generated using Marcet's parameterized expectations procedure. The paper also develops a new method for estimation of structural economic models. The method forces the model to match (under a GMM criterion) the score function of a nonparametric estimate of the conditional density of observed data. The estimation uses weekly U.S.-German currency market data, 1975-90. © 1995.
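The score-matching estimation idea can be illustrated in miniature: fit an auxiliary density to the observed data, then choose the structural parameter so that the auxiliary score, averaged over simulated paths, is close to zero under a GMM quadratic form. The Gaussian auxiliary model and location-only structural model below are illustrative stand-ins, not the paper's nonparametric conditional density or monetary model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed data (a stand-in for the weekly currency-market series).
observed = rng.normal(loc=0.5, scale=1.0, size=2000)

# Step 1: fit an auxiliary density f(y; mu, sigma) to the observed data.
mu_hat = observed.mean()
sig_hat = observed.std()

def score(y, mu, sig):
    """Average score of the Gaussian auxiliary density w.r.t. (mu, sigma)."""
    z = (y - mu) / sig
    return np.array([z.mean() / sig, ((z ** 2).mean() - 1.0) / sig])

# Step 2: choose theta so simulated paths make the auxiliary score
# (evaluated at the fitted mu_hat, sig_hat) small in a GMM quadratic form.
# Structural model here: y = theta + N(0, 1).
shocks = rng.normal(size=20000)

def gmm_objective(theta):
    simulated = theta + shocks
    g = score(simulated, mu_hat, sig_hat)
    return g @ g  # identity weighting matrix

grid = np.linspace(-1.0, 2.0, 301)
theta_star = grid[np.argmin([gmm_objective(t) for t in grid])]
print(round(theta_star, 2))  # close to the true location 0.5
```

In the paper the auxiliary score comes from a nonparametric estimate of the conditional density, and the simulated paths come from the equilibrium monetary model via parameterized expectations, but the matching logic is the same.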

Relevance: 20.00%

Abstract:

Background: The evaluation of the complexity of an observed object is an old but outstanding problem. In this paper we address this problem by introducing a measure called statistic complexity.

Relevance: 20.00%

Abstract:

A nonparametric, small-sample-size test for the homogeneity of two psychometric functions against the left- and right-shift alternatives has been developed. The test is designed to determine whether it is safe to amalgamate psychometric functions obtained in different experimental sessions. The sum of the lower and upper p-values of the exact (conditional) Fisher test for several 2 × 2 contingency tables (one for each point of the psychometric function) is employed as the test statistic. The probability distribution of the statistic under the null (homogeneity) hypothesis is evaluated to obtain corresponding p-values. Power functions of the test have been computed by randomly generating samples from Weibull psychometric functions. The test is free of any assumptions about the shape of the psychometric function; it requires only that all observations are statistically independent. © 2011 Psychonomic Society, Inc.
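A literal reading of the statistic described above, summing the lower- and upper-tail p-values of Fisher's exact test across one 2 × 2 table per stimulus level, can be sketched as follows. The session data are hypothetical, and the null distribution of the statistic (evaluated exactly in the paper) is not reproduced here.

```python
from scipy.stats import fisher_exact

def homogeneity_statistic(tables):
    """Sum of lower- and upper-tail Fisher exact-test p-values over
    the 2x2 tables (one table per point of the psychometric function)."""
    total = 0.0
    for table in tables:
        p_lower = fisher_exact(table, alternative='less')[1]
        p_upper = fisher_exact(table, alternative='greater')[1]
        total += p_lower + p_upper
    return total

# Hypothetical counts: (yes, no) responses in session A vs session B
# at three stimulus levels.
tables = [
    [[3, 7], [4, 6]],   # level 1
    [[5, 5], [6, 4]],   # level 2
    [[8, 2], [9, 1]],   # level 3
]
print(homogeneity_statistic(tables))
```

Note that for each table the two one-sided p-values sum to 1 plus the probability of the observed outcome, so the statistic lies between k and 2k for k tables; small values signal shifts between sessions.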

Relevance: 20.00%

Abstract:

In many applications in applied statistics, researchers reduce the complexity of a data set by combining a group of variables into a single measure using factor analysis or an index number. We argue that such compression loses information if the data actually have high dimensionality. We advocate the use of a nonparametric estimator commonly used in physics, the Takens estimator, to estimate the correlation dimension of the data prior to compression. The advantage of this approach over traditional linear data-compression approaches is that the data do not have to be linearized. Applying our ideas to the United Nations Human Development Index, we find that the four variables used in its construction have dimension three, so the index loses information.
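Takens' maximum-likelihood estimator of the correlation dimension uses the pairwise distances that fall below a cutoff radius: D = -1 / mean(log(d / cutoff)). A minimal sketch on synthetic data (the point cloud and cutoff are hypothetical, not the HDI data):

```python
import numpy as np

def takens_dimension(points, cutoff):
    """Takens' estimator of correlation dimension from the pairwise
    distances smaller than the cutoff radius."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1))
    pairs = dists[np.triu_indices(len(points), k=1)]
    small = pairs[(pairs > 0) & (pairs < cutoff)]
    return -1.0 / np.mean(np.log(small / cutoff))

# Sanity check: points filling a 3-D cube should give a dimension
# near 3 (slightly low because of edge effects).
rng = np.random.default_rng(1)
cloud = rng.uniform(size=(800, 3))
print(takens_dimension(cloud, cutoff=0.2))
```

Applied to standardized index components, an estimated dimension close to the number of variables (here, three for four variables) indicates that a one-dimensional summary discards information.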

Relevance: 20.00%

Abstract:

We consider local order estimation for nonlinear autoregressive systems with exogenous inputs (NARX), which may have different local dimensions at different points. By minimizing the kernel-based local information criterion introduced in this paper, strongly consistent estimates of the local orders of the NARX system at the points of interest are obtained. A modification of the criterion and a simple procedure for searching for its minimum are also discussed. The theoretical results are tested on simulation examples.
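The order-selection idea can be illustrated with a global (non-local) analogue: fit candidate ARX orders by least squares and pick the pair minimizing a BIC-style criterion. This is a hedged sketch, not the paper's kernel-based local criterion; the system, coefficients, and candidate orders are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a linear ARX system of true order (na, nb) = (2, 1):
# y_t = 0.5 y_{t-1} - 0.3 y_{t-2} + 0.8 u_{t-1} + noise
N = 2000
u = rng.normal(size=N)
y = np.zeros(N)
for t in range(2, N):
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + 0.8 * u[t - 1] + 0.1 * rng.normal()

def bic(na, nb, y, u):
    """Least-squares fit of an ARX(na, nb) model plus a BIC penalty."""
    p = max(na, nb)
    rows = []
    for t in range(p, len(y)):
        # Regressors: [y_{t-1}..y_{t-na}, u_{t-1}..u_{t-nb}]
        rows.append(np.r_[y[t - na:t][::-1], u[t - nb:t][::-1]])
    X, target = np.array(rows), y[p:]
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ beta
    n = len(target)
    return n * np.log(np.mean(resid ** 2)) + np.log(n) * (na + nb)

orders = [(na, nb) for na in range(1, 4) for nb in range(1, 4)]
best = min(orders, key=lambda o: bic(*o, y, u))
print(best)  # the true order (2, 1)
```

The paper's criterion replaces the global residual fit with a kernel-weighted fit around the point of interest, so the selected order can vary from point to point.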