966 results for density estimation


Relevance:

30.00%

Publisher:

Abstract:

Based on the conclusions drawn from the bijective transformation between possibility and probability, a method is proposed to estimate the fuzzy membership function for pattern recognition purposes. A rational-function approximation to the probability density function is obtained from the histogram of a finite (and sometimes very small) number of samples. This function is normalized so that its highest ordinate is one. The parameters of the rational function are used to classify pattern samples under a max-min decision rule. The method is illustrated with examples.
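
A minimal sketch of the normalization and max-min classification steps described above, with the paper's rational-function fit replaced by a raw histogram estimate for brevity (all names are illustrative):

```python
import numpy as np

def membership_from_samples(samples, bins=16):
    """Estimate a fuzzy membership function from a histogram:
    the density estimate is rescaled so its highest ordinate is one."""
    hist, edges = np.histogram(samples, bins=bins, density=True)
    return hist / hist.max(), edges          # highest ordinate -> 1

def classify_max_min(x, memberships, edges_list):
    """Max-min decision rule: assign x to the class with the largest
    membership grade (the 'min' step would combine several features)."""
    grades = []
    for mu, edges in zip(memberships, edges_list):
        i = np.clip(np.searchsorted(edges, x) - 1, 0, len(mu) - 1)
        grades.append(mu[i])
    return int(np.argmax(grades))
```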

Relevance:

30.00%

Publisher:

Abstract:

Interest in low bit rate video coding has increased considerably. Despite rapid progress in storage density and digital communication system performance, demand for data-transmission bandwidth and storage capacity continues to exceed the capabilities of available technologies. The growth of data-intensive digital audio and video applications, and the increased use of bandwidth-limited media such as video conferencing and full-motion video, have not only sustained the need for efficient ways to encode analog signals but have made signal compression central to digital communication and data-storage technology. In this paper we explore techniques for compressing image sequences in a manner that optimizes the results for the human receiver. We propose a new motion estimator using two novel block match algorithms based on human perception. Simulations with image sequences show an improved bit rate, while maintaining image quality, compared with conventional motion estimation techniques using the MAD block match criterion.
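
For reference, the conventional MAD criterion that the proposed perceptual block matchers are compared against can be sketched as an exhaustive search (an illustrative baseline, not the paper's algorithm):

```python
import numpy as np

def mad(a, b):
    """Mean absolute difference between two equally sized blocks."""
    return np.mean(np.abs(a.astype(float) - b.astype(float)))

def best_motion_vector(ref, cur, top, left, size=16, radius=7):
    """Find the displacement into `ref` minimizing MAD for the block
    of the current frame `cur` anchored at (top, left)."""
    block = cur[top:top + size, left:left + size]
    best, best_cost = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= ref.shape[0] - size and 0 <= x <= ref.shape[1] - size:
                cost = mad(block, ref[y:y + size, x:x + size])
                if cost < best_cost:
                    best_cost, best = cost, (dy, dx)
    return best, best_cost
```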

Relevance:

30.00%

Publisher:

Abstract:

Although extremely long low-density parity-check (LDPC) codes are well known to perform exceptionally well in error correction applications, short-length codes are preferable in practice. However, short-length LDPC codes suffer performance degradation owing to graph-based impairments such as short cycles, trapping sets, and stopping sets in the bipartite graph of the LDPC matrix. In particular, degradation at moderate to high Eb/N0 is caused by oscillations in bit-node a posteriori probabilities induced by short cycles and trapping sets. In this study, a computationally efficient algorithm is proposed to improve the performance of short-length LDPC codes at moderate to high Eb/N0. The algorithm makes use of the information generated by the belief propagation (BP) algorithm in the iterations preceding a decoding failure. Using this information, a reliability-based estimate is formed at each bit node to supplement the BP algorithm. The proposed algorithm gives an appreciable coding gain over BP decoding for LDPC codes with code rates of 1/2 or less. The coding gains are modest to significant for regular LDPC codes optimised for bipartite-graph conditioning, and substantial for unoptimised codes. Hence, the algorithm is useful for relaxing stringent constraints on the graphical structure of the LDPC code and for developing hardware-friendly designs.
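
A hedged sketch of the general idea, reusing per-iteration belief-propagation output to smooth oscillating bit-node beliefs; this simple averaged-LLR fallback stands in for the paper's reliability estimator, and `run_bp_iteration` and `check_parity` are placeholders for an existing decoder:

```python
import numpy as np

def decode_with_history(llr_in, run_bp_iteration, check_parity, max_iter=50):
    """Run BP; on decoding failure, fall back to a reliability estimate
    built from the bit-node LLRs of all previous iterations."""
    history, llr = [], llr_in.copy()
    for _ in range(max_iter):
        llr = run_bp_iteration(llr)          # one full BP iteration
        history.append(llr.copy())
        bits = (llr < 0).astype(int)
        if check_parity(bits):               # all checks satisfied
            return bits
    # Failure: bits trapped in short cycles oscillate in sign, so average
    # the stored LLRs to damp the oscillation before the hard decision.
    return (np.mean(history, axis=0) < 0).astype(int)
```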

Relevance:

30.00%

Publisher:

Abstract:

Collective damage of short fatigue cracks was analyzed in light of the equilibrium of crack number density. With estimates of the crack growth rate and crack nucleation rate, the solution of the equilibrium equation was studied to reveal the distinctive saturation feature of the crack number density distribution. The critical time characterizing the transition between the short- and long-crack regimes was estimated, and the influences of grain size and the grain-boundary obstacle effect were investigated. Furthermore, the total number of cracks and the first-order damage moment were discussed.
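
In the short-crack literature this equilibrium is usually written as a conservation law for the crack number density; a generic form, with symbols assumed here rather than taken from the paper, is:

```latex
% n(a,t): number density of cracks of length a at time t
% v(a,t) = da/dt: crack growth rate;  q(a,t): crack nucleation rate
\frac{\partial n}{\partial t} + \frac{\partial}{\partial a}\bigl(v\,n\bigr) = q
```

The saturation distribution then corresponds to the steady state with \partial n/\partial t = 0.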

Relevance:

30.00%

Publisher:

Abstract:

It has long been known that the various ignition criteria for energetic materials are applicable only within limited regimes. To explore the physical nature of ignition, we calculated how much thermal energy per unit mass of energetic material is absorbed under different external stimuli. Data from several typical sensitivity tests were analyzed by order-of-magnitude estimation, and a new concept, the critical thermal energy density, was formulated. The chemical nature of ignition was then examined through chemical kinetics.
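
As an order-of-magnitude illustration of the kind of estimate involved (all numbers assumed, not taken from the paper), the mechanical energy delivered per unit sample mass in a drop-weight impact test is:

```latex
e = \frac{mgh}{m_s}
  \approx \frac{(5\,\mathrm{kg})(9.8\,\mathrm{m\,s^{-2}})(0.3\,\mathrm{m})}{30\times10^{-6}\,\mathrm{kg}}
  \approx 5\times10^{5}\,\mathrm{J\,kg^{-1}}
```

Comparing such per-unit-mass energies across different stimuli is what motivates a single critical thermal energy density.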

Relevance:

30.00%

Publisher:

Abstract:

Using a spatially averaged global model, we obtain plasma parameters for a low-pressure inductively coupled plasma source in our laboratory. As far as global balance is concerned, the model gives reasonable results for parameters such as the global electron temperature and the ion impact energy. The ion flow is found to be hardly affected by the neutral gas pressure. Finally, magnetic effects are calculated with the same method: the magnetic field can play an important role in increasing the plasma density and ion current.
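
A spatially averaged global model of this kind typically rests on coupled particle and power balances; in generic form (symbols assumed here, following standard global-model notation):

```latex
% Particle balance: volume ionization = Bohm-flux wall loss
K_{iz}(T_e)\, n_g\, n_e V = n_e\, u_B(T_e)\, A_{\mathrm{eff}}
% Power balance: absorbed power = loss per electron-ion pair created
P_{\mathrm{abs}} = n_e\, u_B(T_e)\, A_{\mathrm{eff}}\, \varepsilon_T(T_e)
```

The first equation fixes the electron temperature; the second then gives the plasma density for a given absorbed power.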

Relevance:

30.00%

Publisher:

Abstract:

Providing online travel time information to commuters has become an important issue for Advanced Traveler Information Systems and Route Guidance Systems in recent years, owing to increasing traffic volume and congestion in road networks. Travel time is one of the most useful traffic variables because it is more intuitive than variables such as flow, occupancy, or density, and it supports travelers' decision making. The aim of this paper is to present a global view of the literature on travel time modeling, introducing crucial concepts and giving a thorough classification of existing techniques. Most of the attention focuses on travel time estimation and travel time prediction, which are generally not presented together. The main goals of these models, the study areas, and the methodologies used to carry out these tasks are further explored and categorized.

Relevance:

30.00%

Publisher:

Abstract:

The search for reliable proxies of past deep ocean temperature and salinity has proved difficult, thereby limiting our ability to understand the coupling of ocean circulation and climate over glacial-interglacial timescales. Previous inferences of deep ocean temperature and salinity from sediment pore fluid oxygen isotopes and chlorinity indicate that the deep ocean density structure at the Last Glacial Maximum (LGM, approximately 20,000 years BP) was set by salinity, and that the density contrast between northern and southern sourced deep waters was markedly greater than in the modern ocean. High density stratification could help explain the marked contrast in carbon isotope distribution recorded in the LGM ocean relative to that we observe today, but what made the ocean's density structure so different at the LGM? How did it evolve from one state to another? Further, given the sparsity of the LGM temperature and salinity data set, what else can we learn by increasing the spatial density of proxy records?

We investigate the cause and feasibility of a highly salinity-stratified deep ocean at the LGM, and we work to increase the amount of information that can be gleaned about the past ocean from pore fluid profiles of oxygen isotopes and chloride. Using a coupled ocean, sea ice, and ice-shelf cavity model, we test whether the deep ocean density structure at the LGM can be explained by ice-ocean interactions over the Antarctic continental shelves, and show that a large part of the LGM salinity stratification can be explained by lower ocean temperature. To extract the maximum information from pore fluid profiles of oxygen isotopes and chloride, we evaluate several inverse methods for ill-posed problems and their ability to recover bottom water histories from sediment pore fluid profiles. We demonstrate that Bayesian Markov chain Monte Carlo parameter estimation techniques enable us to robustly recover the full solution space of bottom water histories, not only at the LGM but through the most recent deglaciation and the Holocene up to the present. Finally, we evaluate a non-destructive pore fluid sampling technique, Rhizon samplers, against traditional squeezing methods, and show that despite their promise, Rhizons are unlikely to be a good sampling tool for pore fluid measurements of oxygen isotopes and chloride.
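
A minimal Metropolis-Hastings sketch of the Bayesian parameter estimation step, assuming Gaussian measurement errors and a forward model `simulate_profile` that maps a candidate bottom water history to a predicted pore fluid profile (all names illustrative):

```python
import numpy as np

def metropolis(data, simulate_profile, log_prior, n_params,
               n_steps=10000, step=0.05, sigma=0.1):
    """Sample bottom-water-history parameters whose simulated pore
    fluid profile matches the observations under Gaussian errors."""
    rng = np.random.default_rng(0)

    def log_post(theta):
        resid = data - simulate_profile(theta)
        return log_prior(theta) - 0.5 * np.sum((resid / sigma) ** 2)

    theta = np.zeros(n_params)
    lp = log_post(theta)
    chain = []
    for _ in range(n_steps):
        prop = theta + step * rng.standard_normal(n_params)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)    # posterior samples of the history
```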

Relevance:

30.00%

Publisher:

Abstract:

Estimation of the far-field centre is carried out in beam auto-alignment. In this paper, the features of the far field of a square beam are presented. Based on these features, a phase-only matched filter is designed and an algorithm for centre estimation is developed. Using simulated images with different kinds of noise and 40 test images taken in sequence, the accuracy of the algorithm is assessed. Results show that the error is no more than one pixel for simulated noise images with 99% probability, and that the estimate is stable to within one pixel for the test images. Using the improved algorithm, the computation time is reduced to 0.049 s.
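
The centre-finding step can be illustrated with a phase-only correlation in the Fourier domain (a generic sketch, not the authors' exact filter design):

```python
import numpy as np

def phase_only_centre(image, template):
    """Locate a template (e.g. the square-beam far-field spot) with a
    phase-only matched filter: keep the conjugate phase of the template
    spectrum, discard its magnitude, and pick the correlation peak."""
    F = np.fft.fft2(image)
    T = np.fft.fft2(template, s=image.shape)
    H = np.conj(T) / (np.abs(T) + 1e-12)      # phase-only filter
    corr = np.fft.ifft2(F * H).real
    return np.unravel_index(np.argmax(corr), corr.shape)  # (row, col)
```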

Relevance:

30.00%

Publisher:

Abstract:

Demersal groundfish densities were estimated by conducting a visual strip-transect survey via manned submersible on the continental shelf off Cape Flattery, Washington. The purpose of this study was to evaluate the statistical sampling power of the submersible survey as a tool to discriminate density differences between trawlable and untrawlable habitats. A geophysical map of the study area was prepared from side-scan sonar imagery, multibeam bathymetry data, and known locations of historical NMFS trawl survey events. Submersible transects were completed at randomly selected dive sites in each habitat type. Significant density differences between habitats were observed for lingcod (Ophiodon elongatus), yelloweye rockfish (Sebastes ruberrimus), and tiger rockfish (S. nigrocinctus) individually, and for "all rockfish" and "all flatfish" in the aggregate. Flatfish were more than ten times as abundant in the trawlable habitat samples as in the untrawlable samples, whereas rockfish as a group were over three times as abundant in the untrawlable habitat samples. Guidelines for sample sizes and implications for estimating the habitat bias of continental shelf trawl surveys are considered. We demonstrate an approach that can be used to establish sample size guidelines for future work by illustrating the interplay between statistical sampling power and 1) habitat-specific density differences, 2) the variance of those differences, and 3) the proportion of untrawlable area in a habitat.
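
The sample-size reasoning can be sketched with a standard two-sample power calculation (a normal-approximation formula assumed for illustration, not the paper's exact procedure):

```python
from math import ceil
from scipy.stats import norm

def transects_needed(delta, sd, alpha=0.05, power=0.8):
    """Transects per habitat needed to detect a true density difference
    `delta` given between-transect standard deviation `sd`."""
    z_a = norm.ppf(1 - alpha / 2)      # two-sided significance
    z_b = norm.ppf(power)              # target power
    return ceil(2 * ((z_a + z_b) * sd / delta) ** 2)

# Example with hypothetical numbers: a large habitat difference but high
# transect-to-transect variance -> transects_needed(9.0, 6.0) == 7
```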

Relevance:

30.00%

Publisher:

Abstract:

Wind erosion is one of the major environmental problems in semi-arid and arid regions. We established the Tariat-Xilin Gol transect running from northwest to southeast across the Mongolian Plateau and selected seven sampling sites along it. We then estimated soil wind erosion rates using the Cs-137 tracing technique and examined their spatial dynamics. Our results showed that the Cs-137 inventories of the sampling sites ranged from 265.63 +/- 44.91 to 1279.54 +/- 166.53 Bq m^-2, with corresponding wind erosion rates of 64.58 to 419.63 t km^-2 a^-1. In the Mongolian section of the transect (from Tariat to Sainshand), the wind erosion rate increased gradually with changing vegetation type and climatic regime; the erosion process was controlled by physical factors such as annual precipitation and vegetation coverage, and the impact of human activities was negligible. In the Chinese section (Inner Mongolia), by contrast, the wind erosion rates at Xilin Hot and Zhengxiangbai Banner were three times those at Bayannur in Mongolia, although all three sites are dominated by typical steppe. Besides the physical factors, higher population density and livestock carrying levels are likely responsible for the higher wind erosion rates in these two regions of Inner Mongolia.
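
A commonly used simplification of the Cs-137 conversion step (the proportional model, assumed here for illustration rather than taken from the paper) relates the erosion rate to the fractional loss of inventory relative to an undisturbed reference site:

```latex
% A_ref: reference Cs-137 inventory (Bq m^-2);  A: measured inventory
% d: mixing depth (m);  B: soil bulk density (kg m^-3);  T: years since 1963
R = \frac{d\,B}{T}\cdot\frac{A_{\mathrm{ref}} - A}{A_{\mathrm{ref}}}
% R in kg m^-2 a^-1; multiply by 1000 to express in t km^-2 a^-1
```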

Relevance:

30.00%

Publisher:

Abstract:

Inductively coupled plasma atomic emission spectrometry (ICP-AES) and its signal characteristics were discussed using modern spectral estimation techniques. The power spectral density (PSD) was calculated using the auto-regressive (AR) model of modern spectral estimation, with the Levinson-Durbin recursion used to estimate the model parameters entering the PSD computation. Results obtained with actual ICP-AES spectra and measurements showed that the spectral estimation technique is helpful for better understanding the spectral composition and signal characteristics.
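
The AR/PSD computation can be sketched as follows (a generic Levinson-Durbin implementation operating on an emission-intensity record; names and defaults are illustrative):

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Yule-Walker equations for AR coefficients a[0..p]
    (a[0] = 1) and the innovation variance, from autocorrelations r."""
    a = np.zeros(order + 1)
    a[0], err = 1.0, r[0]
    for k in range(1, order + 1):
        acc = r[k] + np.dot(a[1:k], r[k - 1:0:-1])
        ref = -acc / err                      # reflection coefficient
        a[1:k + 1] += ref * np.concatenate((a[k - 1:0:-1], [1.0]))
        err *= 1.0 - ref ** 2
    return a, err

def ar_psd(signal, order=10, n_freq=512):
    """AR power spectral density of a detrended intensity signal."""
    x = np.asarray(signal, float) - np.mean(signal)
    r = np.correlate(x, x, mode='full')[len(x) - 1:] / len(x)
    a, err = levinson_durbin(r[:order + 1], order)
    w = np.linspace(0, np.pi, n_freq)         # rad/sample
    A = np.exp(-1j * np.outer(w, np.arange(order + 1))) @ a
    return w, err / np.abs(A) ** 2
```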

Relevance:

30.00%

Publisher:

Abstract:

For two multinormal populations with equal covariance matrices, the likelihood ratio discriminant function, an alternative allocation rule to the sample linear discriminant function when n1 ≠ n2, is studied analytically. Under the assumption of a known covariance matrix, its distribution is derived, and the expectations of its actual and apparent error rates are evaluated and compared with those of the sample linear discriminant function. This comparison indicates that the likelihood ratio allocation rule is robust to unequal sample sizes. The quadratic discriminant function is studied, its distribution reviewed, and the evaluation of its probabilities of misclassification discussed. For known covariance matrices, the distribution of the sample quadratic discriminant function is derived. When the known covariance matrices are proportional, exact expressions for the expectations of its actual and apparent error rates are obtained and evaluated, and the effectiveness of the sample linear discriminant function in this case is also considered. Estimation of the true log-odds for two multinormal populations with equal or unequal covariance matrices is studied. The estimative, Bayesian predictive, and kernel methods are compared by evaluating their biases and mean square errors, and algebraic expressions for these quantities are derived. With equal covariance matrices the predictive method is preferable; the source of this superiority is investigated by considering its performance at various levels of fixed true log-odds. The predictive method is also shown to be sensitive to n1 ≠ n2. For unequal but proportional covariance matrices the unbiased estimative method is preferred. Product normal kernel density estimates are used to give a kernel estimator of the true log-odds, and the effect of correlation among the variables with product kernels is considered. With equal covariance matrices the kernel and parametric estimators are compared by simulation; for moderately correlated variables and large dimensions the product kernel method is a good estimator of the true log-odds.
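
The kernel estimator of true log-odds can be sketched with product normal kernels (a generic illustration; bandwidth selection is assumed, not taken from the thesis):

```python
import numpy as np

def product_kernel_density(x, sample, h):
    """Product normal kernel density estimate at a d-dimensional point x,
    from an (n, d) sample with per-variable bandwidths h."""
    u = (x - sample) / h
    k = np.exp(-0.5 * u ** 2) / (np.sqrt(2 * np.pi) * h)
    return np.mean(np.prod(k, axis=1))

def kernel_log_odds(x, sample1, sample2, h1, h2):
    """Estimated log-odds of population 1 versus population 2 at x."""
    return (np.log(product_kernel_density(x, sample1, h1))
            - np.log(product_kernel_density(x, sample2, h2)))
```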

Relevance:

30.00%

Publisher:

Abstract:

Empirical modeling of high-frequency currency market data reveals substantial evidence of nonnormality, stochastic volatility, and other nonlinearities. This paper investigates whether an equilibrium monetary model can account for nonlinearities in weekly data. The model incorporates time-nonseparable preferences and a transaction cost technology. Simulated sample paths are generated using Marcet's parameterized expectations procedure. The paper also develops a new method for the estimation of structural economic models: the method forces the model to match, under a GMM criterion, the score function of a nonparametric estimate of the conditional density of the observed data. The estimation uses weekly U.S.-German currency market data, 1975-90.
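
The score-matching idea can be sketched in a few lines (a heavily simplified illustration of the moment condition, with `simulate` and `score_fn` as placeholders for the structural model and the fitted nonparametric score):

```python
import numpy as np
from scipy.optimize import minimize

def score_gmm_objective(theta, simulate, score_fn, W):
    """GMM criterion: the data-density score, averaged over a long path
    simulated at theta, should be near zero at the true parameters."""
    sim = simulate(theta)                  # simulated series at theta
    g = np.mean(score_fn(sim), axis=0)     # moment vector (mean score)
    return g @ W @ g

# theta_hat = minimize(score_gmm_objective, theta0,
#                      args=(simulate, score_fn, W)).x
```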

Relevance:

30.00%

Publisher:

Abstract:

Based on experimental viscosity data collected from the literature, and using density data obtained from a predictive method previously proposed by the authors, a group contribution method is proposed to estimate the viscosity of imidazolium-, pyridinium-, and pyrrolidinium-based ILs containing hexafluorophosphate (PF6), tetrafluoroborate (BF4), bis(trifluoromethanesulfonyl)amide (Tf2N), chloride (Cl), acetate (CH3COO), methyl sulfate (MeSO4), ethyl sulfate (EtSO4), and trifluoromethanesulfonate (CF3SO3) anions, covering wide ranges of temperature (293–393 K) and viscosity (4–21,000 cP). Good agreement with literature data is obtained: for the roughly 500 data points of the 29 ILs studied, a mean percent deviation (MPD) of 7.7% with a maximum deviation below 28% was observed; 71.1% of the estimated viscosities deviate from the experimental values by less than 10%, and only 6.4% deviate by more than 20%. The group contribution method developed here can thus be used to evaluate the viscosity of new ionic liquids over wide temperature ranges at atmospheric pressure and, as data for new groups of cations and anions become available, can be extended to a larger range of ionic liquids.
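
One common functional form for such group contribution viscosity models combines a Vogel-Fulcher-Tammann temperature dependence with additive group parameters; a sketch under that assumption (all parameter values are placeholders, not the authors' fitted values):

```python
import math

# Hypothetical group parameters: each cation/anion fragment contributes
# additively to the VFT coefficients A and B (placeholder values).
GROUPS = {
    "C4mim": (-3.5, 1100.0),
    "PF6":   (-1.4, 300.0),
}
T0 = 165.0   # K, fixed VFT reference temperature (assumed)

def viscosity_cP(groups, T):
    """Estimate viscosity (cP) at temperature T (K) via
    ln(eta) = A + B / (T - T0), with A and B summed over groups."""
    A = sum(GROUPS[g][0] for g in groups)
    B = sum(GROUPS[g][1] for g in groups)
    return math.exp(A + B / (T - T0))

# e.g. viscosity_cP(["C4mim", "PF6"], 298.15)  ->  a few hundred cP
```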