973 results for "Error in substance"


Relevance:

90.00%

Publisher:

Abstract:

In this paper, a nonlinear suboptimal detector whose performance in heavy-tailed noise is significantly better than that of the matched filter is proposed. The detector consists of a nonlinear wavelet denoising filter to enhance the signal-to-noise ratio, followed by a replica correlator. Performance of the detector is investigated through an asymptotic theoretical analysis as well as Monte Carlo simulations. The proposed detector offers the following advantages over the optimal (in the Neyman-Pearson sense) detector: it is easier to implement, and it is more robust with respect to error in modeling the probability distribution of noise.
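A minimal sketch of the detector structure described in this abstract, assuming the PyWavelets (pywt) package; the wavelet family, soft-thresholding rule, and noise-scale estimate are generic choices rather than the paper's design, and the function name is hypothetical.

```python
import numpy as np
import pywt

def denoise_correlate_detector(x, replica, wavelet="db4", level=4, threshold=None):
    """Wavelet-denoise the observation, then correlate it with the known replica.

    Returns the correlator statistic; in practice it would be compared against a
    threshold chosen for the desired false-alarm rate (illustrative, not the
    paper's design).
    """
    # Wavelet decomposition of the noisy observation.
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # Robust noise-scale estimate from the finest detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = threshold if threshold is not None else sigma * np.sqrt(2.0 * np.log(len(x)))
    # Soft-threshold the detail coefficients (keep the approximation coefficients).
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    x_denoised = pywt.waverec(denoised, wavelet)[: len(x)]
    # Replica correlator applied to the denoised signal.
    return float(np.dot(x_denoised, replica))

# Toy usage: a known signal buried in heavy-tailed (Student-t) noise.
rng = np.random.default_rng(0)
n = 1024
replica = np.sin(2 * np.pi * 50 * np.arange(n) / n)
observation = 0.5 * replica + rng.standard_t(df=2.5, size=n)
print(denoise_correlate_detector(observation, replica))
```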

Relevance:

90.00%

Publisher:

Abstract:

This paper considers the problem of channel estimation at the transmitter in a spatial multiplexing-based Time Division Duplex (TDD) Multiple Input Multiple Output (MIMO) system with perfect CSIR. A novel channel-dependent Reverse Channel Training (RCT) sequence is proposed, which the transmitter uses to estimate the beamforming vectors for forward-link data transmission. This training sequence is designed based on the following two metrics: (i) a capacity lower bound, and (ii) the mean square error in the estimate. The performance of the proposed training scheme is analyzed and shown to significantly outperform the conventional orthogonal RCT sequence. Also, for the case where the transmitter uses water-filling power allocation for data transmission, a novel RCT sequence is proposed and optimized with respect to the MSE in estimating the transmit covariance matrix.
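The water-filling allocation mentioned at the end can be sketched generically as follows; this is a textbook water-filling routine over estimated eigen-channel gains, not the paper's RCT construction, and all names and numbers are illustrative.

```python
import numpy as np

def water_filling(gains, total_power, tol=1e-9):
    """Allocate power over parallel channels with the given gains.

    Solves p_i = max(mu - 1/g_i, 0) with sum(p_i) = total_power by bisecting
    on the water level mu.
    """
    gains = np.asarray(gains, dtype=float)
    inv = 1.0 / gains
    lo, hi = inv.min(), inv.max() + total_power
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - inv, 0.0).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.5 * (lo + hi) - inv, 0.0)

# Toy usage: eigenvalues of an estimated effective channel / covariance matrix.
gains = np.array([2.0, 1.0, 0.25, 0.05])
p = water_filling(gains, total_power=1.0)
print(p, p.sum())
```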

Relevance:

90.00%

Publisher:

Abstract:

The Electromagnetic Articulography (EMA) technique is used to record the kinematics of different articulators while one speaks. EMA data often contain missing segments due to sensor failure. In this work, we propose a maximum a posteriori (MAP) estimation scheme with a continuity constraint to recover the missing samples in the articulatory trajectories recorded using EMA. In this approach, we combine the benefits of statistical MAP estimation with the temporal continuity of the articulatory trajectories. Experiments on an articulatory corpus using different missing-segment durations show that the proposed continuity constraint yields a 30% reduction in average root mean squared estimation error relative to statistical estimation of the missing segments without any continuity constraint.
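A rough sketch of recovering a missing segment under a temporal-continuity constraint alone: the missing samples are chosen to minimize the sum of squared second differences of the trajectory. This omits the statistical (MAP) prior of the paper and is only meant to illustrate the continuity part; the function name and toy data are assumptions.

```python
import numpy as np

def fill_missing_smooth(y, missing_mask):
    """Fill missing samples by minimizing the sum of squared second differences.

    y            : 1-D trajectory, arbitrary values at the missing positions.
    missing_mask : boolean array, True where the sample is missing.
    """
    n = len(y)
    # Second-difference operator D of shape (n - 2, n).
    D = np.zeros((n - 2, n))
    rows = np.arange(n - 2)
    D[rows, rows], D[rows, rows + 1], D[rows, rows + 2] = 1.0, -2.0, 1.0

    obs = ~missing_mask
    # Minimize ||D_m x_m + D_o y_o||^2 over the missing samples x_m.
    Dm, Do = D[:, missing_mask], D[:, obs]
    x_m, *_ = np.linalg.lstsq(Dm, -Do @ y[obs], rcond=None)

    filled = y.astype(float).copy()
    filled[missing_mask] = x_m
    return filled

# Toy usage: a smooth trajectory with one missing segment.
t = np.linspace(0.0, 1.0, 200)
traj = np.sin(2 * np.pi * 3 * t)
mask = (t > 0.4) & (t < 0.55)
recovered = fill_missing_smooth(np.where(mask, 0.0, traj), mask)
print("max reconstruction error:", np.max(np.abs(recovered - traj)))
```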

Relevance:

90.00%

Publisher:

Abstract:

Regionalization of extreme rainfall is useful for various applications in hydro-meteorology. There is a dearth of regionalization studies on extreme rainfall in India. In this context, a set of 25 regions that are homogeneous in 1-, 2-, 3-, 4- and 5-day extreme rainfall is delineated based on a seasonality measure of extreme rainfall and location indicators (latitude, longitude and altitude) using global fuzzy c-means (GFCM) cluster analysis. The regions are validated for homogeneity in the L-moment framework. One application of the regions is in arriving at quantile estimates of extreme rainfall at sparsely gauged/ungauged locations through options such as regional frequency analysis (RFA). RFA uses rainfall-related information from gauged sites in a region as the basis for estimating quantiles of extreme rainfall at target locations that resemble the region in terms of rainfall characteristics. A procedure for RFA based on the GFCM-delineated regions is presented and its effectiveness is evaluated by leave-one-out cross-validation. Error in quantile estimates for ungauged sites is compared with that resulting from the region-of-influence (ROI) approach, which forms site-specific regions exclusively for quantile estimation. Results indicate that errors in quantile estimates based on GFCM regions and on ROI are fairly close, and neither is consistent in yielding the least error over all sites. The cluster analysis approach is effective in reducing the number of regions to be delineated for RFA.
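A minimal fuzzy c-means sketch over site attribute vectors (for example, seasonality measures plus latitude, longitude, and altitude); the fuzzifier, tolerance, and random initialization are generic defaults, not the GFCM settings used in the study.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, max_iter=300, tol=1e-6, seed=0):
    """Plain fuzzy c-means on the rows of X; returns (centers, memberships)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Random initial membership matrix, rows sum to 1.
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        Um = U ** m
        # Cluster centres as membership-weighted means of the sites.
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distance of every site to every centre.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2 / (m - 1)).
        U_new = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
        converged = np.max(np.abs(U_new - U)) < tol
        U = U_new
        if converged:
            break
    return centers, U

# Toy usage: cluster standardized site attributes into 3 fuzzy regions.
rng = np.random.default_rng(1)
sites = np.vstack([rng.normal(c, 0.3, size=(40, 4)) for c in (0.0, 2.0, 4.0)])
centers, U = fuzzy_c_means(sites, n_clusters=3)
print("hard labels of first 10 sites:", U.argmax(axis=1)[:10])
```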

Relevance:

90.00%

Publisher:

Abstract:

The extinction cross sections of a system containing two particles are calculated by the T-matrix method, and the results are compared with those of two single particles under the single-scattering approximation. The necessity of correcting the refractive indices of water and polystyrene for different incident wavelengths is particularly addressed in the calculation. By this means, the volume fractions allowed for given accuracy requirements of the single-scattering approximation in light scattering experiments can be evaluated. The volume fractions calculated with corrected refractive indices are compared with those obtained with fixed refractive indices, which have been rather commonly used, showing that fixed refractive indices may cause significant error in evaluating the multiple-scattering effect. The results also give a simple criterion for selecting the incident wavelength and particle size to avoid the 'blind zone' in turbidity measurements, where the turbidity change is insensitive to aggregation of two particles.

Relevance:

90.00%

Publisher:

Abstract:

The small-scale motions relevant to the collision of heavy particles represent a general challenge to the conventional large-eddy simulation (LES) of turbulent particle-laden flows. As a first step toward addressing this challenge, we examine the capability of the LES method with an eddy viscosity subgrid scale (SGS) model to predict the collision-related statistics such as the particle radial distribution function at contact, the radial relative velocity at contact, and the collision rate for a wide range of particle Stokes numbers. Data from direct numerical simulation (DNS) are used as a benchmark to evaluate the LES using both a priori and a posteriori tests. It is shown that, without the SGS motions, LES cannot accurately predict the particle-pair statistics for heavy particles with small and intermediate Stokes numbers, and a large relative error in collision rate up to 60% may arise when the particle Stokes number is near St_K=0.5. The errors from the filtering operation and the SGS model are evaluated separately using the filtered-DNS (FDNS) and LES flow fields. The errors increase with the filter width and have nonmonotonic variations with the particle Stokes numbers. It is concluded that the error due to filtering dominates the overall error in LES for most particle Stokes numbers. It is found that the overall collision rate can be reasonably predicted by both FDNS and LES for St_K>3. Our analysis suggests that, for St_K<3, a particle SGS model must include the effects of SGS motions on the turbulent collision of heavy particles. The spectral analysis of the concentration fields of the particles with different Stokes numbers further demonstrates the important effects of the small-scale motions on the preferential concentration of the particles with small Stokes numbers.
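The pair statistics discussed above can be estimated from instantaneous particle positions and velocities roughly as sketched below: count pairs in a thin shell around the contact radius to get the radial distribution function g(R), average the magnitude of the radial relative velocity over those pairs, and combine them in the standard kinematic collision kernel 2*pi*R^2*<|w_r|>*g(R). Boundary effects and periodicity are ignored, and the shell width and all numbers are arbitrary illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def pair_collision_statistics(pos, vel, R, shell=0.05, box_volume=1.0):
    """Estimate g(R), <|w_r|>, and the kinematic collision kernel near contact.

    pos, vel : (N, 3) particle positions and velocities.
    R        : collision radius (sum of the particle radii).
    shell    : relative half-width of the thin shell around r = R.
    """
    n = len(pos)
    r_lo, r_hi = R * (1.0 - shell), R * (1.0 + shell)
    tree = cKDTree(pos)
    pairs = np.array(sorted(tree.query_pairs(r_hi)))
    if pairs.size == 0:
        return 0.0, 0.0, 0.0
    sep = pos[pairs[:, 0]] - pos[pairs[:, 1]]
    dist = np.linalg.norm(sep, axis=1)
    in_shell = dist >= r_lo
    sep, dist = sep[in_shell], dist[in_shell]
    dvel = vel[pairs[in_shell, 0]] - vel[pairs[in_shell, 1]]
    # Radial relative velocity of each pair in the shell.
    w_r = np.einsum("ij,ij->i", dvel, sep) / dist
    # g(R): observed pairs in the shell over the uniform-distribution expectation.
    shell_volume = 4.0 / 3.0 * np.pi * (r_hi**3 - r_lo**3)
    expected = 0.5 * n * (n - 1) * shell_volume / box_volume
    g_R = len(dist) / expected
    mean_abs_wr = np.abs(w_r).mean()
    kernel = 2.0 * np.pi * R**2 * mean_abs_wr * g_R
    return g_R, mean_abs_wr, kernel

# Toy usage with random positions and velocities in a unit box.
rng = np.random.default_rng(2)
pos = rng.random((20000, 3))
vel = rng.normal(0.0, 0.1, size=(20000, 3))
print(pair_collision_statistics(pos, vel, R=0.01))
```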

Relevance:

90.00%

Publisher:

Abstract:

This thesis is a theoretical work on the space-time dynamic behavior of a nuclear reactor without feedback. Diffusion theory with G-energy groups is used.

In the first part the accuracy of the point kinetics (lumped-parameter description) model is examined. The fundamental approximation of this model is the splitting of the neutron density into a product of a known function of space and an unknown function of time; the properties of the system can then be averaged in space through the use of appropriate weighting functions, and as a result a set of ordinary differential equations is obtained for the description of the time behavior. It is clear that changes in the shape of the neutron-density distribution due to space-dependent perturbations are neglected. This results in an error in the eigenvalues, and it is for this error that bounds are derived. This is done by using the method of weighted residuals to reduce the original eigenvalue problem to that of a real asymmetric matrix. Gershgorin-type theorems are then used to find discs in the complex plane in which the eigenvalues are contained. The radii of the discs depend on the perturbation in a simple manner.
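The Gershgorin step can be illustrated numerically: every eigenvalue of a matrix lies in at least one disc centred at a diagonal entry whose radius is the sum of the absolute off-diagonal entries in that row. The small matrix below is arbitrary, standing in for the weighted-residual reduction, not an actual reactor model.

```python
import numpy as np

def gershgorin_discs(A):
    """Return (centre, radius) of the Gershgorin disc for each row of A."""
    A = np.asarray(A, dtype=float)
    centres = np.diag(A)
    radii = np.abs(A).sum(axis=1) - np.abs(centres)
    return list(zip(centres, radii))

# A small asymmetric matrix standing in for the reduced eigenvalue problem.
A = np.array([[-4.0,  0.5,  0.1],
              [ 0.3, -2.0,  0.2],
              [ 0.1,  0.4, -1.0]])
print("discs (centre, radius):", gershgorin_discs(A))
print("eigenvalues:", np.linalg.eigvals(A))
```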

In the second part the effect of delayed neutrons on the eigenvalues of the group-diffusion operator is examined. The delayed neutrons cause a shifting of the prompt-neutron eigenvalues and the appearance of the delayed eigenvalues. Using a simple perturbation method, this shifting is calculated and the delayed eigenvalues are predicted with good accuracy.

Relevance:

90.00%

Publisher:

Abstract:

Based on the generalized Huygens-Fresnel diffraction integral theory and the stationary-phase method, we analyze the influence of an elliptical manufacturing error in an axicon on diffraction-free beam patterns. The numerical simulation is compared with beam patterns photographed with a CCD camera. The theoretical simulation and experimental results indicate that the intensity of the central spot decreases with increasing elliptical manufacturing error and propagation distance. Meanwhile, the bright rings around the central spot gradually split into four or more symmetrical bright spots. The experimental results fit the theoretical simulation very well. (C) 2008 Society of Photo-Optical Instrumentation Engineers.
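A numerical stand-in for the analysis described above: propagating a Gaussian beam through an axicon phase with a small elliptical error using the angular-spectrum method. The ellipticity parameterization, axicon parameters, and grid are assumed values chosen for illustration; the paper's stationary-phase treatment is not reproduced here.

```python
import numpy as np

# Angular-spectrum propagation of a Gaussian beam through an axicon whose cone
# is slightly elliptical.  All parameter values below are illustrative assumptions.
wavelength = 632.8e-9          # wavelength [m]
k = 2.0 * np.pi / wavelength
n_glass, base_angle = 1.5, np.deg2rad(0.5)
eps = 0.02                     # fractional elliptical error of the axicon
w0 = 2e-3                      # Gaussian beam radius [m]
z = 0.3                        # propagation distance [m]

N, L = 1024, 12e-3             # grid points and physical window size [m]
x = (np.arange(N) - N // 2) * (L / N)
X, Y = np.meshgrid(x, x)

# Input field: Gaussian beam times the (elliptical) axicon phase.
r_ell = np.sqrt(X**2 + ((1.0 + eps) * Y) ** 2)
field = np.exp(-(X**2 + Y**2) / w0**2) * np.exp(-1j * k * (n_glass - 1.0) * base_angle * r_ell)

# Angular-spectrum transfer function (evanescent components simply set to zero phase).
fx = np.fft.fftfreq(N, d=L / N)
FX, FY = np.meshgrid(fx, fx)
kz = 2.0 * np.pi * np.sqrt(np.maximum(1.0 / wavelength**2 - FX**2 - FY**2, 0.0))
propagated = np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

intensity = np.abs(propagated) ** 2
print("on-axis intensity after propagation:", intensity[N // 2, N // 2])
```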

Relevance:

90.00%

Publisher:

Abstract:

We describe an age-structured statistical catch-at-length analysis (A-SCALA) based on the MULTIFAN-CL model of Fournier et al. (1998). The analysis is applied independently to both the yellowfin and the bigeye tuna populations of the eastern Pacific Ocean (EPO). We model the populations from 1975 to 1999, based on quarterly time steps. Only a single stock for each species is assumed for each analysis, but multiple fisheries that are spatially separate are modeled to allow for spatial differences in catchability and selectivity. The analysis allows for error in the effort-fishing mortality relationship, temporal trends in catchability, temporal variation in recruitment, relationships between the environment and recruitment and between the environment and catchability, and differences in selectivity and catchability among fisheries. The model is fit to total catch data and proportional catch-at-length data conditioned on effort. The A-SCALA method is a statistical approach, and therefore recognizes that the data collected from the fishery do not perfectly represent the population. Also, there is uncertainty in our knowledge about the dynamics of the system and uncertainty about how the observed data relate to the real population. The use of likelihood functions allows us to model the uncertainty in the data collected from the population, and the inclusion of estimable process error allows us to model the uncertainties in the dynamics of the system. The statistical approach allows for the calculation of confidence intervals and the testing of hypotheses. We use a Bayesian version of the maximum likelihood framework that includes distributional constraints on temporal variation in recruitment, the effort-fishing mortality relationship, and catchability. Curvature penalties for selectivity parameters and penalties on extreme fishing mortality rates are also included in the objective function. The mode of the joint posterior distribution is used as an estimate of the model parameters. Confidence intervals are calculated using the normal approximation method. It should be noted that the estimation method includes constraints and priors and therefore the confidence intervals differ from traditionally calculated confidence intervals. Management reference points are calculated, and forward projections are carried out to provide advice for making management decisions for the yellowfin and bigeye populations.
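The estimation recipe (maximize a penalized likelihood, take the posterior mode as the point estimate, and obtain confidence intervals from a normal approximation at the mode) can be sketched on a toy one-fishery model; this is in no way the A-SCALA model, only a two-parameter stand-in using SciPy's BFGS inverse-Hessian approximation, with invented data.

```python
import numpy as np
from scipy.optimize import minimize

# Toy data: log-normal "catches" generated from a simple catch = q * effort model.
rng = np.random.default_rng(3)
effort = rng.uniform(0.5, 2.0, size=60)
true_q, true_sigma = 0.8, 0.2
catch = true_q * effort * np.exp(rng.normal(0.0, true_sigma, size=60))

def neg_log_posterior(theta):
    """Negative log-likelihood plus a weak Gaussian penalty (prior) on log(q)."""
    log_q, log_sigma = theta
    q, sigma = np.exp(log_q), np.exp(log_sigma)
    resid = np.log(catch) - np.log(q * effort)
    nll = 0.5 * np.sum(resid**2) / sigma**2 + len(catch) * np.log(sigma)
    penalty = 0.5 * log_q**2          # N(0, 1) prior on log(q)
    return nll + penalty

result = minimize(neg_log_posterior, x0=np.zeros(2), method="BFGS")
mode = result.x                           # posterior mode (point estimate)
se = np.sqrt(np.diag(result.hess_inv))    # normal-approximation standard errors
for name, m, s in zip(["log_q", "log_sigma"], mode, se):
    print(f"{name}: {m:+.3f}  95% CI ({m - 1.96 * s:+.3f}, {m + 1.96 * s:+.3f})")
```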

Relevance:

90.00%

Publisher:

Abstract:

During the last century, the population of Pacific sardine (Sardinops sagax) in the California Current Ecosystem has exhibited large fluctuations in abundance and migration behavior. From approximately 1900 to 1940, the abundance of sardine reached 3.6 million metric tons and the “northern stock” migrated from offshore of California in the spring to the coastal areas near Oregon, Washington, and Vancouver Island in the summer. In the 1940s, the sardine stock collapsed, and the few remaining sardine schools concentrated in the coastal region off southern California, year-round, for the next 50 years. The stock gradually recovered in the late 1980s and resumed its seasonal migration between regions off southern California and Canada. Recently, a model was developed that predicts the potential habitat for the northern stock of Pacific sardine and its seasonal dynamics. The habitat predictions were successfully validated using data from sardine surveys based on the daily egg production method, scientific trawl surveys off the Columbia River mouth, and commercial sardine landings off Oregon, Washington, and Vancouver Island. Here, the predictions of the potential habitat and seasonal migration of the northern stock of sardine are validated using data from “acoustic–trawl” surveys of the entire west coast of the United States during the spring and summer of 2008. The estimates of sardine biomass and lengths from the two surveys are not significantly different between spring and summer, indicating that they are representative of the entire stock. The results also confirm that the model of potential sardine habitat can be used to optimally apply survey effort and thus minimize random and systematic sampling error in the biomass estimates. Furthermore, the acoustic–trawl survey data are useful for estimating concurrently the distributions and abundances of other pelagic fishes.

Relevance:

90.00%

Publisher:

Abstract:

We describe the application of two types of stereo camera systems in fisheries research, including the design, calibration, analysis techniques, and precision of the data obtained with these systems. The first is a stereo video system deployed by using a quick-responding winch with a live feed to provide species- and size-composition data adequate to produce acoustically based biomass estimates of rockfish. This system was tested on the eastern Bering Sea slope where rockfish were measured. Rockfish sizes were similar to those sampled with a bottom trawl, and the relative error in multiple measurements of the same rockfish in multiple still-frame images was small. Measurement errors of up to 5.5% were found on a calibration target of known size. The second system consisted of a pair of still-image digital cameras mounted inside a midwater trawl. Processing of the stereo images allowed fish length, fish orientation in relation to the camera platform, and relative distance of the fish to the trawl netting to be determined. The video system was useful for surveying fish in Alaska, but it could also be used broadly in other situations where it is difficult to obtain species-composition or size-composition information. Likewise, the still-image system could be used for fisheries research to obtain data on size, position, and orientation of fish.
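A simplified sketch of how a length measurement falls out of a calibrated stereo pair: triangulate the snout and tail points from their pixel disparities, assuming an ideal rectified pair with known focal length and baseline, and take the distance between the two 3-D points. The real systems described above also correct for lens distortion and camera alignment, which is omitted here, and all numbers are invented.

```python
import numpy as np

def triangulate(pix_left, pix_right, focal_px, baseline_m, principal_point):
    """Return the 3-D point (metres) of a feature seen in a rectified stereo pair."""
    (uL, vL), (uR, _) = pix_left, pix_right
    disparity = uL - uR                     # pixels; assumes rectified left/right images
    Z = focal_px * baseline_m / disparity   # depth from disparity
    X = (uL - principal_point[0]) * Z / focal_px
    Y = (vL - principal_point[1]) * Z / focal_px
    return np.array([X, Y, Z])

# Toy usage: snout and tail of a fish seen by both cameras (invented pixel coordinates).
focal_px, baseline_m, pp = 1400.0, 0.30, (960.0, 540.0)
snout = triangulate((1020.0, 530.0), (880.0, 530.0), focal_px, baseline_m, pp)
tail = triangulate((700.0, 560.0), (565.0, 560.0), focal_px, baseline_m, pp)
print("estimated fish length [m]:", np.linalg.norm(snout - tail))
```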

Relevance:

90.00%

Publisher:

Abstract:

Body length measurement is an important part of growth, condition, and mortality analyses of larval and juvenile fish. If the measurements are not accurate (i.e., do not reflect real fish length), results of subsequent analyses may be affected considerably (McGurk, 1985; Fey, 1999; Porter et al., 2001). The primary cause of error in fish length measurement is shrinkage related to collection and preservation (Theilacker, 1980; Hay, 1981; Butler, 1992; Fey, 1999). The magnitude of shrinkage depends on many factors, namely the duration and speed of the collection tow, abundance of other planktonic organisms in the sample (Theilacker, 1980; Hay, 1981; Jennings, 1991), the type and strength of the preservative (Hay, 1982), and the species of fish (Jennings, 1991; Fey, 1999). Further, fish size affects shrinkage (Fowler and Smith, 1983; Fey, 1999, 2001), indicating that live length should be modeled as a function of preserved length (Pepin et al., 1998; Fey, 1999).
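The final point, that live length should be modeled as a function of preserved length, is often handled with a simple regression; the sketch below uses invented numbers purely to illustrate the correction step and is not data from any of the cited studies.

```python
import numpy as np

# Hypothetical paired measurements (mm): larvae measured live and again after
# preservation.  Values are invented for illustration only.
preserved = np.array([4.8, 5.5, 6.1, 7.0, 7.9, 8.6, 9.4, 10.2])
live      = np.array([5.1, 5.9, 6.5, 7.5, 8.4, 9.1, 10.0, 10.9])

# Fit live = a * preserved + b, then apply it to correct new preserved lengths.
a, b = np.polyfit(preserved, live, deg=1)
new_preserved = np.array([6.4, 8.0, 9.7])
print("correction: live ~ %.3f * preserved + %.3f" % (a, b))
print("corrected lengths (mm):", a * new_preserved + b)
```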

Relevance:

90.00%

Publisher:

Abstract:

Recently, we demonstrated that humans can learn to make accurate movements in an unstable environment by controlling the magnitude, shape, and orientation of the endpoint impedance. Although previous studies of human motor learning suggest that the brain acquires an inverse dynamics model of the novel environment, it is not known whether this control mechanism is operative in unstable environments. We compared learning of multijoint arm movements in a "velocity-dependent force field" (VF), which interacted with the arm in a stable manner, and learning in a "divergent force field" (DF), where the interaction was unstable. The characteristics of error evolution were markedly different in the two fields. The direction of trajectory error in the DF alternated to the left and right during the early stage of learning; that is, the signed error was inconsistent from movement to movement and could not have guided learning of an inverse dynamics model. This contrasted sharply with trajectory error in the VF, which was initially biased and decayed in a manner consistent with rapid feedback error learning. EMG recorded before and after learning in the DF and VF is also consistent with different learning and control mechanisms for adapting to stable and unstable dynamics, that is, inverse dynamics model formation and impedance control. We also investigated adaptation to a rotated DF to examine the interplay between inverse dynamics model formation and impedance control. Our results suggest that an inverse dynamics model can function in parallel with an impedance controller to compensate for consistent perturbing force in unstable environments.

Relevance:

90.00%

Publisher:

Abstract:

Recent studies examining adaptation to unexpected changes in the mechanical environment highlight the use of position error in the adaptation process. However, force information is also available. In this chapter, we examine adaptation processes in three separate studies in which the mechanical environment was changed intermittently. We compare the expected consequences of using position error and of using force information in the changes to motor commands following a change in the mechanical environment. In general, our results support the use of position error over force information and are consistent with current computational models of motor learning. However, in situations where the change in the mechanical environment eliminates position error, the central nervous system does not necessarily respond as these models would predict. We suggest that the statistics of prior experience must be taken into account to explain our observations. Another deficiency in these models is the absence of a mechanism for modulating limb mechanical impedance during adaptation. We propose a relatively simple computational model, based on reflex responses to perturbations, which is capable of accounting for iterative changes in temporal patterns of muscle co-activation.
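A toy illustration of the kind of iterative update discussed here: the feedforward command is adjusted from the previous trial's position error, while a co-activation (stiffness) term grows with the size of the error and decays when errors are small. This is a schematic stand-in with invented gains, not the reflex-based model proposed in the chapter.

```python
import numpy as np

# One-dimensional toy: on each trial a constant disturbance d pushes the limb,
# the feedforward command u partially cancels it, and the residual error drives learning.
rng = np.random.default_rng(4)
d = 2.0                 # disturbance force (arbitrary units)
u, coact = 0.0, 0.5     # feedforward command and co-activation (extra stiffness)
alpha, beta, decay = 0.3, 0.4, 0.95

for trial in range(15):
    stiffness = 1.0 + coact
    error = (d - u) / stiffness + rng.normal(0.0, 0.02)   # residual displacement
    u += alpha * error * stiffness                        # error-driven feedforward update
    coact = decay * coact + beta * abs(error)             # co-activation tracks |error|
    print(f"trial {trial:2d}: error {error:+.3f}, u {u:.3f}, co-activation {coact:.3f}")
```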