930 results for Bit error rate algorithm
Abstract:
This paper presents a new methodology for measuring the instantaneous average exhaust mass flow rate in reciprocating internal combustion engines, to be used to determine real driving emissions of light-duty vehicles as part of a Portable Emission Measurement System (PEMS). First, a flow meter, named the MIVECO flow meter, was designed based on a Pitot tube adapted to exhaust gases, which are characterized by moisture and particle content, rapid changes in flow rate and chemical composition, and pulsating and reverse flow at very low engine speeds. Then, an off-line methodology was developed to calculate the instantaneous average flow, accounting for the "square root error" phenomenon. The paper includes the theoretical fundamentals, the specifications of the developed flow meter, the calibration tests, the description of the proposed off-line methodology, and the results of the validation tests carried out on a chassis dynamometer, which demonstrate the validity of the mass flow meter and the developed methodology.
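The "square root error" arises because Pitot velocity scales with the square root of the differential pressure, so under pulsating flow the square root of the time-averaged pressure is not the time average of the instantaneous square roots (Jensen's inequality). A minimal sketch of the effect, assuming a simplified incompressible Pitot relation and purely illustrative signal values:

```python
import numpy as np

# "Square root error" sketch: for a pulsating Pitot signal,
# sqrt(mean(dp)) != mean(sqrt(dp)), so averaging the differential
# pressure before taking the square root biases the inferred mean flow.
rho = 1.2                                        # gas density, kg/m^3 (illustrative)
t = np.linspace(0.0, 0.1, 1000)
dp = 200.0 + 150.0 * np.sin(2 * np.pi * 50 * t)  # pulsating delta-p, Pa

v_inst = np.sqrt(2.0 * dp / rho)                 # instantaneous velocities
v_naive = np.sqrt(2.0 * dp.mean() / rho)         # velocity from averaged delta-p

print(f"mean of instantaneous velocities: {v_inst.mean():.2f} m/s")
print(f"velocity from mean pressure:      {v_naive:.2f} m/s (biased high)")
```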
Abstract:
We propose a general procedure for solving incomplete-data estimation problems. The procedure can be used to find the maximum likelihood estimate or to solve estimating equations in difficult cases such as estimation with the censored or truncated regression model, the nonlinear structural measurement error model, and the random effects model. The procedure is based on the general principle of stochastic approximation and the Markov chain Monte Carlo method. Applying the theory of adaptive algorithms, we derive conditions under which the proposed procedure converges. Simulation studies also indicate that the proposed procedure consistently converges to the maximum likelihood estimate for the structural measurement error logistic regression model.
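As a minimal sketch of the stochastic approximation idea on a toy incomplete-data problem (a right-censored normal mean, with rejection sampling standing in for a general MCMC imputation kernel; all data and settings below are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: x ~ N(mu_true, 1), right-censored at c
mu_true, c, n = 1.0, 1.5, 500
x = rng.normal(mu_true, 1.0, n)
y = np.minimum(x, c)
censored = x >= c

mu = 0.0
for k in range(1, 201):
    # Simulation step: impute censored values from the conditional
    # (truncated) normal; rejection sampling stands in for an MCMC kernel.
    imputed = y.copy()
    for i in np.flatnonzero(censored):
        z = rng.normal(mu, 1.0)
        while z < c:
            z = rng.normal(mu, 1.0)
        imputed[i] = z
    # Stochastic approximation step: a decreasing gain drives convergence.
    gamma = 1.0 / k
    mu = (1.0 - gamma) * mu + gamma * imputed.mean()

print(f"stochastic-approximation estimate of mu: {mu:.3f} (true {mu_true})")
```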
Abstract:
The mutagenic effect of low linear energy transfer ionizing radiation is reduced for a given dose as the dose rate (DR) is reduced to a low level, a phenomenon known as the direct DR effect. Our reanalysis of published data shows that for both somatic and germ-line mutations there is an opposite, inverse DR effect upon reduction from low to very low DR, the overall dependence of induced mutations on DR being parabolic, with a minimum in the range of 0.1 to 1.0 cGy/min (rule 1). This general pattern can be attributed to an optimal induction of error-free DNA repair in a DR region of minimal mutability (MMDR region). The diminished activation of repair at very low DRs may reflect a low ratio of induced (“signal”) to spontaneous background DNA damage (“noise”). Because two common DNA lesions, 8-oxoguanine and thymine glycol, were already known to activate repair in irradiated mammalian cells, we estimated how their rates of production are altered upon radiation exposure in the MMDR region. For these and other abundant lesions (abasic sites and single-strand breaks), the DNA damage rate increment in the MMDR region is in the range of 10% to 100% (rule 2). These estimates suggest a genetically programmed optimization of the response to radiation in the MMDR region.
Abstract:
The analysis of heart rate variability (HRV) uses time series of the intervals between successive heartbeats to assess autonomic regulation of the cardiovascular system. These series are obtained from the electrocardiogram (ECG) signal, which can be affected by different types of artifacts that lead to incorrect interpretations of the HRV signals. The classic approach to dealing with these artifacts is to apply correction methods, some based on interpolation, substitution, or statistical techniques. However, few studies have assessed the accuracy and performance of these correction methods on real HRV signals. This study aims to determine the performance of several linear and nonlinear correction methods on HRV signals with induced artifacts by quantifying their linear and nonlinear HRV parameters. As part of the methodology, ECG signals of rats recorded by telemetry were used to generate real, error-free heart rate variability signals. Missing points (beats) were then simulated in these series in different quantities to emulate a real experimental situation as accurately as possible. To compare recovery efficiency, deletion (DEL), linear interpolation (LI), cubic spline interpolation (CI), moving average window (MAW), and nonlinear predictive interpolation (NPI) were used as correction methods for the series with induced artifacts. The accuracy of each correction method was assessed by measuring the mean value of the series (AVNN), the standard deviation (SDNN), the root mean square of successive differences between heartbeats (RMSSD), Lomb's periodogram (LSP), detrended fluctuation analysis (DFA), multiscale entropy (MSE), and symbolic dynamics (SD) on each HRV signal with and without artifacts. The results show that at low levels of missing points the performance of all correction techniques is very similar, with very close values for each HRV parameter. However, at higher levels of losses only the NPI method yields HRV parameters with low error values and few significant differences compared to the values calculated for the same signals without missing points.
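As a sketch of two of the interpolation-based corrections named above (LI and CI) and of the time-domain parameters they are scored on, assuming synthetic rat-like RR intervals rather than the study's telemetry data:

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(1)

# Synthetic RR-interval series (ms, ~180 ms as in rats) with simulated losses
rr = 180.0 + 10.0 * rng.standard_normal(1000)
missing = rng.choice(rr.size, size=50, replace=False)
kept = np.setdiff1d(np.arange(rr.size), missing)

# Linear (LI) and cubic spline (CI) interpolation over the missing beats
idx = np.arange(rr.size)
li = np.interp(idx, kept, rr[kept])
ci = CubicSpline(kept, rr[kept])(idx)

# Score each corrected series on AVNN, SDNN and RMSSD
for name, s in [("original", rr), ("LI", li), ("CI", ci)]:
    avnn, sdnn = s.mean(), s.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(s) ** 2))
    print(f"{name:8s} AVNN={avnn:6.1f}  SDNN={sdnn:5.2f}  RMSSD={rmssd:5.2f}")
```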
Abstract:
We studied the global and local ℳ-Z relation based on the first data available from the CALIFA survey (150 galaxies). This survey provides integral field spectroscopy of the complete optical extent of each galaxy (up to 2-3 effective radii), with a resolution high enough to separate individual H II regions and/or aggregations. About 3000 individual H II regions have been detected. The spectra cover the wavelength range between [OII]3727 and [SII]6731, with a signal-to-noise ratio sufficient to derive the oxygen abundance and star-formation rate associated with each region. In addition, we computed the integrated and spatially resolved stellar masses (and surface densities) based on SDSS photometric data. We explore the relations between the stellar mass, oxygen abundance and star-formation rate using this dataset. We derive a tight relation between the integrated stellar mass and the gas-phase abundance, with a dispersion lower than the one already reported in the literature (σ_Δlog (O/H) = 0.07 dex). Indeed, this dispersion is only slightly higher than the typical error derived for our oxygen abundances. However, we found no secondary relation with the star-formation rate other than the one induced by the primary relation of this quantity with the stellar mass. The analysis of our sample of ~3000 individual H II regions confirms (i) a local mass-metallicity relation and (ii) the lack of a secondary relation with the star-formation rate. The same analysis was performed, with similar results, for the specific star-formation rate. Our results agree with the scenario in which gas recycling in galaxies, both locally and globally, is much faster than other typical timescales, such as gas accretion by inflows and/or metal loss due to outflows. In essence, late-type/disk-dominated galaxies seem to be in a quasi-steady state, behaving much as expected from an instantaneous recycling/closed-box model.
Abstract:
Purpose. To evaluate theoretically, in normal eyes, the influence on IOL power (PIOL) calculation of the use of a keratometric index (nk), and to analyze and preliminarily validate the use of an adjusted keratometric index (nkadj) in IOL power calculation (PIOLadj). Methods. A model of variable keratometric index (nkadj) for corneal power calculation (Pc) was used for IOL power calculation (named PIOLadj). Theoretical differences (ΔPIOL) between the new proposed formula (PIOLadj) and that obtained through Gaussian optics (PIOLGauss) were determined using the Gullstrand and Le Grand eye models. The proposed new formula for IOL power calculation (PIOLadj) was prevalidated clinically in 81 eyes of 81 candidates for corneal refractive surgery and compared with the Haigis, Hoffer Q, Holladay, and SRK/T formulas. Results. A theoretical PIOL underestimation greater than 0.5 diopters was present in most cases when nk = 1.3375 was used. If nkadj was used for Pc calculation, a maximal calculated error in ΔPIOL of ±0.5 diopters at the corneal vertex was observed in most cases, independently of the eye model, r1c, and the desired postoperative refraction. The use of nkadj in IOL power calculation (PIOLadj) could be valid with effective lens position optimization not dependent on the corneal power. Conclusions. The use of a single value of nk for Pc calculation can lead to significant errors in PIOL calculation that may explain some IOL power overestimations with conventional formulas. These inaccuracies can be minimized by using the new PIOLadj based on the nkadj algorithm.
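A minimal sketch of the underlying issue, comparing the classical nk = 1.3375 keratometric power with the Gaussian thick-lens corneal power for standard Gullstrand eye-model constants (textbook values, used purely for illustration):

```python
# Single keratometric index vs Gaussian thick-lens corneal power
n_air, n_cornea, n_aqueous = 1.000, 1.376, 1.336
r1, r2, d = 7.7e-3, 6.8e-3, 0.5e-3    # Gullstrand radii and thickness, metres

# Classical keratometric power from the anterior surface alone
p_keratometric = (1.3375 - n_air) / r1

# Gaussian optics: both corneal surfaces plus the thickness term
p1 = (n_cornea - n_air) / r1
p2 = (n_aqueous - n_cornea) / r2
p_gauss = p1 + p2 - (d / n_cornea) * p1 * p2

print(f"keratometric Pc: {p_keratometric:.2f} D")
print(f"Gaussian Pc:     {p_gauss:.2f} D")
print(f"difference:      {p_keratometric - p_gauss:.2f} D")
```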
Abstract:
The Iterative Closest Point (ICP) algorithm is commonly used in engineering applications to solve the rigid registration problem for partially overlapping point sets that are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas, such as medicine, for volumetric reconstruction of tomography data; robotics, to reconstruct surfaces or scenes from range-sensor information; industrial systems, for quality control of manufactured objects; and even biology, to study the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points for the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, must be processed. Many variants have been proposed in the literature that aim to improve performance, either by reducing the number of points or the required iterations, or by reducing the complexity of the most expensive phase: the nearest-neighbor search. Despite decreasing its complexity, some of these variants tend to have a negative impact on the final registration precision or on the convergence domain, thus limiting the possible application scenarios. The goal of this work is to improve the algorithm's computational cost so that a wider range of computationally demanding problems, among those described above, can be addressed. For that purpose, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics was performed, considering distances with a lower computational cost than the Euclidean one, which is the de facto standard for implementations of the algorithm in the literature. In that analysis, the behavior of the algorithm in diverse topological spaces, characterized by different metrics, was studied to check the convergence, efficacy and cost of the method, in order to determine which metric offers the best results. Given that distance calculation represents a significant part of the computations performed by the algorithm, any reduction of that operation should significantly and positively affect the overall performance of the method. As a result, a performance improvement was achieved by applying those reduced-cost metrics, whose quality in terms of convergence and error was analyzed and experimentally validated as comparable to the Euclidean distance, using a heterogeneous set of objects, scenarios and initial situations.
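A minimal 2-D sketch of the point-to-point ICP loop with a pluggable nearest-neighbor metric, illustrating the kind of metric substitution studied in this work (L1 as a cheaper alternative to the Euclidean distance; toy data, not the thesis implementation):

```python
import numpy as np

def nearest(src, dst, metric):
    # Brute-force nearest neighbors under the chosen metric
    diff = src[:, None, :] - dst[None, :, :]
    d = (diff ** 2).sum(-1) if metric == "l2" else np.abs(diff).sum(-1)
    return dst[d.argmin(axis=1)]

def icp(src, dst, metric="l1", iters=30):
    cur = src.copy()
    for _ in range(iters):
        matched = nearest(cur, dst, metric)
        # Optimal rigid transform for the current matches (Kabsch / SVD)
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        u, _, vt = np.linalg.svd((cur - mu_s).T @ (matched - mu_d))
        if np.linalg.det(u @ vt) < 0:     # guard against reflections
            vt[-1] *= -1
        cur = (cur - mu_s) @ (u @ vt) + mu_d
    return cur

rng = np.random.default_rng(2)
dst = rng.uniform(-1, 1, (200, 2))
a = np.deg2rad(10)
rot = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
src = dst @ rot.T + np.array([0.05, -0.03])   # coarsely misaligned copy
aligned = icp(src, dst, metric="l1")
print("residual RMS:", np.sqrt(((aligned - dst) ** 2).sum(1).mean()))
```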
Abstract:
Purpose: To evaluate the predictability of the refractive correction achieved with a positional accommodating intraocular lens (IOL) and to develop a potential optimization of it by minimizing the error associated with the keratometric estimation of corneal power and by developing a predictive formula for the effective lens position (ELP). Materials and Methods: Clinical data from 25 eyes of 14 patients (age range, 52–77 years) undergoing cataract surgery with implantation of the accommodating IOL Crystalens HD (Bausch & Lomb) were retrospectively reviewed. In all cases, an adjusted IOL power (PIOLadj) based on Gaussian optics and accounting for the residual refractive error was calculated using a variable keratometric index value (nkadj) for corneal power estimation, with and without an estimation algorithm for ELP obtained by multiple regression analysis (ELPadj). PIOLadj was compared to the IOL power actually implanted (PIOLReal, calculated with the SRK/T formula) and also to the values estimated by the Haigis, Hoffer Q, and Holladay I formulas. Results: No statistically significant differences were found between PIOLReal and PIOLadj when ELPadj was used (P = 0.10), with a range of agreement between calculations of 1.23 D. In contrast, PIOLReal was significantly higher than PIOLadj without ELPadj and also higher than the values estimated by the other formulas. Conclusions: Predictable refractive outcomes can be obtained with the accommodating IOL Crystalens HD by using a variable keratometric index for corneal power estimation and by estimating ELP with an algorithm dependent on anatomical factors and age.
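A minimal sketch of the regression step described above, fitting ELP to anatomical predictors and age by ordinary least squares (the predictors, values, and coefficients below are assumed for illustration, not the study's fitted model):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-ins for 25 eyes: anterior chamber depth, axial length, age
n = 25
acd = rng.normal(3.1, 0.3, n)     # mm (assumed)
al = rng.normal(23.5, 0.9, n)     # mm (assumed)
age = rng.uniform(52, 77, n)      # years
elp = 0.6 * acd + 0.10 * al + 0.005 * age + rng.normal(0, 0.1, n)  # synthetic

# Multiple linear regression: ELP ~ intercept + ACD + AL + age
X = np.column_stack([np.ones(n), acd, al, age])
coef, *_ = np.linalg.lstsq(X, elp, rcond=None)
print("intercept, ACD, AL, age coefficients:", np.round(coef, 4))
```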
Abstract:
Climate conditions in the westernmost Mediterranean (Alboran Sea basin) over the last two millennia have been reconstructed through integration of molecular proxies applied for the first time in this region at such high resolution. Two temperature proxies, one based on isoprenoid membrane lipids of marine Thaumarchaeota (TEXH86-tetraether index of compounds consisting of 86 carbons) and the other on alkenones produced by haptophytes (UK'37 ratio) were applied to reconstruct sea surface temperature (SST). Both records reveal a progressive long term decline in SST over the last two millennia and an increased rate of warming during the second half of the twentieth century. This is in accord with previous temperature reconstructions for the Northern Hemisphere. TEXH86 temperature values are higher than those inferred from UK'37, probably due to differences in the bloom season of haptophytes and Thaumarchaeota, and reflect summer SST. The branched vs. isoprenoid tetraether index (BIT index) suggests a low contribution of soil organic matter (OM) to the sedimentary OM. The stable carbon isotopic composition of long chain n-alkanes indicates a predominant C3 plant contribution, with no major change in vegetation over the last 2000 yr. The distribution of long chain 1,14-diols (most likely sourced by Proboscia species in this setting) provided insight into variation in upwelling conditions during the last 2000 yr and depicts a correlation with the North Atlantic Oscillation (NAO) index, providing evidence of enhanced wind induced upwelling during periods of a persistent positive mode of the NAO.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Mixture models implemented via the expectation-maximization (EM) algorithm are increasingly used in a wide range of pattern recognition problems such as image segmentation. However, the EM algorithm requires considerable computational time when applied to huge data sets such as a three-dimensional magnetic resonance (MR) image of over 10 million voxels. Recently, it was shown that a sparse, incremental version of the EM algorithm can improve its rate of convergence. In this paper, we show how this modified EM algorithm can be sped up further by adopting a multiresolution kd-tree structure in performing the E-step. The proposed algorithm outperforms some other variants of the EM algorithm for segmenting MR images of the human brain.
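For reference, a plain two-component Gaussian mixture fitted by EM in one dimension; the acceleration described above replaces the per-point E-step below with one responsibility evaluation per kd-tree node, sharing sufficient statistics across the points inside each node (this baseline is a generic sketch, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(4)

x = np.concatenate([rng.normal(-2, 1.0, 500), rng.normal(3, 1.5, 500)])
w, mu, sd = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])

for _ in range(100):
    # E-step: per-point responsibilities (the costly part for 10^7 voxels)
    pdf = np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    resp = w * pdf
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: closed-form updates from weighted sufficient statistics
    nk = resp.sum(axis=0)
    w = nk / x.size
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print("weights:", w.round(3), "means:", mu.round(3), "sds:", sd.round(3))
```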
Abstract:
The expectation-maximization (EM) algorithm has been of considerable interest in recent years as the basis for various algorithms in application areas of neural networks such as pattern recognition. However, there exist some misconceptions concerning its application to neural networks. In this paper, we clarify these misconceptions and consider how the EM algorithm can be adopted to train multilayer perceptron (MLP) and mixture-of-experts (ME) networks for multiclass classification. We identify some situations where applying the EM algorithm to train MLP networks may be of limited value and discuss some ways of handling the difficulties. For ME networks, it has been reported in the literature that networks trained by the EM algorithm, using the iteratively reweighted least squares (IRLS) algorithm in the inner loop of the M-step, often perform poorly in multiclass classification. However, we found that the convergence of the IRLS algorithm is stable and that the log-likelihood is monotonically increasing when a learning rate smaller than one is adopted. We also propose the use of an expectation-conditional maximization (ECM) algorithm to train ME networks. Its performance is demonstrated to be superior to the IRLS algorithm on some simulated and real data sets.
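A sketch of the damping idea for the inner-loop IRLS step, shown on plain logistic regression: scaling the Newton update by a learning rate smaller than one, as the abstract reports, stabilizes convergence (synthetic data; a single-output stand-in for the ME gating/expert updates):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic logistic-regression problem
n, p = 400, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, p - 1))])
beta_true = np.array([0.5, -1.0, 2.0])
y = (rng.random(n) < 1 / (1 + np.exp(-X @ beta_true))).astype(float)

beta, eta = np.zeros(p), 0.5          # eta < 1: damped IRLS/Newton step
for _ in range(50):
    m = 1 / (1 + np.exp(-X @ beta))   # current fitted probabilities
    W = m * (1 - m)                   # IRLS weights
    grad = X.T @ (y - m)
    hess = X.T @ (X * W[:, None])
    beta = beta + eta * np.linalg.solve(hess, grad)

print("estimated beta:", beta.round(3), " true:", beta_true)
```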
Abstract:
Genetic assignment methods use genotype likelihoods to draw inference about where individuals were or were not born, potentially allowing direct, real-time estimates of dispersal. We used simulated data sets to test the power and accuracy of Monte Carlo resampling methods in generating statistical thresholds for identifying F0 immigrants in populations with ongoing gene flow, and hence for providing direct, real-time estimates of migration rates. The identification of accurate critical values required that resampling methods preserve the linkage disequilibrium deriving from recent generations of immigrants and reflect the sampling variance present in the data set being analysed. A novel Monte Carlo resampling method taking these aspects into account was proposed and its efficiency evaluated. Power and error were relatively insensitive to the frequency assumed for missing alleles. Power to identify F0 immigrants was improved by using large sample sizes (up to about 50 individuals) and by sampling all populations from which migrants may have originated. A combination of plotting genotype likelihoods and calculating mean genotype likelihood ratios (DLR) appeared to be an effective way to predict whether F0 immigrants could be identified for a particular pair of populations using a given set of markers.
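A toy sketch of the Monte Carlo idea: simulate multilocus genotypes from a population's allele frequencies to build the null distribution of an assignment statistic, then read off a critical value (the allele frequencies, loci, and statistic below are assumed, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(6)

n_loci, alpha = 10, 0.01
freqs = rng.dirichlet(np.ones(4), size=n_loci)   # 4 alleles per locus (toy)

def simulate_loglik(freqs, n_sim=10000):
    # Log-likelihood of simulated resident genotypes (independent loci, HWE)
    ll = np.zeros(n_sim)
    for f in freqs:
        a1 = rng.choice(4, size=n_sim, p=f)
        a2 = rng.choice(4, size=n_sim, p=f)
        g = np.where(a1 == a2, f[a1] * f[a2], 2 * f[a1] * f[a2])
        ll += np.log(g)
    return ll

null = simulate_loglik(freqs)
crit = np.quantile(null, alpha)   # genotypes below this are flagged as immigrants
print(f"critical log-likelihood at alpha={alpha}: {crit:.2f}")
```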
Abstract:
Eastern curlews Numenius madagascariensis spending the nonbreeding season in eastern Australia foraged on three intertidal decapods: the soldier crab Mictyris longicarpus, the sentinel crab Macrophthalmus crassipes, and the ghost shrimp Trypaea australiensis. Because of their ecology, these crustaceans were spatially segregated (distributed in discrete 'patches'), and the curlews intermittently consumed more than one prey type. It was predicted that if the curlews behaved as intake-rate maximizers, the time spent foraging on a particular prey (patch) would reflect the relative availabilities of the prey types, and thus prey-specific intake rates would be equal. During the mid-nonbreeding period (November-December), Mictyris and Macrophthalmus were primarily consumed and prey-specific intake rates were statistically indistinguishable (8.8 versus 10.1 kJ min⁻¹). Prior to migration (February), Mictyris and Trypaea were hunted and the respective intake rates were significantly different (8.9 versus 2.3 kJ min⁻¹). Time allocated to Trypaea-hunting was independent of the availability of Mictyris. Thus, consumption of Trypaea depressed the overall intake rate. Six hypotheses for consuming Trypaea before migration were examined. Five hypotheses (possible error by the predator, prey specialization, observer overestimation of time spent hunting Trypaea, supplementary prey, and the choice of higher-quality prey due to a digestive bottleneck) were deemed unsatisfactory. The explanation deemed plausible for consumption of a low-intake-rate but high-quality prey (Trypaea) was diet optimisation by the curlews in response to the pre-migratory modulation (decrease in size/processing capacity) of their digestive system. With a seasonal decrease in the average intake rate, the estimated intake per low tide increased from 1233 to 1508 kJ between the mid-nonbreeding and pre-migratory periods, through an increase in the overall time spent on the sandflats and in the proportion of time spent foraging.
Abstract:
The use of presence/absence data in wildlife management and biological surveys is widespread. There is growing interest in quantifying the sources of error associated with these data. We show, using simulated data, that false-negative errors (failure to record a species when in fact it is present) can have a significant impact on statistical estimation of habitat models. We then introduce an extension of logistic modeling, the zero-inflated binomial (ZIB) model, that permits estimation of the rate of false-negative errors and correction of estimates of the probability of occurrence for false-negative errors by using repeated visits to the same site. Our simulations show that even relatively low rates of false negatives bias statistical estimates of habitat effects. The method with three repeated visits eliminates the bias, but estimates are relatively imprecise. Six repeated visits improve the precision of estimates to levels comparable to those achieved with conventional statistics in the absence of false-negative errors. In general, when error rates are ≤50%, greater efficiency is gained by adding more sites, whereas when error rates are >50% it is better to increase the number of repeated visits. We highlight the flexibility of the method with three case studies, clearly demonstrating the effect of false-negative errors for a range of commonly used survey methods.
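A minimal sketch of the ZIB likelihood: with K repeated visits per site, occupancy psi and per-visit detection probability p (one minus the false-negative rate) are jointly estimable by maximum likelihood (parameter values below are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

rng = np.random.default_rng(7)

# Simulate K visits at each site: zero detections where the species is absent
n_sites, K, psi_true, p_true = 200, 3, 0.6, 0.7
occupied = rng.random(n_sites) < psi_true
detections = rng.binomial(K, p_true * occupied)

def neg_loglik(theta):
    psi, p = 1 / (1 + np.exp(-np.asarray(theta)))   # logit-scale parameters
    # ZIB mixture: a zero count is either a miss on all K visits or absence
    lik = psi * binom.pmf(detections, K, p) + (1 - psi) * (detections == 0)
    return -np.log(lik).sum()

res = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
psi_hat, p_hat = 1 / (1 + np.exp(-res.x))
print(f"psi_hat={psi_hat:.2f} (true {psi_true}), p_hat={p_hat:.2f} (true {p_true})")
```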