48 results for Error Analysis
Abstract:
The inclusion of collisional rates for He-like Fe and Ca ions is discussed with reference to the analysis of solar flare Fe XXV and Ca XIX line emission, particularly from the Yohkoh Bragg Crystal Spectrometer (BCS). The new data are a slight improvement on calculations presently used in the BCS analysis software in that the discrepancy in the Fe XXV y and z line intensities (observed larger than predicted) is reduced. Values of electron temperature from satellite-to-resonance line ratios are slightly reduced (by up to 1 MK) for a given observed ratio. The new atomic data will be incorporated in the Yohkoh BCS databases. The data should also be of interest for the analysis of high-resolution, non-solar spectra expected from the Constellation-X and Astro-E space missions. A comparison is made of a tokamak S XV spectrum with a synthetic spectrum using atomic data in the existing software and the agreement is found to be good, so validating these data for particularly high-n satellite wavelengths close to the S XV resonance line. An error in a data file used for analyzing BCS Fe XXVI spectra is corrected, so permitting analysis of these spectra.
Abstract:
The results of a study aimed at determining the most important experimental parameters for automated, quantitative analysis of solid dosage form pharmaceuticals (seized and model 'ecstasy' tablets) are reported. Data obtained with a macro-Raman spectrometer were complemented by micro-Raman measurements, which gave information on particle size and provided excellent data for developing statistical models of the sampling errors associated with collecting data as a series of grid points on the tablets' surface. Spectra recorded at single points on the surface of seized MDMA-caffeine-lactose tablets with a Raman microscope (λex = 785 nm, 3 µm diameter spot) were typically dominated by one or other of the three components, consistent with Raman mapping data which showed the drug and caffeine microcrystals were ca 40 µm in diameter. Spectra collected with a microscope from eight points on a 200 µm grid were combined, and in the resultant spectra the average value of the Raman band intensity ratio used to quantify the MDMA:caffeine ratio, µr, was 1.19 with an unacceptably high standard deviation, σr, of 1.20. In contrast, with a conventional macro-Raman system (150 µm spot diameter), combined eight-point grid data gave µr = 1.47 with σr = 0.16. A simple statistical model which could be used to predict σr under the various conditions used was developed. The model showed that the decrease in σr on moving to a 150 µm spot was too large to be due entirely to the increased spot diameter, but was consistent with the increased sampling volume that arose from a combination of the larger spot size and depth of focus in the macroscopic system. With the macro-Raman system, combining 64 grid points (0.5 mm spacing and 1-2 s accumulation per point) to give a single averaged spectrum for a tablet was found to be a practical balance between minimizing sampling errors and keeping overhead times at an acceptable level. The effectiveness of this sampling strategy was also tested by quantitative analysis of a set of model ecstasy tablets prepared from MDEA-sorbitol (0-30% by mass MDEA). A simple univariate calibration model of averaged 64-point data had R² = 0.998 and an r.m.s. standard error of prediction of 1.1%, whereas data obtained by sampling just four points on the same tablet showed deviations from the calibration of up to 5%.
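The scaling behind this sampling strategy can be illustrated with a toy Monte Carlo model. The sketch below is not the authors' statistical model; it simply assumes each probed spot is dominated by whichever component's microcrystal it lands on (the per-spot ratios and landing probability are invented) and shows how the spread σr of the grid-averaged intensity ratio shrinks roughly as 1/√N with the number of grid points N.

    # Toy sampling-error model (illustrative values, not the paper's data).
    import numpy as np

    rng = np.random.default_rng(0)

    def ratio_stats(n_points, n_tablets=5000, p_mdma=0.55):
        # Each spot lands on an MDMA-rich or caffeine-rich domain and
        # returns a very different band-intensity ratio (assumed values).
        spot_ratio = np.where(rng.random((n_tablets, n_points)) < p_mdma, 2.5, 0.2)
        per_tablet = spot_ratio.mean(axis=1)   # average over the sampling grid
        return per_tablet.mean(), per_tablet.std()

    for n in (8, 64):
        mu, sigma = ratio_stats(n)
        print(f"N={n:3d} grid points: mu_r={mu:.2f}, sigma_r={sigma:.2f}")

Averaging 64 points instead of 8 cuts the simulated σr by about a factor of √8, which is the trade-off against acquisition time discussed above.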
Abstract:
In the IEEE 802.11 MAC layer protocol, there are different trade-off points between the number of nodes competing for the medium and the network capacity provided to them. There is also a trade-off between the wireless channel condition during the transmission period and the energy consumption of the nodes. Current approaches to modeling energy consumption in 802.11-based networks do not consider the influence of the channel condition on all frame types (control and data) in the WLAN, nor do they consider the effect of the different MAC and PHY schemes that can occur in 802.11 networks. In this paper, we investigate energy consumption as a function of the number of competing nodes in IEEE 802.11's MAC and PHY layers under error-prone wireless channel conditions, and present a new energy consumption model. Analysis of the power consumed by each type of MAC and PHY over different bit error rates shows that the parameters in these layers play a critical role in determining the overall energy consumption of the ad-hoc network. The goal of this research is not only to compare the energy consumption using exact formulae in saturated IEEE 802.11-based DCF networks under varying numbers of competing nodes, but also, as the results show, to demonstrate that channel errors have a significant impact on the energy consumption.
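To make the channel-error dependence concrete, here is a hedged back-of-the-envelope sketch, not the paper's model: the frame sizes, power draws and single four-way RTS/CTS/DATA/ACK exchange are illustrative assumptions, and the expected energy per delivered frame grows as the per-exchange success probability (1 − BER)^L falls.

    # Back-of-the-envelope energy-vs-BER sketch (all parameters assumed).
    RTS, CTS, ACK, DATA = 160, 112, 112, 8184   # frame sizes in bits (assumed)
    TX_POWER, RX_POWER = 1.65, 1.4              # watts (illustrative values)

    def frame_success(bits, ber):
        # Probability that a frame of `bits` bits arrives with no bit errors.
        return (1.0 - ber) ** bits

    def expected_energy_per_delivery(ber, bitrate=11e6):
        # Energy of one four-way exchange, all frames at the same bit rate.
        e_attempt = (TX_POWER * (RTS + DATA) + RX_POWER * (CTS + ACK)) / bitrate
        # The exchange succeeds only if every frame gets through; the number
        # of attempts is then geometric with mean 1/p_ok.
        p_ok = frame_success(RTS + CTS + DATA + ACK, ber)
        return e_attempt / p_ok

    for ber in (1e-6, 1e-5, 1e-4):
        print(f"BER={ber:.0e}: {expected_energy_per_delivery(ber)*1e3:.3f} mJ/frame")

Even this crude calculation shows the qualitative effect reported above: as the BER rises, retransmissions drive the energy per successfully delivered frame up sharply.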
Abstract:
Rapid, quantitative SERS analysis of nicotine at ppm/ppb levels has been carried out using stable and inexpensive polymer-encapsulated Ag nanoparticles (gel-colls). The strongest nicotine band (1030 cm⁻¹) was measured against a d5-pyridine internal standard (974 cm⁻¹) which was introduced during preparation of the stock gel-colls. Calibration plots of Inic/Ipyr against the concentration of nicotine were non-linear, but plotting Inic/Ipyr against [nicotine]^x (x = 0.6-0.75, depending on the exact experimental conditions) gave linear calibrations over the range 0.1-10 ppm with R² typically ca. 0.998. The RMS prediction error was found to be 0.10 ppm when the gel-colls were used for quantitative determination of unknown nicotine samples at the 1-5 ppm level. The main advantages of the method are that the gel-colls constitute a highly stable and reproducible SERS medium that allows high-throughput (50 samples h⁻¹) measurements.
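The power-law linearization step is easy to reproduce. In this sketch the concentrations and intensity ratios are invented stand-ins, not the paper's data; the idea is simply to scan candidate exponents x and keep the one that makes the Inic/Ipyr-versus-[nicotine]^x calibration most linear.

    # Power-law calibration sketch with invented data.
    import numpy as np

    conc = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])        # ppm (assumed)
    ratio = np.array([0.05, 0.16, 0.26, 0.42, 0.80, 1.30])  # I_nic/I_pyr (assumed)

    def r2_for_exponent(x):
        # Fit a straight line to ratio vs conc**x and score its linearity.
        z = conc ** x
        slope, intercept = np.polyfit(z, ratio, 1)
        residuals = ratio - (slope * z + intercept)
        return 1.0 - np.sum(residuals**2) / np.sum((ratio - ratio.mean())**2)

    best = max(np.arange(0.5, 1.01, 0.05), key=r2_for_exponent)
    print(f"best exponent x = {best:.2f}, R^2 = {r2_for_exponent(best):.4f}")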
Abstract:
The least-mean-fourth (LMF) algorithm is known for its fast convergence and low steady-state error, especially in sub-Gaussian noise environments. Recent work on normalised versions of the LMF algorithm has further enhanced its stability and performance in both Gaussian and sub-Gaussian noise environments. For example, the recently developed normalised LMF (XE-NLMF) algorithm is normalised by the mixed signal and error powers, and weighted by a fixed mixed-power parameter. Unfortunately, this algorithm depends on the selection of this mixing parameter. In this work, a time-varying mixed-power parameter technique is introduced to overcome this dependency. A convergence analysis, transient analysis, and steady-state behaviour of the proposed algorithm are derived and verified through simulations. An enhancement in performance is obtained through the use of this technique in two different scenarios. Moreover, the tracking analysis of the proposed algorithm is carried out in the presence of two sources of nonstationarity: (1) carrier frequency offset between transmitter and receiver and (2) random variations in the environment. Close agreement between analysis and simulation results is obtained. The results show that, unlike in the stationary case, the steady-state excess mean-square error is not a monotonically increasing function of the step size.
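For orientation, here is a minimal sketch of an XE-NLMF-style adaptive filter with a time-varying mixing parameter. The update form and the rule for varying λ are assumptions inferred from the description above, not the paper's derivation; the LMF gradient term e³(n)x(n) is normalised by a λ-weighted mix of input and error powers.

    # XE-NLMF-style system identification sketch (update form assumed).
    import numpy as np

    rng = np.random.default_rng(1)
    N, M, mu, eps = 5000, 8, 0.05, 1e-6
    w_true = rng.standard_normal(M)     # unknown system to identify
    w = np.zeros(M)
    x_hist = np.zeros(M)                # tapped delay line of the input

    for n in range(N):
        x_hist = np.roll(x_hist, 1)
        x_hist[0] = rng.standard_normal()
        d = w_true @ x_hist + 0.05 * rng.uniform(-1, 1)   # sub-Gaussian noise
        e = d - w @ x_hist
        # Time-varying mixing parameter: weight the error power more while
        # the error is large (an assumed rule, for illustration only).
        lam = np.clip(abs(e) / (abs(e) + 1.0), 0.1, 0.9)
        denom = eps + lam * (x_hist @ x_hist) + (1 - lam) * e**2
        w = w + mu * e**3 * x_hist / denom                # normalised LMF step

    print("steady-state misalignment:", np.sum((w - w_true) ** 2))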
Abstract:
This letter derives mathematical expressions for the received signal-to-interference-plus-noise ratio (SINR) of uplink single-carrier frequency-division multiple-access (SC-FDMA) multiuser MIMO systems. An improved frequency-domain receiver algorithm is derived for the studied systems and is shown to be significantly superior to the conventional linear MMSE-based receiver in terms of SINR and bit error rate (BER) performance.
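As a baseline reference point, the post-detection SINR of a conventional linear MMSE receiver (the benchmark mentioned above, not the letter's improved algorithm) follows from the textbook relation SINR_k = 1/[(I + HᴴH/σ²)⁻¹]_kk − 1, as in this sketch:

    # Per-stream SINR of a linear MMSE receiver for y = H s + n (textbook formula).
    import numpy as np

    rng = np.random.default_rng(2)
    n_rx, n_tx, sigma2 = 4, 4, 0.1
    H = (rng.standard_normal((n_rx, n_tx))
         + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)

    A = np.eye(n_tx) + H.conj().T @ H / sigma2
    mse_diag = np.real(np.diag(np.linalg.inv(A)))   # per-stream MMSE
    sinr = 1.0 / mse_diag - 1.0
    print("per-stream MMSE SINR (dB):", 10 * np.log10(sinr))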
Abstract:
This paper introduces a new technique for palmprint recognition based on Fisher Linear Discriminant Analysis (FLDA) and a Gabor filter bank. The method involves convolving a palmprint image with a bank of Gabor filters at different scales and rotations for robust palmprint feature extraction. Once these features are extracted, FLDA is applied for dimensionality reduction and class separability. Since the palmprint features are derived from the principal lines, wrinkles and texture along the palm area, this must be considered carefully when selecting the palm region for the feature extraction process in order to enhance recognition accuracy. To address this problem, an improved region of interest (ROI) extraction algorithm is introduced. This algorithm allows for efficient extraction of the whole palm area by ignoring all the undesirable parts, such as the fingers and background. Experiments have shown that the proposed method yields attractive performance, as evidenced by an Equal Error Rate (EER) of 0.03%.
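A minimal sketch of this kind of pipeline is given below. The filter parameters, pooling statistic and toy data are assumptions for illustration (the paper's exact Gabor bank and ROI algorithm are not reproduced): convolve the ROI with Gabor kernels at a few scales and orientations, pool the responses into a feature vector, then project with LDA.

    # Gabor-bank + LDA pipeline sketch (parameters and data are illustrative).
    import numpy as np
    from scipy.signal import fftconvolve
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def gabor_kernel(freq, theta, size=15, sigma=3.0):
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        return env * np.cos(2 * np.pi * freq * xr)

    def gabor_features(roi):
        feats = []
        for freq in (0.1, 0.2):                      # two scales (assumed)
            for k in range(4):                       # four orientations
                resp = fftconvolve(roi, gabor_kernel(freq, k * np.pi / 4), mode="same")
                feats.append(np.abs(resp).mean())    # simple pooled statistic
        return np.array(feats)

    # Toy usage with random "palm ROIs"; real input would be cropped palm images.
    rng = np.random.default_rng(3)
    X = np.array([gabor_features(rng.random((64, 64))) for _ in range(40)])
    y = np.repeat(np.arange(4), 10)                  # 4 subjects, 10 samples each
    lda = LinearDiscriminantAnalysis(n_components=3).fit(X, y)
    print("projected shape:", lda.transform(X).shape)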
Abstract:
Context. Hot-Jupiter planets must form at large separations from their host stars where the temperatures are cool enough for their cores to condense. They then migrate inwards to their current observed orbital separations. Different theories of how this migration occurs lead to varying distributions of orbital eccentricity and the alignment between the rotation axis of the star and the orbital axis of the planet. Aims: The spin-orbit alignment of a transiting system is revealed via the Rossiter-McLaughlin effect, which is the anomaly present in the radial velocity measurements of the rotating star during transit due to the planet blocking some of the starlight. In this paper we aim to measure the spin-orbit alignment of the WASP-3 system via a new way of analysing the Rossiter-McLaughlin observations. Methods: We apply a new tomographic method for analysing the time-variable asymmetry of stellar line profiles caused by the Rossiter-McLaughlin effect. This new method eliminates the systematic error inherent in previous methods used to analyse the effect. Results: We find a value for the projected stellar spin rate of v sin i = 13.9 ± 0.03 km s⁻¹, which is in agreement with previous measurements but has a much higher precision. The system is found to be well aligned, with λ = 5° (+6°, −5°), which favours an evolutionary history for WASP-3b involving migration through tidal interactions with a protoplanetary disc. From comparison with isochrones we put an upper limit on the age of the star of 2 Gyr.
Abstract:
Microsatellite genotyping is a common DNA characterization technique in population, ecological and evolutionary genetics research. Since different alleles are sized relative to internal size standards, different laboratories must calibrate and standardize allelic designations when exchanging data. This interchange of microsatellite data can often prove problematic. Here, 16 microsatellite loci were calibrated and standardized for the Atlantic salmon, Salmo salar, across 12 laboratories. Although inconsistencies were observed, particularly due to differences between the migration of DNA fragments and actual allelic size ('size shifts'), inter-laboratory calibration was successful. Standardization also allowed an assessment of the degree and partitioning of genotyping error. Notably, the global allelic error rate was reduced from 0.05 ± 0.01 prior to calibration to 0.01 ± 0.002 post-calibration. Most errors were found to occur during analysis (i.e. when size-calling alleles); the mean proportion of all errors that were analytical errors across loci was 0.58 after calibration. No evidence was found of an association between the degree of error and the allelic size range of a locus, the number of alleles, or the repeat type, nor was there evidence that genotyping errors were more prevalent when a laboratory analyzed samples from outside the geographic area it usually encounters. The microsatellite calibration between laboratories presented here will be especially important for genetic assignment of marine-caught Atlantic salmon, enabling analysis of marine mortality, a major factor in the observed declines of this highly valued species.
Abstract:
A novel method of obtaining high-quality Raman spectra of luminescent samples was tested using cyclohexane solutions which had been treated with a fluorescent dye. The method involves removing the fixed-pattern irregularity found in spectra taken with CCD detectors by subtracting spectra taken at several different, closely spaced spectrometer positions. It is conceptually similar to SERDS (shifted excitation Raman difference spectroscopy) but has the distinct experimental advantage that it does not require a tunable laser source. The subtracted spectra obtained as the raw data are converted into a more recognisable and conventional form by iterative fitting of appropriate double Lorentzian functions, whose peak parameters are then used to 'reconstruct' a conventional representation of the spectrum. Importantly, it is shown that the degree of uncertainty in the resultant 'reconstructed' spectra can be gauged reliably by comparing reconstructed spectra obtained at two different spectrometer shifts (δ and 2δ). The method was illustrated and validated using a solvent (cyclohexane) whose spectrum is well known and which contains both regions with complex overlapping bands and regions with isolated bands. Possible sources of error are discussed and it is shown that, provided the degree of uncertainty in the data is correctly characterised, it is completely valid to draw conclusions about the spectra of the sample on the basis of the reconstructed data. The acronym SSRS (subtracted shifted Raman spectroscopy; pronounced 'scissors') is proposed for this method, to distinguish it from the SERDS technique.
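The core subtract-and-refit idea can be demonstrated on synthetic data. In this sketch (the band parameters, shift δ and luminescent background are all invented), the background is identical at the two spectrometer positions and cancels in the difference, leaving a derivative-like doublet that a paired double-Lorentzian fit converts back into band parameters:

    # SSRS-style subtract-and-refit demonstration on synthetic data.
    import numpy as np
    from scipy.optimize import curve_fit

    nu = np.linspace(900, 1100, 2000)                       # wavenumber axis

    def lorentz(nu, a, c, w):
        return a * w**2 / ((nu - c) ** 2 + w**2)

    def measure(shift):
        # Fixed background plus a Raman band that moves with the spectrometer.
        background = 50 + 0.02 * nu                         # broad luminescence
        return background + lorentz(nu, 10, 1000 + shift, 4)

    delta = 2.0                                             # cm^-1 shift (assumed)
    diff = measure(0.0) - measure(delta)                    # background cancels

    # Fit a double Lorentzian (band minus its shifted copy) to the difference.
    model = lambda nu, a, c, w: lorentz(nu, a, c, w) - lorentz(nu, a, c + delta, w)
    popt, _ = curve_fit(model, nu, diff, p0=(8, 998, 3))
    print("reconstructed band: amplitude=%.2f centre=%.1f width=%.1f" % tuple(popt))

Repeating the fit with a 2δ measurement and comparing the two reconstructions gives the uncertainty check described above.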
Abstract:
In this paper we present an empirical analysis of the residential demand for electricity using annual aggregate data at the state level for 48 US states from 1995 to 2007. Earlier literature has examined residential energy consumption at the state level using annual or monthly data, focusing on the variation in price elasticities of demand across states or regions, but has failed to recognize or address two major issues. The first is that, when fitting dynamic panel models, the lagged consumption term in the right-hand side of the demand equation is endogenous. This has resulted in potentially inconsistent estimates of the long-run price elasticity of demand. The second is that energy price is likely mismeasured.
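The endogeneity point can be seen from a generic dynamic panel specification (illustrative; not necessarily the paper's exact equation):

\[
\ln q_{it} = \alpha \,\ln q_{i,t-1} + \beta \,\ln p_{it} + \gamma' x_{it} + \mu_i + \varepsilon_{it},
\qquad
\text{long-run price elasticity} = \frac{\beta}{1-\alpha}.
\]

Because the lagged consumption term q_{i,t−1} is itself a function of the state effect µ_i, it is correlated with the composite error, so OLS or within estimates of α, and hence of the long-run elasticity β/(1 − α), are inconsistent; instrumental-variable or GMM-type panel estimators are the standard remedy.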
Abstract:
Many of the most interesting questions ecologists ask lead to analyses of spatial data. Yet, perhaps confused by the large number of statistical models and fitting methods available, many ecologists seem to believe this is best left to specialists. Here, we describe the issues that need consideration when analysing spatial data and illustrate these using simulation studies. Our comparative analysis involves using methods including generalized least squares, spatial filters, wavelet revised models, conditional autoregressive models and generalized additive mixed models to estimate regression coefficients from synthetic but realistic data sets, including some which violate standard regression assumptions. We assess the performance of each method using two measures and using statistical error rates for model selection. Methods that performed well included the generalized least squares family of models and a Bayesian implementation of the conditional autoregressive model. Ordinary least squares also performed adequately in the absence of model selection, but its poorly controlled Type I error rates meant that it did not show the improvements in performance under model selection seen with the methods above. Removing large-scale spatial trends in the response led to poor performance. These are empirical results; hence extrapolation of these findings to other situations should be performed cautiously. Nevertheless, our simulation-based approach provides much stronger evidence for comparative analysis than assessments based on single or small numbers of data sets, and should be considered a necessary foundation for statements of this type in future.
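A stripped-down version of this kind of simulation comparison is sketched below (the grid size, covariance range and coefficients are arbitrary choices, and the GLS here is handed the true error covariance, which real analyses must estimate): generate a response with spatially autocorrelated errors, then compare OLS and GLS estimates of the regression coefficients.

    # OLS vs GLS on a grid with spatially autocorrelated errors (toy setup).
    import numpy as np

    rng = np.random.default_rng(4)
    n_side = 15
    xs, ys = np.meshgrid(np.arange(n_side), np.arange(n_side))
    coords = np.column_stack([xs.ravel(), ys.ravel()])
    n = len(coords)

    # Exponential spatial covariance for the errors (range parameter assumed).
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    Sigma = np.exp(-d / 3.0)
    L = np.linalg.cholesky(Sigma + 1e-9 * np.eye(n))

    X = np.column_stack([np.ones(n), rng.standard_normal(n)])
    beta_true = np.array([1.0, 0.5])
    y = X @ beta_true + L @ rng.standard_normal(n)   # correlated residuals

    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    Si = np.linalg.inv(Sigma)
    beta_gls = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)
    print("OLS:", beta_ols, " GLS:", beta_gls)

Repeating this over many simulated data sets, and scoring bias, interval coverage and Type I error rates, reproduces the flavour of the comparison described above.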
Abstract:
A study was undertaken to examine a range of sample preparation and near infrared reflectance spectroscopy (NIRS) methodologies, using undried samples, for predicting organic matter digestibility (OMD, g kg⁻¹) and ad libitum intake (g kg⁻¹ W^0.75) of grass silages. A total of eight sample preparation/NIRS scanning methods were examined, involving three extents of silage comminution, two liquid extracts and scanning via either external probe (1100-2200 nm) or internal cell (1100-2500 nm). The spectral data (log 1/R) for each of the eight methods were examined by three regression techniques, each with a range of data transformations. The 136 silages used in the study were obtained from farms across Northern Ireland over a two-year period, and had in vivo OMD (sheep) and ad libitum intake (cattle) determined under uniform conditions. In the comparisons of the eight sample preparation/scanning methods, and the differing mathematical treatments of the spectral data, the sample population was divided into calibration (n = 91) and validation (n = 45) sets. The standard error of performance (SEP) on the validation set was used in comparisons of prediction accuracy. Across all eight sample preparation/scanning methods, the modified partial least squares (MPLS) technique generally minimized SEPs for both OMD and intake. The accuracy of prediction also increased with the degree of comminution of the forage and with scanning by internal cell rather than external probe. The system providing the lowest SEP used the MPLS regression technique on spectra from the finely milled material scanned through the internal cell. This resulted in SEP and R² (variance accounted for in the validation set) values of 24 g kg⁻¹ OM and 0.88 for OMD, and 5.37 g kg⁻¹ W^0.75 and 0.77 for intake, respectively. These data indicate that, with appropriate techniques, NIRS scanning of undried samples of grass silage can produce predictions of intake and digestibility with accuracies similar to those achieved previously using NIRS with dried samples.
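The calibration/validation workflow maps naturally onto standard PLS tooling. The sketch below uses synthetic spectra and scikit-learn's ordinary PLS rather than the MPLS variant used in the study (the sample sizes mirror the 91/45 split above; everything else is invented) to show how the SEP is computed:

    # PLS calibration/validation sketch with synthetic "spectra".
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(5)
    n_cal, n_val, n_wavelengths = 91, 45, 300
    X = rng.standard_normal((n_cal + n_val, n_wavelengths))
    true_coef = np.zeros(n_wavelengths)
    true_coef[::30] = 1.0                       # a few informative wavelengths
    y = X @ true_coef + 0.3 * rng.standard_normal(n_cal + n_val)

    pls = PLSRegression(n_components=8).fit(X[:n_cal], y[:n_cal])
    pred = pls.predict(X[n_cal:]).ravel()
    sep = np.sqrt(np.mean((y[n_cal:] - pred) ** 2))   # standard error of performance
    print(f"SEP on validation set: {sep:.3f}")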
Abstract:
A joint concern with multidimensionality and dynamics is a defining feature of the pervasive use of the terminology of social exclusion in the European Union. The notion of social exclusion focuses attention on economic vulnerability in the sense of exposure to risk and uncertainty. Sociological concern with these issues has been associated with the thesis that risk and uncertainty have become more pervasive and extend substantially beyond the working class. This paper combines features of recent approaches to statistical modelling of poverty dynamics and multidimensional deprivation in order to develop our understanding of the dynamics of economic vulnerability. An analysis involving nine countries and covering the first five waves of the European Community Household Panel shows that, across nations and time, it is possible to identify an economically vulnerable class. This class is characterized by heightened risk of falling below a critical resource level, exposure to material deprivation and experience of subjective economic stress. Cross-national differentials in persistence of vulnerability are wider than in the case of income poverty and less affected by measurement error. Economic vulnerability profiles vary across welfare regimes in a manner broadly consistent with our expectations. Variation in the impact of social class within and across countries provides no support for the argument that its role in structuring such risk has become much less important. Our findings suggest that it is possible to accept the importance of the emergence of new forms of social risk, and to acknowledge the significance of efforts to develop welfare state policies involving a shift of opportunities and decision making onto individuals, without accepting the 'death of social class' thesis.
Abstract:
Multiuser diversity gain has been investigated extensively in terms of system capacity in the literature. In practice, however, the design of multiuser systems with nonzero error rates requires a relationship between the error rates and the number of users within a cell. Considering best-user scheduling, where the user with the best channel condition is scheduled to transmit in each scheduling interval, our focus is on the uplink. We assume that each user communicates with the base station through a single-input multiple-output channel. We derive a closed-form expression for the average BER, and analyze how the average BER goes to zero asymptotically as the number of users increases for a given SNR. Note that the analysis of average BER, even in SISO multiuser diversity systems, has not previously been done with respect to the number of users for a given SNR. Our analysis can be applied to multiuser diversity systems with any number of antennas.
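The order-statistics route behind such an analysis is standard, though the paper's exact closed form is not reproduced here. With N users whose post-combining SNRs are i.i.d. with CDF F_γ, best-user scheduling selects the maximum, whose CDF is F_γ^N, so for a BPSK-style link the average BER takes the form

\[
\overline{P}_e(N) \;=\; \int_0^{\infty} Q\!\left(\sqrt{2\gamma}\right)\,
\frac{d}{d\gamma}\!\left[F_\gamma(\gamma)\right]^{N} d\gamma ,
\]

and the concentration of [F_γ(γ)]^N near the upper tail of the SNR distribution as N grows is what drives the average BER toward zero at a fixed SNR.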