63 results for errors and erasures decoding
Abstract:
Context. The formation and evolution of the Galactic bulge and its relationship with the other Galactic populations is still poorly understood. Aims. To establish the chemical differences and similarities between the bulge and other stellar populations, we performed an elemental abundance analysis of alpha- (O, Mg, Si, Ca, and Ti) and Z-odd (Na and Al) elements of red giant stars in the bulge as well as of local thin disk, thick disk and halo giants. Methods. We use high-resolution optical spectra of 25 bulge giants in Baade's window and 55 comparison giants (4 halo, 29 thin disk and 22 thick disk giants) in the solar neighborhood. All stars have similar stellar parameters but cover a broad range in metallicity (-1.5 < [Fe/H] < +0.5). A standard 1D local thermodynamic equilibrium analysis using both Kurucz and MARCS models yielded the abundances of O, Na, Mg, Al, Si, Ca, Ti and Fe. Our homogeneous and differential analysis of the Galactic stellar populations ensured that systematic errors were minimized. Results. We confirm the well-established differences for [alpha/Fe] at a given metallicity between the local thin and thick disks. For all the elements investigated, we find no chemical distinction between the bulge and the local thick disk, in agreement with our previous study of C, N and O but in contrast to other groups relying on literature values for nearby disk dwarf stars. For -1.5 < [Fe/H] < -0.3 exactly the same trend is followed by both the bulge and thick disk stars, with a star-to-star scatter of only 0.03 dex. Furthermore, both populations share the location of the knee in the [alpha/Fe] vs. [Fe/H] diagram. It still remains to be confirmed whether the local thick disk extends to super-solar metallicities as is the case for the bulge. These are the most stringent constraints to date on the chemical similarity of these stellar populations. Conclusions. Our findings suggest that the bulge and local thick disk stars experienced similar formation timescales, star formation rates and initial mass functions, thus confirming the main outcomes of our previous homogeneous analysis of [O/Fe] from infrared spectra for nearly the same sample. The identical alpha-enhancements of thick disk and bulge stars may reflect a rapid chemical evolution taking place before the bulge and thick disk structures we see today were formed, or it may reflect Galactic orbital migration of inner disk/bulge stars resulting in stars in the solar neighborhood with thick-disk kinematics.
Abstract:
The mechanism of incoherent pi(0) and eta photoproduction from complex nuclei is investigated from 4 to 12 GeV with an extended version of the multicollisional Monte Carlo (MCMC) intranuclear cascade model. The calculations take into account the elementary photoproduction amplitudes via a Regge model and the nuclear effects of photon shadowing, Pauli blocking, and meson-nucleus final-state interactions. The results for pi(0) photoproduction reproduced for the first time the magnitude and energy dependence of the measured ratios sigma(gamma A)/sigma(gamma N) for several nuclei (Be, C, Al, Cu, Ag, and Pb) from a Cornell experiment. The results for eta photoproduction fitted the inelastic background in Cornell's yields remarkably well, which is clearly not isotropic as previously considered in Cornell's analysis. With this constraint for the background, the eta -> gamma gamma decay width was extracted using the Primakoff method, combining Be and Cu data [Gamma(eta -> gamma gamma) = 0.476(62) keV] and using Be data only [Gamma(eta -> gamma gamma) = 0.512(90) keV], where the errors are only statistical. These results are in sharp contrast (by ~50-60%) with the value reported by the Cornell group [Gamma(eta -> gamma gamma) = 0.324(46) keV] and in line with the Particle Data Group average of 0.510(26) keV.
Abstract:
Background: There are several studies in the literature depicting measurement error in gene expression data and also several others about regulatory network models. However, only a small fraction describes a combination of measurement error in mathematical regulatory networks and shows how to identify these networks under different rates of noise. Results: This article investigates the effects of measurement error on the estimation of the parameters in regulatory networks. Simulation studies indicate that, in both time series (dependent) and non-time series (independent) data, the measurement error strongly affects the estimated parameters of the regulatory network models, biasing them as predicted by the theory. Moreover, when testing the parameters of the regulatory network models, p-values computed by ignoring the measurement error are not reliable, since the rate of false positives is not controlled under the null hypothesis. In order to overcome these problems, we present an improved version of the Ordinary Least Squares estimator for independent (regression models) and dependent (autoregressive models) data when the variables are subject to noise. Moreover, measurement error estimation procedures for microarrays are also described. Simulation results also show that both corrected methods perform better than the standard ones (i.e., ignoring measurement error). The proposed methodologies are illustrated using microarray data from lung cancer patients and mouse liver time series data. Conclusions: Measurement error dangerously affects the identification of regulatory network models and must therefore be reduced or taken into account in order to avoid erroneous conclusions. This could be one of the reasons for the high biological false positive rates identified in actual regulatory network models.
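The bias referred to here is the classical attenuation (errors-in-variables) effect in least squares. As a rough, hedged illustration of the kind of correction described, the sketch below contrasts naive OLS on a noisy covariate with a method-of-moments correction that assumes the measurement-error variance is known; the variable names and the known-variance assumption are illustrative, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# True regression: y = intercept + beta * x + model noise
beta_true, intercept, n = 2.0, 1.0, 5000
x = rng.normal(0.0, 1.0, n)                       # true (unobserved) covariate
y = intercept + beta_true * x + rng.normal(0.0, 0.5, n)

sigma_u2 = 0.5                                    # measurement-error variance (assumed known here)
w = x + rng.normal(0.0, np.sqrt(sigma_u2), n)     # observed, noisy covariate

# Naive OLS on the noisy covariate: slope attenuated towards zero
beta_naive = np.cov(w, y, bias=True)[0, 1] / np.var(w)

# Method-of-moments correction: remove the error variance from var(w)
beta_corrected = np.cov(w, y, bias=True)[0, 1] / (np.var(w) - sigma_u2)

print(f"true {beta_true:.2f}  naive {beta_naive:.2f}  corrected {beta_corrected:.2f}")
```

The same idea carries over to autoregressive models, where the lagged variable is itself observed with error, and in the microarray setting the error variance has to be estimated rather than assumed known.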
Abstract:
A multicenter descriptive study was carried out in two steps: an interview with providers involved in the medication processes, and then non-participating observation of their environment and practices. Only one hospital was found to have a bar-code dispensing system connected to a computerized prescription system. In all participating hospitals at least 90% of the drugs were dispensed and distributed as unit doses, but in none of them did pharmacists assess prescriptions. The study findings showed that the processes of drug dispensing and distribution in Brazilian hospitals encounter several problems, mostly associated with work environment conditions and inadequacy in drug ordering and requests.
Abstract:
Medication administration errors (MAE) are the most frequent kind of medication errors. Errors with antimicrobial drugs (AD) are relevant because they may interfere with patient safety and with the development of microbial resistance. The aim of this study is to analyze the AD errors detected in a Brazilian multicentric study of MAE. It was a descriptive and exploratory study carried out in clinical units of five Brazilian teaching hospitals. The hospitals were investigated during 30 days. MAE were detected by the observation technique. MAE were classified into categories: wrong route (WR), wrong patient (WP), wrong dose (WD), wrong time (WT) and unordered drug (UD). AD with MAE were classified by the Anatomical-Therapeutical-Chemical (ATC) Classification System. AD with a narrow therapeutic index (NTI) were identified. A descriptive statistical analysis was performed using SPSS version 11.5 software. A total of 1500 errors were observed, 277 (18.5%) of them being errors with AD. The types of AD error were: WT 87.7%, WD 6.9%, WR 1.5%, UD 3.2% and WP 0.7%. The number of AD found was 36. The most frequent ATC classes were fluoroquinolones 13.9%, combinations of penicillins 13.9%, macrolides 8.3% and third-generation cephalosporins 5.6%. The parenteral dosage form was associated with 55.6% of AD. 16.7% of AD were NTI drugs; 47.4% of WD and 21.8% of WT errors involved NTI drugs. This study shows that these errors should be considered potential areas for improvement in the medication process and patient safety, and that there is a need to promote the rational use of AD.
Abstract:
In this study, the innovation approach is used to estimate the total measurement error associated with power system state estimation. This is required because the power system equations are highly correlated with each other and, as a consequence, part of the measurement errors is masked. For that purpose an innovation index (II), which quantifies the amount of new information a measurement contains, is proposed. A critical measurement is the limiting case of a measurement with low II: it has a zero II and its error is totally masked. In other words, that measurement does not bring any innovation to the gross error test. Using the II of a measurement, the gross error masked by the state estimation is recovered; then the total gross error of that measurement is composed. Instead of the classical normalised measurement residual amplitude, the corresponding normalised composed measurement residual amplitude is used in the gross error detection and identification test, but with m degrees of freedom. The gross error processing turns out to be very simple to implement, requiring only a few adaptations to the existing state estimation software. The IEEE-14 bus system is used to validate the proposed gross error detection and identification test.
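For context, the classical baseline that the composed residual replaces is the largest normalized residual test of weighted least squares state estimation. A minimal sketch for a toy linear (DC-like) measurement model, with values chosen only for illustration (it is not the proposed innovation-index test itself):

```python
import numpy as np

# Toy linear measurement model z = H x + e and WLS estimation
H = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0, -1.0],
              [1.0,  1.0]])
R = np.diag([0.01, 0.01, 0.02, 0.02])            # measurement error covariance
x_true = np.array([1.05, 0.98])

rng = np.random.default_rng(1)
z = H @ x_true + rng.multivariate_normal(np.zeros(4), R)
z[2] += 0.5                                       # inject a gross error into measurement 3

W = np.linalg.inv(R)
G = H.T @ W @ H                                   # gain matrix
x_hat = np.linalg.solve(G, H.T @ W @ z)

r = z - H @ x_hat                                 # measurement residuals
Omega = R - H @ np.linalg.solve(G, H.T)           # residual covariance
r_norm = np.abs(r) / np.sqrt(np.diag(Omega))      # normalized residuals

worst = int(np.argmax(r_norm))
if r_norm[worst] > 3.0:
    print(f"measurement {worst + 1} flagged as bad (r_N = {r_norm[worst]:.1f})")
```

A critical measurement has a zero diagonal entry in Omega, so its residual vanishes and this test cannot see its error; the innovation index quantifies how close a measurement is to that situation.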
Abstract:
This paper develops H(infinity) control designs based on neural networks for fully actuated and underactuated cooperative manipulators. The neural networks proposed in this paper adapt only the uncertain dynamics of the robot manipulators; they work as a complement to the nominal model. The H(infinity) performance index includes the position errors as well as the squeeze force errors between the manipulator end-effectors and the object, which represents a complete disturbance rejection scenario. For the underactuated case, the squeeze force control problem is more difficult to solve due to the loss of some degrees of manipulator actuation. Results obtained from an actual cooperative manipulator, which is able to work as a fully actuated and an underactuated manipulator, are presented. (C) 2008 Elsevier Ltd. All rights reserved.
Abstract:
This work presents an automated system for the measurement of form errors of mechanical components using an industrial robot. A three-probe error separation technique was employed to allow decoupling between the measured form error and errors introduced by the robotic system. A mathematical model of the measuring system was developed to provide inspection results by means of the solution of a system of linear equations. A new self-calibration procedure, which employs redundant data from several runs, minimizes the influence of probe zero-adjustment on the final result. Experimental tests applied to the measurement of straightness errors of mechanical components were carried out and demonstrated the effectiveness of the employed methodology. (C) 2007 Elsevier Ltd. All rights reserved.
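As background for the error separation step, the classical sequential three-point method combines the three probe readings so that the carriage translation and tilt errors cancel, leaving the second difference of the workpiece profile, which is then recovered from a system of linear equations. A minimal sketch on a synthetic profile (step size, probe spacing and error magnitudes are ours; probe noise and the zero-adjustment offsets addressed by the paper's self-calibration are omitted):

```python
import numpy as np

rng = np.random.default_rng(2)

s, n = 1.0, 50                                    # probe spacing (= scan step) and profile points
x = np.arange(n) * s
f_true = 0.002 * np.sin(2 * np.pi * x / 40)       # workpiece straightness profile to recover

n_pos = n - 2                                     # carriage positions (the probes span 3 points)
z = rng.normal(0, 0.01, n_pos)                    # unknown carriage translation error
theta = rng.normal(0, 0.001, n_pos)               # unknown carriage tilt (pitch) error

# Readings of the three probes at each carriage position
m1 = f_true[:-2] + z
m2 = f_true[1:-1] + z + s * theta
m3 = f_true[2:] + z + 2 * s * theta

# m1 - 2*m2 + m3 cancels z and theta, leaving the second difference of the profile
d = m1 - 2 * m2 + m3

# Recover the profile by solving the second-difference system, pinning
# f(0) = f(1) = 0 (the profile is only determined up to a straight line)
A = np.zeros((n_pos, n))
for k in range(n_pos):
    A[k, k:k + 3] = [1.0, -2.0, 1.0]
f_rec = np.zeros(n)
f_rec[2:] = np.linalg.lstsq(A[:, 2:], d, rcond=None)[0]

# Compare after removing the undetermined straight line
line = np.polyval(np.polyfit(x, f_rec - f_true, 1), x)
print("max reconstruction error:", float(np.max(np.abs(f_rec - f_true - line))))
```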
Abstract:
With the relentless quest for improved performance driving ever tighter tolerances for manufacturing, machine tools are sometimes unable to meet the desired requirements. One option to improve the tolerances of machine tools is to compensate for their errors. Among all possible sources of machine tool error, thermally induced errors are, in general for newer machines, the most important. The present work demonstrates the evaluation and modelling of the behaviour of the thermal errors of a CNC cylindrical grinding machine during its warm-up period.
Abstract:
We describe a one-time signature scheme based on the hardness of the syndrome decoding problem, and prove it secure in the random oracle model. Our proposal can be instantiated on general linear error correcting codes, rather than restricted families like alternant codes for which a decoding trapdoor is known to exist. (C) 2010 Elsevier Inc. All rights reserved.
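To make the hardness assumption concrete, the sketch below computes syndromes for a small binary code and finds a low-weight error pattern matching a given syndrome by brute force; it illustrates the syndrome decoding problem the scheme is built on, not the signature scheme itself, and the tiny [7,4] Hamming code is used only so that the exhaustive search stays feasible.

```python
import numpy as np
from itertools import combinations

# Parity-check matrix of the [7,4] Hamming code (any binary linear code would do)
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

def syndrome(e):
    """Syndrome s = H e^T over GF(2)."""
    return (H @ e) % 2

def syndrome_decode(s, max_weight=3):
    """Brute-force search for a minimum-weight error with syndrome s.
    Exponential in general; this hardness underlies the signature scheme."""
    n = H.shape[1]
    for w in range(max_weight + 1):
        for support in combinations(range(n), w):
            e = np.zeros(n, dtype=np.uint8)
            e[list(support)] = 1
            if np.array_equal(syndrome(e), s):
                return e
    return None

e_true = np.array([0, 0, 1, 0, 0, 0, 0], dtype=np.uint8)   # a weight-1 error
s = syndrome(e_true)
print("syndrome:", s, " low-weight preimage:", syndrome_decode(s))
```

Trapdoor-based signatures let the signer invert syndromes efficiently for a hidden structured code; the point made in the abstract is that the proposal does not rely on such a trapdoor family.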
Abstract:
Three-dimensional modeling of piezoelectric devices requires a precise knowledge of piezoelectric material parameters. The commonly used piezoelectric materials belong to the 6mm symmetry class, which have ten independent constants. In this work, a methodology to obtain precise material constants over a wide frequency band through finite element analysis of a piezoceramic disk is presented. Given an experimental electrical impedance curve and a first estimate for the piezoelectric material properties, the objective is to find the material properties that minimize the difference between the electrical impedance calculated by the finite element method and that obtained experimentally by an electrical impedance analyzer. The methodology consists of four basic steps: experimental measurement, identification of vibration modes and their sensitivity to material constants, a preliminary identification algorithm, and final refinement of the material constants using an optimization algorithm. The application of the methodology is exemplified using a hard lead zirconate titanate piezoceramic. The same methodology is applied to a soft piezoceramic. The errors in the identification of each parameter are statistically estimated in both cases, and are less than 0.6% for elastic constants, and less than 6.3% for dielectric and piezoelectric constants.
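A rough, hedged sketch of the final refinement step: fit model parameters by minimizing the mismatch between a simulated and a measured impedance curve. Here a Butterworth-Van Dyke equivalent circuit stands in for the finite element model actually used in the paper, and all parameter values and frequency ranges are invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def impedance(params, f):
    """Butterworth-Van Dyke equivalent circuit, standing in for the FEM model:
    static capacitance C0 in parallel with a motional R-L-C branch."""
    R, L, C1, C0 = params
    w = 2 * np.pi * f
    z_motional = R + 1j * w * L + 1 / (1j * w * C1)
    return 1 / (1j * w * C0 + 1 / z_motional)

freqs = np.linspace(150e3, 250e3, 400)            # Hz, spanning the resonance
p_true = np.array([50.0, 10e-3, 60e-12, 2e-9])    # "unknown" R, L, C1, C0
z_meas = np.abs(impedance(p_true, freqs))         # synthetic measured curve

def residuals(p):
    # log-magnitude misfit between simulated and "measured" impedance
    return np.log(np.abs(impedance(p, freqs))) - np.log(z_meas)

p0 = p_true * [1.5, 0.8, 1.2, 0.9]                # preliminary estimate to refine
fit = least_squares(residuals, p0, x_scale=np.abs(p0))
print("relative errors [%]:", np.round(np.abs(fit.x - p_true) / p_true * 100, 3))
```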
Abstract:
Higher order (2,4) FDTD schemes used for numerical solutions of Maxwell's equations are focused on diminishing the truncation errors caused by the Taylor series expansion of the spatial derivatives. These schemes use a larger computational stencil, which generally makes use of the two constant coefficients, C1 and C2, for the four-point central-difference operators. In this paper we propose a novel way to diminish these truncation errors, in order to obtain more accurate numerical solutions of Maxwell's equations. For such purpose, we present a method to individually optimize the pair of coefficients, C1 and C2, based on any desired grid size resolution and size of time step. In particular, we are interested in using coarser grid discretizations to be able to simulate electrically large domains. The results of our optimization algorithm show a significant reduction in dispersion error and numerical anisotropy for all modeled grid size resolutions. Numerical simulations of free-space propagation verify the very promising theoretical results. The model is also shown to perform well in more complex, realistic scenarios.
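For reference, the Taylor-derived values of these coefficients are C1 = 9/8 and C2 = -1/24, and the one-dimensional Courant limit of the (2,4) scheme is 1/(|C1| + |C2|) = 6/7. The sketch below uses the standard coefficients in a 1D free-space update; the optimization described above would replace C1 and C2 with values tuned to the chosen grid resolution and time step (the grid parameters here are arbitrary).

```python
import numpy as np

c0, eps0, mu0 = 299792458.0, 8.8541878128e-12, 4e-7 * np.pi
nx, n_steps = 400, 300
dx = 1e-3
dt = 0.5 * dx / c0                    # Courant number 0.5, below the (2,4) limit of 6/7

# Standard Taylor coefficients of the four-point central-difference operator;
# the proposed optimization would tune this pair per grid/time-step choice.
C1, C2 = 9.0 / 8.0, -1.0 / 24.0

Ez = np.zeros(nx)                     # E at integer nodes
Hy = np.zeros(nx - 1)                 # H at half nodes

for n in range(n_steps):
    # fourth-order dEz/dx at the interior H nodes
    dEz = (C1 * (Ez[2:-1] - Ez[1:-2]) + C2 * (Ez[3:] - Ez[:-3])) / dx
    Hy[1:-1] -= dt / mu0 * dEz
    # fourth-order dHy/dx at the interior E nodes
    dHy = (C1 * (Hy[2:-1] - Hy[1:-2]) + C2 * (Hy[3:] - Hy[:-3])) / dx
    Ez[2:-2] -= dt / eps0 * dHy
    # soft Gaussian source at the grid centre
    Ez[nx // 2] += np.exp(-((n - 60) / 20.0) ** 2)

print("pulse peaks near cells:",
      int(np.argmax(np.abs(Ez[:nx // 2]))),
      int(nx // 2 + np.argmax(np.abs(Ez[nx // 2:]))))
```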
Abstract:
This work studies the turbo decoding of Reed-Solomon codes in QAM modulation schemes for additive white Gaussian noise channels (AWGN) by using a geometric approach. Considering the relations between the Galois field elements of the Reed-Solomon code and the symbols combined with their geometric dispositions in the QAM constellation, a turbo decoding algorithm, based on the work of Chase and Pyndiah, is developed. Simulation results show that the performance achieved is similar to the one obtained with the pragmatic approach with binary decomposition and analysis.
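The Chase step at the core of such decoders is easy to illustrate: take the hard decision, flip the least reliable positions in all combinations, hard-decode every test pattern, and keep the candidate codeword closest to the received soft values. The sketch below shows only that step, for a tiny binary Hamming code over BPSK rather than Reed-Solomon over QAM, and omits the Pyndiah extrinsic-reliability computation that makes the decoding iterative; it is a didactic stand-in, not the algorithm of the paper.

```python
import numpy as np
from itertools import product

# [7,4] Hamming code as a stand-in component code; column j of H is the binary
# representation of j+1, so a nonzero syndrome points at the error position.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]], dtype=int)

def hard_decode(bits):
    """Single-error-correcting hard-decision decoder."""
    c = bits.copy()
    s = H @ c % 2
    pos = 4 * s[0] + 2 * s[1] + s[2]
    if pos:
        c[pos - 1] ^= 1
    return c

def chase_decode(soft, p=2):
    """Chase-II candidate search on BPSK soft values (bit 0 -> +1, bit 1 -> -1)."""
    hard = (soft < 0).astype(int)
    weak = np.argsort(np.abs(soft))[:p]           # p least reliable positions
    best, best_d = None, np.inf
    for flips in product([0, 1], repeat=p):
        test = hard.copy()
        test[weak] ^= np.array(flips)
        cand = hard_decode(test)
        d = np.sum((soft - (1 - 2 * cand)) ** 2)  # Euclidean distance to candidate
        if d < best_d:
            best, best_d = cand, d
    return best

rng = np.random.default_rng(3)
tx = np.ones(7)                                   # all-zero codeword in BPSK
soft = tx + rng.normal(0, 0.8, 7)                 # AWGN channel output
print("decoded codeword:", chase_decode(soft))
```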
Abstract:
The knowledge of soil water storage (SWS) of soil profiles is crucial for the adoption of vegetation restoration practices. With the aim of identifying representative sites to obtain the mean SWS of a watershed, a time stability analysis of neutron probe evaluations of SWS was performed by means of relative differences and Spearman rank correlation coefficients. At the same time, the effects of different neutron probe calibration procedures on time stability analysis, mean SWS estimation, and preservation of the spatial variability of SWS were explored. The selected watershed, with deep gullies and undulating slopes covering an area of 20 ha, is characterized by an Ust-Sandiic Entisol and an Aeolian sandy soil. The dominant vegetation species are bunge needlegrass (Stipa bungeana Trin.) and korshinsk peashrub (Caragana korshinskii Kom.). From June 11, 2007 to July 23, 2008, SWS of the top 1 m soil layer was evaluated on 20 dates, based on neutron probe data from 12 sampling sites. Three calibration procedures were employed: type I, the most complete, with each site having its own linear calibration equation (TrE); type II, with TrE equations extended over the whole field; and type III, with one single linear calibration curve for the whole field (UnE), also correcting its intercept based on site-specific relative difference analysis (RdE) and on linear fitting of data (RcE), both maintaining the same slope. A strong time stability of SWS estimated by TrE equations was identified. Soil particle size and soil organic matter content were recognized as the influencing factors for spatial variability of SWS. Land use influenced neither the spatial variability nor the time stability of SWS. Time stability analysis identified one site that represents the mean SWS of the whole watershed with mean absolute percentage errors of less than 10%; therefore, this site can be used as a predictor for the mean SWS of the watershed. Some equations of type II were found to be unsatisfactory for yielding reliable mean SWS values or preserving the associated soil spatial variability. Hence, caution is recommended in extending calibration equations to other sites, since they might not account for the field variability. The equations with corrected intercept (type III), which consider the spatial variability of calibration in a different way in relation to TrE, were found to yield satisfactory means and standard deviations of SWS, except for the RdE equations, which largely leveled off the SWS values in the watershed. Correlation analysis showed that the neutron probe calibration was linked to soil bulk density and to organic matter content. Therefore, spatial variability of soil properties should be taken into account during the process of neutron probe calibration. This study provides useful information on the observation of mean SWS with a time-stable site and on distinct neutron probe calibration procedures, and it should be extended to soil water management studies with neutron probes, e.g., the process of vegetation restoration in wider areas and soil types of the Loess Plateau in China. (C) 2009 Elsevier B.V. All rights reserved.
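The time stability machinery referred to above is straightforward: for each site, the relative difference from the field mean is computed at every date and averaged over time, and the site whose mean relative difference is closest to zero (with small spread) is taken as representative; Spearman rank correlations between dates check that the site ranking persists in time. A minimal sketch on a synthetic site-by-date SWS matrix (all numbers invented; only the dimensions match the 12 sites and 20 dates above):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n_sites, n_dates = 12, 20

# synthetic SWS (mm): persistent site offsets + seasonal dynamics + noise
site_offset = rng.normal(0, 15, n_sites)
seasonal = 180 + 40 * np.sin(np.linspace(0, 2 * np.pi, n_dates))
sws = seasonal + site_offset[:, None] + rng.normal(0, 3, (n_sites, n_dates))

field_mean = sws.mean(axis=0)                       # spatial mean at each date
delta = (sws - field_mean) / field_mean             # relative differences
mean_rd, std_rd = delta.mean(axis=1), delta.std(axis=1)

# one common ranking criterion: small |mean| and small spread of the relative difference
rep = int(np.argmin(np.abs(mean_rd) + std_rd))
print(f"representative site: {rep}, mean relative difference: {mean_rd[rep]:.3f}")

# rank (time) stability between the first and last sampling dates
rho, _ = spearmanr(sws[:, 0], sws[:, -1])
print(f"Spearman rho between first and last date: {rho:.2f}")

# mean absolute percentage error of predicting the field mean from that single site
mape = np.mean(np.abs(sws[rep] - field_mean) / field_mean) * 100
print(f"MAPE of the representative site: {mape:.1f} %")
```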
Abstract:
The purpose of this study was the development and validation of an LC-MS-MS method for simultaneous analysis of ibuprofen (IBP), 2-hydroxyibuprofen (2-OH-IBP) enantiomers, and carboxyibuprofen (COOH-IBP) stereoisomers in fungi culture medium, to investigate the ability of some endophytic fungi to biotransform the chiral drug IBP into its metabolites. Resolution of IBP and the stereoisomers of its main metabolites was achieved by use of a Chiralpak AS-H column (150 x 4.6 mm, 5 mu m particle size), column temperature 8 degrees C, and the mobile phase hexane-isopropanol-trifluoroacetic acid (95: 5: 0.1, v/v) at a flow rate of 1.2 mL min(-1). Post-column infusion with 10 mmol L(-1) ammonium acetate in methanol at a flow rate of 0.3 mL min(-1) was performed to enhance MS detection (positive electrospray ionization). Liquid-liquid extraction was used for sample preparation with hexane-ethyl acetate (1:1, v/v) as extraction solvent. Linearity was obtained in the range 0.1-20 mu g mL(-1) for IBP, 0.05-7.5 mu g mL(-1) for each 2-OH-IBP enantiomer, and 0.025-5.0 mu g mL(-1) for each COOH-IBP stereoisomer (r >= 0.99). The coefficients of variation and relative errors obtained in precision and accuracy studies (within-day and between-day) were below 15%. The stability studies showed that the samples were stable (p > 0.05) during freeze and thaw cycles, short-term exposure to room temperature, storage at -20 degrees C, and biotransformation conditions. Among the six fungi studied, only the strains Nigrospora sphaerica (SS67) and Chaetomium globosum (VR10) biotransformed IBP enantioselectively, with greater formation of the metabolite (+)-(S)-2-OH-IBP. Formation of the COOH-IBP stereoisomers, which involves hydroxylation at C3 and further oxidation to form the carboxyl group, was not observed.