874 results for analytical error
Abstract:
When dealing with sustainability we are concerned with the biophysical as well as the monetary aspects of economic and ecological interactions. This multidimensional approach requires that special attention be given to dimensional issues in curve fitting practice in economics. Unfortunately, many empirical and theoretical studies in economics, as well as in ecological economics, apply dimensional numbers in exponential or logarithmic functions. We show that it is an analytical error to put a dimensional quantity x into exponential functions (a^x) and logarithmic functions (log_a x). Secondly, we investigate the conditions on data sets under which a particular logarithmic specification is superior to the usual regression specification. This analysis shows that the superiority of the logarithmic specification in terms of the least-squares norm depends heavily on the available data set. The last section deals with economists' "curve fitting fetishism". We propose that a distinction be made between curve fitting over past observations and the development of a theoretical or empirical law capable of maintaining its fitting power for any future observations. Finally, we conclude this paper with several epistemological issues in relation to dimensions and curve fitting practice in economics.
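As a quick illustration of the dimensional point made above (not taken from the paper, all numbers invented), the sketch below shows that the logarithm of a dimensional quantity changes with the unit chosen, whereas the logarithm of a dimensionless ratio does not.

```python
# Minimal sketch: log(x) is ill-defined when x carries a physical dimension,
# because its numerical value depends on the unit of measurement chosen.
import math

length_m = 1500.0              # the same physical length expressed in metres ...
length_km = length_m / 1000.0  # ... and in kilometres

print(math.log(length_m))   # ~7.313
print(math.log(length_km))  # ~0.405 -- a different number for the same quantity

# Only a dimensionless ratio gives a unit-invariant logarithm:
ref_m, ref_km = 1000.0, 1.0     # the same reference length in both unit systems
print(math.log(length_m / ref_m))    # ~0.405
print(math.log(length_km / ref_km))  # ~0.405, identical, as it must be
```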
Abstract:
This paper examines the proper use of dimensions and curve fitting practices, elaborating on Georgescu-Roegen's economic methodology in relation to the three main concerns of his epistemological orientation. Section 2 introduces two critical issues concerning dimensions and curve fitting practices in economics in view of Georgescu-Roegen's economic methodology. Section 3 deals with the logarithmic function (ln z) and shows that z must be a dimensionless pure number, otherwise it is nonsensical. Several unfortunate examples of this analytical error are presented, including macroeconomic data analysis conducted by a representative figure in this field. Section 4 deals with the standard Cobb-Douglas function. It is shown that no operational meaning can be obtained for capital or labor within the Cobb-Douglas function. Section 4 also deals with economists' "curve fitting fetishism". Section 5 concludes this paper with several epistemological issues in relation to dimensions and curve fitting practices in economics.
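A minimal sketch, under assumptions of our own (illustrative parameter values, not the paper's), of one facet of the Cobb-Douglas dimensional problem: with a non-integer exponent, the scale parameter A silently absorbs the unit in which capital is measured, so its numerical value has no unit-independent meaning.

```python
# Hedged sketch (not the paper's own derivation): in Y = A * K**alpha * L**beta,
# rescaling the unit of K forces a compensating rescaling of A by 1000**alpha.
alpha, beta = 0.3, 0.7
A = 2.0
K_dollars, L = 5.0e6, 120.0          # capital in dollars, labour in workers (invented)
Y = A * K_dollars**alpha * L**beta

# Express the same capital stock in thousands of dollars:
K_thousands = K_dollars / 1000.0
A_rescaled = A * 1000.0**alpha       # needed to reproduce the same physical output
print(Y, A_rescaled * K_thousands**alpha * L**beta)  # identical outputs, different "A"
```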
Abstract:
Multipliers are routinely used for impact evaluation of private projects and public policies at the national and subnational levels. Oosterhaven and Stelder (2002) correctly pointed out the misuse of standard 'gross' multipliers and proposed the concept of the 'net' multiplier as a solution to this bad practice. We prove that their proposal is not well founded, by showing that the supporting theorems are faulty both in their statement and in their proof. The proofs are flawed due to an analytical error, and the theorems themselves cannot be salvaged, as generic, non-curiosum counterexamples demonstrate. We also provide a general analytical framework for multipliers and use it to show that standard 'gross' multipliers are all that is needed within the interindustry model, since they follow the causal logic of the economic model, are well defined and independent of exogenous shocks, and are interpretable as predictors for change.
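For readers unfamiliar with the terminology, the following sketch computes standard 'gross' output multipliers in a textbook demand-driven input-output model; the coefficient matrix is invented and the code is not the authors' analytical framework.

```python
# Minimal sketch of 'gross' output multipliers: x = (I - A)^(-1) f.
import numpy as np

A = np.array([[0.2, 0.3],      # technical coefficients: input i per unit of output j
              [0.1, 0.4]])
L = np.linalg.inv(np.eye(2) - A)        # Leontief inverse
gross_multipliers = L.sum(axis=0)       # column sums: total output per unit of final demand
print(gross_multipliers)

# The multipliers depend on A alone, independent of any particular exogenous shock f:
f = np.array([10.0, 5.0])
print(L @ f)                            # predicted total output for this shock
```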
Abstract:
Triple quadrupole mass spectrometers coupled with high performance liquid chromatography are workhorses in quantitative bioanalysis, providing substantial benefits including reproducibility, sensitivity and selectivity for trace analysis. Selected Reaction Monitoring allows targeted assay development, but the data sets generated contain very limited information. Data mining and analysis of non-targeted high-resolution mass spectrometry profiles of biological samples offer the opportunity to perform more exhaustive assessments, including quantitative and qualitative analysis. The objectives of this study were to test method precision and accuracy, statistically compare bupivacaine drug concentrations in real study samples, and verify whether high-resolution, accurate-mass data collected in scan mode can permit retrospective data analysis, more specifically, the extraction of metabolite-related information. The precision and accuracy data obtained with the two instruments were equivalent: overall, accuracy ranged from 106.2 to 113.2% and precision from 1.0 to 3.7%. Statistical comparison of the two methods by linear regression revealed a coefficient of determination (R2) of 0.9996 and a slope of 1.02, demonstrating a very strong correlation between them. Individual sample comparisons showed differences from -4.5% to 1.6%, well within the accepted analytical error. Moreover, post-acquisition extracted ion chromatograms at m/z 233.1648 ± 5 ppm (M-56) and m/z 305.2224 ± 5 ppm (M+16) revealed the presence of desbutyl-bupivacaine and three distinct hydroxylated bupivacaine metabolites. Post-acquisition analysis allowed us to produce semiquantitative concentration-time profiles for the bupivacaine metabolites.
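A hedged sketch of the kind of method comparison described above, using made-up paired concentrations: regression slope, coefficient of determination, and per-sample percent differences.

```python
# Illustrative cross-platform comparison; the concentrations are hypothetical.
import numpy as np

qqq = np.array([12.1, 54.3, 101.2, 250.7, 498.0])    # triple-quadrupole results (ng/mL, invented)
hrms = np.array([12.0, 55.1, 100.1, 252.3, 495.5])   # high-resolution MS results (ng/mL, invented)

slope, intercept = np.polyfit(qqq, hrms, 1)
r2 = np.corrcoef(qqq, hrms)[0, 1] ** 2
pct_diff = 100.0 * (hrms - qqq) / qqq

print(f"slope={slope:.3f}, R^2={r2:.4f}")
print("per-sample % difference:", np.round(pct_diff, 1))
```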
Abstract:
This paper provides an overview of dust transport pathways and concentrations over the Arabian Sea during 1995. Results indicate that the transport and input of dust to the region are complex, affected by processes that vary in both time and space. The highest dust values were found off the Omani coast and at the entrance to the Gulf of Oman. Dust levels were generally lower in summer than in the other seasons, although still relatively high compared to other oceanic regions. The Findlater jet, rather than acting as a source of dust from Africa, appears to block the direct transport of dust to the open Arabian Sea from desert dust source regions in the Middle East and Iran/Pakistan. Dust transport aloft, above the jet, rather than at the surface, may be more important during summer. In an opposite pattern to dust, sea salt levels were exceedingly high during the summer monsoon, presumably due to the sustained strong surface winds. The high sea salt aerosol load during the summer months may contribute to the strong aerosol reflectance and absorbance signals over the Arabian Sea detected by satellite each year.
Abstract:
Quartz crystals in sandstones at depths of 1200–1400 m below the surface appear to reach a solubility equilibrium with the 4He concentration in the surrounding pore- or groundwater after some time. A rather high 4He concentration of 4.5x10^-3 cm3 STP 4He/cm3 of water measured in a groundwater sample would, for instance, maintain a He pressure of 0.47 atm in a related volume. This value is equal, within analytical error, to the pressure deduced from the measured helium content of the quartz and its internal helium-accessible volume. To determine this volume, quartz crystals of 0.1 to 1 mm were separated from sandstones and exposed to a helium gas pressure of 32 atm at a temperature of 290°C for up to 2 months. The helium was then extracted from the helium-saturated samples by crushing, melting or isothermal heating. A volume on the order of 0.1% of the crystal volume is accessible to helium atoms but not to argon atoms or water molecules. By monitoring the diffusive loss of He from the crystals at 350°C, an effective diffusion constant on the order of 10^-9 cm2/s is estimated. Extrapolation to the temperature of 70°C in the sediments at a depth of 1400 m gives a typical time of about 100,000 years to reach equilibrium between helium in porewaters and the internal He-accessible volume of quartz crystals. In a geologic situation with stagnant pore- or groundwaters in sediments, it therefore appears possible with this new method to deduce a 4He depth profile for porewaters in impermeable rocks based on their mineral record.
Abstract:
We present high-spatial-resolution secondary ion mass spectrometry (SIMS) measurements of Pb and S isotopes in sulphides from early Archaean samples at two localities in southwest Greenland. Secondary pyrite from a 3.71 Ga sample of magnetite-quartz banded iron formation in the Isua Greenstone Belt, which has previously yielded unradiogenic Pb consistent with its ancient origin, contains sulphur with a mass-independently fractionated (MIF) isotope signature (Δ33S = +3.3‰). This reflects the secondary mineralization of remobilized sedimentary S carrying a component modified by photochemical reactions in the early Archaean atmosphere, and it represents one of the most extreme positive excursions so far known from the early Archaean rock record. Sulphides from a quartz-pyroxene rock and an ultramafic boudin from the island of Akilia, in the Godthåbsfjord, have heterogeneous and generally radiogenic Pb isotopic compositions that we interpret to represent partial re-equilibration of Pb between the sulphides and whole rocks during tectonothermal events at 3.6, 2.7 and 1.6 Ga. Both of these samples have Δ33S = 0 (within analytical error) and therefore show no evidence for MIF sulphur. These data are consistent with previous interpretations that the rock cannot be proven to have a sedimentary origin. Our study illustrates that SIMS S-isotope measurements in ancient rocks can be used to elucidate early atmospheric parameters because of the ability to obtain combined S- and Pb-isotope data, but caution must be applied when using such data to infer protolith. When information from geological context, petrography and chronology (i.e. by Pb isotopes) is combined and fully evaluated, Δ33S signatures from sulphides and their geological significance can be interpreted with a higher degree of confidence.
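For context, a small sketch of a commonly used definition of the mass-independent sulphur signal Δ33S; the reference fractionation exponent 0.515 is an assumption here, and the δ33S and δ34S values below are invented for illustration, not the paper's measurements.

```python
# Hedged sketch: Delta33S = d33S - 1000*((1 + d34S/1000)**0.515 - 1); the exponent
# and all delta values are assumptions chosen only to illustrate the calculation.
def delta33S_cap(d33S, d34S, beta=0.515):
    return d33S - 1000.0 * ((1.0 + d34S / 1000.0) ** beta - 1.0)

print(f"{delta33S_cap(5.9, 5.0):+.2f} permil")   # a clearly positive (MIF) signature
print(f"{delta33S_cap(2.06, 4.0):+.2f} permil")  # a value indistinguishable from zero
```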
Abstract:
The principles of High Performance Liquid Chromatography (HPLC) and pharmacokinetics were applied to the use of several clinically important drugs at the East Birmingham Hospital. Amongst these was gentamicin, which was investigated over a two-year period by a multi-disciplinary team. Considerable intra- and inter-patient variation was found that had not previously been reported, and the causes and consequences of this variation were considered. A detailed evaluation of available pharmacokinetic techniques was undertaken, and 1- and 2-compartment models were optimised with regard to sampling procedures, analytical error and model error. The implications for control of therapy are discussed and an improved sampling regime is proposed for routine use. Similar techniques were applied to trimethoprim, assayed by HPLC, in patients with normal renal function, and investigations were also begun into the penetration of the drug into peritoneal dialysate. Novel assay techniques were also developed for a range of drugs including 4-aminopyridine, chloramphenicol, metronidazole and a series of penicillins and cephalosporins. Stability studies on cysteamine, reaction-rate studies on creatinine picrate and structure-activity relationships in HPLC of aminopyridines are also reported.
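A minimal sketch, with hypothetical dose, volume and rate constant, of the 1-compartment model underlying this kind of gentamicin work, including how analytical error on two timed levels propagates into the back-calculated parameters.

```python
# Hedged sketch of a 1-compartment model: C(t) = (D/V) * exp(-k*t), with k and V
# back-calculated from two measured levels carrying a simulated 5% analytical error.
import numpy as np

dose, V = 120.0, 15.0             # mg, litres (illustrative only)
k = 0.35                          # 1/h elimination rate constant (illustrative)
t = np.array([1.0, 6.0])          # post-dose sampling times (h)
C_true = (dose / V) * np.exp(-k * t)

rng = np.random.default_rng(0)
C_meas = C_true * (1 + 0.05 * rng.standard_normal(2))   # add 5% analytical error

# standard two-sample back-calculation
k_hat = (np.log(C_meas[0]) - np.log(C_meas[1])) / (t[1] - t[0])
V_hat = dose / (C_meas[0] * np.exp(k_hat * t[0]))
print(f"k_hat={k_hat:.3f} 1/h, V_hat={V_hat:.1f} L")
```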
Abstract:
Free and "bound" long-chain alkenones (C37:2 and C37:3) in oxidized and unoxidized sections of four organic-matter-rich Pliocene and Miocene Madeira Abyssal Plain turbidites (one from Ocean Drilling Program site 951B and three from site 952A) were analyzed to determine the effect of severe post-depositional oxidation on the value of Uk'37. The profiles of both alkenones across the redox boundary show a preferential degradation of the C37:3 compared to the C37:2 compound. Because of the high initial Uk'37 values and the way Uk'37 is calculated, this degradation hardly influences the Uk'37 profiles. However, for lower Uk'37 values, the measured selective degradation would increase Uk'37 by up to 0.17 units, equivalent to 5°C. For most of the Uk'37 band-width, much smaller degradation already increases Uk'37 beyond the analytical error (0.017 units). Consequently, for interpreting the Uk'37 record in terms of past sea surface temperatures, selective degradation needs serious consideration.
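The following sketch, with invented abundances, shows how preferential loss of C37:3 shifts Uk'37 = C37:2/(C37:2 + C37:3); the linear temperature calibration used for illustration (Uk'37 = 0.033·T + 0.044) is an assumption, not necessarily the calibration applied in the paper.

```python
# Hedged sketch: preferential degradation of C37:3 raises Uk'37 and thus the apparent SST.
def uk37(c372, c373):
    return c372 / (c372 + c373)

def sst(uk):                       # assumed linear calibration, for illustration only
    return (uk - 0.044) / 0.033

c372, c373 = 40.0, 60.0            # hypothetical initial abundances
uk_initial = uk37(c372, c373)

# suppose oxidation removes 60% of C37:3 but only 40% of C37:2
uk_oxidised = uk37(0.6 * c372, 0.4 * c373)

print(f"Uk'37: {uk_initial:.3f} -> {uk_oxidised:.3f}")
print(f"apparent warming: {sst(uk_oxidised) - sst(uk_initial):.1f} deg C")
```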
Abstract:
Background: Genome-wide association studies (GWAS) are becoming the approach of choice to identify genetic determinants of complex phenotypes and common diseases. The astonishing amount of generated data and the use of distinct genotyping platforms with variable genomic coverage are still analytical challenges. Imputation algorithms combine directly genotyped marker information with the haplotypic structure of the population of interest to infer poorly genotyped or missing markers, and are considered a near-zero-cost approach to allow the comparison and combination of data generated in different studies. Several reports have stated that imputed markers have an overall acceptable accuracy, but no published report has performed a pairwise comparison of imputed and empirical association statistics for a complete set of GWAS markers. Results: In this report we identified a total of 73 imputed markers that yielded a nominally statistically significant association at P < 10^-5 for type 2 diabetes mellitus and compared them with results obtained based on empirical allelic frequencies. Interestingly, despite their overall high correlation, association statistics based on imputed frequencies were discordant for 35 of the 73 (47%) associated markers, considerably inflating the type I error rate of imputed markers. We comprehensively tested several quality thresholds, the haplotypic structure underlying imputed markers, and the use of flanking markers as predictors of inaccurate association statistics derived from imputed markers. Conclusions: Our results suggest that association statistics from imputed markers with specific MAF (minor allele frequency) ranges, located in weak linkage disequilibrium blocks, or strongly deviating from local patterns of association are prone to inflated false positive association signals. The present study highlights the potential of imputation procedures and proposes simple procedures for selecting the best imputed markers for follow-up genotyping studies.
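As a hedged illustration of the comparison described above (the allele counts are invented and the test is a generic allelic chi-square, not the study's exact pipeline), the sketch below shows how a modest shift between empirical and imputed allele frequencies can change an association P value.

```python
# Hedged sketch: 2x2 allelic chi-square test on empirical vs imputation-implied counts.
from scipy.stats import chi2_contingency

def allelic_test(case_alt, case_ref, ctrl_alt, ctrl_ref):
    chi2, p, dof, expected = chi2_contingency([[case_alt, case_ref],
                                               [ctrl_alt, ctrl_ref]])
    return p

p_empirical = allelic_test(480, 1520, 380, 1620)   # directly genotyped counts (invented)
p_imputed   = allelic_test(505, 1495, 360, 1640)   # counts implied by imputed frequencies (invented)
print(f"empirical P = {p_empirical:.2e}, imputed P = {p_imputed:.2e}")
```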
Abstract:
Extensive ab initio calculations using a complete active space second-order perturbation theory wavefunction, including scalar and spin-orbit relativistic effects, with a quadruple-zeta quality basis set were used to construct an analytical potential energy surface (PES) of the ground state of the [H, O, I] system. A total of 5344 points were fitted to a three-dimensional function of the internuclear distances, with a global root-mean-square error of 1.26 kcal mol^-1. The resulting PES accurately describes the main features of this system: the HOI and HIO isomers, the transition state between them, and all dissociation asymptotes. After a small adjustment, using a scaling factor on the internal coordinates of HOI, the frequencies calculated in this work agree with the available experimental data to within 10 cm^-1. [doi: 10.1063/1.3615545]
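For clarity on the quoted fit quality, a tiny sketch (with invented energies) of how a global root-mean-square error between ab initio points and a fitted analytical PES is computed.

```python
# Hedged sketch: the global RMS error is just the RMS of the fit residuals.
import numpy as np

E_ab_initio = np.array([0.0, 3.2, 7.9, 15.4, 30.1])     # kcal/mol, hypothetical points
E_fit       = np.array([0.1, 3.0, 8.3, 15.1, 30.6])     # values of the fitted PES at those points

rmse = np.sqrt(np.mean((E_fit - E_ab_initio) ** 2))
print(f"RMSE = {rmse:.2f} kcal/mol")
```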
Abstract:
Surge flow phenomena, e.g. as a consequence of a dam failure or a flash flood, represent free boundary problems. The extending computational domain, together with the discontinuities involved, renders their numerical solution a cumbersome procedure. This contribution proposes an analytical solution to the problem. It is based on the slightly modified zero-inertia (ZI) differential equations for nonprismatic channels and uses exclusively physical parameters. Employing the concept of a momentum-representative cross section of the moving water body, together with a specific relationship for describing the cross-sectional geometry, leads, after considerable mathematical calculus, to the analytical solution. The hydrodynamic analytical model is free of numerical troubles, easy to run, computationally efficient, and fully satisfies the law of volume conservation. In a first test series, the hydrodynamic analytical ZI model compares very favorably with a full hydrodynamic numerical model with respect to published results of surge flow simulations in different types of prismatic channels. In order to extend these considerations to natural rivers, the accuracy of the analytical model in describing an irregular cross section is investigated and tested successfully. A sensitivity and error analysis reveals the important impact of the hydraulic radius on the velocity of the surge, which underlines the importance of an adequate description of the topography. The new approach is finally applied to simulate a surge propagating down the irregularly shaped Isar Valley in the Bavarian Alps after a hypothetical dam failure. The straightforward and fully stable computation of the flood hydrograph along the Isar Valley clearly reflects the impact of the strongly varying topographic characteristics on the flow phenomenon. Apart from treating surge flow phenomena as a whole, the analytical solution also offers a rigorous alternative to both (a) the approximate Whitham solution, for generating initial values, and (b) the rough volume balance techniques used to model the wave tip in numerical surge flow computations.
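For background, the standard zero-inertia (diffusion-wave) system is sketched below; the paper uses a slightly modified form for nonprismatic channels, so this is a generic statement, not the authors' exact equations.

```latex
% Generic zero-inertia approximation (background only, not the paper's modified form).
\begin{align}
  \frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} &= 0
    && \text{(continuity)} \\
  \frac{\partial y}{\partial x} &= S_0 - S_f
    && \text{(momentum with inertia terms neglected)} \\
  Q &= \frac{1}{n}\, A\, R^{2/3}\, S_f^{1/2}
    && \text{(friction closure, e.g. Manning)}
\end{align}
```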
Abstract:
A hierarchical matrix is an efficient data-sparse representation of a matrix, especially useful for large-dimensional problems. It consists of low-rank subblocks, leading to low memory requirements as well as inexpensive computational costs. In this work, we discuss the use of the hierarchical matrix technique in the numerical solution of a large-scale eigenvalue problem arising from a finite rank discretization of an integral operator. The operator is of convolution type, is defined through the first exponential-integral function and is, hence, weakly singular. We develop analytical expressions for the approximate degenerate kernels and deduce error upper bounds for these approximations. Some computational results illustrating the efficiency and robustness of the approach are presented.
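A generic sketch (not the paper's degenerate-kernel construction) of the core hierarchical-matrix idea: an admissible, well-separated block of the weakly singular kernel E1(|x − y|) is replaced by a low-rank factorisation, here via a truncated SVD, to show the rank/error trade-off.

```python
# Hedged sketch: low-rank approximation of a well-separated block of the E1 kernel.
import numpy as np
from scipy.special import exp1          # first exponential-integral function E1

x = np.linspace(0.0, 1.0, 200)          # source points
y = np.linspace(3.0, 4.0, 200)          # well-separated target points
K = exp1(np.abs(x[:, None] - y[None, :]))   # kernel block (no singularity on this block)

U, s, Vt = np.linalg.svd(K, full_matrices=False)
for r in (1, 3, 5):
    K_r = (U[:, :r] * s[:r]) @ Vt[:r, :]              # rank-r approximation
    err = np.linalg.norm(K - K_r, 2) / np.linalg.norm(K, 2)
    print(f"rank {r}: relative spectral error {err:.1e}")
```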
Abstract:
BACKGROUND: Missed, delayed or incorrect diagnoses are considered to be diagnostic errors. The aim of this paper is to describe the methodology of a study to analyse cognitive aspects of the process by which primary care (PC) physicians diagnose dyspnoea. It examines the possible links between the use of heuristics, suboptimal cognitive acts and diagnostic errors, using Reason's taxonomy of human error (slips, lapses, mistakes and violations). The influence of situational factors (professional experience, perceived overwork and fatigue) is also analysed. METHODS: Cohort study of new episodes of dyspnoea in patients receiving care from family physicians and residents at PC centres in Granada (Spain). With an initial expected diagnostic error rate of 20% and a sampling error of 3%, 384 episodes of dyspnoea are calculated to be required. In addition to filling out the electronic medical record of the patients attended, each physician fills out 2 specially designed questionnaires about the diagnostic process performed in each case of dyspnoea. The first questionnaire includes questions on the physician's initial diagnostic impression, the 3 most likely diagnoses (in order of likelihood), and the diagnosis reached after the initial medical history and physical examination. It also includes items on the physicians' perceived overwork and fatigue during patient care. The second questionnaire records the confirmed diagnosis once it is reached. The complete diagnostic process is peer-reviewed to identify and classify the diagnostic errors. The possible use of heuristics of representativeness, availability, and anchoring and adjustment in each diagnostic process is also analysed. Each audit is reviewed with the physician responsible for the diagnostic process. Finally, logistic regression models are used to determine if there are differences in the diagnostic error variables based on the heuristics identified. DISCUSSION: This work sets out a new approach to studying the diagnostic decision-making process in PC, taking advantage of new technologies which allow immediate recording of the decision-making process.
Abstract:
In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. To adjust for the attenuation caused by error in dietary intake, regression calibration is commonly used. To apply regression calibration, unbiased reference measurements are required. Short-term reference measurements for foods that are not consumed daily contain excess zeroes that pose challenges in the calibration model. We adapted the two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We showed how to handle excess zero reference measurements with a two-step modeling approach, how to explore heteroscedasticity in the consumed amount with a variance-mean graph, how to explore nonlinearity with generalized additive modeling (GAM) and the empirical logit approach, and how to select covariates for the calibration model. The performance of the two-part calibration model was compared with that of its one-part counterpart. We used vegetable intake and mortality data from the European Prospective Investigation into Cancer and Nutrition (EPIC) study, in which reference measurements were taken with 24-hour recalls. For each of the three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in about a threefold increase in the strength of the association with all-cause mortality, as measured by the log hazard ratio. We further found that the standard way of including covariates in the calibration model can lead to overfitting the two-part calibration model. Moreover, the extent of adjustment for error is influenced by the number and forms of the covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to the response distribution, nonlinearity, and covariate inclusion when specifying the calibration model.
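A simplified sketch of the two-part idea with synthetic data and a single covariate (all names and numbers are ours, not the EPIC variables): a logistic part for the probability of any consumption, a linear part for the amount among consumers, and their product as the calibrated intake.

```python
# Hedged sketch of a two-part calibration: P(nonzero recall) * E[amount | nonzero].
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(1)
n = 2000
ffq = rng.gamma(2.0, 50.0, n)                      # self-reported intake (g/day), synthetic

# simulate a 24-hour-recall reference with excess zeroes
p_consume = 1 / (1 + np.exp(-(0.01 * ffq - 1.0)))  # probability of consuming on the recall day
consumed = rng.random(n) < p_consume
recall = np.where(consumed, 0.8 * ffq + rng.normal(0, 20, n), 0.0)
recall = np.clip(recall, 0.0, None)

X = ffq.reshape(-1, 1)

# part 1: probability of a non-zero recall
part1 = LogisticRegression().fit(X, (recall > 0).astype(int))
# part 2: amount given a non-zero recall
part2 = LinearRegression().fit(X[recall > 0], recall[recall > 0])

expected_intake = part1.predict_proba(X)[:, 1] * part2.predict(X)
print("calibrated intake for first 5 subjects:", np.round(expected_intake[:5], 1))
```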