130 results for Medication errors
in University of Queensland eSpace - Australia
Abstract:
Potential errors in the application of mixture theory to the analysis of multiple-frequency bioelectrical impedance data for the determination of body fluid volumes are assessed. Potential sources of error include conductive length, tissue fluid resistivity, body density, weight, and technical errors of measurement. Inclusion of inaccurate estimates of body density and weight introduces errors of typically < +/-3%, but incorrect assumptions regarding conductive length or fluid resistivities may each incur errors of up to 20%.
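The sensitivity to conductive length can be illustrated with the simple cylindrical-conductor approximation V = rho * L^2 / R that underlies most impedance-based volume estimates; a relative error in length is roughly doubled in the volume estimate, consistent with the ~20% figure quoted above. A minimal sketch, assuming the cylindrical model rather than the full mixture-theory expression and using purely illustrative values for resistivity and resistance:

```python
import numpy as np

# Cylindrical-conductor approximation: fluid volume V = rho * L**2 / R
# rho: fluid resistivity (ohm.cm), L: conductive length (cm), R: measured resistance (ohm).
# All numerical values below are illustrative, not taken from the study.
def fluid_volume(rho, L, R):
    return rho * L**2 / R

rho, R = 40.0, 500.0          # assumed resistivity and measured resistance
L_true = 170.0                # true conductive length (cm)
L_assumed = L_true * 1.10     # 10% overestimate of conductive length

V_true = fluid_volume(rho, L_true, R)
V_est = fluid_volume(rho, L_assumed, R)

# A 10% length error is roughly doubled in the volume estimate (V ~ L**2).
print(f"relative volume error: {(V_est - V_true) / V_true:.1%}")  # ~ +21%
```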
Abstract:
The truncation errors associated with finite difference solutions of the advection-dispersion equation with first-order reaction are formulated from a Taylor analysis. The error expressions are based on a general form of the corresponding difference equation, and a temporally and spatially weighted parametric approach is used for differentiating among the various finite difference schemes. The numerical truncation errors are defined using Peclet and Courant numbers and a new Sink/Source dimensionless number. It is shown that all of the finite difference schemes suffer from truncation errors. In particular, it is shown that the Crank-Nicolson approximation scheme does not have second-order accuracy for this case. The effects of these truncation errors on the solution of an advection-dispersion equation with a first-order reaction term are demonstrated by comparison with an analytical solution. The results show that these errors are not negligible and that correcting the finite difference scheme for them results in a more accurate solution. (C) 1999 Elsevier Science B.V. All rights reserved.
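As a concrete illustration of the dimensionless numbers involved, the sketch below steps a one-dimensional advection-dispersion equation with first-order decay, dc/dt + v dc/dx = D d2c/dx2 - k c, with a simple explicit upwind/central scheme and reports the grid Peclet and Courant numbers. The scheme, grid, boundary handling, and parameter values are illustrative choices, not the weighted formulation analysed in the paper, and the printed reaction number is only one possible nondimensionalisation of the decay term:

```python
import numpy as np

# 1-D advection-dispersion-reaction: dc/dt + v dc/dx = D d2c/dx2 - k c
# Explicit scheme: upwind for advection, central for dispersion (illustrative only).
v, D, k = 1.0, 0.05, 0.2                   # velocity, dispersion coefficient, decay rate
L, nx = 10.0, 200
dx = L / nx
dt = 0.4 * min(dx / v, dx**2 / (2 * D))    # simple stability-motivated time step

Pe = v * dx / D                            # grid Peclet number
Cr = v * dt / dx                           # Courant number
print(f"Peclet={Pe:.3f}, Courant={Cr:.3f}, reaction number k*dt={k*dt:.3f}")

c = np.zeros(nx)
c[:10] = 1.0                               # initial slug of solute
for _ in range(500):
    adv = -v * (c - np.roll(c, 1)) / dx                      # upwind advection
    disp = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    c = c + dt * (adv + disp - k * c)
    c[0] = 1.0                             # fixed inlet concentration
    c[-1] = c[-2]                          # crude outflow boundary, for brevity
```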
Abstract:
In the adult olfactory nerve pathway of rodents, each primary olfactory axon forms a terminal arbor in a single glomerulus in the olfactory bulb. During development, axons are believed to project directly to and terminate precisely within a glomerulus without any exuberant growth or mistargeting. To gain insight into mechanisms underlying this process, the trajectories of primary olfactory axons during glomerular formation were studied in the neonatal period. Histochemical staining of mouse olfactory bulb sections with the lectin Dolichos biflorus agglutinin revealed that many olfactory axons overshoot the glomerular layer and course into the deeper laminae of the bulb in the early postnatal period. Single primary olfactory axons were anterogradely labelled either with the lipophilic carbocyanine dye, 1,1'-dioctadecyl-3,3,3',3'-tetramethylindocarbocyanine perchlorate (DiI), or with horseradish peroxidase (HRP) by localized microinjections into the nerve fiber layer of the rat olfactory bulb. Five distinct trajectories of primary olfactory axons were observed in DiI-labelled preparations at postnatal day 1.5 (P1.5). Axons either coursed directly to and terminated specifically within a glomerulus, branched before terminating in a glomerulus, bypassed glomeruli and entered the underlying external plexiform layer, passed through the glomerular layer with side branches into glomeruli, or branched into more than one glomerulus. HRP-labelled axon arbors from eight postnatal ages were reconstructed by camera lucida and were used to determine arbor length, arbor area, and arbor branch number. Whereas primary olfactory axons display errors in laminar targeting in the mammalian olfactory bulb, axon arbors typically achieve their adult morphology without exuberant growth. Many olfactory axons appear not to recognize appropriate cues to terminate within the glomerular layer during the early postnatal period. However, primary olfactory axons exhibit precise targeting in the glomerular layer after P5.5, indicating temporal differences in either the presence of guidance cues or the ability of axons to respond to these cues. (C) 1999 Wiley-Liss, Inc.
Abstract:
We show that quantum feedback control can be used as a quantum-error-correction process for errors induced by a weak continuous measurement. In particular, when the error model is restricted to one, perfectly measured, error channel per physical qubit, quantum feedback can act to perfectly protect a stabilizer codespace. Using the stabilizer formalism we derive an explicit scheme, involving feedback and an additional constant Hamiltonian, to protect an (n-1)-qubit logical state encoded in n physical qubits. This works for both Poisson (jump) and white-noise (diffusion) measurement processes. Universal quantum computation is also possible in this scheme. As an example, we show that detected-spontaneous-emission error correction with a driving Hamiltonian can greatly reduce the amount of redundancy required to protect a state from that which has been previously postulated [e.g., Alber et al., Phys. Rev. Lett. 86, 4402 (2001)].
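The role of the stabilizer formalism in turning measurement outcomes into corrective feedback can be pictured with its discrete analogue: measure the generators of the three-qubit bit-flip code and apply the Pauli correction indicated by the syndrome. The sketch below shows only that lookup-table step on a state-vector simulation; the paper's actual scheme uses weak continuous measurement and a constant driving Hamiltonian, which are not modelled here:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def kron(*ops):
    out = np.array([[1.]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Three-qubit bit-flip code: logical states |000> and |111>.
ket0, ket1 = np.array([1., 0.]), np.array([0., 1.])
logical0 = np.kron(np.kron(ket0, ket0), ket0)
logical1 = np.kron(np.kron(ket1, ket1), ket1)
psi = 0.6 * logical0 + 0.8 * logical1        # encoded state (real amplitudes)

# Stabilizer generators of the codespace.
S1, S2 = kron(Z, Z, I2), kron(I2, Z, Z)

# A single bit-flip ("jump") error on the middle qubit.
psi_err = kron(I2, X, I2) @ psi

# Syndrome: sign of the stabilizer expectation values for the corrupted state.
syndrome = (psi_err @ S1 @ psi_err < 0, psi_err @ S2 @ psi_err < 0)

# Feedback: apply the Pauli correction indicated by the syndrome.
corrections = {
    (False, False): kron(I2, I2, I2),
    (True, False):  kron(X, I2, I2),
    (True, True):   kron(I2, X, I2),
    (False, True):  kron(I2, I2, X),
}
psi_fixed = corrections[syndrome] @ psi_err
print(np.allclose(psi_fixed, psi))           # True: codespace state recovered
```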
Abstract:
Medication errors are a leading cause of unintended harm to patients in Australia and internationally. Research in this area has paid relatively little attention to the interactions between organisational factors and violations of procedures in producing errors, although violations have been found to increase the likelihood of these errors. This study investigated the role of organisational factors in contributing to violations by nurses when administering medications. Data were collected using a self-report questionnaire completed by 506 nurses working in either rural or remote areas in Queensland, Australia. This instrument was used to develop a path model wherein organisational variables predicted 21% of the variance in self-reported violations. Expectations of medical officers mediated the relationship between working conditions of nursing staff and violation behaviour.
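The mediation result can be pictured with a two-equation path model: working conditions predict the mediator (expectations of medical officers), and the mediator predicts violations, with the indirect effect given by the product of the two path coefficients. The sketch below fits such a model by ordinary least squares on synthetic data; the variable names and effect sizes are hypothetical, chosen only to mirror the structure described above, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 506                                        # sample size mirroring the survey

# Synthetic (hypothetical) data with a mediated effect:
# working_conditions -> expectations -> violations
working_conditions = rng.normal(size=n)
expectations = 0.5 * working_conditions + rng.normal(size=n)
violations = 0.4 * expectations + 0.05 * working_conditions + rng.normal(size=n)

def ols(y, *predictors):
    """Return OLS coefficients: intercept first, then one per predictor."""
    Xmat = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(Xmat, y, rcond=None)
    return beta

# Path a: conditions -> mediator; paths c' and b: conditions and mediator -> violations.
a = ols(expectations, working_conditions)[1]
_, c_prime, b = ols(violations, working_conditions, expectations)

indirect = a * b                               # mediated (indirect) effect
total = ols(violations, working_conditions)[1]
print(f"direct={c_prime:.3f}, indirect={indirect:.3f}, total={total:.3f}")
```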
Abstract:
Background: There are few population-based data on long-term management of patients after coronary artery bypass graft (CABG), despite the high risk for future major vascular events among this group. We assessed the prevalence and correlates of pharmacotherapy for prevention of new cardiac events in a large population-based series. Methods: A postal survey was conducted of 2500 randomly selected survivors from a state population of patients 6 to 20 years after first CABG. Results: Response was 82% (n = 2061). Use of antiplatelet agents (80%) and statins (64%) declined as age increased. Other independent predictors of antiplatelet use included statin use (odds ratio [OR] 1.6, 95% CI 1.26-2.05) and recurrent angina (OR 1.6, CI 1.17-2.06). Current smokers were less likely to use aspirin (OR 0.59, CI 0.4-0.89). Statin use was associated with reported high cholesterol (OR 24.4, CI 8.4-32.4), management by a cardiologist (OR 2.3, CI 1.8-3.0), and the use of calcium channel blockers. Patients reporting hypertension or heart failure, in addition to high cholesterol, were less likely to use statins. Angiotensin-converting enzyme inhibitors were the most commonly prescribed agents for management of hypertension (59%) and were more frequently used among patients with diabetes and those with symptoms of heart failure. Overall, 42% of patients were on angiotensin-converting enzyme inhibitors and 36% on beta-blockers. Conclusions: Gaps exist in the use of recommended medications after CABG. Lower antiplatelet and statin use was associated with older age, freedom from angina, comorbid heart failure or hypertension, and not regularly visiting a cardiologist. Patients who continue to smoke might be less likely to adhere to prescribed medications.
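For readers unfamiliar with the statistics reported above, an odds ratio and its 95% confidence interval can be computed from a 2x2 table of exposure by outcome (or, equivalently, from a logistic regression coefficient). A minimal sketch using a hypothetical 2x2 table; the counts are invented for illustration and are not drawn from the survey:

```python
import numpy as np

# Hypothetical 2x2 table: rows = exposure (e.g. current smoker yes/no),
# columns = outcome (e.g. aspirin use yes/no). Counts are illustrative only.
a, b = 120, 80    # exposed:   outcome yes / outcome no
c, d = 900, 300   # unexposed: outcome yes / outcome no

odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)   # Woolf standard error of log(OR)
lo, hi = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
print(f"OR={odds_ratio:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```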
Abstract:
In this paper we consider the problem of providing standard errors of the component means in normal mixture models fitted to univariate or multivariate data by maximum likelihood via the EM algorithm. Two methods of estimation of the standard errors are considered: the standard information-based method and the computationally intensive bootstrap method. They are compared empirically by their application to three real data sets and by a small-scale Monte Carlo experiment.
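The bootstrap approach can be illustrated by refitting a normal mixture to resampled data and taking the standard deviation of each component mean across refits. The sketch below uses scikit-learn's GaussianMixture on synthetic univariate data and crudely handles label switching by sorting the fitted means; the data and settings are illustrative, and the information-based alternative is not shown:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic two-component univariate normal mixture (illustrative data).
x = np.concatenate([rng.normal(0.0, 1.0, 300),
                    rng.normal(4.0, 1.0, 200)]).reshape(-1, 1)

def fitted_means(data):
    gm = GaussianMixture(n_components=2, n_init=3, random_state=0).fit(data)
    return np.sort(gm.means_.ravel())          # sort to mitigate label switching

point_estimate = fitted_means(x)

# Bootstrap: refit the mixture to resampled data and collect the component means.
B = 200
boot_means = np.empty((B, 2))
for i in range(B):
    sample = x[rng.integers(0, len(x), size=len(x))]
    boot_means[i] = fitted_means(sample)

se = boot_means.std(axis=0, ddof=1)            # bootstrap standard errors
print("component means:", point_estimate, "bootstrap SEs:", se)
```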
Abstract:
Fixed-point roundoff noise in digital implementation of linear systems arises due to overflow, quantization of coefficients and input signals, and arithmetical errors. In uniform white-noise models, the last two types of roundoff errors are regarded as uniformly distributed independent random vectors on cubes of suitable size. For input signal quantization errors, the heuristic model is justified by a quantization theorem, which cannot be directly applied to arithmetical errors due to the complicated input-dependence of errors. The complete uniform white-noise model is shown to be valid in the sense of weak convergence of probability measures as the lattice step tends to zero if the matrices of realization of the system in the state space satisfy certain nonresonance conditions and the finite-dimensional distributions of the input signal are absolutely continuous.
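The uniform white-noise model for input quantization can be checked empirically: quantize a signal with an absolutely continuous distribution on a lattice of step q and compare the empirical error variance with the q^2/12 predicted for a uniform distribution on [-q/2, q/2]. A minimal sketch, using a Gaussian input as one example of an absolutely continuous distribution:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)            # input with an absolutely continuous distribution

for q in (0.5, 0.1, 0.02):              # lattice step tending to zero
    err = np.round(x / q) * q - x       # rounding quantization error
    print(f"q={q:<5} empirical var={err.var():.6f}  q^2/12={q*q/12:.6f}")
```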