848 results for "Errors and omission"


Relevance: 100.00%

Abstract:

Attention deficit hyperactivity disorder (ADHD) and autism are two neurodevelopmental disorders associated with prominent executive dysfunction, which may be underpinned by disruption within fronto-striatal and fronto-parietal circuits. We probed executive function in these disorders using a sustained attention task with a validated brain-behaviour basis. Twenty-three children with ADHD, 21 children with high-functioning autism (HFA) and 18 control children were tested on the Sustained Attention to Response Task (SART). In a fixed-sequence version of the task, children were required to withhold their response to a predictably occurring no-go target (3) in a 1-9 digit sequence; in the random version the sequence was unpredictable. The ADHD group showed clear deficits in response inhibition and sustained attention, making more errors of commission and omission on both SART versions. The HFA group showed no sustained attention deficits, making a normal number of omission errors on both SART versions. The HFA group did, however, show a dissociation in response inhibition performance, as indexed by commission errors: on the fixed SART they made a normal number of errors, but when the stimuli were randomised they made as many commission errors as the ADHD group. Greater slow-frequency variability in response time and a slowing in mean response time in the ADHD group suggested impaired arousal processes. The ADHD group also showed greater fast-frequency variability in response time, indicative of impaired top-down control, relative to the HFA and control groups. These data imply involvement of fronto-parietal attentional networks and sub-cortical arousal systems in the pathology of ADHD, and prefrontal cortex dysfunction in children with HFA. (c) 2007 Elsevier Ltd. All rights reserved.
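As context for the error measures used above, omission errors (failing to respond to a go digit) and commission errors (responding to the no-go digit 3) can be scored directly from trial-level SART logs. The sketch below is a minimal illustration on hypothetical trial records (digit, responded, reaction time); it is not the authors' analysis code, and the record format is an assumption.

```python
# Minimal SART scoring sketch on hypothetical trial records (digit, responded, RT in ms).
from statistics import mean, stdev

trials = [
    (1, True, 412.0), (2, True, 398.5), (3, True, 405.0),   # responding to the no-go digit 3 = commission error
    (4, True, 388.0), (5, False, None),                      # no response to a go digit = omission error
    (6, True, 420.0), (7, True, 415.5), (8, True, 402.0), (9, True, 430.0),
]
NO_GO_DIGIT = 3

commissions = sum(1 for digit, responded, _ in trials if digit == NO_GO_DIGIT and responded)
omissions = sum(1 for digit, responded, _ in trials if digit != NO_GO_DIGIT and not responded)

go_rts = [rt for digit, responded, rt in trials if digit != NO_GO_DIGIT and responded]
print(f"commission errors: {commissions}, omission errors: {omissions}")
print(f"mean go RT: {mean(go_rts):.1f} ms, RT variability (SD): {stdev(go_rts):.1f} ms")
```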

Relevance: 100.00%

Abstract:

Past measurements of the radiocarbon interhemispheric offset have been restricted to relatively young samples because of a lack of older, dendrochronologically secure Southern Hemisphere tree-ring chronologies. The Southern Hemisphere calibration data set SHCal04 earlier than AD 950 uses a variable interhemispheric offset derived from measured 2nd millennium AD Southern Hemisphere/Northern Hemisphere sample pairs, with the assumption of stable Holocene ocean/atmosphere interactions. This study extends the range of measured interhemispheric offset values with 20 decadal New Zealand kauri and Irish oak sample pairs from 3 selected time intervals in the 1st millennium AD, and is part of a larger program to obtain high-precision Southern Hemisphere 14C data continuously back to 200 BC. We found an average interhemispheric offset of 35 ± 6 yr, which, although consistent with previously published 2nd millennium AD measurements, is lower than the offset of 55–58 yr used in SHCal04. We concur with McCormac et al. (2008) that the IntCal04 measurement for AD 775 may indeed be slightly too old, but also suggest that the McCormac results appear excessively young for the interval AD 755–785. In addition, we raise the issue of laboratory bias and calibration errors, and encourage all laboratories to check their consistency with appropriate calibration curves and to invest more effort in improving the accuracy of those curves.
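For readers unfamiliar with how a figure such as 35 ± 6 yr is obtained, the interhemispheric offset is essentially an inverse-variance weighted mean of the Southern-minus-Northern Hemisphere 14C age differences of contemporaneous sample pairs. The sketch below illustrates that standard calculation on placeholder values; the numbers are not the kauri/oak measurements.

```python
# Inverse-variance weighted mean of SH-minus-NH 14C age offsets (placeholder values).
import math

# (SH age - NH age) in 14C yr, with the 1-sigma uncertainty of each difference
offsets = [(30.0, 18.0), (42.0, 20.0), (28.0, 17.0), (39.0, 22.0)]

weights = [1.0 / sigma**2 for _, sigma in offsets]
wmean = sum(w * d for (d, _), w in zip(offsets, weights)) / sum(weights)
wsigma = math.sqrt(1.0 / sum(weights))
print(f"weighted mean offset: {wmean:.0f} +/- {wsigma:.0f} 14C yr")
```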

Relevance: 100.00%

Abstract:

This paper presents a new method for online determination of the Thévenin equivalent parameters of a power system at a given node using the local PMU measurements at that node. The method takes into account measurement errors and changes on the system side. An analysis of the effects of system-side changes is carried out on a simple two-bus system to gain insight into their effect on the estimated Thévenin equivalent parameters. The proposed method uses voltage and current magnitudes as well as active and reactive powers, thus avoiding the effect of phase-angle drift of the PMU and the need to synchronize measurements taken at different instants to the same reference. Applying the method to the IEEE 30-bus test system has shown its ability to correctly determine the Thévenin equivalent even in the presence of measurement errors and/or system-side changes.
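To make the idea concrete, a Thévenin equivalent (EMF E behind impedance R + jX) seen from a load bus can be fitted from several snapshots of voltage magnitude and active/reactive power using the steady-state relation E^2 = (V + (R*P + X*Q)/V)^2 + ((X*P - R*Q)/V)^2. The sketch below is a generic least-squares fit on hypothetical snapshots, not the estimator proposed in the paper.

```python
# Generic least-squares fit of a Thevenin equivalent (E, R, X) seen from a load bus,
# using only voltage magnitude and active/reactive power snapshots (hypothetical data).
# Illustrative sketch only; not the method proposed in the paper.
import numpy as np
from scipy.optimize import least_squares

V = np.array([1.010, 1.004, 0.998, 0.992, 0.985])    # bus voltage magnitude (pu)
P = np.array([0.50, 0.60, 0.70, 0.80, 0.90])          # load active power (pu)
Q = np.array([0.20, 0.24, 0.28, 0.32, 0.36])          # load reactive power (pu)

def residuals(theta):
    E, R, X = theta
    # Steady-state relation: E^2 = (V + (R*P + X*Q)/V)^2 + ((X*P - R*Q)/V)^2
    real = V + (R * P + X * Q) / V
    imag = (X * P - R * Q) / V
    return np.hypot(real, imag) - E

fit = least_squares(residuals, x0=[1.0, 0.01, 0.10])
E_hat, R_hat, X_hat = fit.x
print(f"E = {E_hat:.3f} pu, R = {R_hat:.4f} pu, X = {X_hat:.4f} pu")
```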

Relevance: 100.00%

Abstract:

Measurement is the act, or the result, of a quantitative comparison between a given quantity and a quantity of the same kind chosen as a unit. It is generally agreed that all measurements contain errors. In a measuring system where a human being takes the measurement with a measuring instrument following a preset process, the measurement error could be due to the instrument, the process or the human being involved. The first part of the study is devoted to understanding human errors in measurement. For that, selected person-related and work-related factors that could affect measurement errors were identified. Though these factors are well known, the exact extent of the error and the effect of the different factors on human errors in measurement are less well reported. Human errors in measurement were characterized by conducting an experimental study with different subjects, in which the factors were changed one at a time and the measurements made by the subjects were recorded. In the pre-experiment survey, it was observed that respondents could not give correct answers to questions about the extent of human-related measurement errors. This confirmed the concerns expressed regarding the lack of knowledge about the extent of human-related measurement errors among professionals associated with quality. In the post-experiment phase of the survey, however, answers regarding the extent of human-related measurement errors improved significantly, since the answer choices were provided based on the experimental study. It is hoped that this work will help users of measurement in practice to better understand and manage the phenomenon of human-related errors in measurement.
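One simple way to separate instrument/process repeatability from the human contribution in such an experiment is to compare within-operator and between-operator variation in repeated measurements of the same artefact, in the spirit of a gauge repeatability-and-reproducibility study. The sketch below uses made-up readings and is only a minimal illustration, not the study's actual design or factors.

```python
# Minimal repeatability/reproducibility illustration: several operators measure the same
# reference artefact repeatedly (hypothetical readings, in mm).
from statistics import mean, pstdev

readings = {
    "operator_A": [10.02, 10.01, 10.03, 10.02],
    "operator_B": [10.05, 10.06, 10.04, 10.05],
    "operator_C": [10.00, 9.99, 10.01, 10.00],
}

within = mean(pstdev(vals) for vals in readings.values())       # repeatability (instrument/process)
between = pstdev([mean(vals) for vals in readings.values()])    # reproducibility (operator-to-operator)
print(f"average within-operator SD: {within:.4f} mm")
print(f"between-operator SD of operator means: {between:.4f} mm")
```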

Relevance: 100.00%

Abstract:

In principle the global mean geostrophic surface circulation of the ocean can be diagnosed by subtracting a geoid from a mean sea surface (MSS). However, because the resulting mean dynamic topography (MDT) is approximately two orders of magnitude smaller than either of the constituent surfaces, and because the geoid is most naturally expressed as a spectral model while the MSS is a gridded product, in practice complications arise. Two algorithms for combining MSS and satellite-derived geoid data to determine the ocean’s MDT are considered in this paper: a pointwise approach, whereby the gridded geoid height field is subtracted from the gridded MSS; and a spectral approach, whereby the spherical harmonic coefficients of the geoid are subtracted from an equivalent set of coefficients representing the MSS, from which the gridded MDT is then obtained. The essential difference is that with the latter approach the MSS is truncated, a form of filtering, just as the geoid is. This ensures that errors of omission resulting from the truncation of the geoid, which are small in comparison to the geoid but large in comparison to the MDT, are matched, and therefore negated, by similar errors of omission in the MSS. The MDTs produced by both methods require additional filtering. However, the spectral MDT requires less filtering to remove noise, and therefore retains more oceanographic information than its pointwise equivalent. The spectral method also results in a more realistic MDT at coastlines.

1. Introduction
An important challenge in oceanography is the accurate determination of the ocean’s time-mean dynamic topography (MDT). If this can be achieved with sufficient accuracy for combination with the time-dependent component of the dynamic topography, obtainable from altimetric data, then the resulting sum (i.e., the absolute dynamic topography) will give an accurate picture of surface geostrophic currents and ocean transports.
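The essential step of the spectral approach, truncating the MSS to the same spherical harmonic degree as the geoid before differencing so that the errors of omission cancel, can be sketched on coefficient arrays. The arrays, degrees and the omitted synthesis step below are placeholders, not the authors' processing chain.

```python
# Schematic spectral MDT computation: truncate the MSS expansion at the geoid's maximum
# degree, then subtract coefficient by coefficient, so omission errors largely cancel.
# Coefficient arrays are random placeholders; the grid-synthesis step is omitted.
import numpy as np

L_GEOID = 60    # maximum degree of the satellite-only geoid model (placeholder)
L_MSS = 360     # the gridded MSS supports a much higher-degree expansion (placeholder)

rng = np.random.default_rng(0)
mss_coeffs = rng.normal(size=(L_MSS + 1, L_MSS + 1))        # one coefficient per (l, m), schematic
geoid_coeffs = rng.normal(size=(L_GEOID + 1, L_GEOID + 1))

# Spectral approach: both surfaces truncated at the common degree before differencing.
mdt_coeffs = mss_coeffs[: L_GEOID + 1, : L_GEOID + 1] - geoid_coeffs

# A pointwise approach would subtract the truncated geoid grid from the full-resolution
# MSS grid, leaving unmatched omission errors in the small MDT signal.
print(mdt_coeffs.shape)   # (61, 61): the MDT is expressed only up to the common degree
```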

Relevance: 100.00%

Abstract:

ATSR-2 active fire data from 1996 to 2000, TRMM VIRS fire counts from 1998 to 2000 and burn scars derived from SPOT VEGETATION (the Global Burnt Area 2000 product) were mapped for Peru and Bolivia to analyse the spatial distribution of burning and its intra- and inter-annual variability. The fire season in the region mainly occurs between May and October, though some variation was found between the six broad habitat types analysed: desert, grassland, savanna, dry forest, moist forest and yungas (the forested valleys on the eastern slope of the Andes). Increased levels of burning were generally recorded in the ATSR-2 and TRMM VIRS fire data in response to the 1997/1998 El Niño, but in some areas the El Niño effect was masked by the more marked influences of socio-economic change on land use and land cover. There were differences between the three global datasets: ATSR-2 under-recorded fires in ecosystems with low net primary productivities. This was because fires in this region are set during the day and, when fuel loads are low, burn out before the ATSR-2 overpass, which occurs between 02.45 h and 03.30 h. TRMM VIRS was able to detect these fires because its overpasses cover the entire diurnal range on a monthly basis. The GBA2000 product has significant errors of commission (particularly areas of shadow in the well-dissected eastern Andes) and omission (in the agricultural zone around Santa Cruz, Bolivia, and in north-west Peru). Particular attention was paid to biomass burning in high-altitude grasslands, where fire is an important pastoral management technique. Fires and burn scars from Landsat Thematic Mapper (TM) and Enhanced Thematic Mapper (ETM) data for a range of years between 1987 and 2000 were mapped for areas around Parque Nacional Rio Abiseo (Peru) and Parque Nacional Carrasco (Bolivia). Burn scars mapped in the grasslands of these two areas indicate that far more burning had taken place than either the fires or the burn scars derived from the global datasets suggest. Mean scar sizes are smaller and vary less between years in the study area in Peru (6.6-7.1 ha) than in Bolivia (16.9-162.5 ha). Trends in biomass burning in the two highland areas can be explained in terms of the changing socio-economic environments and the impacts of conservation. The mismatch between the spatial scale of biomass burning in the high-altitude grasslands and that of the sensors used to derive global fire products means that an entire component of the fire regime in the region studied is omitted, despite its importance in the farming systems of the Andes.
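The commission and omission errors attributed to GBA2000 above are the standard rates derived from a confusion matrix against an independent reference map (here, the Landsat-derived scars). The sketch below shows the calculation on placeholder pixel counts; the numbers are not from the study.

```python
# Commission/omission error rates of a burned-area product against a reference map
# (placeholder pixel counts, not study data).
tp = 820    # burned in product and in reference
fp = 150    # burned in product only   -> commission
fn = 430    # burned in reference only -> omission

commission_rate = fp / (tp + fp)
omission_rate = fn / (tp + fn)
print(f"commission error: {commission_rate:.1%}")
print(f"omission error: {omission_rate:.1%}")
```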

Relevance: 100.00%

Abstract:

Rationale: In UK hospitals, the preparation of all total parenteral nutrition (TPN) products must be carried out in the pharmacy, as TPNs are categorised as high-risk injectables (NPSA/2007/20). The National Aseptic Error Reporting Scheme has been collecting data on pharmacy compounding errors in the UK since August 2003. This study reports on the types of error associated with the preparation of TPNs, including the stage at which these were identified and the potential and actual patient outcomes. Methods: Reports of compounding errors for the period 1/2004 - 3/2007 were analysed in an Excel spreadsheet. Results: Of a total of 3691 compounding error reports, 674 (18%) related to TPN products; 548 adult vs. 126 paediatric. A significantly higher proportion of adult TPNs (28% vs. 13% paediatric) were associated with labelling errors, and a significantly higher proportion of paediatric TPNs (25% vs. 15% adult) were associated with incorrect transcriptions (chi-square test; p<0.005). Labelling errors were identified roughly equally by pharmacists (42%) and technicians (48%), with technicians detecting them mainly at first check and pharmacists at final check. Transcription errors were identified mainly by technicians (65% vs. 27% pharmacists) at first check. Incorrect drug selection (13%) and calculation errors (9%) were associated with adult and paediatric TPN preparations in the same ratio. One paediatric TPN error detected at first check was considered potentially catastrophic; 31 (5%) errors were considered of major and 38 (6%) of moderate potential consequence. Five errors (2 moderate, 1 minor) were identified during or after administration. Conclusions: While recent UK patient safety initiatives are aimed at improving the safety of injectable medicines in clinical areas, the current study highlights safety problems that exist within pharmacy production units. This could be used in the creation of an error management tool for TPN compounding processes within hospital pharmacies.
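The adult-versus-paediatric comparisons quoted above are simple chi-square tests on 2x2 tables. The sketch below reconstructs approximate counts for labelling errors from the reported percentages (28% of 548 adult, 13% of 126 paediatric); the exact counts are assumptions.

```python
# Chi-square test of labelling-error proportions, adult vs. paediatric TPNs.
# Counts are reconstructed approximately from the reported percentages.
from scipy.stats import chi2_contingency

table = [
    [153, 395],   # adult TPNs: with labelling error, without (approx. 28% of 548)
    [16, 110],    # paediatric TPNs: with labelling error, without (approx. 13% of 126)
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, dof = {dof}, p = {p:.4f}")
```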

Relevance: 100.00%

Abstract:

Background: Medication errors in general practice are an important source of potentially preventable morbidity and mortality. Building on previous descriptive, qualitative and pilot work, we sought to investigate the effectiveness, cost-effectiveness and likely generalisability of a complex pharmacist-led IT-based intervention aiming to improve prescribing safety in general practice. Objectives: We sought to:
• Test the hypothesis that a pharmacist-led IT-based complex intervention using educational outreach and practical support is more effective than simple feedback in reducing the proportion of patients at risk from errors in prescribing and medicines management in general practice.
• Conduct an economic evaluation of the cost per error avoided, from the perspective of the National Health Service (NHS).
• Analyse data recorded by pharmacists, summarising the proportions of patients judged to be at clinical risk, the actions recommended by pharmacists, and the actions completed in the practices.
• Explore the views and experiences of healthcare professionals and NHS managers concerning the intervention; investigate potential explanations for the observed effects, and inform decisions on the future roll-out of the pharmacist-led intervention.
• Examine secular trends in the outcome measures of interest, allowing for informal comparison between trial practices and practices that did not participate in the trial but contributed to the QRESEARCH database.
Methods: Two-arm cluster randomised controlled trial of 72 English general practices with embedded economic analysis and longitudinal descriptive and qualitative analysis. Informal comparison of the trial findings with a national descriptive study investigating secular trends, undertaken using data from practices contributing to the QRESEARCH database. The main outcomes of interest were prescribing errors and medication monitoring errors at six and 12 months following the intervention. Results: Participants in the pharmacist intervention arm practices were significantly less likely to have been prescribed a non-selective NSAID without a proton pump inhibitor (PPI) if they had a history of peptic ulcer (OR 0.58, 95% CI 0.38 to 0.89), to have been prescribed a beta-blocker if they had asthma (OR 0.73, 95% CI 0.58 to 0.91) or (in those aged 75 years and older) to have been prescribed an ACE inhibitor or diuretic without a measurement of urea and electrolytes in the preceding 15 months (OR 0.51, 95% CI 0.34 to 0.78). The economic analysis suggests that the PINCER pharmacist intervention has a 95% probability of being cost-effective if the decision-maker’s ceiling willingness to pay reaches £75 (6 months) or £85 (12 months) per error avoided. The intervention addressed an issue that was important to professionals and their teams and was delivered in a way that was acceptable to practices, with minimum disruption of normal work processes. Comparison of the trial findings with changes seen in QRESEARCH practices indicated that any reductions achieved in the simple feedback arm were likely, in the main, to have been related to secular trends rather than to the intervention. Conclusions: Compared with simple feedback, the pharmacist-led intervention resulted in reductions in the proportions of patients at risk of prescribing and monitoring errors for the primary outcome measures and the composite secondary outcome measures at six months and (with the exception of the NSAID/peptic ulcer outcome measure) 12 months post-intervention. The intervention is acceptable to pharmacists and practices, and is likely to be seen as cost-effective by decision makers.
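The statement that the intervention has a 95% probability of being cost-effective at a ceiling willingness to pay of £75 to £85 per error avoided is the kind of result read off a cost-effectiveness acceptability curve: for each ceiling value, the proportion of bootstrap or simulated replicates with positive incremental net monetary benefit. The sketch below uses simulated incremental costs and errors avoided as placeholders; it is not the trial's economic model.

```python
# Cost-effectiveness acceptability sketch: probability the intervention is cost-effective
# as a function of willingness to pay per error avoided (simulated placeholder replicates).
import numpy as np

rng = np.random.default_rng(1)
n_reps = 5000
delta_cost = rng.normal(loc=3.0, scale=1.5, size=n_reps)               # extra cost per patient (GBP)
delta_errors_avoided = rng.normal(loc=0.05, scale=0.02, size=n_reps)   # errors avoided per patient

for wtp in (25, 50, 75, 85, 100):   # ceiling willingness to pay per error avoided (GBP)
    nmb = wtp * delta_errors_avoided - delta_cost   # incremental net monetary benefit
    print(f"WTP = {wtp}: P(cost-effective) = {np.mean(nmb > 0):.2f}")
```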

Relevance: 100.00%

Abstract:

Objective To determine the prevalence and nature of prescribing and monitoring errors in general practices in England. Design Retrospective case note review of unique medication items prescribed over a 12 month period to a 2% random sample of patients. Mixed effects logistic regression was used to analyse the data. Setting Fifteen general practices across three primary care trusts in England. Data sources Examination of 6048 unique prescription items prescribed over the previous 12 months for 1777 patients. Main outcome measures Prevalence of prescribing and monitoring errors, and severity of errors, using validated definitions. Results Prescribing and/or monitoring errors were detected in 4.9% (296/6048) of all prescription items (95% confidence interval 4.4 - 5.5%). The vast majority of errors were of mild to moderate severity, with 0.2% (11/6048) of items having a severe error. After adjusting for covariates, patient-related factors associated with an increased risk of prescribing and/or monitoring errors were: age less than 15 (Odds Ratio (OR) 1.87, 1.19 to 2.94, p=0.006) or greater than 64 years (OR 1.68, 1.04 to 2.73, p=0.035), and higher numbers of unique medication items prescribed (OR 1.16, 1.12 to 1.19, p<0.001). Conclusion Prescribing and monitoring errors are common in English general practice, although severe errors are unusual. Many factors increase the risk of error. Having identified the most common and important errors, and the factors associated with these, strategies to prevent future errors should be developed based on the study findings.
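The abstract's mixed effects logistic regression models error occurrence as a function of patient age group and the number of prescribed items while allowing for clustering of patients within practices. A closely related (though not identical) analysis can be sketched with a GEE logistic model clustered by practice; the data frame, variable names and effect sizes below are simulated assumptions, not the study data.

```python
# Illustrative clustered logistic regression of error occurrence on age group and number of
# prescribed items, clustered by general practice (GEE with exchangeable working correlation).
# This approximates, but is not, the mixed effects model used in the study; data are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2000
df = pd.DataFrame({
    "practice": rng.integers(0, 15, size=n),
    "age_group": rng.choice(["under15", "15to64", "over64"], size=n),
    "n_items": rng.poisson(3.4, size=n) + 1,
})
logit_p = -3.5 + 0.15 * df["n_items"] + np.where(df["age_group"] != "15to64", 0.5, 0.0)
df["error"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

model = smf.gee(
    "error ~ C(age_group, Treatment(reference='15to64')) + n_items",
    groups="practice",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(np.exp(model.fit().params))   # odds ratios
```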

Relevance: 100.00%

Abstract:

Introduction: Care home residents are at particular risk from medication errors, and our objective was to determine the prevalence and potential harm of prescribing, monitoring, dispensing and administration errors in UK care homes, and to identify their causes. Methods: A prospective study of a random sample of residents within a purposive sample of homes in three areas. Errors were identified by patient interview, note review, observation of practice and examination of dispensed items. Causes were understood by observation and from theoretically framed interviews with home staff, doctors and pharmacists. Potential harm from errors was assessed by expert judgement. Results: The 256 residents recruited in 55 homes were taking a mean of 8.0 medicines. One hundred and seventy-eight (69.5%) of the residents had one or more errors; the mean number per resident was 1.9 errors. The mean potential harm from prescribing, monitoring, administration and dispensing errors was 2.6, 3.7, 2.1 and 2.0, respectively (0 = no harm, 10 = death). Contributing factors from the 89 interviews included: doctors who were not accessible, did not know the residents and lacked information in homes when prescribing; home staff’s high workload, lack of medicines training and drug-round interruptions; lack of teamwork among home, practice and pharmacy; inefficient ordering systems; inaccurate medicine records and the prevalence of verbal communication; and difficult-to-fill (and check) medication administration systems. Conclusions: That two thirds of residents were exposed to one or more medication errors is of concern. The will to improve exists, but there is a lack of overall responsibility. Action is required from all concerned.

Relevance: 100.00%

Abstract:

The present article examines production and on-line processing of definite articles in Turkish-speaking sequential bilingual children acquiring English and Dutch as second languages (L2) in the UK and in the Netherlands, respectively. Thirty-nine 6–8-year-old L2 children and 48 monolingual (L1) age-matched children participated in two separate studies examining the production of definite articles in English and Dutch in conditions manipulating semantic context, that is, the anaphoric and the bridging contexts. Sensitivity to article omission was examined in the same groups of children using an on-line processing task involving article use in the same semantic contexts as in the production task. The results indicate that both L2 children and L1 controls are less accurate when definiteness is established by keeping track of the discourse referents (anaphoric) than when it is established via world knowledge (bridging). Moreover, despite variable production, all groups of children were sensitive to the omission of definite articles in the on-line comprehension task. This suggests that the errors of omission are not due to the lack of abstract syntactic representations, but could result from processes implicated in the spell-out of definite articles. The findings are in line with the idea that variable production in child L2 learners does not necessarily indicate lack of abstract representations (Haznedar and Schwartz, 1997).

Relevance: 100.00%

Abstract:

This paper considers two-sided tests for the parameter of an endogenous variable in an instrumental variable (IV) model with heteroskedastic and autocorrelated errors. We develop the finite-sample theory of weighted-average power (WAP) tests with normal errors and a known long-run variance. We introduce two weights which are invariant to orthogonal transformations of the instruments; e.g., changing the order in which the instruments appear. While tests using the MM1 weight can be severely biased, optimal tests based on the MM2 weight are naturally two-sided when errors are homoskedastic. We propose two boundary conditions that yield two-sided tests whether errors are homoskedastic or not. The locally unbiased (LU) condition is related to the power around the null hypothesis and is a weaker requirement than unbiasedness. The strongly unbiased (SU) condition is more restrictive than LU, but the associated WAP tests are easier to implement. Several tests are SU in finite samples or asymptotically, including tests robust to weak IV (such as the Anderson-Rubin, score, conditional quasi-likelihood ratio, and I. Andrews' (2015) PI-CLC tests) and two-sided tests which are optimal when the sample size is large and instruments are strong. We refer to the WAP-SU tests based on our weights as MM1-SU and MM2-SU tests. Dropping the restrictive assumptions of normality and known variance, the theory is shown to remain valid at the cost of asymptotic approximations. The MM2-SU test is optimal under the strong IV asymptotics, and outperforms other existing tests under the weak IV asymptotics.
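Among the weak-instrument-robust tests listed (Anderson-Rubin, score, conditional quasi-likelihood ratio, PI-CLC), the Anderson-Rubin test is the simplest to state: under H0: beta = beta0 with homoskedastic normal errors, the F statistic from regressing y - x*beta0 on the instruments has an exact F(k, n - k) distribution. The sketch below implements that textbook homoskedastic version on simulated data; it is not the WAP-SU (MM1-SU/MM2-SU) tests developed in the paper.

```python
# Textbook Anderson-Rubin test of H0: beta = beta0 in y = x*beta + u with instruments Z
# (homoskedastic version, no included exogenous regressors). Illustrative only.
import numpy as np
from scipy.stats import f as f_dist

def anderson_rubin(y, x, Z, beta0):
    n, k = Z.shape
    e = y - x * beta0                       # structural residual under the null
    P = Z @ np.linalg.solve(Z.T @ Z, Z.T)   # projection onto the instrument space
    ssr_fit = e @ P @ e
    ssr_res = e @ e - ssr_fit
    stat = (ssr_fit / k) / (ssr_res / (n - k))
    return stat, 1.0 - f_dist.cdf(stat, k, n - k)

rng = np.random.default_rng(3)
n, beta_true = 500, 1.0
Z = rng.normal(size=(n, 3))
v = rng.normal(size=n)
x = Z @ np.array([0.3, 0.2, 0.1]) + v                 # endogenous regressor
y = beta_true * x + 0.8 * v + rng.normal(size=n)      # error correlated with x

stat, pval = anderson_rubin(y, x, Z, beta0=1.0)
print(f"AR statistic = {stat:.2f}, p-value = {pval:.3f}")
```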

Relevance: 100.00%

Abstract:

Due to high industrial competitiveness, rigorous environmental protection laws and the need to reduce costs, the mechanical industry finds itself forced to pay ever more attention to the refinement of its processes and products. In this context, the need to eliminate the roundness errors that appear after the grinding process can be mentioned. This work aims to verify whether optimized nozzles for the application of cutting fluid in the grinding process can minimize the formation of roundness errors and the diametrical wear of the grinding wheel in the machining of VC 131 steel at 60 HRC, when compared to conventional nozzles. These nozzles were analysed using two types of grinding wheel and two different cutting fluids. It was verified that the 3 mm diameter nozzle, neat (integral) oil and the CBN grinding wheel were the best combination for obtaining the smallest roundness errors and the lowest diametrical wear of the grinding wheel.
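Roundness error is typically evaluated from a measured profile by fitting a reference circle and taking the spread of the radial deviations. The sketch below uses the linear (Kasa) least-squares circle fit on a synthetic profile; it illustrates only the evaluation step, not the metrology procedure used in the study.

```python
# Least-squares (Kasa) reference-circle fit and peak-to-valley roundness error of a profile.
# Profile points are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(4)
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
r = 25.0 + 0.004 * np.sin(3 * theta) + rng.normal(scale=0.001, size=theta.size)  # mm, three-lobed form
x, y = r * np.cos(theta), r * np.sin(theta)

# Fit x^2 + y^2 = A*x + B*y + C by linear least squares; the fitted centre is (A/2, B/2).
M = np.column_stack([x, y, np.ones_like(x)])
A, B, _ = np.linalg.lstsq(M, x**2 + y**2, rcond=None)[0]
xc, yc = A / 2.0, B / 2.0

radial = np.hypot(x - xc, y - yc)
roundness_error = radial.max() - radial.min()     # peak-to-valley radial deviation
print(f"roundness error: {roundness_error * 1000:.1f} micrometres")
```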

Relevance: 100.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 100.00%

Abstract:

The aSPECT spectrometer was designed to measure the proton spectrum from the decay of free neutrons with high precision. From this spectrum, the electron-antineutrino angular correlation coefficient "a" can be determined with high accuracy. The aim of the experiment is to determine this coefficient with an absolute relative error of less than 0.3%, i.e. well below the current literature value of 5%. First measurements with the aSPECT spectrometer were carried out at the Forschungsneutronenquelle Heinz Maier-Leibnitz in Munich; however, time-dependent instabilities of the measurement background prevented a new determination of "a". The present work, in contrast, is based on the most recent measurements with the aSPECT spectrometer at the Institut Laue-Langevin (ILL) in Grenoble, France. In these measurements, the instabilities of the measurement background were already considerably reduced. Furthermore, various modifications were made to minimize systematic errors and to ensure more reliable operation of the experiment. Unfortunately, no usable result could be obtained because of excessive saturation effects in the receiver electronics. Nevertheless, these and further systematic errors were identified and reduced, and in some cases even eliminated, from which future beam times at aSPECT will benefit. The main part of this work deals with the analysis and reduction of the systematic errors caused by aSPECT's electromagnetic field. This led to numerous improvements; in particular, the systematic errors due to the electric field were reduced. The errors caused by the magnetic field were minimized to the point that an improvement on the current literature value of "a" is now possible. In addition, an NMR magnetometer tailored to the experiment was developed and improved to the extent that the uncertainties in the characterization of the magnetic field are now negligible for a determination of "a" to an accuracy of at least 0.3%.
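For reference, the coefficient "a" determined by aSPECT is defined by the electron-antineutrino angular correlation term in the differential decay rate of the free neutron (other correlation terms omitted); the measured proton recoil spectrum is sensitive to "a" through this term:

```latex
% Electron-antineutrino angular correlation in free-neutron decay (unpolarized neutrons,
% other correlation coefficients omitted).
\[
  \mathrm{d}\Gamma \;\propto\; 1 + a\,\frac{\vec{p}_e \cdot \vec{p}_{\bar{\nu}}}{E_e\,E_{\bar{\nu}}}
  \;=\; 1 + a\,\beta_e\,\beta_{\bar{\nu}}\cos\theta_{e\bar{\nu}}
\]
```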