941 results for Errors de sistemes (Enginyeria) -- Localització
Abstract:
A 'pseudo-Bayesian' interpretation of standard errors yields a natural induced smoothing of statistical estimating functions. When applied to rank estimation, the lack of smoothness which prevents standard error estimation is remedied. Efficiency and robustness are preserved, while the smoothed estimation has excellent computational properties. In particular, convergence of the iterative equation for standard error is fast, and standard error calculation becomes asymptotically a one-step procedure. This property also extends to covariance matrix calculation for rank estimates in multi-parameter problems. Examples, and some simple explanations, are given.
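A minimal sketch of the induced-smoothing idea for the simplest one-parameter case, a sign (median-type) estimating function: the indicator is replaced by a normal CDF whose bandwidth is the current variance estimate, and the sandwich variance is iterated to a fixed point. The choice of estimating function and all names below are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

rng = np.random.default_rng(0)
x = rng.standard_normal(200) + 3.0        # sample with true location 3.0
n = len(x)

h2 = 1.0 / n                               # initial smoothing variance
for _ in range(20):                        # in practice a few steps suffice
    h = np.sqrt(h2)
    # Smoothed sign function: E_Z[ sum_i sign(x_i - theta + h*Z) ], Z ~ N(0,1)
    U = lambda t, h=h: np.sum(2.0 * norm.cdf((x - t) / h) - 1.0)
    theta = brentq(U, x.min(), x.max())    # root of the smoothed equation
    A = np.sum(2.0 * norm.pdf((x - theta) / h) / h)  # slope of smoothed U
    B = float(n)                           # Var of the unsmoothed sum of signs
    V = B / A**2                           # sandwich variance of theta-hat
    if abs(V - h2) < 1e-12:                # convergence is typically fast
        break
    h2 = V                                 # 'pseudo-Bayesian' variance update

print(f"estimate {theta:.3f}, standard error {np.sqrt(h2):.3f}")
```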
Abstract:
Purpose: This study evaluated the impact of patient set-up errors on the probability of pulmonary and cardiac complications in the irradiation of left-sided breast cancer. Methods and Materials: Using the NTCP algorithm of the CMS XiO Version 4.6 radiotherapy planning system (CMS Inc., St Louis, MO) and the Lyman-Kutcher-Burman (LKB) model, we calculated the DVH indices for the ipsilateral lung and heart and the resultant normal tissue complication probabilities (NTCP) for radiation-induced pneumonitis and excess cardiac mortality in 12 left-sided breast cancer patients. Results: Isocenter shifts in the posterior direction had the greatest effect on the lung V20, heart V25, and mean and maximum doses to the lung and the heart. Dose-volume histogram (DVH) results show that the ipsilateral lung V20 tolerance was exceeded in 58% of the patients after 1 cm posterior shifts. Similarly, the heart V25 tolerance was exceeded after 1 cm antero-posterior and left-right isocentric shifts in 70% of the patients. The baseline NTCPs for radiation-induced pneumonitis ranged from 0.73% to 3.4%, with a mean value of 1.7%. The maximum reported NTCP for radiation-induced pneumonitis was 5.8% (mean 2.6%) after a 1 cm posterior isocentric shift. The NTCP for excess cardiac mortality was 0% in all 12 patients before and after set-up error simulations. Conclusions: Set-up errors in left-sided breast cancer patients have a statistically significant impact on the lung NTCPs and DVH indices. However, with a central lung distance of 3 cm or less (CLD < 3 cm) and a maximum heart distance of 1.5 cm or less (MHD < 1.5 cm), the treatment plans could tolerate set-up errors of up to 1 cm without any change in the NTCP to the heart.
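For context, a minimal sketch of the LKB calculation from a differential DVH; the gEUD reduction and probit form below are the standard ones, but the dose bins and the TD50, m, n parameter values are illustrative assumptions, not those used in the study.

```python
import numpy as np
from scipy.stats import norm

def lkb_ntcp(doses_gy, frac_volumes, td50, m, n):
    """LKB model: NTCP = Phi((gEUD - TD50) / (m * TD50)), where the
    Kutcher-Burman reduction gEUD = (sum_i v_i * D_i**(1/n))**n collapses
    the DVH to a single equivalent uniform dose."""
    geud = np.sum(frac_volumes * doses_gy ** (1.0 / n)) ** n
    return norm.cdf((geud - td50) / (m * td50))

# Toy differential DVH: bin doses (Gy) and fraction of organ volume per bin.
doses = np.array([5.0, 15.0, 25.0, 45.0])
vols = np.array([0.4, 0.3, 0.2, 0.1])      # fractions summing to 1

# Illustrative pneumonitis-like parameters (assumed, not from the paper).
print(f"NTCP = {lkb_ntcp(doses, vols, td50=30.5, m=0.3, n=1.0):.3%}")
```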
Abstract:
Prequantization has been put forward as a means to improve the performance of double phase holograms (DPHs). We show here that any improvement (even under the best of conditions) is not large enough to allow the DPH to compete favourably with other holograms.
Abstract:
A 59-year-old man was mistakenly prescribed Slow-Na instead of Slow-K due to incorrect selection from a drop-down list in the prescribing software. This error was identified by a pharmacist during a home medicine review (HMR) before the patient began taking the supplement. The reported error emphasizes the need for vigilance due to the emergence of novel look-alike, sound-alike (LASA) drug pairings. This case highlights the important role of pharmacists in medication safety.
Abstract:
This thesis studies empirically whether measurement errors in aggregate production statistics affect sentiment and future output. Initial announcements of aggregate production are subject to measurement error, because many of the data required to compile the statistics are produced with a lag. This measurement error can be gauged as the difference between the latest revised statistic and its initial announcement. Assuming aggregate production statistics help forecast future aggregate production, these measurement errors are expected to affect macroeconomic forecasts; assuming agents' macroeconomic forecasts affect their production choices, they should in turn affect future output through sentiment. The thesis is primarily empirical, so its theoretical basis, strategic complementarity, is discussed only briefly: it is a model in which higher aggregate production increases each agent's incentive to produce. In this setting a statistical announcement suggesting that aggregate production is high raises each agent's incentive to produce, and thus results in higher aggregate production. The existence of strategic complementarity therefore provides the theoretical basis for output fluctuations caused by measurement errors in aggregate production statistics. Previous empirical studies suggest that measurement errors in gross national product affect future aggregate production in the United States, and that measurement errors in the Index of Leading Indicators affect forecasts by professional economists as well as future industrial production in the United States. This thesis tests whether these findings extend to other countries, and studies the link between measurement errors in gross domestic product, sentiment, and future output. Professional forecasts and consumer sentiment in the United States and Finland, as well as producer sentiment in Finland, are used as the measures of sentiment. It is found that measurement errors in gross domestic product affect forecasts and producer sentiment; the effect on consumer sentiment is ambiguous. The relationship between measurement errors and future output is explored using data from Finland, the United States, the United Kingdom, New Zealand and Sweden. Measurement errors are found to have affected aggregate production or investment in Finland, the United States, the United Kingdom and Sweden: overly optimistic statistical announcements are associated with higher output, and vice versa.
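A schematic of this empirical strategy in a few lines, with a hypothetical vintage dataset and column names; the error is measured exactly as described above, as the gap between the initial announcement and the latest revision.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical real-time ("vintage") dataset: one row per quarter with the
# initial announcement, the latest revised figure, and next-period output.
df = pd.read_csv("gdp_vintages.csv")

# Optimism of the announcement: initial figure minus the revised statistic.
df["optimism"] = df["gdp_growth_initial"] - df["gdp_growth_final"]

# Regress future output growth on the measurement error; a positive
# coefficient matches the finding that overly optimistic announcements
# are associated with higher subsequent output.
model = sm.OLS(df["gdp_growth_next"], sm.add_constant(df["optimism"]),
               missing="drop").fit()
print(model.summary())
```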
Abstract:
Background: Medication safety is a pressing concern for residential aged care facilities (RACFs). Retrospective studies in RACF settings identify inadequate communication between RACFs, doctors, hospitals and community pharmacies as the major cause of medication errors. The existing literature offers limited insight into the gaps in the information exchange process that may lead to medication errors. The aim of this research was to explicate the distributed cognition that underlies RACF medication ordering and delivery, in order to identify the gaps in medication-related information exchange that lead to medication errors in RACFs. Methods: The study was undertaken in three RACFs in Sydney, Australia. Data were generated through ethnographic field work over a period of five months (May-September 2011). Triangulated analysis of the data focused primarily on the transformation and exchange of information between different media across the process. Results: The findings highlight the extensive scope and intense nature of information exchange in RACF medication ordering and delivery. Rather than attributing error to individual care providers, the explication of distributed cognition processes enabled the identification of gaps in three information exchange dimensions which potentially contribute to the occurrence of medication errors: (1) the design of medication charts, which complicates order processing and record keeping; (2) the lack of coordination mechanisms between participants, which results in misalignment of local practices; and (3) reliance on restricted-bandwidth communication channels, mainly telephone and fax, which complicates the information processing requirements. The study demonstrates how identifying these gaps enhances understanding of medication errors in RACFs. Conclusions: Applying the theoretical lens of distributed cognition can enhance our understanding of medication errors in RACFs through the identification of gaps in information exchange. Understanding the dynamics of the cognitive process can inform the design of interventions to manage errors and improve residents' safety.
Abstract:
When infants are weighed at well-baby or infant welfare clinics, the weight change from one visit to the next is used as a guide to the welfare of the child. Infant welfare clinic nurses are expert clinicians who use weight measurements only as a rough indicator of well-being, since they know well that these measurements are fraught with error. This paper quantifies the error found in repeated weighings of infants, and in the weight changes brought about by biological variation. As a result, it is recommended that babies under nine months of age be weighed at clinic visits no less than a fortnight apart, and older infants at least one month apart. If they are weighed more often, the weight changes detected will be smaller than the error which affects the measurements.
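The arithmetic behind such a recommendation can be sketched as follows, with an assumed weighing error and assumed daily gains chosen purely for illustration: a weight change is informative only once the expected gain over the interval exceeds the error band of the difference between two measurements.

```python
import math

sd_g = 100.0                           # assumed SD of a single weighing (g)
# 95% error band for a change computed from two independent weighings.
band_g = 1.96 * math.sqrt(2) * sd_g    # ~277 g

# Assumed typical daily gains; younger infants gain weight faster.
for group, gain_g_per_day in [("under 9 months", 20.0), ("over 9 months", 9.0)]:
    days = band_g / gain_g_per_day     # interval for real gain to clear noise
    print(f"{group}: weigh at least {days:.0f} days apart")
```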
Abstract:
The aim of this study was to identify and describe the types of errors in clinical reasoning that contribute to poor diagnostic performance at different levels of medical training and experience. Three cohorts of subjects, second- and fourth- (final) year medical students and a group of general practitioners, completed a set of clinical reasoning problems. The responses of those whose scores fell below the 25th centile were analysed to establish the stage of the clinical reasoning process (identification of relevant information, interpretation, or hypothesis generation) at which most errors occurred, and whether this was dependent on problem difficulty and level of medical experience. Results indicate that hypothesis errors decrease as expertise increases but that identification and interpretation errors increase. This may be due to inappropriate use of pattern recognition or to failure of the knowledge base. Furthermore, although hypothesis errors increased in line with problem difficulty, identification and interpretation errors decreased. A possible explanation is that as problem difficulty increases, subjects at all levels of expertise are less able to differentiate between relevant and irrelevant clinical features and so give equal consideration to all information contained within a case. It is concluded that the development of clinical reasoning in medical students throughout their pre-clinical and clinical education may be enhanced by both an analysis of the clinical reasoning process and a specific focus on each of the stages at which errors commonly occur.
Abstract:
Lifetime calculations for large, dense sensor networks with fixed energy resources, together with the remaining residual energy, have shown that for a constant energy resource the fault rate at the cluster head is invariant to network size when using the network layer with no MAC losses. Even after increasing the battery capacities in the nodes, the total lifetime does not increase beyond a maximum of about 8 times. As this is a serious limitation, much research has been done at the MAC layer, which allows adaptation to the specific connectivity, traffic and channel-polling needs of sensor networks. Many MAC protocols control the channel polling of the new radios available to sensor nodes for communication; this further reduces communication overhead through idling and sleep scheduling, thus extending the lifetime of the monitoring application. We address two issues which affect the distributed characteristics and performance of connected MAC nodes: (1) determining the theoretical minimum rate based on joint coding for a correlated data source at the single hop; (2a) estimating cluster-head errors using a Bayesian rule for routing with persistence clustering, when node densities are the same and stored as prior probabilities at the network layer; and (2b) estimating the upper bound on routing errors when using passive clustering, where the node densities at the multi-hop MACs are unknown and not stored at the multi-hop nodes a priori. In this paper we evaluate several MAC-based sensor network protocols and study their effects on sensor network lifetime. A renewable-energy MAC routing protocol is designed for the case where the probabilities of active nodes are not known a priori. From theoretical derivations we show that, for a Bayesian rule with known class densities ω1 and ω2 and expected error P*, the maximum error rate for the single hop is bounded by P ≤ 2P*. We study the effects of energy losses using cross-layer simulation of a large sensor-network MAC setup, and the error rate, which affects finding sufficient node densities for reliable multi-hop communication when node densities are unknown. The simulation results show that, even though the lifetime is comparable, the expected Bayesian posterior probability error is close to, or higher than, the bound (P ≥ 2P*).
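The 2P* relationship invoked above is the classical bound relating the Bayes error P* of two known class densities to the asymptotic error of a nearest-neighbour-style decision. A small self-contained check, with illustrative unit-variance Gaussian class densities standing in for ω1 and ω2:

```python
import numpy as np
from scipy.stats import norm

mu1, mu2 = 0.0, 2.0                  # illustrative class-conditional means
# Equal priors, unit variances: the Bayes boundary is the midpoint, and
# the Bayes error is P* = Phi(-(mu2 - mu1) / 2).
p_star = norm.cdf(-(mu2 - mu1) / 2.0)

rng = np.random.default_rng(1)
n = 4000
y = rng.integers(0, 2, n)
x = rng.normal(np.where(y == 0, mu1, mu2), 1.0)

# 1-NN decision on held-out points, estimated via a reference/test split.
ref_x, ref_y = x[: n // 2], y[: n // 2]
tst_x, tst_y = x[n // 2:], y[n // 2:]
pred = ref_y[np.abs(tst_x[:, None] - ref_x[None, :]).argmin(axis=1)]
p_nn = np.mean(pred != tst_y)

print(f"P* = {p_star:.3f}, 1-NN error = {p_nn:.3f}, 2P* = {2 * p_star:.3f}")
```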
Abstract:
We have developed a technique to measure the absolute frequencies of optical transitions by using an evacuated Rb-stabilized ring-cavity resonator as a transfer cavity. The absolute frequency of the Rb D2 line (at 780 nm) used to stabilize the cavity is known and allows us to determine the absolute value of the unknown frequency. We study wavelength-dependent errors due to dispersion at the cavity mirrors by measuring the frequency of the same transition in the Cs D2 line (at 852 nm) at three cavity lengths. The spread in the values shows that dispersion errors are below 30 kHz, corresponding to a relative precision of 10^-10. We explain the reduced dispersion errors in the ring-cavity geometry by calculating the errors due to the lateral shift and the phase shift at the mirrors, and show that they are roughly equal but of opposite sign. We have earlier shown that diffraction errors (due to the Gouy phase) are negligible in the ring-cavity geometry compared to a linear cavity; the reduced dispersion error is another advantage. Our values are consistent with measurements of the same transition using the more expensive frequency-comb technique. Our simpler method is ideally suited for measuring hyperfine structure, fine structure, and isotope shifts of up to several hundred gigahertz.
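As a quick sanity check on the quoted precision (my arithmetic, not the paper's): a 30 kHz spread at the Cs D2 optical frequency of roughly 352 THz is about 8.5 x 10^-11, i.e. at the 10^-10 level.

```python
spread_hz = 30e3             # measured spread attributed to dispersion
cs_d2_hz = 3.0e8 / 852e-9    # ~3.52e14 Hz from the 852 nm wavelength
print(spread_hz / cs_d2_hz)  # ~8.5e-11, consistent with the 1e-10 claim
```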
Abstract:
There have been several studies on the performance of TCP-controlled transfers over an infrastructure IEEE 802.11 WLAN, assuming perfect channel conditions. In this paper, we develop an analytical model for the throughput of TCP-controlled file transfers over the IEEE 802.11 DCF with different packet error probabilities for the stations, accounting for the effect of packet drops on the TCP window. Our analysis combines two models: the first is an extension of the usual TCP-over-DCF model for an infrastructure WLAN, in which the throughput of a station depends on the probability that the head-of-the-line packet at the Access Point belongs to that station; the second is a model for the TCP window process for connections with different drop probabilities. Iterative calculation between these models yields the head-of-the-line probabilities, from which performance measures such as throughputs and packet failure probabilities can be derived. We find that, due to MAC-layer retransmissions, packet losses are rare even with high channel error probabilities, and the stations obtain fair throughputs even when some of them have packet error probabilities as high as 0.1 or 0.2. For some restricted settings we are also able to model tail-drop loss at the AP. Although it involves many approximations, the model captures the system behavior quite accurately when compared with simulations.
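The claim that MAC retransmissions make TCP-level losses rare follows from simple arithmetic; assuming up to 7 retransmissions (an assumption here, matching the usual 802.11 short retry limit) and independent attempts, a packet is lost only if all 8 attempts fail.

```python
# Residual loss probability after MAC retransmissions, assuming
# independent attempts and a retry limit of 7 (an assumption here).
for p in (0.1, 0.2):
    print(f"p = {p}: residual loss = {p ** 8:.2e}")
# p = 0.1 -> 1.00e-08, p = 0.2 -> 2.56e-06: negligible at the TCP layer.
```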
Abstract:
Regenerating codes are a class of codes proposed for providing reliability of data and efficient repair of failed nodes in distributed storage systems. In this paper, we address the fundamental problem of handling errors and erasures at the nodes or links, during the data-reconstruction and node-repair operations. We provide explicit regenerating codes that are resilient to errors and erasures, and show that these codes are optimal with respect to storage and bandwidth requirements. As a special case, we also establish the capacity of a class of distributed storage systems in the presence of malicious adversaries. While our code constructions are based on previously constructed Product-Matrix codes, we also provide necessary and sufficient conditions for introducing resilience in any regenerating code.
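The error-and-erasure budget behind such resilience can be stated in one inequality (this is the classical bound, not the paper's Product-Matrix construction): a code with minimum distance d corrects e errors and s erasures whenever 2e + s <= d - 1.

```python
def can_correct(d_min: int, errors: int, erasures: int) -> bool:
    # Classical error/erasure decoding condition: 2e + s <= d - 1.
    return 2 * errors + erasures <= d_min - 1

# Example: an MDS code with n = 9, k = 5 has d = n - k + 1 = 5, so it
# tolerates one corrupted node plus two missing ones during reconstruction.
d = 9 - 5 + 1
print(can_correct(d, errors=1, erasures=2))   # True
print(can_correct(d, errors=2, erasures=1))   # False
```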