848 results for Errors and omission


Relevance: 30.00%

Abstract:

Non-driving-related cognitive load and variations in emotional state may impact a driver's capability to control a vehicle and introduce driving errors. The availability of reliable cognitive load and emotion detection in drivers would benefit the design of active safety systems and other intelligent in-vehicle interfaces. In this study, speech produced by 68 subjects while driving in urban areas is analyzed. A particular focus is on speech production differences in two secondary cognitive tasks, interactions with a co-driver and calls to automated spoken dialog systems (SDS), and two emotional states during the SDS interactions: neutral and negative. A number of speech parameters are found to vary across the cognitive/emotion classes. The suitability of selected cepstral- and production-based features for automatic cognitive task/emotion classification is investigated. A fusion of GMM and SVM classifiers yields an accuracy of 94.3% in cognitive task classification and 81.3% in emotion classification.
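The score-level fusion of GMM and SVM classifiers mentioned above can be sketched as follows. This is a minimal illustration only: the feature vectors, mixture order and equal fusion weights are assumptions, not the study's actual configuration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder "cepstral" feature vectors for two classes (e.g. co-driver
# interaction vs. SDS call); real features would come from MFCC extraction.
X_train = np.vstack([rng.normal(0, 1, (100, 13)), rng.normal(1, 1, (100, 13))])
y_train = np.repeat([0, 1], 100)
X_test = np.vstack([rng.normal(0, 1, (20, 13)), rng.normal(1, 1, (20, 13))])
y_test = np.repeat([0, 1], 20)

# One GMM per class; its score for a test sample is the class log-likelihood.
gmms = {c: GaussianMixture(n_components=4, random_state=0).fit(X_train[y_train == c])
        for c in (0, 1)}
log_lik = np.column_stack([gmms[c].score_samples(X_test) for c in (0, 1)])
gmm_post = np.exp(log_lik - log_lik.max(axis=1, keepdims=True))
gmm_post /= gmm_post.sum(axis=1, keepdims=True)

# SVM trained on the same features, with probability outputs for fusion.
svm = SVC(probability=True, random_state=0).fit(X_train, y_train)
svm_post = svm.predict_proba(X_test)

# Equal-weight score-level fusion (the 0.5/0.5 weighting is an assumption).
fused = 0.5 * gmm_post + 0.5 * svm_post
print("fusion accuracy:", (fused.argmax(axis=1) == y_test).mean())
```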

Relevance: 30.00%

Abstract:

A pragmatic method is proposed for assessing the accuracy and precision of a given processing pipeline for converting computed tomography (CT) image data of bones into representative three-dimensional (3D) models of bone shapes. The method is based on coprocessing a control object with known geometry, which enables the assessment of the quality of the resulting 3D models. At three stages of the conversion process, distance measurements were obtained and statistically evaluated. For this study, 31 CT datasets were processed. The final 3D model of the control object showed an average deviation from the reference values of −1.07 ± 0.52 mm standard deviation (SD) for edge distances and −0.647 ± 0.43 mm SD for parallel side distances of the control object. Coprocessing a reference object enables the assessment of the accuracy and precision of a given processing pipeline for creating CT-based 3D bone models and is suitable for detecting most systematic or human errors when processing a CT scan. Typical errors are of about the same size as the scan resolution.
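The deviation statistics reported above (mean deviation ± SD relative to the known reference geometry) amount to a simple calculation; a minimal sketch with purely hypothetical measurements is shown below.

```python
import numpy as np

# Hypothetical distances measured on the reconstructed control object,
# compared against an assumed nominal edge length of the reference geometry.
reference_edge_mm = 50.0
measured_edges_mm = np.array([48.8, 49.1, 48.9, 49.0])

deviation = measured_edges_mm - reference_edge_mm
print(f"mean deviation: {deviation.mean():.2f} mm "
      f"+/- {deviation.std(ddof=1):.2f} mm SD")
```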

Relevance: 30.00%

Abstract:

This book disseminates current information on the modulatory effects of foods and other food substances on behavior and neurological pathways and, importantly, vice versa. This ranges from the neuroendocrine control of eating to the effects of life-threatening disease on eating behavior. The importance of this contribution to the scientific literature lies in the fact that food and eating are an essential component of cultural heritage, but the effects of perturbations in the food/cognitive axis can be profound. The complex interrelationship between neuropsychological processing, diet, and behavioral outcome is explored within the context of the most contemporary psychobiological research in the area. This comprehensive psychobiology- and pathology-themed text examines the broad spectrum of diet, behavioral, and neuropsychological interactions, from normative function to occurrences of severe and enduring psychopathological processes.

Relevance: 30.00%

Abstract:

Aims: To develop clinical protocols for acquiring PET images, performing CT-PET registration and defining tumour volumes based on the PET image data, for radiotherapy of lung cancer patients, and then to test these protocols with respect to accuracy and reproducibility.

Method: A phantom-based quality assurance study of the processes associated with using registered CT and PET scans for tumour volume definition was conducted to: (1) investigate image acquisition and manipulation techniques for registering and contouring CT and PET images in a radiotherapy treatment planning system, and (2) determine technology-based errors in the registration and contouring processes. The outcomes of the phantom-based quality assurance study were used to determine clinical protocols. Protocols were developed for (1) acquiring patient PET image data for incorporation into the 3DCRT process, particularly for ensuring that the patient is positioned in their treatment position; (2) CT-PET image registration techniques; and (3) GTV definition using the PET image data. The developed clinical protocols were tested using retrospective clinical trials to assess the level of inter-user variability that may be attributed to the use of these protocols. A Siemens Somatom Open Sensation 20 slice CT scanner and a Philips Allegro stand-alone PET scanner were used to acquire the images for this research. The Philips Pinnacle3 treatment planning system was used to perform the image registration and contouring of the CT and PET images.

Results: Both the attenuation-corrected and transmission images obtained from standard whole-body PET staging clinical scanning protocols were acquired and imported into the treatment planning system for the phantom-based quality assurance study. Protocols for manipulating the PET images in the treatment planning system, particularly for quantifying uptake in volumes of interest and setting window levels for accurate geometric visualisation, were determined. The automatic registration algorithms were found to have sub-voxel levels of accuracy, with transmission scan-based CT-PET registration more accurate than emission scan-based registration of the phantom images. Respiration-induced image artifacts were not found to influence registration accuracy, while inadequate pre-registration overlap of the CT and PET images was found to result in large registration errors. A threshold value based on a percentage of the maximum uptake within a volume of interest was found to accurately contour the different features of the phantom despite the lower spatial resolution of the PET images. Appropriate selection of the threshold value is dependent on the target-to-background ratio and the presence of respiratory motion. The results from the phantom-based study were used to design, implement and test clinical CT-PET fusion protocols. The patient PET image acquisition protocols enabled patients to be successfully identified and positioned in their radiotherapy treatment position during the acquisition of their whole-body PET staging scan. While automatic registration techniques were found to reduce inter-user variation compared to manual techniques, there was no significant difference in the registration outcomes for transmission or emission scan-based registration of the patient images using the protocol. Tumour volumes contoured on registered patient CT-PET images, using the tested threshold values and viewing windows determined from the phantom study, demonstrated less inter-user variation for the primary tumour volume contours than those contoured using only the patient's planning CT scans.

Conclusions: The developed clinical protocols allow a patient's whole-body PET staging scan to be incorporated, manipulated and quantified in the treatment planning process to improve the accuracy of gross tumour volume localisation in 3D conformal radiotherapy for lung cancer. Image registration protocols that account for potential software-based errors, combined with adequate user training, are recommended to increase the accuracy and reproducibility of registration outcomes. A semi-automated adaptive threshold contouring technique incorporating a PET windowing protocol accurately defines the geometric edge of a tumour volume using PET image data from a stand-alone PET scanner, including 4D target volumes.
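The percentage-of-maximum threshold contouring described in the results can be illustrated with a short sketch: voxels in a volume of interest whose uptake exceeds a chosen fraction of the maximum uptake form the contoured volume. The synthetic uptake array, the 40% threshold and the voxel size below are assumptions for illustration only; in practice the threshold depends on the target-to-background ratio and respiratory motion, as noted above.

```python
import numpy as np

rng = np.random.default_rng(1)
pet_voi = rng.gamma(shape=2.0, scale=1.0, size=(32, 32, 32))  # fake background uptake
pet_voi[12:20, 12:20, 12:20] += 20.0                          # fake high-uptake region

threshold_fraction = 0.40              # assumed; tuned to target-to-background ratio
threshold = threshold_fraction * pet_voi.max()
gtv_mask = pet_voi >= threshold        # boolean mask defining the contoured volume

voxel_volume_cc = 0.4 ** 3             # assumed 4 mm isotropic PET voxels
print(f"contoured volume: {gtv_mask.sum() * voxel_volume_cc:.1f} cc")
```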

Relevance: 30.00%

Abstract:

Human error, its causes and consequences, and the ways in which it can be prevented remain of great interest to road safety practitioners. This paper presents the findings of an on-road study of driver errors in which 25 participants drove a pre-determined route using MUARC's On-Road Test Vehicle (ORTeV). In-vehicle observers recorded the different errors made, and a range of other data was collected, including driver verbal protocols; forward, cockpit and driver video; and vehicle data (speed, braking, steering wheel angle, lane tracking, etc.). Participants also completed a post-trial cognitive task analysis interview. The drivers tested made a range of different errors, with speeding violations, both intentional and unintentional, being the most common. Further, more detailed analysis of a subset of specific error types indicates that driver errors have various causes, including failures in the wider road 'system', such as poor roadway design, infrastructure failures and unclear road rules. In closing, a range of potential error prevention strategies, including intelligent speed adaptation and road infrastructure design, are discussed.

Relevance: 30.00%

Abstract:

The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals).

Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm, including a modular structure, easily guaranteed stability, less sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of the filter coefficients (normally called reflection coefficients).

We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals, by computing the optimal values of the residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals, by computing the tracking model of these coefficients for the stochastic gradient lattice algorithm in the mean. The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations.

Using these analytical results, we show a new property of adaptive lattice filters: the polynomial-order-reducing property, which may be used to reduce the order of the polynomial phase of input frequency modulated signals. Two examples show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that, using this technique, a better probability of detection is obtained for the reduced-order phase signals than with the traditional energy detector. It is also shown empirically that the distribution of the gradient noise in the first adaptive reflection coefficients approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that this technique achieves a lower mean square error for the estimated frequencies at high signal-to-noise ratios than the adaptive line enhancer.

The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions. The concept of alpha-stable distributions is first introduced. We discuss how the stochastic gradient algorithm, which performs well for finite-variance input signals (such as frequency modulated signals in noise), does not converge quickly for infinite-variance stable processes (due to its use of the minimum mean-square error criterion). To deal with such problems, the minimum dispersion criterion, fractional lower order moments, and recently developed algorithms for stable processes are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean p-norm lattice algorithm and its normalized version, are proposed for lattice filters based on fractional lower order moments. Simulation results show that the proposed algorithms achieve faster convergence for parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness than many other algorithms. We also discuss the effect of the impulsiveness of stable processes on the misalignment between the estimated parameters and their true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated only through extensive computer simulations.
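As a concrete illustration of the stochastic gradient lattice algorithm discussed above, the sketch below adapts the reflection coefficients of a two-stage lattice predictor from the forward and backward prediction errors, using a power-normalised step size. The AR(2) test signal, filter order and step size are assumptions chosen only to make the example run; this is not the thesis's analysis of frequency modulated or alpha-stable inputs.

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples, order, mu = 5000, 2, 0.01

# Synthetic stationary AR(2) input used as a simple stand-in test signal.
x = np.zeros(n_samples)
w = rng.normal(size=n_samples)
for n in range(2, n_samples):
    x[n] = 1.2 * x[n - 1] - 0.6 * x[n - 2] + w[n]

k = np.zeros(order)            # adaptive reflection coefficients, one per stage
power = np.ones(order)         # running error-power estimates for normalisation
b_prev = np.zeros(order + 1)   # backward errors from the previous time step

for n in range(n_samples):
    f = np.zeros(order + 1)
    b = np.zeros(order + 1)
    f[0] = b[0] = x[n]
    for m in range(1, order + 1):
        # Lattice recursions for the forward (f) and backward (b) errors.
        f[m] = f[m - 1] + k[m - 1] * b_prev[m - 1]
        b[m] = b_prev[m - 1] + k[m - 1] * f[m - 1]
        # Stochastic gradient update of the reflection coefficient,
        # normalised by a running estimate of the stage input power.
        power[m - 1] = 0.99 * power[m - 1] + 0.01 * (f[m - 1] ** 2 + b_prev[m - 1] ** 2)
        k[m - 1] -= (mu / (power[m - 1] + 1e-8)) * (f[m] * b_prev[m - 1] + b[m] * f[m - 1])
    b_prev = b.copy()

print("adapted reflection coefficients:", np.round(k, 3))
```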

Relevance: 30.00%

Abstract:

In recent years, multilevel converters have become more popular and attractive than traditional converters in high-voltage and high-power applications. Multilevel converters are particularly suitable for harmonic reduction in high-power applications where semiconductor devices are not able to operate at high switching frequencies, or in high-voltage applications where multilevel converters reduce the need to connect devices in series to achieve high switch voltage ratings. This thesis investigates two aspects of multilevel converters: structure and control.

The first part of the thesis focuses on the inductance between a DC supply and the inverter components, in order to minimise the loop inductance, which causes overvoltages and stored energy losses during switching. Three-dimensional finite element simulations and experimental tests have been carried out for all sections to verify the theoretical developments. The major contributions of this part of the thesis are as follows. The use of a large-area thin conductor sheet with a rectangular cross section separated by dielectric sheets (a planar busbar), instead of circular cross section wires, contributes to a reduction of the stray inductance. A number of approximate equations exist for calculating the inductance of a rectangular conductor, but these assume that the current density is uniform throughout the conductor; this assumption is not valid for an inverter with a point injection of current. A mathematical analysis of a planar busbar has been performed at low and high frequencies, and the inductance and resistance values between the two points of the planar busbar have been determined. A new physical structure for a voltage source inverter with a symmetrical planar busbar structure, called the Reduced Layer Planar Busbar, is proposed in this thesis based on the current point injection theory. This new type of planar busbar minimises the variation in stray inductance for different switching states. The reduced layer planar busbar is a new innovation in planar busbars for high-power inverters, with minimum separation between busbars, optimum stray inductance and improved thermal performance. This type of planar busbar is suitable for high-power inverters where the voltage source is supported by several capacitors in parallel in order to provide a low-ripple DC voltage during operation. A two-layer planar busbar with different materials has been analysed theoretically in order to determine the resistance of the busbars during switching. Increasing the resistance of the planar busbar increases the damping between the stray inductance and capacitance and affects the behaviour of the current loop during switching. The aim of this section is to increase the resistance of the planar busbar at high frequencies (during switching) without significantly increasing its resistance at low frequency (50 Hz), by using the skin effect. This contribution presents a novel busbar structure suitable for high-power applications where high resistance is required at switching times. In multilevel converters there are different loop inductances between the busbars and power switches associated with different switching states. The aim of this research is to consider all combinations of the switching states for each multilevel converter topology and identify the loop inductance for each switching state. Results show that the physical layout of the busbars is very important for minimisation of the loop inductance at each switching state. Novel symmetrical busbar structures are proposed for multilevel converters with diode-clamped and flying-capacitor topologies, which minimise the worst-case stray inductance across the different switching states. Overshoot voltages and thermal problems are considered for each topology to optimise the planar busbar structure.

In the second part of the thesis, closed-loop current control techniques are investigated for single- and three-phase multilevel converters. The aims of this part are to investigate and propose suitable current controllers, such as hysteresis and predictive techniques, for multilevel converters with low harmonic distortion and switching losses. This part of the thesis can be divided into three contributions, as follows. First, an optimum space vector modulation (SVM) technique for a three-phase voltage source inverter based on a minimum-loss strategy is proposed. One of the degrees of freedom in optimising the space vector modulation is the selection of the zero vectors in the switching sequence. The new method reduces the number of switching transitions per cycle for a given level of distortion, as the zero vector does not alternate between sectors. The harmonic spectrum and weighted total harmonic distortion for these strategies are compared, and results show up to 7% improvement in weighted total harmonic distortion over the previous minimum-loss strategy. The SVM concept is a very convenient representation of a set of three-phase voltages or currents for use in current control techniques. Second, a new hysteresis current control technique for a single-phase multilevel converter with flying-capacitor topology is developed. This technique is based on magnitude and time errors to optimise the level change of the converter output voltage. The method also considers how to rebalance the capacitor voltages using voltage vectors in order to minimise switching losses. The logic control requires handling a large number of switches, and a Programmable Logic Device (PLD) is a natural implementation for the state transition description. Simulation and experimental results describe and verify the current control technique for the converter. Third, a novel predictive current control technique is proposed for a three-phase multilevel converter, which controls the capacitor voltages and load current with minimum current ripple and switching losses. The advantage of this contribution is that the technique can be applied to more voltage levels without significantly changing the control circuit. A three-phase five-level inverter with a pure inductive load has been implemented to track three-phase reference currents using analogue circuits and a programmable logic device.
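To make the closed-loop current control idea concrete, here is a minimal sketch of a conventional two-level hysteresis current controller driving an R-L load. It illustrates only the general principle of band-based switching; the thesis's magnitude/time-error multilevel hysteresis and predictive controllers are more elaborate, and the circuit values, band width and simulation step below are arbitrary assumptions.

```python
import numpy as np

R, L, Vdc = 1.0, 5e-3, 200.0       # assumed load resistance, inductance, DC link
dt, band = 5e-6, 0.5               # simulation step and hysteresis band (A)
t = np.arange(0, 0.04, dt)
i_ref = 10.0 * np.sin(2 * np.pi * 50 * t)   # 50 Hz, 10 A peak reference current

i, v, switches = 0.0, -Vdc, 0
i_log = np.zeros_like(t)
for n in range(t.size):
    err = i_ref[n] - i
    # Change the output voltage level only when the error leaves the band.
    if err > band and v != Vdc:
        v, switches = Vdc, switches + 1
    elif err < -band and v != -Vdc:
        v, switches = -Vdc, switches + 1
    # Euler step of di/dt = (v - R*i) / L for the R-L load.
    i += dt * (v - R * i) / L
    i_log[n] = i

print(f"switch transitions: {switches}, "
      f"max tracking error: {np.max(np.abs(i_ref - i_log)):.2f} A")
```

Widening the band reduces the number of switch transitions at the cost of larger current ripple, which is the basic trade-off that the multilevel hysteresis and predictive techniques described above aim to improve.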

Relevance: 30.00%

Abstract:

Expert knowledge is valuable in many modelling endeavours, particularly where data are not extensive or sufficiently robust. In Bayesian statistics, expert opinion may be formulated as informative priors, to provide an honest reflection of the current state of knowledge before updating it with new information. Technology is increasingly being exploited to help support the process of eliciting such information. This paper reviews the benefits that have been gained from utilizing technology in this way. These benefits can be structured within a six-step elicitation design framework proposed recently (Low Choy et al., 2009). We assume that the purpose of elicitation is to formulate a Bayesian statistical prior, either to provide a standalone expert-defined model or to be updated with new data within a Bayesian analysis. We also assume that the model has been pre-specified before selecting the software. In this case, technology has the most to offer in targeting what experts know (E2), eliciting and encoding expert opinions (E4), enhancing accuracy (E5), and providing an effective and efficient protocol (E6). The benefits include:

- providing an environment with familiar nuances (to make the expert comfortable) where experts can explore their knowledge from various perspectives (E2);
- automating tedious or repetitive tasks, thereby minimizing calculation errors, as well as encouraging interaction between elicitors and experts (E5);
- cognitive gains from educating users, enabling instant feedback (E2, E4-E5), and providing alternative methods of communicating assessments and feedback information, since experts think and learn differently; and
- ensuring a repeatable and transparent protocol is used (E6).
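As an illustration of encoding expert opinion as an informative prior, the sketch below fits a Beta prior to two elicited quantiles and then performs a conjugate update with new data, echoing the instant-feedback benefit noted above. The elicited values, the choice of a Beta family and the example data are assumptions for illustration, not drawn from the paper.

```python
import numpy as np
from scipy.stats import beta
from scipy.optimize import minimize

# Hypothetical elicitation: the expert believes the proportion is equally
# likely to be above or below 0.30, and is 95% sure it is below 0.50.
elicited = {0.50: 0.30, 0.95: 0.50}

def mismatch(log_params):
    a, b = np.exp(log_params)                 # keep Beta parameters positive
    return sum((beta.ppf(p, a, b) - q) ** 2 for p, q in elicited.items())

a, b = np.exp(minimize(mismatch, x0=[0.0, 0.0], method="Nelder-Mead").x)
print(f"encoded prior: Beta({a:.2f}, {b:.2f})")

# Instant feedback: quantiles implied by the encoded prior, for the expert
# to confirm or revise.
print("implied 5%/50%/95% quantiles:",
      np.round(beta.ppf([0.05, 0.50, 0.95], a, b), 3))

# Conjugate update once new data arrive (e.g. 7 successes in 20 trials).
successes, trials = 7, 20
print(f"posterior: Beta({a + successes:.2f}, {b + trials - successes:.2f})")
```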

Relevance: 30.00%

Abstract:

Ubiquitous access to patient medical records is an important aspect of ensuring patient safety. Unavailability of sufficient medical information at the point-of-care could lead to a fatality. The U.S. Institute of Medicine has reported that between 44,000 and 98,000 people die each year due to medical errors, such as incorrect medication dosages caused by poor legibility in manual records or by delays in consolidating the information needed to discern the proper intervention. In this research we propose employing emergent technologies such as Java SIM Cards (JSC), Smart Phones (SP), Next Generation Networks (NGN), Near Field Communications (NFC), Public Key Infrastructure (PKI), and Biometric Identification to develop a secure framework and related protocols for ubiquitous access to Electronic Health Records (EHR). A partial EHR contained within a JSC can be used at the point-of-care to support quick diagnosis of a patient's problems, while the full EHR can be accessed from an Electronic Health Records Centre (EHRC) when time and network availability permit. Moreover, this framework and related protocols enable patients to give their explicit consent, using their Smart Phone, for a doctor to access their personal medical data when the doctor needs to see or update the patient's medical information during an examination. Our proposed solution would also give patients the power to modify the Access Control List (ACL) related to their EHRs and to view their EHRs through their Smart Phone. Currently, very limited research has been done on using JSCs and similar technologies as a portable repository of EHRs, or on the specific security issues that are likely to arise when JSCs are used for ubiquitous access to EHRs. Previous research has been concerned with using Medicare cards, a kind of Smart Card, as a repository of medical information at the patient point-of-care. However, this imposes some limitations on the patient's emergency medical care, including the inability to detect the patient's location, to call and send information to an emergency room automatically, and to interact with the patient in order to get consent. The aim of our framework and related protocols is to overcome these limitations by taking advantage of the SIM card and the technologies mentioned above. Briefly, our framework and related protocols offer the full benefits of accessing an up-to-date, precise, and comprehensive medical history of a patient, while their mobility provides ubiquitous access to medical and patient information wherever it is needed. The objective of our framework and related protocols is to automate interactions between patients, healthcare providers and insurance organisations, increase patient safety, improve quality of care, and reduce costs.
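The consent and ACL mechanism described above can be sketched as a tiny in-memory model: a clinician may read a patient's record only if the patient's ACL grants that role and an explicit consent token has been issued from the patient's phone. All identifiers and data structures here are hypothetical; the actual framework relies on SIM-card storage, NFC, PKI and biometric identification rather than this simplified check.

```python
from dataclasses import dataclass, field

@dataclass
class PatientRecord:
    patient_id: str
    acl: set = field(default_factory=set)       # roles allowed by the patient
    consents: set = field(default_factory=set)  # explicit one-off consent tokens

    def grant_consent(self, clinician_id: str):
        # In the framework above this would be triggered from the smart phone.
        self.consents.add(clinician_id)

    def can_access(self, clinician_id: str, role: str) -> bool:
        return role in self.acl and clinician_id in self.consents

record = PatientRecord("patient-001", acl={"doctor"})
record.grant_consent("dr-042")
print(record.can_access("dr-042", "doctor"))   # True: role allowed and consent given
print(record.can_access("dr-999", "doctor"))   # False: no explicit consent
```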

Relevance: 30.00%

Abstract:

This work presents a new approach to the problem of simultaneous localization and mapping (SLAM), inspired by computational models of the rodent hippocampus. The rodent hippocampus has been extensively studied with respect to navigation tasks and displays many of the properties of a desirable SLAM solution. RatSLAM is an implementation of a hippocampal model that can perform SLAM in real time on a real robot. It uses a competitive attractor network to integrate odometric information with landmark sensing to form a consistent representation of the environment. Experimental results show that RatSLAM can operate with ambiguous landmark information and recover from both minor and major path integration errors.
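The underlying idea of combining odometric path integration with landmark sensing can be illustrated with a very small sketch: dead-reckoned position drifts, and re-observing a known landmark pulls the estimate back towards a map-consistent pose. This is not RatSLAM's competitive attractor network; the noise levels, landmark position and correction gain are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
true_pos = np.zeros(2)
est_corr = np.zeros(2)              # path integration with landmark correction
est_odo = np.zeros(2)               # path integration only
landmark = np.array([5.0, 2.0])     # known landmark location in the map

for step in range(200):
    motion = np.array([0.05, 0.02])                  # commanded motion per step
    true_pos = true_pos + motion
    drift = rng.normal(0.002, 0.01, 2)               # noisy, slightly biased odometry
    est_corr = est_corr + motion + drift
    est_odo = est_odo + motion + drift

    # Re-observing the known landmark corrects accumulated drift.
    if np.linalg.norm(true_pos - landmark) < 0.5:
        z = landmark - true_pos + rng.normal(0, 0.02, 2)   # relative observation
        est_corr += 0.5 * ((landmark - z) - est_corr)      # pull towards map-consistent pose

print("final error, odometry only :", np.round(np.linalg.norm(true_pos - est_odo), 3))
print("final error, with landmark :", np.round(np.linalg.norm(true_pos - est_corr), 3))
```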

Relevance: 30.00%

Abstract:

Greenhouse gas emissions from a well-established, unfertilized tropical grass-legume pasture were monitored over two consecutive years using high-resolution automatic sampling. Nitrous oxide emissions were highest during the summer months and were highly episodic, related more to the size and distribution of rain events than to water-filled pore space (WFPS) alone. Mean annual emissions were significantly higher during 2008 (5.7 ± 1.0 g N2O-N/ha/day) than 2007 (3.9 ± 0.4 g N2O-N/ha/day), despite 2008 receiving nearly 500 mm less rain. Mean CO2 emission (28.2 ± 1.5 kg CO2-C/ha/day) was not significantly different (P < 0.01) between measurement years, with emissions being highly dependent on temperature. A negative correlation between CO2 and WFPS above 70% indicated a threshold for soil conditions favouring denitrification. The use of automatic chambers for high-resolution greenhouse gas sampling can greatly reduce the emission estimation errors associated with changes in temperature and WFPS.
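The annual figures quoted above are essentially averages of high-resolution daily fluxes with an associated uncertainty; a small sketch of that aggregation, with entirely synthetic flux values, is shown below.

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical daily N2O fluxes (g N2O-N/ha/day) for two measurement years;
# these are placeholders, not the measured data.
flux_2007 = rng.gamma(shape=1.5, scale=2.6, size=365)
flux_2008 = rng.gamma(shape=1.5, scale=3.8, size=366)

for year, flux in (("2007", flux_2007), ("2008", flux_2008)):
    se = flux.std(ddof=1) / np.sqrt(flux.size)   # standard error of the mean
    print(f"{year}: {flux.mean():.1f} +/- {se:.1f} g N2O-N/ha/day")
```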

Relevance: 30.00%

Abstract:

There are a number of gel dosimeter calibration methods in contemporary usage. The present study is a detailed Monte Carlo investigation into the accuracy of several calibration techniques. Results show that for most arrangements the dose to gel accurately reflects the dose to water, with the most accurate method involving the use of a large-diameter flask of gel into which multiple small fields of varying dose are directed. The least accurate method was found to be that of a long test tube in a water phantom, coaxial with the beam. The large flask method is also the most straightforward and the least likely to introduce errors during setup, although, to its detriment, the volume of gel required is much greater than for the other methods.

Relevance: 30.00%

Abstract:

This article examines Finnis' and Keown's claim that the intention/foresight distinction should be used as the basis for the lawfulness of withholding and withdrawing medical treatment, rather than the act/omission distinction which is currently used. I argue that whilst the intention/foresight distinction is sound and can apply to palliative pain relief hastening death, it cannot be applied to withholding and withdrawing medical treatment. Instead, the act/omission distinction remains the better basis for the lawfulness of withholding and withdrawal, and law reform is consequently unnecessary.

Relevance: 30.00%

Abstract:

Background, aim, and scope: Urban motor vehicle fleets are a major source of particulate matter pollution, especially of ultrafine particles (diameters < 0.1 µm), and exposure to particulate matter has known serious health effects. A considerable body of literature is available on vehicle particle emission factors derived using a wide range of different measurement methods, for different particle sizes, and conducted in different parts of the world. Choosing the most suitable particle emission factors to use in transport modelling and health impact assessments is therefore a very difficult task. The aim of this study was to derive a comprehensive set of tailpipe particle emission factors for different vehicle and road type combinations, covering the full size range of particles emitted, which are suitable for modelling urban fleet emissions.

Materials and methods: A large body of data available in the international literature on particle emission factors for motor vehicles derived from measurement studies was compiled and subjected to advanced statistical analysis to determine the most suitable emission factors to use in modelling urban fleet emissions.

Results: This analysis resulted in the development of five statistical models which explained 86%, 93%, 87%, 65% and 47% of the variation in published emission factors for particle number, particle volume, PM1, PM2.5 and PM10, respectively. A sixth model for total particle mass was proposed, but no significant explanatory variables were identified in the analysis. From the outputs of these statistical models, the most suitable particle emission factors were selected. This selection was based on the statistical robustness of the model outputs, including consideration of conservative average particle emission factors with the lowest standard errors, narrowest 95% confidence intervals and largest sample sizes, and on the explanatory model variables, which were Vehicle Type (all particle metrics), Instrumentation (particle number and PM2.5), Road Type (PM10), and Size Range Measured and Speed Limit on the Road (particle volume).

Discussion: A multiplicity of factors needs to be considered in determining emission factors that are suitable for modelling motor vehicle emissions, and this study derived a set of average emission factors suitable for quantifying motor vehicle tailpipe particle emissions in developed countries.

Conclusions: The comprehensive set of tailpipe particle emission factors presented in this study for different vehicle and road type combinations enables the full size range of particles generated by fleets to be quantified, including ultrafine particles (measured in terms of particle number). These emission factors have particular application for regions which may lack the funding to undertake measurements, or which have insufficient measurement data upon which to derive emission factors for their region.

Recommendations and perspectives: In urban areas motor vehicles continue to be a major source of particulate matter pollution and of ultrafine particles. To manage this major pollution source, it is critical that methods are available to quantify the full size range of particles emitted, for use in traffic modelling and health impact assessments.
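The style of analysis described above, regressing published emission factors on categorical explanatory variables and reporting the variance explained, can be sketched as follows. The tiny table of emission factors below is fabricated purely for illustration; it does not reproduce the study's data, variables or results.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Fabricated particle-number emission factors (particles/km) with two
# hypothetical explanatory variables.
data = pd.DataFrame({
    "vehicle_type": ["car", "car", "bus", "truck", "bus", "truck", "car", "bus"],
    "road_type":    ["urban", "freeway", "urban", "freeway", "freeway", "urban", "urban", "urban"],
    "ef_particles_per_km": [2e14, 5e14, 9e14, 1.2e15, 1.5e15, 8e14, 3e14, 1.0e15],
})

# Dummy-code the categorical explanatory variables and regress log10(EF).
X = pd.get_dummies(data[["vehicle_type", "road_type"]], drop_first=True)
y = np.log10(data["ef_particles_per_km"])

model = LinearRegression().fit(X, y)
print(f"variance explained (R^2): {model.score(X, y):.2f}")

# An average emission factor with its standard error, echoing the selection
# criteria (low SE, narrow confidence interval) discussed above.
log_mean, log_se = y.mean(), y.std(ddof=1) / np.sqrt(len(y))
print(f"mean log10 EF: {log_mean:.2f} +/- {log_se:.2f}")
```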