149 results for Errors and omission
Abstract:
The aim of this study was to identify and describe the types of errors in clinical reasoning that contribute to poor diagnostic performance at different levels of medical training and experience. Three cohorts of subjects, second- and fourth- (final) year medical students and a group of general practitioners, completed a set of clinical reasoning problems. The responses of those whose scores fell below the 25th centile were analysed to establish the stage of the clinical reasoning process - identification of relevant information, interpretation or hypothesis generation - at which most errors occurred and whether this was dependent on problem difficulty and level of medical experience. Results indicate that hypothesis errors decrease as expertise increases but that identification and interpretation errors increase. This may be due to inappropriate use of pattern recognition or to failure of the knowledge base. Furthermore, although hypothesis errors increased in line with problem difficulty, identification and interpretation errors decreased. A possible explanation is that as problem difficulty increases, subjects at all levels of expertise are less able to differentiate between relevant and irrelevant clinical features and so give equal consideration to all information contained within a case. It is concluded that the development of clinical reasoning in medical students throughout the course of their pre-clinical and clinical education may be enhanced by both an analysis of the clinical reasoning process and a specific focus on each of the stages at which errors commonly occur.
Abstract:
Person tracking systems to date have relied on either motion detection or optical flow as a basis for person detection and tracking. As yet, systems have not been developed that utilise both of these techniques. We propose a person tracking system that uses both, made possible by a novel hybrid optical flow-motion detection technique that we have developed. This provides the system with two methods of person detection, helping to avoid missed detections and the need to predict position, which can lead to errors in tracking and mistakes when handling occlusion situations. Our results show that our system is able to track people accurately, with an average error of less than four pixels, and that our system outperforms the current CAVIAR benchmark system.
Error, Bias, and Long-Branch Attraction in Data for Two Chloroplast Photosystem Genes in Seed Plants
Abstract:
Sequences of two chloroplast photosystem genes, psaA and psbB, together comprising about 3,500 bp, were obtained for all five major groups of extant seed plants and several outgroups among other vascular plants. Strongly supported, but significantly conflicting, phylogenetic signals were obtained in parsimony analyses from partitions of the data into first and second codon positions versus third positions. In the former, both genes agreed on monophyletic gymnosperms, with Gnetales closely related to certain conifers. In the latter, Gnetales are inferred to be the sister group of all other seed plants, with gymnosperms paraphyletic. None of the data supported the modern "anthophyte hypothesis," which places Gnetales as the sister group of flowering plants. A series of simulation studies was undertaken to examine the error rate for parsimony inference. Three kinds of errors were examined: random error, systematic bias (both properties of finite data sets), and statistical inconsistency owing to long-branch attraction (an asymptotic property). Parsimony reconstructions were extremely biased for third-position data for psbB. Regardless of the true underlying tree, a tree in which Gnetales are sister to all other seed plants was likely to be reconstructed for these data. None of the combinations of genes or partitions permits the anthophyte tree to be reconstructed with high probability. Simulations of progressively larger data sets indicate the existence of long-branch attraction (statistical inconsistency) for third-position psbB data if either the anthophyte tree or the gymnosperm tree is correct. This is also true for the anthophyte tree using either psaA third positions or psbB first and second positions. A factor contributing to bias and inconsistency is extremely short branches at the base of the seed plant radiation, coupled with extremely high rates in Gnetales and non-seed plant outgroups.
Abstract:
This paper reports on the performance of 58 students aged 11 to 12 years on a spatial visualization task and a spatial orientation task. The students completed these tasks and explained their thinking during individual interviews. The qualitative data were analysed to inform pedagogical content knowledge for spatial activities. The study revealed that "matching" or "matching and eliminating" were the typical strategies that students employed on these spatial tasks. However, errors in making associations between parts of the same or different shapes were noted. Students also experienced general difficulties with visual memory and with using language to explain their thinking. The students' specific difficulties in spatial visualization related to obscured items, the perspective used, and the placement and orientation of shapes.
Abstract:
Risks and uncertainties are inevitable in engineering projects and infrastructure investments. Decisions about investment in infrastructure such as for maintenance, rehabilitation and construction works can pose risks, and may generate significant impacts on social, cultural, environmental and other related issues. This report presents the results of a literature review of current practice in identifying, quantifying and managing risks and predicting impacts as part of the planning and assessment process for infrastructure investment proposals. In assessing proposals for investment in infrastructure, it is necessary to consider social, cultural and environmental risks and impacts to the overall community, as well as financial risks to the investor. The report defines and explains the concepts of risk and uncertainty, and describes the three main methodological approaches to the analysis of risk and uncertainty in investment planning for infrastructure, viz. examining a range of scenarios or options, sensitivity analysis, and a statistical probability approach, listed here in order of increasing merit and complexity. Forecasts of costs, benefits and community impacts of infrastructure are recognised as central aspects of developing and assessing investment proposals. Increasingly complex modelling techniques are being used for investment evaluation. The literature review identified forecasting errors as the major cause of risk. The report contains a summary of the broad nature of decision-making tools used by governments and other organisations in Australia, New Zealand, Europe and North America, and shows their overall approach to risk assessment in assessing public infrastructure proposals. While there are established techniques to quantify financial and economic risks, quantification is far less developed for political, social and environmental risks and impacts. For risks that cannot be readily quantified, assessment techniques commonly include classification or rating systems for likelihood and consequence. The report outlines the system used by the Australian Defence Organisation and in the Australian Standard on risk management. After each risk is identified and quantified or rated, consideration can be given to reducing the risk, and managing any remaining risk as part of the scope of the project. The literature review identified the use of risk mapping techniques by a North American chemical company and by the Australian Defence Organisation. This literature review has enabled a risk assessment strategy to be developed, and will underpin an examination of the feasibility of developing a risk assessment capability using a probability approach.
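To make the "statistical probability approach" mentioned in this abstract concrete, the sketch below runs a simple Monte Carlo appraisal of an infrastructure investment under uncertain cost and benefit forecasts. It is only an illustration of the general technique; the distributions, dollar figures, discount rate and 30-year horizon are assumptions for this example, not values drawn from the report.

```python
# Minimal Monte Carlo sketch of probabilistic risk assessment for an
# infrastructure investment: sample uncertain cost/benefit forecasts and
# report the distribution of net present value (NPV). All figures are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(seed=1)
n_trials = 100_000
discount_rate = 0.07
years = np.arange(1, 31)  # 30-year appraisal period (assumed)

# Forecasting error modelled as lognormal spread around point estimates.
capital_cost = rng.lognormal(mean=np.log(100e6), sigma=0.25, size=n_trials)
annual_benefit = rng.lognormal(mean=np.log(12e6), sigma=0.35, size=n_trials)

# Present value of a constant annual benefit stream over the appraisal period.
pv_factor = np.sum(1.0 / (1.0 + discount_rate) ** years)
npv = annual_benefit * pv_factor - capital_cost

print(f"Mean NPV:            ${npv.mean() / 1e6:,.1f}m")
print(f"Probability NPV < 0: {(npv < 0).mean():.1%}")
print(f"5th to 95th percentile: ${np.percentile(npv, 5) / 1e6:,.1f}m to "
      f"${np.percentile(npv, 95) / 1e6:,.1f}m")
```

Unlike a single-point forecast, the output gives a probability that the investment fails to break even, which is the kind of quantified risk statement the probability approach is intended to support.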
Abstract:
Key topics: Since the birth of the Open Source movement in the mid-1980s, open source software has become more and more widespread. Amongst others, the Linux operating system, the Apache web server and the Firefox web browser have taken substantial market share from their proprietary competitors. Open source software is governed by particular types of licenses. Whereas proprietary licenses only allow the software's use in exchange for a fee, open source licenses grant users more rights, such as free use, copying, modification and distribution of the software, as well as free access to the source code. This new phenomenon has raised many managerial questions: organizational issues related to the system of governance that underlies such open source communities (Raymond, 1999a; Lerner and Tirole, 2002; Lee and Cole 2003; Mockus et al. 2000; Tuomi, 2000; Demil and Lecocq, 2006; O'Mahony and Ferraro, 2007; Fleming and Waguespack, 2007), collaborative innovation issues (Von Hippel, 2003; Von Krogh et al., 2003; Von Hippel and Von Krogh, 2003; Dahlander, 2005; Osterloh, 2007; David, 2008), issues related to the nature as well as the motivations of developers (Lerner and Tirole, 2002; Hertel, 2003; Dahlander and McKelvey, 2005; Jeppesen and Frederiksen, 2006), public policy and innovation issues (Jullien and Zimmermann, 2005; Lee, 2006), technological competition issues related to standard battles between proprietary and open source software (Bonaccorsi and Rossi, 2003; Bonaccorsi et al. 2004, Economides and Katsamakas, 2005; Chen, 2007), and intellectual property rights and licensing issues (Laat 2005; Lerner and Tirole, 2005; Gambardella, 2006; Determann et al., 2007). A major unresolved issue concerns open source business models and revenue capture, given that open source licenses imply no fee for users. On this topic, articles show that a commercial activity based on open source software is possible, as they describe different possible ways of doing business around open source (Raymond, 1999; Dahlander, 2004; Daffara, 2007; Bonaccorsi and Merito, 2007). These studies usually look at open source-based companies. Open source-based companies encompass a wide range of firms with different categories of activities: providers of packaged open source solutions, IT Services&Software Engineering firms and open source software publishers. However, the business model implications are different for each of these categories: the activities of providers of packaged solutions and IT Services&Software Engineering firms are based on software developed outside their boundaries, whereas commercial software publishers sponsor the development of the open source software. This paper focuses on open source software publishers' business models, as this issue is even more crucial for this category of firms, which take the risk of investing in the development of the software. To date, the literature identifies and depicts only two generic types of business models for open source software publishers: the business model of "bundling" (Pal and Madanmohan, 2002; Dahlander 2004) and the dual licensing business model (Välimäki, 2003; Comino and Manenti, 2007). Nevertheless, these business models are not applicable in all circumstances. Methodology: The objectives of this paper are: (1) to explore in which contexts the two generic business models described in the literature can be implemented successfully and (2) to depict an additional business model for open source software publishers which can be used in a different context.
To do so, this paper draws upon an explorative case study of IdealX, a French open source security software publisher. This case study consists of a series of three interviews conducted between February 2005 and April 2006 with the co-founder and the business manager. It aims to depict the process of IdealX's search for the appropriate business model between its creation in 2000 and 2006. This software publisher had tried both generic types of open source software publishers' business models before designing its own. Consequently, through IdealX's trials and errors, I investigate the conditions under which such generic business models can be effective. Moreover, this study describes the business model finally designed and adopted by IdealX: an additional open source software publisher's business model based on the principle of "mutualisation", which is applicable in a different context. Results and implications: Finally, this article contributes to ongoing empirical work within entrepreneurship and strategic management on open source software publishers' business models: it provides the characteristics of three generic business models (the business model of bundling, the dual licensing business model and the business model of mutualisation) as well as conditions under which they can be successfully implemented (regarding the type of product developed and the competencies of the firm). This paper also goes further than the traditional concept of business model used by scholars in the open source-related literature. In this article, a business model is not only considered as a way of generating income ("revenue model" (Amit and Zott, 2001)), but rather as the necessary conjunction of value creation and value capture, in line with the recent literature about business models (Amit and Zott, 2001; Chesbrough and Rosenblum, 2002; Teece, 2007). Consequently, this paper analyses business models from the point of view of these two components.
Abstract:
Having flexible notions of the unit (e.g., 26 ones can be thought of as 2.6 tens, 1 ten 16 ones, 260 tenths, etc.) should be a major focus of elementary mathematics education. However, these powerful notions are often relegated to computations where the major emphasis is on "getting the right answer"; thus procedural knowledge, rather than conceptual knowledge, becomes the primary focus. This paper reports on 22 high-performing students' reunitising processes, ascertained from individual interviews on tasks requiring unitising, reunitising and regrouping; errors were categorised to depict particular thinking strategies. The results show that, even for high-performing students, regrouping is a cognitively complex task. This paper analyses this complexity and draws inferences for teaching.
Abstract:
Surveillance networks are typically monitored by a few people, viewing several monitors displaying the camera feeds. It is therefore very difficult for a human operator to effectively detect events as they happen. Recently, computer vision research has begun to address ways to automatically process some of this data, to assist human operators. Object tracking, event recognition, crowd analysis and human identification at a distance are being pursued as a means to aid human operators and improve the security of areas such as transport hubs. The task of object tracking is key to the effective use of more advanced technologies. To recognize an event, people and objects must be tracked. Tracking also enhances the performance of tasks such as crowd analysis or human identification. Before an object can be tracked, it must be detected. Motion segmentation techniques, widely employed in tracking systems, produce a binary image in which objects can be located. However, these techniques are prone to errors caused by shadows and lighting changes. Detection routines often fail, either due to erroneous motion caused by noise and lighting effects, or due to the detection routines being unable to split occluded regions into their component objects. Particle filters can be used as a self-contained tracking system, and make it unnecessary for the task of detection to be carried out separately except for an initial (often manual) detection to initialise the filter. Particle filters use one or more extracted features to evaluate the likelihood of an object existing at a given point in each frame. Such systems, however, do not easily allow for multiple objects to be tracked robustly, and do not explicitly maintain the identity of tracked objects. This dissertation investigates improvements to the performance of object tracking algorithms through improved motion segmentation and the use of a particle filter. A novel hybrid motion segmentation / optical flow algorithm, capable of simultaneously extracting multiple layers of foreground and optical flow in surveillance video frames, is proposed. The algorithm is shown to perform well in the presence of adverse lighting conditions, and the optical flow is capable of extracting a moving object. The proposed algorithm is integrated within a tracking system and evaluated using the ETISEO (Evaluation du Traitement et de l'Interpretation de Sequences vidEO - Evaluation for video understanding) database, and significant improvement in detection and tracking performance is demonstrated when compared to a baseline system. A Scalable Condensation Filter (SCF), a particle filter designed to work within an existing tracking system, is also developed. The creation and deletion of modes and the maintenance of identity are handled by the underlying tracking system, and the tracking system is able to benefit from the improved performance in uncertain conditions arising from occlusion and noise provided by a particle filter. The system is evaluated using the ETISEO database. The dissertation then investigates fusion schemes for multi-spectral tracking systems. Four fusion schemes for combining a thermal and visual colour modality are evaluated using the OTCBVS (Object Tracking and Classification in and Beyond the Visible Spectrum) database. It is shown that a middle fusion scheme yields the best results and demonstrates a significant improvement in performance when compared to a system using either mode individually.
Findings from the thesis contribute to improving the performance of semi-automated video processing and therefore to improving security in areas under surveillance.
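As a rough illustration of the condensation-style particle filtering behind the Scalable Condensation Filter described above, the sketch below implements a generic bootstrap particle filter for 2-D position tracking. It is not the thesis implementation; the random-walk motion model, noise levels and Gaussian measurement likelihood are assumptions chosen for brevity.

```python
# Minimal bootstrap (condensation-style) particle filter for 2-D position
# tracking. Motion model, noise levels and likelihood are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, measurement,
                         motion_std=3.0, meas_std=5.0):
    """One predict-update-resample cycle for particles of shape (N, 2)."""
    # Predict: random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)

    # Update: weight each particle by the Gaussian likelihood of the measurement.
    dist_sq = np.sum((particles - measurement) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * dist_sq / meas_std ** 2)
    weights /= weights.sum()

    # Resample to avoid weight degeneracy, then reset to uniform weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Usage: track a target moving diagonally, observed with noisy pixel measurements.
n_particles = 500
particles = rng.uniform(0, 100, size=(n_particles, 2))
weights = np.full(n_particles, 1.0 / n_particles)
for t in range(20):
    true_pos = np.array([10.0 + 2.0 * t, 20.0 + 1.5 * t])
    measurement = true_pos + rng.normal(0, 5.0, size=2)
    particles, weights = particle_filter_step(particles, weights, measurement)
estimate = particles.mean(axis=0)
print("final estimate:", estimate, "true position:", true_pos)
```

In the thesis design, mode creation/deletion and identity maintenance are delegated to the surrounding tracking system; the sketch above only covers the per-frame predict, weight and resample cycle.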
Abstract:
Purpose: To determine (a) the effect of different sunglass tint colorations on traffic signal detection and recognition for color-normal and color-deficient observers, and (b) the adequacy of coloration requirements in current sunglass standards. Methods: Twenty color-normal and 49 color-deficient males performed a tracking task while wearing sunglasses of different colorations (clear, gray, green, yellow-green, yellow-brown, red-brown). At random intervals, simulated traffic light signals were presented against a white background at 5° to the right or left, and observers were instructed to identify signal color (red/yellow/green) by pressing a response button as quickly as possible; response times and response errors were recorded. Results: Signal color and sunglass tint had significant effects on response times and error rates (p < 0.05), with significant between-color group differences and interaction effects. Response times for color-deficient people were considerably slower than those of color normals for both red and yellow signals for all sunglass tints, but for green signals they were only noticeably slower with the green and yellow-green lenses. For most of the color-deficient groups, there were recognition errors for yellow signals combined with the yellow-green and green tints. In addition, deuteranopes had problems for red signals combined with red-brown and yellow-brown tints, and protanopes had problems for green signals combined with the green tint and for red signals combined with the red-brown tint. Conclusions: Many sunglass tints currently permitted for drivers and riders cause a measurable decrement in the ability of color-deficient observers to detect and recognize traffic signals. In general, combinations of signals and sunglasses of similar colors are of particular concern. This is prima facie evidence of a risk in the use of these tints for driving and cautions against the relaxation of coloration limits in sunglasses beyond those represented in the study.
Abstract:
OBJECTIVES: To quantify the driving difficulties of older adults using a detailed assessment of driving performance and to link this with self-reported retrospective and prospective crashes. DESIGN: Prospective cohort study. SETTING: On-road driving assessment. PARTICIPANTS: Two hundred sixty-seven community-living adults aged 70 to 88 randomly recruited through the electoral roll. MEASUREMENTS: Performance on a standardized measure of driving performance. RESULTS: Lane positioning, approach, and blind spot monitoring were the most common error types, and errors occurred most frequently in situations involving merging and maneuvering. Drivers reporting more retrospective or prospective crashes made significantly more driving errors. Driver instructor interventions during self-navigation (where the instructor had to brake or take control of the steering to avoid an accident) were significantly associated with higher retrospective and prospective crashes; every instructor intervention almost doubled prospective crash risk. CONCLUSION: These findings suggest that on-road driving assessment provides useful information on older driver difficulties, with the self-directed component providing the most valuable information.
Abstract:
The worldwide organ shortage occurs despite people's positive organ donation attitudes. The discrepancy between attitudes and behaviour is evident in Australia particularly, with widespread public support for organ donation but low donation and communication rates. This problem is compounded further by the paucity of theoretically based research to improve our understanding of people's organ donation decisions. This program of research contributes to our knowledge of individual decision making processes for three aspects of organ donation: (1) posthumous (upon death) donation, (2) living donation (to a known and unknown recipient), and (3) providing consent for donation by communicating donation wishes on an organ donor consent register (registering) and discussing the donation decision with significant others (discussing). The research program used extended versions of the Theory of Planned Behaviour (TPB) and the Prototype/Willingness Model (PWM), incorporating additional influences (moral norm, self-identity, organ recipient prototypes), to explicate the relationship between people's positive attitudes and low rates of organ donation behaviours. Adopting the TPB and PWM (and their extensions) as a theoretical basis overcomes several key limitations of the extant organ donation literature, including the often atheoretical nature of organ donation research, the focus on individual difference factors to construct organ donor profiles, and the omission of important psychosocial influences (e.g., control perceptions, moral values) that may impact on people's decision-making in this context. In addition, the use of the TPB and PWM adds further to our understanding of the decision making process for communicating organ donation wishes. Specifically, the extent to which people's registering and discussing decisions may be explained by a reasoned and/or a reactive decision making pathway is examined (Stage 3) with the novel application of the TPB augmented with the social reaction pathway in the PWM. This program of research was conducted in three discrete stages: a qualitative stage (Stage 1), a quantitative stage with extended models (Stage 2), and a quantitative stage with augmented models (Stage 3). The findings of the research program are reported in nine papers which are presented according to the three aspects of organ donation examined (posthumous donation, living donation, and providing consent for donation by registering or discussing the donation preference). Stage One of the research program comprised qualitative focus groups/interviews with university students and community members (N = 54) (Papers 1 and 2). Drawing broadly on the TPB framework (Paper 1), content-analysed responses revealed people's commonly held beliefs about the advantages and disadvantages (e.g., prolonging/saving life), important people or groups (e.g., family), and barriers and motivators (e.g., a family's objection to donation), related to living and posthumous organ donation. Guided by a PWM perspective, Paper Two identified people's commonly held perceptions of organ donors (e.g., altruistic and giving), non-donors (e.g., self-absorbed and unaware), and transplant recipients (e.g., unfortunate, and in some cases responsible/blameworthy for their predicament). Stage Two encompassed quantitative examinations of people's decision making for living (Papers 3 and 4) and posthumous (Paper 5) organ donation, and for registering and discussing donation wishes (Papers 6 to 8) to test extensions to both the TPB and PWM.
Comparisons of health students' (N = 487) motivations and willingness for living related and anonymous donation (Paper 3) revealed that a person's donor identity, attitude, past blood donation, and knowing a posthumous donor were four common determinants of willingness, with the results highlighting students' identification as a living donor as an important motive. An extended PWM is presented in Papers Four and Five. University students' (N = 284) willingness for living related and anonymous donation was tested in Paper Four, with attitude, subjective norm, donor prototype similarity, and moral norm (but not donor prototype favourability) predicting students' willingness to donate organs in both living situations. Students' and community members' (N = 471) posthumous organ donation willingness was assessed in Paper Five, with attitude, subjective norm, past behaviour, moral norm, self-identity, and prior blood donation all significantly directly predicting posthumous donation willingness, with only an indirect role for organ donor prototype evaluations. The results of two studies examining people's decisions to register and/or discuss their organ donation wishes are reported in Paper Six. People's (N = 24) commonly held beliefs about communicating their organ donation wishes were explored initially in a TPB-based qualitative elicitation study. The TPB belief determinants of intentions to register and discuss the donation preference were then assessed for people who had not previously communicated their donation wishes (N = 123). Behavioural and normative beliefs were important determinants of registering and discussing intentions; however, control beliefs influenced people's registering intentions only. Paper Seven represented the first empirical test of the role of organ transplant recipient prototypes (i.e., perceptions of organ transplant recipients) in people's (N = 465) decisions to register consent for organ donation. Two factors, Substance Use and Responsibility, were identified, and Responsibility predicted people's organ donor registration status. Results demonstrated that unregistered respondents were the most likely to evaluate transplant recipients negatively. Paper Eight established the role of organ donor prototype evaluations, within an extended TPB model, in predicting students' and community members' registering (n = 359) and discussing (n = 282) decisions. Results supported the utility of an extended TPB and suggested a role for donor prototype evaluations in predicting people's discussing intentions only. Strong intentions to discuss donation wishes increased the likelihood that respondents reported discussing their decision 1 month later. Stage Three of the research program comprised an examination of augmented models (Paper 9). A test of the TPB augmented with elements from the social reaction pathway in the PWM, and extensions to these models, was conducted to explore whether people's registering (N = 339) and discussing (N = 315) decisions are explained via a reasoned (intention) and/or social reaction (willingness) pathway. Results suggested that people's decisions to communicate their organ donation wishes may be better explained via the reasoned pathway, particularly for registering consent; however, discussing also involves reactive elements. Overall, the current research program represents an important step toward clarifying the relationship between people's positive organ donation attitudes and their low rates of organ donation and communication behaviours.
Support has been demonstrated for the use of extensions to two complementary theories, the TPB and PWM, which can inform future research aiming to explicate further the organ donation attitude-behaviour relationship. The focus on a range of organ donation behaviours enables the identification of key targets for future interventions encouraging people’s posthumous and living donation decisions, and communication of their organ donation preference.
Abstract:
Object tracking systems require accurate segmentation of the objects from the background for effective tracking. Motion segmentation or optical flow can be used to segment incoming images. Whilst optical flow allows multiple moving targets to be separated based on their individual velocities, optical flow techniques are prone to errors caused by changing lighting and occlusions, both common in a surveillance environment. Motion segmentation techniques are more robust to fluctuating lighting and occlusions, but do not provide information on the direction of the motion. In this paper, we propose a combined motion segmentation/optical flow algorithm for use in object tracking. The proposed algorithm uses the motion segmentation results to inform the optical flow calculations, ensuring that optical flow is only calculated in regions of motion, and to improve the performance of the optical flow around the edges of moving objects. Optical flow is calculated at pixel resolution, and tracking of flow vectors is employed to improve performance and detect discontinuities, which can indicate the location of overlaps between objects. The algorithm is evaluated by attempting to extract a moving target within the flow images, given expected horizontal and vertical movement (i.e., the algorithm's intended use for object tracking). Results show that the proposed algorithm outperforms other widely used optical flow techniques for this surveillance application.
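The sketch below illustrates, in broad strokes, how motion segmentation can gate dense optical flow so that flow is computed and retained only in regions of detected motion. It uses off-the-shelf OpenCV building blocks (MOG2 background subtraction and Farneback flow) rather than the paper's own hybrid algorithm, and "surveillance.avi" is a placeholder file name.

```python
# Sketch: gate dense optical flow with a motion-segmentation mask so that flow
# from static background (e.g. lighting flicker) is suppressed. Illustrative
# only; not the paper's algorithm.
import cv2
import numpy as np

cap = cv2.VideoCapture("surveillance.avi")           # placeholder input
bg_subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Motion segmentation: binary foreground mask (shadow pixels, labelled 127, are dropped).
    fg_mask = bg_subtractor.apply(frame)
    fg_mask = (fg_mask == 255).astype(np.uint8)

    # Dense optical flow over the whole frame ...
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # ... but keep flow vectors only where the segmentation reports motion.
    flow_masked = flow * fg_mask[:, :, None]

    # Magnitude/direction of the gated flow could then feed a tracker.
    magnitude, angle = cv2.cartToPolar(flow_masked[..., 0], flow_masked[..., 1])
    prev_gray = gray

cap.release()
```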
Abstract:
This presentation outlines key aspects of public policy in broad terms insofar as they relate to establishment, implementation and compliance with legal measurement standards. It refers in particular to traceability of a legal measurement unit from its source in a single international standard as a compliance issue. It comments on accreditation of legal measurement and liability concerned with errors in measurement.
Abstract:
Secondary tasks such as cell phone calls or interaction with automated speech dialog systems (SDSs) increase the driver's cognitive load as well as the probability of driving errors. This study analyzes speech production variations due to cognitive load and emotional state of drivers in real driving conditions. Speech samples were acquired from 24 female and 17 male subjects (approximately 8.5 h of data) while talking to a co-driver and communicating with two automated call centers, with emotional states (neutral, negative) and the number of necessary SDS query repetitions also labeled. A consistent shift in a number of speech production parameters (pitch, first formant center frequency, spectral center of gravity, spectral energy spread, and duration of voiced segments) was observed when comparing SDS interaction against co-driver interaction; further increases were observed when considering negative emotion segments and the number of requested SDS query repetitions. A mel-frequency cepstral coefficient based Gaussian mixture classifier trained on 10 male and 10 female sessions provided 91% accuracy in the open test set task of distinguishing co-driver interactions from SDS interactions, suggesting, together with the acoustic analysis, that it is possible to monitor the level of driver distraction directly from their speech.
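A minimal sketch of the general recipe described above (MFCC features scored against per-class Gaussian mixture models) is given below. The file names, 16 kHz sample rate, 13 MFCCs and 16-component mixtures are illustrative assumptions, not the study's configuration.

```python
# Sketch: MFCC + Gaussian mixture classifier for co-driver vs. SDS speech.
# Paths, sample rate and mixture sizes are placeholders/assumptions.
import librosa
import numpy as np
from sklearn.mixture import GaussianMixture

def mfcc_frames(wav_path, n_mfcc=13):
    """Return an (n_frames, n_mfcc) array of MFCC feature vectors."""
    y, sr = librosa.load(wav_path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

# One GMM per interaction type, trained on pooled frames from training sessions.
codriver_train = np.vstack([mfcc_frames(p) for p in ["codriver_01.wav", "codriver_02.wav"]])
sds_train      = np.vstack([mfcc_frames(p) for p in ["sds_01.wav", "sds_02.wav"]])

gmm_codriver = GaussianMixture(n_components=16, covariance_type="diag").fit(codriver_train)
gmm_sds      = GaussianMixture(n_components=16, covariance_type="diag").fit(sds_train)

def classify(wav_path):
    """Label an utterance by which GMM gives the higher mean per-frame log-likelihood."""
    frames = mfcc_frames(wav_path)
    return "co-driver" if gmm_codriver.score(frames) > gmm_sds.score(frames) else "SDS"

print(classify("test_utterance.wav"))
```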