848 results for Errors and omission
Abstract:
In this action research study, I investigated the careless errors made by my seventh-grade mathematics students on their homework and tests. Beyond analyzing the types of careless errors and the frequency with which they were made, I also analyzed my students' attitudes toward reviewing their work before turning it in and their self-reflection on the quality of the work they were producing. I found that many students did not know how to review a test before turning it in; no one had ever taught them how to do so. However, when students were given tools to help them with this task, they were able to make strides toward reducing the number of careless errors they made and began to turn in high-quality work that demonstrated their understanding of the content that had been taught. As a result of this research, I plan to teach my students how to go back over their homework and tests before turning them in. I also intend to continue using the tools I have produced to encourage students to self-reflect on the work they have done. Assessment is an important part of educating my students, and the careless errors made on these assessments needed to be addressed.
Abstract:
The flow around a smooth, fixed circular cylinder over a large range of Reynolds numbers is considered in this paper. To investigate this canonical case, we perform CFD calculations and apply verification and validation (V&V) procedures to draw conclusions regarding numerical errors and, afterwards, to assess the modeling errors and the capability of this (U)RANS method to solve the problem. Eight Reynolds numbers between Re = 10 and Re = 5 × 10^5 are presented with at least four geometrically similar grids and five discretizations in time for each case (when unsteady), together with strict control of iterative and round-off errors, allowing a consistent verification analysis with uncertainty estimation. Two-dimensional RANS calculations, steady or unsteady, laminar or turbulent, are performed. The original 1994 k-omega SST turbulence model by Menter is used to model turbulence. The validation procedure is performed by comparing the numerical results with an extensive set of experimental results compiled from the literature. [DOI: 10.1115/1.4007571]
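A typical building block of such a grid-refinement study is the estimation of the observed order of accuracy and a Richardson-extrapolated solution. The following is a hedged sketch of that generic procedure (not the paper's actual uncertainty estimator; the drag values and refinement ratio are invented for illustration):

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of accuracy p from solutions on three geometrically
    similar grids with a constant refinement ratio r."""
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

def richardson_extrapolate(f_medium, f_fine, r, p):
    """Estimate of the grid-independent solution from the two finest grids."""
    return f_fine + (f_fine - f_medium) / (r**p - 1)

# Invented drag-coefficient values on coarse, medium and fine grids (r = 2):
f1, f2, f3 = 1.200, 1.140, 1.125
p = observed_order(f1, f2, f3, r=2.0)
print(f"observed order p = {p:.2f}")                                          # 2.00
print(f"extrapolated value = {richardson_extrapolate(f2, f3, 2.0, p):.4f}")   # 1.1200
```

The discretization uncertainty reported in such studies is then usually taken as a safety factor times the difference between the fine-grid solution and the extrapolated value.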
Abstract:
Background: Considering the increasing use of polymyxins to treat infections caused by multidrug-resistant Gram-negative bacteria in many countries, it is important to evaluate different susceptibility testing methods for this class of antibiotic.
Methods: The susceptibility of 109 carbapenem-resistant P. aeruginosa isolates to polymyxins was tested by comparing broth microdilution (the reference method), disc diffusion, and Etest using the new interpretative breakpoints of the Clinical and Laboratory Standards Institute (CLSI).
Results: Twenty-nine percent of the isolates belonged to an endemic clone and were therefore excluded from the analysis. Among the 78 strains evaluated, only one isolate was resistant to polymyxin B by the reference method (MIC: 8.0 μg/mL). Very major and major error rates of 1.2% and 11.5% were detected when comparing polymyxin B disc diffusion with broth microdilution. Agreement within one twofold dilution between Etest and broth microdilution was 33% for polymyxin B and 79.5% for colistin. One major error and 48.7% minor errors were found when comparing the polymyxin B Etest with broth microdilution, and only 6.4% minor errors with colistin. The concordance between Etest and broth microdilution was 100% for colistin and 90% for polymyxin B.
Conclusion: Resistance to polymyxins appears to be rare among hospital carbapenem-resistant P. aeruginosa isolates over the six-year period. Using the new CLSI criteria, our results showed that disc diffusion does not produce major errors (false-resistant results) for colistin; for polymyxin B, on the other hand, it showed a high frequency of minor errors and one very major error. Etest performed better for colistin than for polymyxin B. Until these results are reproduced with a larger number of polymyxin-resistant P. aeruginosa isolates, susceptibility to polymyxins should be confirmed by a reference method.
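For context, the CLSI error categories used above compare each isolate's category under the test method against the broth microdilution reference. A minimal sketch of that classification (illustrative only, not the study's code):

```python
def error_category(reference, test):
    """Classify a discrepancy between a test method and the reference.
    Categories: 'S' (susceptible), 'I' (intermediate), 'R' (resistant)."""
    if reference == test:
        return "agreement"
    if reference == "R" and test == "S":
        return "very major error"   # false-susceptible result
    if reference == "S" and test == "R":
        return "major error"        # false-resistant result
    return "minor error"            # any discrepancy involving 'I'

for ref, tst in [("S", "S"), ("R", "S"), ("S", "R"), ("S", "I")]:
    print(f"{ref} vs {tst}: {error_category(ref, tst)}")
```

Rates are then computed by dividing the count in each category by the appropriate denominator (e.g., the number of resistant isolates for very major errors).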
Abstract:
Marine soft bottom systems show a high variability across multiple spatial and temporal scales. Both natural and anthropogenic sources of disturbance act together in affecting benthic sedimentary characteristics and species distribution. The description of such spatial variability is required to understand the ecological processes behind it. However, in order to obtain better estimates of spatial patterns, methods that take into account the complexity of the sedimentary system are required. This PhD thesis aims to give a significant contribution both in improving the methodological approaches to the study of biological variability in soft bottom habitats and in increasing the knowledge of the effects that different processes (both natural and anthropogenic) can have on the benthic communities of a large area in the North Adriatic Sea. Beta diversity is a measure of the variability in species composition, and Whittaker's index has become the most widely used measure of beta diversity. However, application of the Whittaker index to soft bottom assemblages of the Adriatic Sea highlighted its sensitivity to rare species (species recorded in a single sample). This over-weighting of rare species induces biased estimates of heterogeneity, making it difficult to compare assemblages containing a high proportion of rare species. In benthic communities, the unusually large number of rare species is frequently attributed to a combination of sampling errors and insufficient sampling effort. In order to reduce the influence of rare species on the measure of beta diversity, I have developed an alternative index based on simple probabilistic considerations. It turns out that this probability index is an ordinary Michaelis-Menten transformation of Whittaker's index but behaves more favourably when species heterogeneity increases. The suggested index therefore seems appropriate when comparing patterns of complexity in marine benthic assemblages. Although the new index makes an important contribution to the study of biodiversity in sedimentary environments, it remains to be seen which processes, and at what scales, influence benthic patterns. The ability to predict the effects of ecological phenomena on benthic fauna depends heavily on both the spatial and temporal scales of variation. Once defined, implicitly or explicitly, these scales influence the questions asked, the methodological approaches, and the interpretation of results. Problems often arise when representative samples are not taken and results are over-generalized, as can happen when results from small-scale experiments are used for resource planning and management. Such issues, although globally recognized, are far from being resolved in the North Adriatic Sea. This area is potentially affected by both natural (e.g. river inflow, eutrophication) and anthropogenic (e.g. gas extraction, fish-trawling) sources of disturbance. The few studies in this area aimed at understanding which of these processes mainly affect the macrobenthos have been conducted at small spatial scales, as they were designed to examine local changes in benthic communities or particular species. However, in order to better describe all the putative processes occurring in the entire area, a high sampling effort performed at a large spatial scale is required. The sedimentary environment of the western part of the Adriatic Sea was extensively studied in this thesis.
I have described, in detail, spatial patterns both in terms of sedimentary characteristics and macrobenthic organisms, and have suggested putative processes (natural or of human origin) that might affect the benthic environment of the entire area. In particular, I have examined the effect of offshore gas platforms on benthic diversity and tested their effect against a background of natural spatial variability. The results obtained suggest that natural processes in the North Adriatic, such as river outflow and eutrophication, show an inter-annual variability that might have important consequences for benthic assemblages, affecting, for example, their spatial pattern moving away from the coast and along a north-to-south gradient. Depth-related factors, such as food supply, light, temperature, and salinity, play an important role in explaining large-scale benthic spatial variability (i.e., affecting both the abundance patterns and beta diversity). Nonetheless, at more local scales, effects probably related to organic enrichment or pollution from the Po river input have been observed. All these processes, together with a few human-induced sources of variability (e.g. fishing disturbance), have a greater effect on macrofauna distribution than any effect related to the presence of gas platforms. The main effect of gas platforms is restricted to small spatial scales and is related to a change in habitat complexity due to natural dislodgement, or cleaning from the structures, of the mussels that colonize their legs. The accumulation of mussels on the sediment plausibly affects benthic infauna composition. All the components of the study presented in this thesis highlight the need to carefully consider methodological aspects related to the study of sedimentary habitats. With particular regard to the North Adriatic Sea, a multi-scale analysis along natural and anthropogenic gradients was useful for detecting the influence of all the processes affecting the sedimentary environment. In the future, applying a similar approach may lead to an unambiguous assessment of the state of the benthic community in the North Adriatic Sea. Such an assessment may be useful in understanding whether any anthropogenic source of disturbance has a negative effect on the marine environment, and if so, in planning sustainable strategies for a proper management of the affected area.
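To make the beta-diversity discussion above concrete, here is a hedged sketch of Whittaker's index and a generic Michaelis-Menten style rescaling of it. The abstract does not give the exact form or constant of the proposed probability index, so the half-saturation parameter k below is an assumption for illustration only:

```python
def whittaker_beta(samples):
    """Whittaker's beta diversity: total richness over mean sample richness.
    samples: list of sets, each the species observed in one sample."""
    gamma = len(set().union(*samples))                        # total richness
    alpha_bar = sum(len(s) for s in samples) / len(samples)   # mean richness
    return gamma / alpha_bar

def mm_transform(beta, k=1.0):
    """Generic Michaelis-Menten style transform (k is an assumed constant);
    it saturates as beta grows, damping the leverage of rare species."""
    return beta / (k + beta)

samples = [{"a", "b", "c"}, {"a", "b"}, {"a", "d"}]   # "d" is a rare species
beta = whittaker_beta(samples)
print(f"Whittaker beta = {beta:.2f}, transformed = {mm_transform(beta):.2f}")
```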
Abstract:
This thesis focuses on the study of techniques that enable reliable transmission of multimedia content in streaming and broadcasting applications, targeting video content in particular. The design of efficient error-control mechanisms to enhance the reliability of video transmission systems has been addressed by considering cross-layer and multi-layer/multi-dimensional channel coding techniques to cope with bit errors as well as packet erasures. Mechanisms for unequal time interleaving have been designed as a viable solution to reduce the impact of errors and erasures by acting on the time diversity of the data flow, thus enhancing robustness against correlated channel impairments. To account for the nature of the factors affecting the physical-layer channel in the evaluation of FEC scheme performance, an ad hoc error-event model has been devised. In addition, the impact of error correction/protection techniques on the quality perceived by the consumers of video services, and techniques for objective/subjective quality evaluation, have been studied. The applicability and value of the proposed techniques have been tested against the practical constraints and requirements of real system implementations.
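As an illustration of the time-diversity idea invoked above, the sketch below shows a plain block interleaver (a textbook device, not the thesis's unequal-interleaving design): writing symbols row-wise and reading them column-wise spreads a burst of consecutive channel errors across several codewords.

```python
def interleave(symbols, rows, cols):
    """Write row-wise, read column-wise."""
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    """Inverse permutation: recover the original row-wise order."""
    assert len(symbols) == rows * cols
    return [symbols[c * rows + r] for r in range(rows) for c in range(cols)]

data = list(range(12))                  # e.g. three codewords of four symbols
tx = interleave(data, rows=3, cols=4)   # [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]
assert deinterleave(tx, rows=3, cols=4) == data
# A burst hitting tx[0:3] corrupts symbols 0, 4, 8: one symbol per codeword.
```

An unequal interleaver would allocate deeper interleaving (more rows) to the more important parts of the stream, at the cost of additional delay for those parts.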
Abstract:
This thesis investigates interactive scene reconstruction and understanding using RGB-D data only. Indeed, we believe that depth cameras will remain, in the near future, a cheap and low-power 3D sensing alternative, suitable for mobile devices too. Therefore, our contributions build on top of state-of-the-art approaches to achieve advances in three main challenging scenarios, namely mobile mapping, large-scale surface reconstruction, and semantic modeling. First, we will describe an effective approach to Simultaneous Localization And Mapping (SLAM) on platforms with limited resources, such as a tablet device. Unlike previous methods, dense reconstruction is achieved by reprojection of RGB-D frames, while local consistency is maintained by deploying relative bundle adjustment principles. We will show quantitative results comparing our technique to the state of the art, as well as detailed reconstructions of various environments ranging from rooms to small apartments. Then, we will address large-scale surface modeling from depth maps exploiting parallel GPU computing. We will develop a real-time camera tracking method based on the popular KinectFusion system and an online surface alignment technique capable of counteracting drift errors and closing small loops. We will show very high quality meshes outperforming existing methods on publicly available datasets, as well as on data recorded with our RGB-D camera even in complete darkness. Finally, we will move to our Semantic Bundle Adjustment framework to effectively combine object detection and SLAM in a unified system. Though the mathematical framework we will describe is not restricted to a particular sensing technology, in the experimental section we will again refer only to RGB-D sensing. We will discuss successful implementations of our algorithm, showing the benefit of joint object detection, camera tracking, and environment mapping.
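The elementary operation behind dense reconstruction by reprojection of RGB-D frames is back-projecting each depth pixel into a 3D point. A minimal sketch under an assumed pinhole camera model (the intrinsics below are typical Kinect-like values, not parameters from the thesis):

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Map pixel (u, v) with metric depth to a 3D point in the camera frame,
    using pinhole intrinsics (focal lengths fx, fy; principal point cx, cy)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Assumed 640x480 intrinsics; a pixel near the image center at 1.5 m depth:
print(backproject(u=320, v=240, depth=1.5, fx=525.0, fy=525.0, cx=319.5, cy=239.5))
```

Transforming such points by each frame's estimated camera pose and fusing them yields the dense model; bundle adjustment then refines the poses to keep the reprojections consistent.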
Abstract:
In the past two decades, the work of a growing portion of researchers in robotics has focused on a particular group of machines belonging to the family of parallel manipulators: cable robots. Although these robots share several theoretical elements with the better-known parallel robots, they still present completely (or partly) unsolved issues. In particular, the study of their kinematics, already a difficult subject for conventional parallel manipulators, is further complicated by the non-linear nature of cables, which can exert only pure tensile forces. The work presented in this thesis therefore focuses on the study of the kinematics of these robots and on the development of numerical techniques able to address some of the related problems. Most of the work is focused on the development of an interval-analysis-based procedure for the solution of the direct geometric problem of a generic cable manipulator. This technique, besides allowing for a rapid solution of the problem, also guarantees the results obtained against rounding and elimination errors and can take into account any uncertainties in the model of the problem. The developed code has been tested with the help of a small manipulator, whose realization is described in this dissertation together with the auxiliary work done during its design and simulation phases.
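The guarantee mentioned above comes from interval arithmetic: every operation returns an interval certain to contain the true result, so rounding errors can never silently corrupt a solution. A toy sketch of the idea (a real solver would also round interval bounds outward and use contractors or bisection, which this omits):

```python
class Interval:
    """Toy interval type; bounds are not outward-rounded here."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __repr__(self):
        return f"[{self.lo:g}, {self.hi:g}]"

# Enclose x*x + y for an uncertain cable length x and offset y:
x, y = Interval(1.9, 2.1), Interval(0.4, 0.6)
print(x * x + y)   # [4.01, 5.01]: the true value is guaranteed to lie inside
```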
Abstract:
Modern imaging technologies, such as computed tomography (CT) techniques, represent a great challenge in forensic pathology. The field of forensics has experienced a rapid increase in the use of these new techniques to support investigations of critical cases, as indicated by the implementation of CT scanning by different forensic institutions worldwide. Advances in CT imaging techniques over the past few decades have led some authors to propose that virtual autopsy, a radiological method applied to post-mortem analysis, is a reliable alternative to traditional autopsy, at least in certain cases. The authors investigate the occurrence and causes of errors and mistakes in diagnostic imaging applied to virtual autopsy. A case of suicide by gunshot wound was submitted to full-body CT scanning before autopsy. We compared the first examination of the sectional images with the autopsy findings and found a preliminary misdiagnosis in detecting a peritoneal lesion caused by the gunshot wound, due to a radiologist's error. We then discuss an emerging issue related to the risk of diagnostic failure in virtual autopsy due to radiologist error, similar to what occurs in clinical radiology practice.
Abstract:
Patients' reports of safety-related events and perceptions of safety can be a valuable source of information for hospitals. Patients of eight acute care hospitals in Switzerland were surveyed about safety-related events and concerns for safety. In workshops with the hospitals, areas for improvement were analyzed and priorities for change identified. To evaluate the benefit of the approach, semi-structured interviews were conducted with hospital risk managers. 3,983 patients returned the survey (a 55% response rate). 21.4% reported at least one definite safety-related event, and the mean number of 'definite' incidents per patient was 0.31 (95% CI 0.29 to 0.34). 3.2% were very concerned and 14.7% were somewhat concerned about medical errors and safety. Having experienced a safety-related event, younger age, longer length of stay, poor health, and lower education increased the probability of reporting concerns. With some exceptions, the results confirmed the hospitals' a priori expectations regarding the strengths and weaknesses of their institutions. Risk managers emphasized the usability of the results for their work and the special value of being able to refer to the patient's perspective at their home institutions. A considerable fraction of patients subjectively experiences safety-related events and is concerned about safety. Patient-generated data introduced a new quality into the discussion of safety issues within hospitals, and some expected that patients' experiences and concerns could affect patient volumes. Though the study is limited by its short time horizon and lack of follow-up, the results suggest that the described approach is feasible and can serve as a supplemental tool for risk identification and management.
Abstract:
Economic theory distinguishes two concepts of utility: decision utility, objectively quantifiable from choices, and experienced utility, referring to the satisfaction derived from an obtained outcome. To date, experienced utility has typically been measured with subjective ratings. This study aimed to quantify experienced utility from global levels of neuronal activity. Neuronal activity was measured by means of electroencephalographic (EEG) responses to the gain and omission of graded monetary rewards at the level of the EEG topography in human subjects. A novel analysis approach allowed us to approximate psychophysiological value functions for the experienced utility of monetary rewards. In addition, we identified the time windows of the event-related potentials (ERPs), and the respective intracortical sources, in which variations in neuronal activity were significantly related to the value or valence of outcomes. The results indicate that the value functions of experienced utility and regret increase disproportionately with monetary value, and thus contradict the compressive value functions of decision utility. The temporal pattern of outcome evaluation suggests an initial (∼250 ms) coarse evaluation of valence, concurrent with a finer-grained evaluation of the value of gained rewards, whereas the evaluation of the value of omitted rewards emerges later. We hypothesize that this temporal double dissociation is explained by reward prediction errors. Finally, a late, previously unreported, reward-sensitive ERP topography (∼500 ms) was identified. The sources of these topographical covariations were estimated in the ventromedial prefrontal cortex, the medial frontal gyrus, the anterior and posterior cingulate cortex, and the hippocampus/amygdala. The results provide important new evidence regarding "how," "when," and "where" the brain evaluates outcomes with different hedonic impact.
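For readers unfamiliar with the term, a reward prediction error is simply the signed difference between the outcome received and the outcome expected, so gains and omissions of the same expected reward produce oppositely signed signals. A minimal, generic sketch (not the study's model; the expectation value is assumed):

```python
def prediction_error(received, expected):
    """Signed reward prediction error: outcome minus expectation."""
    return received - expected

expected = 0.5                               # assumed expectation of a reward
print(prediction_error(1.0, expected))       # gain     -> +0.5
print(prediction_error(0.0, expected))       # omission -> -0.5
```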
Abstract:
BACKGROUND: Assessment of lung volume (functional residual capacity, FRC) and ventilation inhomogeneities with an ultrasonic flowmeter and multiple-breath washout (MBW) has been used to provide important information about lung disease in infants. Sub-optimal adjustment of the mainstream molar mass (MM) signal for temperature and external deadspace may lead to analysis errors in infants, whose tidal volume changes during breathing are critically small. METHODS: We measured expiratory temperature in human infants at 5 weeks of age and examined the influence of temperature and deadspace changes on FRC results with computer simulation modeling. A new analysis method with optimized temperature and deadspace settings was then derived, tested for robustness to analysis errors, and compared with the previously used analysis methods. RESULTS: Temperature in the facemask was higher, and variations in deadspace volumes larger, than previously assumed. Both had a considerable impact on FRC and lung clearance index (LCI) results, with high variability when obtained with the previously used analysis model. Using the measured temperature, we optimized the model parameters and tested a newly derived analysis method, which was found to be more robust to variations in deadspace. Comparison between the two analysis methods showed systematic differences and a wide scatter. CONCLUSION: Corrected deadspace and more realistic temperature assumptions improved the stability of the analysis of MM measurements obtained by ultrasonic flowmeter in infants. This new analysis method, using the only currently available commercial ultrasonic flowmeter for infants, may help to improve the stability of the analysis and further facilitate the assessment of lung volume and ventilation inhomogeneities in infants.
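For orientation, the basic multiple-breath washout relation behind the FRC estimate divides the net volume of tracer gas washed out by the change in end-tidal tracer concentration; the LCI then expresses the cumulative expired volume needed to complete the washout in FRC turnovers. A hedged sketch with invented numbers (not the corrected analysis method of the study):

```python
def frc_from_washout(tracer_volume_expired, c_start, c_end):
    """FRC = net tracer volume washed out / change in end-tidal concentration."""
    return tracer_volume_expired / (c_start - c_end)

def lung_clearance_index(cumulative_expired_volume, frc):
    """LCI: cumulative expired volume over FRC (number of lung turnovers)."""
    return cumulative_expired_volume / frc

# Invented example values (mL and fractional concentrations):
frc = frc_from_washout(tracer_volume_expired=78.0, c_start=0.80, c_end=0.0)
print(f"FRC ~ {frc:.1f} mL, LCI ~ {lung_clearance_index(680.0, frc):.1f}")
```

The study's point is that the temperature and deadspace corrections applied to the molar mass signal change the inputs to exactly this kind of calculation, and hence the stability of the FRC and LCI results.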
Abstract:
Neural Networks as Cybernetic Systems is a textbook that combines classical systems theory with artificial neural network technology. This third edition is essentially comparable to the second, but has been improved by the correction of errors and by a rearrangement and minor expansion of the sections on recurrent networks. These changes will hopefully allow for an easier comprehension of the essential aspects of this important domain, which has received growing attention in recent years.
Abstract:
We introduce an algorithm (called REDFITmc2) for spectrum estimation in the presence of timescale errors. It is based on the Lomb-Scargle periodogram for unevenly spaced time series, in combination with Welch's Overlapped Segment Averaging procedure, bootstrap bias correction, and persistence estimation. The timescale errors are modelled parametrically and included in the simulations used to determine (1) the upper levels of the spectrum of the red-noise AR(1) alternative and (2) the uncertainty of the frequency of a spectral peak. Application of REDFITmc2 to ice-core and stalagmite records of palaeoclimate allowed a more realistic evaluation of spectral peaks than when this source of uncertainty is ignored. The results qualitatively support the intuition that the effects on the spectrum estimate (decreased detectability and increased frequency uncertainty) are stronger at higher frequencies. The surplus information brought by the REDFITmc2 algorithm is that those effects are quantified. Regarding timescale construction, not only do the fixpoints, dating errors, and the functional form of the age-depth model play a role; the joint distribution of all time points (serial correlation, stratigraphic order) also determines the spectrum estimate.
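As a rough illustration of the underlying machinery (not REDFITmc2 itself, which additionally uses WOSA segmenting, bias correction, and AR(1) persistence estimation), the sketch below computes a Lomb-Scargle periodogram of an unevenly sampled series and perturbs the timescale in a Monte Carlo loop; the 1-unit dating error is an assumed value:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 100.0, 200))                      # uneven sampling times
y = np.sin(2 * np.pi * 0.1 * t) + rng.normal(0, 0.5, t.size)   # signal at f = 0.1

freqs = np.linspace(0.01, 0.5, 500)   # cyclic frequencies to scan
w = 2 * np.pi * freqs                 # lombscargle expects angular frequencies
pgram = lombscargle(t, y - y.mean(), w)
print("spectral peak at f =", freqs[np.argmax(pgram)])

# Crude timescale-error simulation: jitter the times, keep the values paired.
peaks = []
for _ in range(50):
    t_err = t + rng.normal(0.0, 1.0, t.size)   # assumed 1-unit dating error
    p = lombscargle(t_err, y - y.mean(), w)
    peaks.append(freqs[np.argmax(p)])
print("peak-frequency spread under dating errors:", np.std(peaks))
```

Consistent with the abstract's intuition, jitter of a given size blurs high-frequency peaks more strongly than low-frequency ones, which is what the quantified uncertainty estimates of REDFITmc2 capture.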