970 results for Error in essence


Relevance: 90.00%

Abstract:

Each year, hospitalized patients experience 1.5 million preventable injuries from medication errors, and hospitals incur an additional $3.5 billion in cost (Aspden, Wolcott, Bootman, & Cronenwett, 2007). It is believed that error reporting is one way to learn about factors contributing to medication errors. And yet, an estimated 50% of medication errors go unreported. This period of medication error pre-reporting, with few exceptions, is underexplored. The literature focuses on error prevention and management, but lacks a description of the period of introspection and inner struggle over whether to report an error and the resulting likelihood to report. Reporting makes a nurse vulnerable to reprimand, legal liability, and even threat to licensure. For some nurses this state may invoke a disparity between a person's belief about him or herself as a healer and the undeniable fact of the error.

This study explored the medication error reporting experience. Its purpose was to inform nurses, educators, organizational leaders, and policy-makers about the medication error pre-reporting period, and to contribute to a framework for further investigation. From a better understanding of the factors that contribute to or detract from an individual's likelihood to report an error, interventions can be identified to help the nurse come to a psychologically healthy resolution and to increase the reporting of error, in order to learn from error and reduce the possibility of future similar errors.

The research question was: "What factors contribute to a nurse's likelihood to report an error?" The specific aims of the study were to: (1) describe participant nurses' perceptions of medication error reporting; (2) describe participant explanations of the emotional, cognitive, and physical reactions to making a medication error; (3) identify pre-reporting conditions that make it less likely for a nurse to report a medication error; and (4) identify pre-reporting conditions that make it more likely for a nurse to report a medication error.

A qualitative research study was conducted to explore the medication error experience, and in particular the pre-reporting period, from the perspective of the nurse. A total of 54 registered nurses from a large private free-standing not-for-profit children's hospital in the southwestern United States participated in group interviews. The results describe the experience of the nurse as well as the physical, emotional, and cognitive responses to the realization of the commission of a medication error. The results also reveal factors that make it more and less likely to report a medication error.

It is clear from this study that upon realizing that he or she has made a medication error, a nurse's foremost concern is for the safety of the patient. Fear was also described by each group of nurses, including fear of physician, manager, peer, and family reactions, and of a possible resulting loss of trust. Another universal response was the description of a struggle with guilt, shame, imperfection, blaming oneself, and questioning one's competence.

Relevance: 90.00%

Abstract:

Multi-center clinical trials are very common in the development of new drugs and devices. One concern in such trials is the effect of individual investigational sites enrolling small numbers of patients on the overall result. Can the presence of small centers cause an ineffective treatment to appear effective when treatment-by-center interaction is not statistically significant?

In this research, simulations are used to study the effect that centers enrolling few patients may have on the analysis of clinical trial data. A multi-center clinical trial with 20 sites is simulated to investigate the effect of a new treatment in comparison to a placebo. Twelve of the 20 investigational sites are considered small, each enrolling fewer than four patients per treatment group. Three clinical trials are simulated, with sample sizes of 100, 170, and 300. The simulated data are generated with various characteristics, one in which the treatment should be considered effective and another in which it is not. Qualitative interactions are also produced within the small sites to further investigate the effect of small centers under various conditions.

Standard analysis of variance methods and the "sometimes-pool" testing procedure are applied to the simulated data. One model investigates treatment effect, center effect, and treatment-by-center interaction; another investigates treatment effect alone. These analyses are used to determine the power to detect treatment-by-center interactions and the probability of type I error.

We find it is difficult to detect treatment-by-center interactions when only a few investigational sites enrolling a limited number of patients participate in the interaction. However, we find no increased risk of type I error in these situations. In a pooled analysis, when the treatment is not effective, the probability of finding a significant treatment effect in the absence of a significant treatment-by-center interaction is well within standard limits of type I error.
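As an illustration of the simulation-plus-"sometimes-pool" analysis described above, here is a minimal Python sketch using a two-way ANOVA; the center sizes, effect sizes, variance components, random seed, and the 0.10 interaction cutoff are illustrative assumptions, not the study's actual parameters.

```python
# Minimal sketch: 20-center trial with 12 small sites, full and pooled ANOVA.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(42)

def simulate_trial(treatment_effect=0.0, n_small=12, n_large=8,
                   small_n=3, large_n=15):
    """One trial: 20 centers, 12 'small' sites with <4 patients per arm."""
    rows = []
    for c in range(n_small + n_large):
        n_per_arm = small_n if c < n_small else large_n
        center_shift = rng.normal(0, 0.5)          # random center effect
        for arm, shift in (("placebo", 0.0), ("active", treatment_effect)):
            y = rng.normal(center_shift + shift, 1.0, size=n_per_arm)
            rows += [{"center": f"c{c}", "arm": arm, "y": v} for v in y]
    return pd.DataFrame(rows)

df = simulate_trial(treatment_effect=0.5)
# Full model: treatment, center, and treatment-by-center interaction.
full = smf.ols("y ~ C(arm) * C(center)", data=df).fit()
tab = anova_lm(full, typ=2)
print(tab)

# "Sometimes-pool": if the interaction is not significant (here p > 0.10),
# refit the pooled model that investigates the treatment effect alone.
p_int = tab.loc["C(arm):C(center)", "PR(>F)"]
if p_int > 0.10:
    pooled = smf.ols("y ~ C(arm)", data=df).fit()
    print(anova_lm(pooled, typ=2))
```

Repeating this over many simulated trials, with `treatment_effect=0.0` for the type I error runs, would reproduce the kind of power and error-rate estimates the study reports.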

Relevance: 90.00%

Abstract:

A new site with Lateglacial palaeosols covered by 0.8-2.4 m of aeolian sand is presented. The buried soils were subjected to multidisciplinary analyses (pedology, micromorphology, geochronology, dendrology, palynology, macrofossils). The buried soil cover comprises a catena from relatively dry ('Nano'-Podzol, Arenosol) via moist (Histic Gleysol, Gleysol) to wet conditions (Histosol). The dry soils are similar to the so-called Usselo soil described from sites in NW Europe and central Poland. The buried soil surface covers ca. 3.4 km². Pollen analyses date this surface to the late Allerød. Due to possible contamination by younger carbon, the radiocarbon dates are too young. OSL dates indicate that the covering by aeolian sands most probably occurred during the Younger Dryas. Botanical analyses enable the reconstruction of a vegetation pattern typical of the late Allerød. Large wooden remains of pine and birch were recorded.

Relevance: 90.00%

Abstract:

During Ocean Drilling Program (ODP) Leg 189, five sites were drilled in the Tasmanian Seaway with the objective of constraining the paleoceanographic implications of the separation of Australia from Antarctica and elucidating paleoceanographic developments throughout the Neogene (Shipboard Scientific Party, 2001a, doi:10.2973/odp.proc.ir.189.101.2001). Sediments ranged from Cretaceous to Quaternary in age and provided the opportunity to describe the paleoenvironments in the Tasman Seaway prior to, during, and after the separation of Australia and Antarctica. This study focuses on the post-separation distribution of calcareous nannofossils through the Miocene. Miocene sediments were recovered at all five Leg 189 sites, and four of these sites were studied in detail to determine the calcareous nannofossil biostratigraphy. Hole 1168A, located on the western Tasmanian margin, contains a fairly continuous Miocene record and could be easily zoned using the Okada and Bukry (1980, doi:10.1016/0377-8398(80)90016-X) zonation. Sediments from Hole 1169A, located on the western South Tasman Rise, were not included in this study, as they were highly disturbed and unsuitable for further analysis (Shipboard Scientific Party, 2001c, doi:10.2973/odp.proc.ir.189.104.2001). Holes 1170A, 1171A, and 1171C are located on the South Tasman Rise south of the modern Subtropical Front (STF). They revealed incomplete Miocene sequences interrupted by early Miocene and late Miocene hiatuses and could only be roughly zoned using the Okada and Bukry zonation. Similarly, Hole 1172A, located on the East Tasman Plateau, contains a Miocene sequence with hiatuses in the early and late Miocene and could likewise only be roughly zoned. This study aims to improve calcareous nannofossil biostratigraphic resolution in this sector of the mid- to high southern latitudes, presenting the abundance, preservation, and stratigraphic distribution of calcareous nannofossils through the Miocene and focusing mainly on biozonal assignment.

Relevance: 90.00%

Abstract:

In recent decades, there has been increasing interest in systems comprised of several autonomous mobile robots and, as a result, substantial development in the field of Artificial Intelligence, especially in Robotics. Several studies in the literature focus on the creation of intelligent machines and devices capable of imitating the functions and movements of living beings. Multi-Robot Systems (MRS) can often deal with tasks that are difficult, if not impossible, to accomplish with a single robot. In the context of MRS, one of the main challenges is the need to control, coordinate, and synchronize the operation of multiple robots to perform a specific task. This requires the development of new strategies and methods which allow us to obtain the desired system behavior in a formal and concise way.

This PhD thesis studies the coordination of multi-robot systems and, in particular, addresses the problem of the distribution of heterogeneous multi-tasks. The main interest in these systems is to understand how, from simple rules inspired by the division of labor in social insects, a group of robots can perform tasks in an organized and coordinated way. We are mainly interested in truly distributed or decentralized solutions in which the robots themselves, autonomously and individually, select a particular task so that all tasks are optimally distributed. In general, to distribute the multi-tasks among a team of robots, the robots have to synchronize their actions and exchange information. Under this approach we can speak of multi-task selection instead of multi-task assignment, meaning that the agents or robots select the tasks instead of being assigned a task by a central controller. The key element in these algorithms is the estimation of the stimuli and the adaptive update of the thresholds: each robot performs this estimate locally, depending on the load or the number of pending tasks to be performed. In addition, each approach is evaluated by introducing noise into the number of pending loads, in order to simulate the robot's error in estimating the real number of pending tasks.

The main contribution of this thesis lies in the approach based on self-organization and the division of labor in social insects. An experimental scenario for the coordination problem among multiple robots, the robustness of the approaches, and the generation of dynamic tasks are presented and discussed. The particular issues studied are the following; a sketch of the threshold model appears after this list.

Threshold models: experiments conducted to test the response threshold model, with the objective of analyzing the system performance index for the problem of the distribution of heterogeneous multi-tasks in multi-robot systems; additive noise was introduced into the number of pending loads, and dynamic tasks were generated over time.

Learning automata methods: experiments to test the learning-automata-based probabilistic algorithms. The approach was tested to evaluate the system performance index, with additive noise and with dynamic task generation, for the same problem of the distribution of heterogeneous multi-tasks in multi-robot systems.

Ant colony optimization: experiments to test the ant-colony-optimization-based deterministic algorithms for achieving the distribution of heterogeneous multi-tasks in multi-robot systems. Here too, the system performance index is evaluated by introducing additive noise and dynamic task generation over time.
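Below is a minimal sketch of a response threshold selector of the kind described above, in the spirit of division-of-labor models from social insects: each robot estimates the stimulus locally from a noisy count of pending loads, responds with probability s^n/(s^n + θ^n), and adapts its thresholds as it works. All parameters, the noise model, and the update constants are illustrative assumptions, not the thesis implementation.

```python
# Minimal sketch: adaptive response thresholds for decentralized task selection.
import numpy as np

rng = np.random.default_rng(0)

def response_probability(stimulus, threshold, n=2):
    """Classic threshold response: P = s^n / (s^n + theta^n)."""
    return stimulus**n / (stimulus**n + threshold**n)

n_robots, n_task_types = 8, 3
thresholds = rng.uniform(5, 15, size=(n_robots, n_task_types))
pending = np.array([20.0, 5.0, 10.0])           # pending loads per task type

for step in range(100):
    # Each robot estimates the stimulus locally; additive noise simulates
    # its error in estimating the real number of pending tasks.
    noisy = np.clip(pending + rng.normal(0, 2, size=n_task_types), 0, None)
    for r in range(n_robots):
        p = response_probability(noisy, thresholds[r])
        task = np.argmax(p)                      # robot selects, not assigned
        if rng.random() < p[task] and pending[task] > 0:
            pending[task] -= 1                   # robot works one load off
            # Adaptive update: doing a task lowers its threshold (learning),
            # idleness on the others raises theirs (forgetting).
            thresholds[r, task] = max(1.0, thresholds[r, task] - 0.5)
            others = [t for t in range(n_task_types) if t != task]
            thresholds[r, others] = np.minimum(20.0, thresholds[r, others] + 0.1)
    # Dynamic task generation over time, as in the experiments described.
    if step % 10 == 0:
        pending[rng.integers(n_task_types)] += 5

print("remaining loads:", pending)
```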

Relevance: 90.00%

Abstract:

This paper focuses on the general problem of coordinating multiple robots and, more specifically, addresses the self-election of heterogeneous specialized tasks by autonomous robots. We focus on a distributed or decentralized approach, as we are particularly interested in solutions where the robots themselves, autonomously and individually, are responsible for selecting a particular task so that all the existing tasks are optimally distributed and executed. In this regard, we have established an experimental scenario to solve the corresponding multi-task distribution problem, and we propose solutions using two different approaches: Ant Colony Optimization-based deterministic algorithms and Learning Automata-based probabilistic algorithms (a sketch of the latter follows below). We have evaluated the robustness of the algorithms, perturbing the number of pending loads to simulate the robot's error in estimating the real number of pending tasks, and also generating loads dynamically over time. The paper ends with a critical discussion of the experimental results.
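As a sketch of the Learning Automata side, here is a linear reward-inaction (L_RI) selector: the probability of the chosen task is reinforced only when the choice did useful work. The reward signal and step size are illustrative assumptions, not the paper's exact scheme.

```python
# Minimal sketch: learning-automata-style probabilistic task selection (L_RI).
import numpy as np

rng = np.random.default_rng(1)
n_tasks = 4
probs = np.full(n_tasks, 1.0 / n_tasks)   # action probability vector
pending = np.array([12.0, 3.0, 8.0, 5.0]) # pending loads per task
alpha = 0.1                               # reward step size

for step in range(200):
    task = rng.choice(n_tasks, p=probs)
    # Reward if the chosen task still had pending load (useful work done).
    if pending[task] > 0:
        pending[task] -= 1
        # Linear reward-inaction: reinforce the chosen action, shrink the
        # others proportionally; do nothing on penalty.
        probs[task] += alpha * (1.0 - probs[task])
        others = np.arange(n_tasks) != task
        probs[others] *= (1.0 - alpha)
    probs /= probs.sum()                  # guard against numeric drift

print("final selection probabilities:", probs.round(3))
```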

Relevance: 90.00%

Abstract:

This paper focuses on the general problem of coordinating multi-robot systems and, more specifically, addresses the self-election of heterogeneous and specialized tasks by autonomous robots. In this regard, we experiment with two different biologically inspired techniques based chiefly on self-organization and emergence, applying response threshold models as well as ant colony optimization (a sketch of the latter follows below). Under this approach one can speak of multi-task selection instead of multi-task allocation, meaning that the agents or robots select the tasks instead of being assigned a task by a central controller. The key element in these algorithms is the estimation of the stimuli and the adaptive update of the thresholds: each robot performs this estimate locally, depending on the load or the number of pending tasks to be performed. We have evaluated the robustness of the algorithms, perturbing the number of pending loads to simulate the robot's error in estimating the real number of pending tasks, and also generating loads dynamically over time. The paper ends with a critical discussion of the experimental results.
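The following is a rough, ant-colony-flavoured sketch of a deterministic selector: each robot greedily picks the task maximizing pheromone times its own noisy demand estimate, with pheromone deposit and evaporation closing the loop. It is an assumption-laden illustration of the idea, not the paper's algorithm.

```python
# Minimal sketch: pheromone-weighted deterministic task selection.
import numpy as np

rng = np.random.default_rng(2)
n_robots, n_tasks = 6, 3
pheromone = np.ones(n_tasks)
pending = np.array([15.0, 6.0, 9.0])
rho, deposit = 0.1, 0.5                    # evaporation rate, deposit amount

for step in range(100):
    for r in range(n_robots):
        # Each robot forms its own noisy local estimate of pending loads,
        # simulating its error in estimating the real number of tasks.
        est = np.clip(pending + rng.normal(0, 1.5, n_tasks), 0, None)
        # Deterministic rule: argmax of pheromone * estimated demand.
        task = int(np.argmax(pheromone * est))
        if pending[task] > 0:
            pending[task] -= 1
            pheromone[task] += deposit     # reinforce useful choices
    pheromone *= (1.0 - rho)               # evaporation

print("remaining loads:", pending)
```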

Relevance: 90.00%

Abstract:

This paper focuses on the general problem of coordinating multiple robots and, more specifically, addresses the self-selection of heterogeneous specialized tasks by autonomous robots. We focus on a distributed or decentralized approach, as we are particularly interested in solutions where the robots themselves, autonomously and individually, are responsible for selecting a particular task so that all the existing tasks are optimally distributed and executed. In this regard, we have established an experimental scenario to solve the corresponding multi-task distribution problem, and we propose solutions using two different approaches: Response Threshold Models as well as Learning Automata-based probabilistic algorithms. We have evaluated the robustness of the algorithms, perturbing the number of pending loads to simulate the robot's error in estimating the real number of pending tasks, and also generating loads dynamically over time. The paper ends with a critical discussion of the experimental results.

Relevance: 90.00%

Abstract:

The purpose of this thesis is the implementation of efficient grid adaptation methods based on the adjoint equations within the framework of finite volume methods (FVM) for unstructured grid solvers. The adjoint-based methodology aims at adapting grids to improve the accuracy of a functional output of interest, for example the aerodynamic drag or lift. The methodology rests on a posteriori functional error estimation using the adjoint/dual-weighted residual (DWR) method, in which the error in a functional output can be directly related to local residual errors of the primal solution through the adjoint variables. These variables are obtained by solving the corresponding adjoint problem for the chosen functional. The common approach to introducing the DWR method within the FVM framework involves the use of an auxiliary embedded grid obtained by uniform refinement of the initial mesh. The storage of this mesh demands high computational resources, i.e., over an order of magnitude more memory relative to the initial problem for 3D cases. In this thesis, an alternative methodology for adapting the grid is proposed. Specifically, the DWR approach for error estimation is reformulated on a coarser mesh level, using the τ-estimation method to approximate the truncation errors that enter the DWR estimate. An output-based adaptive algorithm is then designed in such a way that the basic ingredients of the standard adjoint method are retained but the associated computational cost is significantly reduced. The standard and the newly proposed adjoint-based adaptive methodologies have been incorporated into a finite volume flow solver commonly used in the European aeronautical industry, and the influence of the different numerical parameters involved in the algorithm has been investigated. Finally, the proposed method is compared against other grid adaptation approaches, and its computational efficiency is demonstrated on a series of representative aeronautical test cases.
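To illustrate the DWR recipe itself (not the thesis's finite-volume/τ-estimation variant), here is a minimal 1D sketch: the adjoint of a point-value functional weights local residuals, yielding both a functional error estimate and per-cell refinement indicators. The Poisson problem and the noise standing in for discretization error are illustrative assumptions.

```python
# Minimal sketch: dual-weighted residual (DWR) error indicators in 1D.
import numpy as np

n = 41                                    # coarse grid
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

# -u'' = f with u(0)=u(1)=0, discretized with the standard 3-point stencil.
A = (np.diag(2.0 * np.ones(n - 2)) - np.diag(np.ones(n - 3), 1)
     - np.diag(np.ones(n - 3), -1)) / h**2
f = np.pi**2 * np.sin(np.pi * x[1:-1])
u = np.linalg.solve(A, f)

# Output functional J(u) = u(0.5); its derivative picks the midpoint node.
g = np.zeros(n - 2)
g[(n - 2) // 2] = 1.0
z = np.linalg.solve(A.T, g)               # adjoint solution

# Residual of a perturbed (e.g. under-resolved) primal solution; the
# injected noise merely stands in for discretization error.
u_approx = u + 1e-3 * np.sin(5 * np.pi * x[1:-1])
residual = f - A @ u_approx

eta = np.abs(z * residual)                # local DWR error indicators
print("estimated functional error:", abs(z @ residual))
print("cells flagged for refinement:", np.argsort(eta)[-5:])
```

For this linear problem and linear functional, z @ residual recovers the functional error exactly; the thesis's contribution is approximating such residuals cheaply via τ-estimation instead of an embedded fine grid.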

Relevance: 90.00%

Abstract:

In this letter, we propose a novel method for unsupervised change detection (CD) in multitemporal satellite images by locally applying the relative dimensionless global error in synthesis index (ERGAS, from the French Erreur Relative Globale Adimensionnelle de Synthèse). To obtain the change image, the index is calculated over a pixel neighborhood (3x3 window), processing all the available spectral bands simultaneously. To obtain the binary change masks, six thresholding methods are selected. A comparison between the proposed method and the change vector analysis method is reported. The CD accuracy shown in the experimental results demonstrates the effectiveness of the proposed method.
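A minimal sketch of the local ERGAS change image described above: the index is computed in a sliding 3x3 window over all bands of the two co-registered dates. The simple global threshold at the end stands in for the six thresholding methods compared in the letter; the data and cutoff are placeholders.

```python
# Minimal sketch: local ERGAS over a 3x3 window for change detection.
import numpy as np

def local_ergas(img1, img2, win=3, ratio=1.0):
    """img1, img2: (bands, rows, cols) co-registered multitemporal images."""
    bands, rows, cols = img1.shape
    r = win // 2
    change = np.zeros((rows, cols))
    for i in range(r, rows - r):
        for j in range(r, cols - r):
            w1 = img1[:, i - r:i + r + 1, j - r:j + r + 1]
            w2 = img2[:, i - r:i + r + 1, j - r:j + r + 1]
            mse = ((w1 - w2) ** 2).mean(axis=(1, 2))     # per-band MSE
            mu = w2.mean(axis=(1, 2)) + 1e-12            # per-band mean
            # ERGAS = 100 * (h/l) * sqrt(mean_k(RMSE_k^2 / mu_k^2))
            change[i, j] = 100.0 * ratio * np.sqrt((mse / mu**2).mean())
    return change

# Usage with random stand-ins for two co-registered acquisitions:
rng = np.random.default_rng(3)
t1 = rng.uniform(0, 255, (4, 64, 64))
t2 = t1 + rng.normal(0, 5, t1.shape)
mask = local_ergas(t1, t2) > 30.0          # crude global threshold
```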

Relevance: 90.00%

Abstract:

Purpose: In this paper we study all settlements shown on the map of the Province of Madrid, sheet number 1 of the AGE (Atlas Geográfico de España of Tomas Lopez, 1804), and their correspondence with current settlements. This map is divided into two zones: Madrid and Almonacid de Zorita.

Method: The steps followed in the methodology are as follows (a sketch of step 4 appears below):
1. Georeference the maps in a latitude and longitude framework, moving the historical longitude origin to the origin longitude of modern cartography.
2. Digitize all population settlements or cities (97 in Madrid and 42 in Almonacid de Zorita).
3. Identify the historic settlements or cities corresponding with current ones.
4. If the maps have the same orientation and scale, replace the coordinate transformation of historical settlements with a translation in latitude and longitude equal to the mean offset computed over all ancient map points and their modern counterparts.
5. Calculate the absolute accuracy of the two maps.
6. Draw the settlement accuracy in the GIS.

Result: The AGE settlements were found to correspond well with current ones; only 27 settlements were lost in Madrid and 2 in Almonacid. The average accuracy is 2.3 km for Madrid and 5.7 km for Almonacid de Zorita.

Discussion & Conclusion: The final accuracy map shows that the error is smaller in the middle of the map. This study highlights the great work done by Tomas Lopez in producing this map without fieldwork, and demonstrates the great value of his work in the history of cartography.
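A minimal sketch of step 4 and the accuracy computation: estimate a pure translation as the mean latitude/longitude offset over homologous settlements, then report per-point residuals in km using a flat-earth approximation. The coordinates are made-up placeholders, not the digitized AGE points.

```python
# Minimal sketch: mean-offset translation and absolute accuracy per point.
import numpy as np

hist = np.array([[40.42, -3.70], [40.48, -3.36], [40.31, -3.73]])   # lat, lon
modern = np.array([[40.41, -3.68], [40.47, -3.35], [40.30, -3.71]])

shift = (modern - hist).mean(axis=0)       # mean translation (step 4)
corrected = hist + shift

# Residual distance per settlement, in km (~111 km per degree of latitude,
# longitude scaled by cos(lat)): a flat-earth approximation (step 5).
dlat = (modern - corrected)[:, 0] * 111.0
dlon = (modern - corrected)[:, 1] * 111.0 * np.cos(np.radians(modern[:, 0]))
accuracy_km = np.hypot(dlat, dlon)
print("mean absolute accuracy: %.2f km" % accuracy_km.mean())
```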

Relevance: 90.00%

Abstract:

The assessment of glacier thickness is one of the most widespread applications of radioglaciology and is the basis for estimating glacier volume. The accuracy of the ice-thickness measurements, the distribution of profiles over the glacier, and the accuracy of the delineation of the glacier boundary are the most important factors determining the error in the evaluation of glacier volume. The aim of this study is to obtain an accurate estimate of the error incurred in estimating glacier volume from GPR-retrieved ice-thickness data.
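One way to make such an error estimate concrete is Monte Carlo propagation; the sketch below perturbs a thickness grid and the outlined area and reads the volume spread off the ensemble. The grid, the 3 m thickness error, and the 2% boundary-area error are assumptions for illustration only; they are not the study's GPR data or its actual error model.

```python
# Minimal sketch: Monte Carlo propagation of thickness and boundary errors.
import numpy as np

rng = np.random.default_rng(4)
cell_area = 25.0 * 25.0                      # m^2 (25 m grid), assumed
thickness = rng.uniform(20, 120, (80, 80))   # stand-in interpolated thickness, m
inside = thickness > 30                      # stand-in glacier outline

sigma_h = 3.0        # thickness measurement error (m), assumed
sigma_area = 0.02    # 2% relative boundary-delineation error, assumed

volumes = []
for _ in range(1000):
    h = thickness + rng.normal(0, sigma_h, thickness.shape)
    # Crude boundary-error model: scale the outlined area by a few percent.
    area_factor = 1.0 + rng.normal(0, sigma_area)
    volumes.append((h * inside).sum() * cell_area * area_factor)

volumes = np.array(volumes)
print("volume = %.3e m^3, std = %.2e m^3" % (volumes.mean(), volumes.std()))
```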

Relevance: 90.00%

Abstract:

Basic engineering skills are not the only key to professional development, particularly as engineering problems become ever more complex and multifaceted, requiring the implementation of larger multidisciplinary teams, in many cases working in an international context and in a continuously evolving environment. Other outcomes, sometimes referred to as professional skills, are therefore also necessary for our students, as most universities are already aware. In this study we methodically analyze the main strategies for the promotion of professional skills, mainly linked to actions that directly affect students or teachers (and teaching methodologies) and that take advantage of the environment and available resources. From an initial list of 51 strategies (in essence aimed at promoting different drivers of change linked to students, teachers, environment, and resources), we focus on the 11 drivers of change considered most important after an initial evaluation. Subsequently, a systematic analysis of the typical problems linked to these main drivers of change enables us to identify and formulate 12 major, usually recurring and unsolved, problems. After selecting these typical problems, we put forward 25 different solutions for short-term action and discuss their effects, bearing in mind our team's experience together with information from studies carried out by numerous teaching staff at other universities.

Relevance: 90.00%

Abstract:

The performance of an amperometric biosensor, consisting of a subcutaneously implanted miniature (0.29 mm diameter, 5 × 10⁻⁴ cm² mass-transporting area) glucose electrode with a 90 s 10-90% rise/decay time, and an on-the-skin electrocardiogram Ag/AgCl electrode, was tested in an unconstrained, naturally diabetic, brittle, type I, insulin-dependent chimpanzee. The chimpanzee was trained to wear on her wrist a small electronic package and to present her heel for capillary blood samples. In five sets of measurements, averaging 5 h each, 82 capillary blood samples were assayed, their concentrations ranging from 35 to 400 mg/dl. The current readings were translated to blood glucose concentration by assaying, at t = 1 h, one blood sample for each implanted sensor. The rms error in the correlation between the sensor-measured glucose concentration and that in capillary blood was 17.2%, 4.9% above the intrinsic 12.3% rms error of the Accu-Chek II reference, through which the illness of the chimpanzee was routinely managed. Linear regression analysis of the data points taken at t > 1 h yielded the relationship (Accu-Chek) = 0.98 × (implanted sensor) + 4.2 mg/dl, r² = 0.94. The capillary blood and subcutaneous glucose concentrations were statistically indistinguishable when the rate of change was less than 1 mg/(dl·min). However, when the rate of decline exceeded 1.8 mg/(dl·min) after insulin injection, the subcutaneous glucose concentration was transiently higher.
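A minimal sketch of the one-point calibration and the rms-% metric described above: the sensor current at t = 1 h is pinned to a single capillary assay, and all other readings are scaled by the resulting factor. The numbers are made-up placeholders, not the chimpanzee data.

```python
# Minimal sketch: one-point calibration of sensor current to blood glucose.
import numpy as np

t = np.array([0.5, 1.0, 2.0, 3.0, 4.0])              # hours
current_nA = np.array([8.2, 10.1, 14.9, 12.3, 9.8])  # sensor current
ref = np.array([101.0, 126.0, 180.0, 152.0, 119.0])  # capillary assays, mg/dl

k = ref[t == 1.0][0] / current_nA[t == 1.0][0]       # mg/dl per nA, one point
glucose_est = k * current_nA

# Percent rms error against the reference, as quoted in the abstract.
rms_pct = 100.0 * np.sqrt(np.mean(((glucose_est - ref) / ref) ** 2))
print("rms error vs reference: %.1f%%" % rms_pct)
```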

Relevance: 90.00%

Abstract:

We studied the global and local ℳ-Z relation based on the first data available from the CALIFA survey (150 galaxies). This survey provides integral field spectroscopy over the complete optical extent of each galaxy (up to 2-3 effective radii), with a resolution high enough to separate individual H II regions and/or aggregations; about 3000 individual H II regions have been detected. The spectra cover the wavelength range between [OII]3727 and [SII]6731, with a signal-to-noise ratio sufficient to derive the oxygen abundance and star-formation rate associated with each region. In addition, we computed the integrated and spatially resolved stellar masses (and surface densities) based on SDSS photometric data. We explore the relations between stellar mass, oxygen abundance, and star-formation rate using this dataset. We derive a tight relation between the integrated stellar mass and the gas-phase abundance, with a dispersion lower than the one already reported in the literature (σ_Δlog(O/H) = 0.07 dex). Indeed, this dispersion is only slightly higher than the typical error in our oxygen abundances. However, we found no secondary relation with the star-formation rate beyond the one induced by the primary relation of this quantity with the stellar mass. The analysis of our sample of ~3000 individual H II regions confirms (i) a local mass-metallicity relation and (ii) the lack of a secondary relation with the star-formation rate. The same analysis was performed, with similar results, for the specific star-formation rate. Our results agree with the scenario in which gas recycling in galaxies, both locally and globally, is much faster than other typical timescales, such as that of gas accretion by inflow and/or metal loss due to outflows. In essence, late-type/disk-dominated galaxies seem to be in a quasi-steady state, with a behavior similar to the one expected from an instantaneous recycling/closed-box model.
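As a small illustration of how such a dispersion is quantified, the sketch below fits a smooth mass-metallicity trend and quotes the rms of the residuals. The mock data and the polynomial form are illustrative assumptions, not the CALIFA fit.

```python
# Minimal sketch: fit an M-Z trend and measure the residual dispersion.
import numpy as np

rng = np.random.default_rng(5)
log_mass = rng.uniform(9.0, 11.5, 150)
# Mock abundances: a saturating trend plus 0.07 dex intrinsic scatter.
oh = 8.7 - 0.02 * (log_mass - 11.5)**2 + rng.normal(0, 0.07, log_mass.size)

coeffs = np.polyfit(log_mass, oh, deg=2)     # simple polynomial M-Z fit
residuals = oh - np.polyval(coeffs, log_mass)
print("sigma_dlog(O/H) = %.3f dex" % residuals.std())
```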