942 results for errors and erasures decoding
Abstract:
Analysis of the pre-Catalan toponymy of Eivissa and Formentera, based on a fairly broad corpus of present-day place names, and a review of the treatment given to it by the
Abstract:
The aim of this study is to describe a newly implemented haemovigilance system in a general university hospital. We present a series of short cases, highlighting particular aspects of the reports, and an overview of all reported incidents between 1999 and 2001. Incidents related to the transfusion of blood products were reported by the clinicians using a standard preformatted form giving a synopsis of the incident. After analysis, we distinguished, on the one hand, transfusion reactions, i.e. transfusions that produced signs or symptoms, and, on the other hand, incidents involving management errors and/or dysfunctions. Over 3 years, 233 incidents were reported, corresponding to 4.2 events per 1000 blood products delivered. Of the 233, 198 (85%) were acute transfusion reactions and 35 (15%) were management errors and/or dysfunctions. Platelet units gave rise to statistically significantly (P < 0.001) more transfusion reactions (10.7 per thousand) than red blood cells (3.5 per thousand) and fresh frozen plasma (0.8 per thousand), particularly febrile nonhaemolytic transfusion reactions and allergic reactions. A detailed analysis of some of the transfusion incident reports revealed complex deviations and/or failures of the procedures in place in the hospital, allowing the implementation of corrective and preventive measures. Thus, the haemovigilance system in place at the Centre Hospitalier Universitaire Vaudois (CHUV) appears to constitute an excellent instrument for monitoring the safety of blood transfusion.
Abstract:
The speed of fault isolation is crucial for the design and reconfiguration of fault-tolerant control (FTC). In this paper the fault isolation problem is stated as a constraint satisfaction problem (CSP) and solved using constraint propagation techniques, refining the uncertainty space of the interval parameters. In comparison with approaches based on adaptive observers, the major advantage of the presented method is that isolation is fast even when uncertainty in parameters, measurements and model errors is taken into account, and without the monotonicity assumption. A case study of a nonlinear dynamic system illustrates the proposed approach.
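The idea of isolating faults by pruning interval parameter hypotheses can be sketched minimally. Everything below (the single-gain model, the intervals, the measurements) is invented for illustration and is not the paper's algorithm; it only shows the generic consistency test in which each fault hypothesis assigns the parameter an interval that measurements progressively refine or refute:

```python
# Hypothetical sketch: fault isolation by interval constraint propagation.
# Model (invented): y = a * u, with uncertain gain a in an interval.
# Each fault hypothesis carries its own prior interval for a; each
# measurement (u, y) with uncertainty eps constrains a to [(y-eps)/u, (y+eps)/u].

def refine(interval, u, y, eps):
    """Intersect the parameter interval with the set consistent with (u, y)."""
    a_lo, a_hi = interval
    lo = max(a_lo, (y - eps) / u)
    hi = min(a_hi, (y + eps) / u)
    return (lo, hi) if lo <= hi else None  # None = hypothesis refuted

# Prior intervals for the nominal mode and two fault modes (invented).
hypotheses = {"nominal": (0.9, 1.1), "fault_1": (0.4, 0.6), "fault_2": (1.4, 1.6)}

# Simulated measurements from a system whose true gain is 0.5 (fault_1),
# with measurement uncertainty eps = 0.05.
data = [(1.0, 0.52), (2.0, 0.98), (4.0, 2.04)]
eps = 0.05

for name in list(hypotheses):
    box = hypotheses[name]
    for u, y in data:
        box = refine(box, u, y, eps)
        if box is None:
            break
    hypotheses[name] = box

consistent = [name for name, box in hypotheses.items() if box is not None]
```

With these invented numbers, the intervals of `nominal` and `fault_2` become empty after the first measurement and only `fault_1` survives, so isolation is reached after a handful of samples without waiting for an observer to converge.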
Abstract:
In vivo dosimetry is a way to verify the radiation dose delivered to the patient by measuring the dose, generally during the first fraction of the treatment. It is the only dose-delivery control based on a measurement performed during the treatment. In today's radiotherapy practice, the dose delivered to the patient is planned using 3D dose calculation algorithms and volumetric images representing the patient. Due to the high accuracy and precision necessary in radiation treatments, national and international organisations such as the ICRU and AAPM recommend the use of in vivo dosimetry; it is also mandatory in some countries, such as France. Various in vivo dosimetry methods have been developed over the past years. These methods are point, line, plane or 3D dose controls. 3D in vivo dosimetry provides the most information about the dose delivered to the patient, compared with 1D and 2D methods. However, to our knowledge, it is generally not yet applied routinely to patient treatments. The aim of this PhD thesis was to determine whether it is possible to reconstruct the 3D delivered dose using transmitted-beam measurements in the context of narrow beams. An iterative dose reconstruction method has been described and implemented. The iterative algorithm includes a simple 3D dose calculation algorithm based on the convolution/superposition principle. The methodology was applied to narrow beams produced by a conventional 6 MV linac. The transmitted dose was measured using an array of ion chambers, so as to simulate the linear nature of a tomotherapy detector. We showed that the iterative algorithm converges quickly and reconstructs the dose with good agreement (at least 3% / 3 mm locally), which is inside the 5% recommended by the ICRU. Moreover, it was demonstrated on phantom measurements that the proposed method allows us to detect certain set-up errors and interfraction geometry modifications.
We also discussed the limitations of the 3D dose reconstruction for dose-delivery error detection. Afterwards, stability tests of the tomotherapy's built-in onboard MVCT detector were performed in order to evaluate whether such a detector is suitable for 3D in vivo dosimetry. The detector showed short- and long-term stability comparable to that of other imaging devices, such as the EPIDs also used for in vivo dosimetry. Subsequently, a methodology for dose reconstruction using the tomotherapy MVCT detector is proposed in the context of static irradiations. This manuscript is composed of two articles and a script providing further information related to this work. In the latter, the first chapter introduces the state of the art of in vivo dosimetry and adaptive radiotherapy, and explains why we are interested in performing 3D dose reconstructions. In chapter 2, the dose calculation algorithm implemented for this work is reviewed, with a detailed description of the physical parameters needed for calculating 3D absorbed dose distributions. The tomotherapy MVCT detector used for transit measurements and its characteristics are described in chapter 3. Chapter 4 contains a first article, entitled '3D dose reconstruction for narrow beams using ion chamber array measurements', which describes the dose reconstruction method and presents tests of the methodology on phantoms irradiated with 6 MV narrow photon beams. Chapter 5 contains a second article, 'Stability of the Helical TomoTherapy HiArt II detector for treatment beam irradiations'. A dose reconstruction process specific to the use of the tomotherapy MVCT detector is presented in chapter 6. A discussion and perspectives of the PhD thesis are presented in chapter 7, followed by a conclusion in chapter 8. The tomotherapy treatment device is described in appendix 1, and an overview of 3D conformal and intensity-modulated radiotherapy is presented in appendix 2.
Abstract:
The simultaneous use of multiple transmit and receive antennas can unleash very large capacity increases in rich multipath environments. Although such capacities can be approached by layered multi-antenna architectures with per-antenna rate control, the need for short-term feedback arises as a potential impediment, in particular as the number of antennas, and thus the number of rates to be controlled, increases. What we show, however, is that the need for short-term feedback in fact vanishes as the number of antennas and/or the diversity order increases. Specifically, the rate supported by each transmit antenna becomes deterministic and a sole function of the signal-to-noise ratio, the ratio of transmit to receive antennas, and the decoding order, all of which are either fixed or slowly varying. More generally, we illustrate, through this specific derivation, the relevance of some established random CDMA results to the single-user multi-antenna problem.
Abstract:
Standard methods for the analysis of linear latent variable models often rely on the assumption that the vector of observed variables is normally distributed. This normality assumption (NA) plays a crucial role in assessing optimality of estimates, in computing standard errors, and in designing an asymptotic chi-square goodness-of-fit test. The asymptotic validity of NA inferences when the data deviates from normality has been called asymptotic robustness. In the present paper we extend previous work on asymptotic robustness to a general context of multi-sample analysis of linear latent variable models, with a latent component of the model allowed to be fixed across (hypothetical) sample replications, and with the asymptotic covariance matrix of the sample moments not necessarily finite. We show that, under certain conditions, the matrix $\Gamma$ of asymptotic variances of the analyzed sample moments can be substituted by a matrix $\Omega$ that is a function only of the cross-product moments of the observed variables. The main advantage of this is that inferences based on $\Omega$ are readily available in standard software for covariance structure analysis and do not require computing sample fourth-order moments. An illustration with simulated data in the context of regression with errors in variables is presented.
Abstract:
This paper analyses the correction of errors and mistakes made by students in the foreign language teaching classroom. Its goal is to point out typical correction behaviours in language teaching classrooms in Cape Verde and to raise teachers' awareness of better correction practices.
Abstract:
Errors and fraud currently occur with great frequency worldwide. In Cape Verde, such cases have been gaining ground in the media, with reports involving, among others, the Sociedade Cabo-verdiana de Tabacos, Banco Comercial do Atlântico, Caixa Económica, Câmara Municipal da Ribeira Brava, Associação Sport Club Moreirense, Sociedade de Segurança Industrial, Marítima e Comercial, and the Ministério das Finanças. These situations, unfavourable for any company, stem from careless management of resources and from the values and ethical principles cultivated by individuals. The fraud triangle created by Donald Cressey describes the factors that lead an individual to commit fraudulent acts: motivation, pressure and opportunity. In this sense, internal control emerges as a very important and fundamental tool for mitigating the risks arising from the errors and fraud that are liable to occur in companies. Internal control translates into a set of measures that protect the company's assets and ensure that its objectives are met; however, like any other management tool, it has certain limitations, which can be overcome through the use of basic or alternative internal control procedures. It is essential that an internal control system, besides being implemented, adequate and operational, be maintained and monitored. The case studies of SILMAC, SA and SCT, SA show the importance of internal control in the prevention and detection of errors and fraud, since the frauds committed followed from weaknesses in internal control and from excessive trust placed in employees.
Abstract:
Having sophisticated management systems or ERP (Enterprise Resource Planning) software is not enough for organisations. For these resources to deliver adequate, up-to-date results, the input information must be read automatically, in order to save resources, eliminate errors and ensure quality compliance. For this reason it is important to have automatic identification and data collection tools and services. The main objectives (after introducing the reader to the importance of logistic identification systems in a highly competitive global environment) are to know and understand how the three main technologies on the market work (linear barcodes, two-dimensional barcodes and RFID systems), and to see the state of adoption of each one and its main applications. After this first study, the aim is to compare the three technologies in order to draw future perspectives in the field of auto-identification. Starting from the current situation and the needs of companies, together with the wonderful world that RFID (Radio Frequency Identification) technology seems to open up, the main conclusion reached is that, despite the technical limitations of linear barcodes, they are completely integrated throughout the logistics chain thanks to standardisation and the use of a common language, under the name of GTIN (Global Trade Item Number) symbologies, across the whole supply chain, guaranteeing full traceability of products thanks in part to the management of databases and of the information flow. RFID technology with the EPC (Electronic Product Code) overcomes these limitations, making it the leading candidate to replace the limited barcodes.
Even so, RFID with the EPC will not be an adequate logistic identifier until major barriers are overcome, such as the lack of standardisation and the high cost of implementation.
Abstract:
This communication seeks to draw the attention of researchers and practitioners dealing with forensic DNA profiling analyses to the following question: is a scientist's report, offering support to a hypothesis according to which a particular individual is the source of DNA detected during the analysis of a stain, relevant from the point of view of a Court of Justice? This question relates to skeptical views previously voiced by commentators mainly in the judicial area, but is avoided by a large majority of forensic scientists. Notwithstanding, the pivotal role of this question was recently evoked during the international conference "The hidden side of DNA profiles. Artifacts, errors and uncertain evidence" held in Rome (April 27th to 28th, 2012). Indeed, despite the fact that this conference brought together some of the world's leading forensic DNA specialists, it appeared clearly that a huge gap still exists between the questions lawyers are actually interested in and the answers that scientists deliver to Courts in written reports or during oral testimony. Participants in the justice system, namely lawyers and jurors on the one hand and forensic geneticists on the other, unfortunately speak considerably different languages. It is thus fundamental to address this issue of communication about the results of forensic DNA analyses, and to open a dialogue with practicing non-scientists at large who need to make meaningful use of scientific results to approach and help solve judicial cases. This paper intends to emphasize the timeliness of this topic and to suggest beneficial ways towards a more reasoned use of forensic DNA in criminal proceedings.
Abstract:
OBJECTIVE: Clinical indicators are increasingly used to assess the safety of patient care. In obstetrics, only a few indicators have been validated to date and none is used across specialties. The purpose of this study was to identify and assess, for face and content validity, a group of safety indicators that could be used by the anaesthetists, obstetricians and neonatologists involved in labour and delivery units. MATERIALS AND METHODS: We first conducted a systematic review of the literature to identify potential measures. Indicators were then validated by a panel of 30 experts representing all specialties working in labour and delivery units. We used the Delphi method, an iterative questionnaire-based consensus-seeking technique. Experts rated on a 7-point Likert scale (1 = most representative, 7 = least representative) the soundness of each indicator as a measure of safety and its possible association with errors and complications caused by medical management. RESULTS: We identified 44 potential clinical indicators from the literature. Following the Delphi process, 13 indicators were considered highly representative of safety during obstetrical care (mean score ≤ 2.3). Experts ranked 6 of these indicators as being strongly associated with potential errors and complications. CONCLUSIONS: We identified and validated, for face and content validity, a group of six clinical indicators to measure potentially preventable iatrogenic complications in labour and delivery units.
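The Delphi selection step described above reduces to simple arithmetic on the expert ratings: an indicator is retained when its mean score on the 7-point scale (where 1 is most representative) falls at or below the 2.3 cut-off. The indicator names and ratings below are invented for illustration, not the study's data:

```python
# Toy sketch of the Delphi cut-off step with invented indicators and ratings.
from statistics import mean

ratings = {
    "maternal_death": [1, 1, 2, 1, 2],      # mean 1.4 -> retained
    "blood_transfusion": [2, 3, 2, 2, 1],   # mean 2.0 -> retained
    "minor_incident": [5, 6, 4, 5, 6],      # mean 5.2 -> dropped
}

CUTOFF = 2.3
selected = sorted(name for name, r in ratings.items() if mean(r) <= CUTOFF)
```

Running the sketch keeps the two invented indicators whose mean rating is at most 2.3 and drops the third, mirroring how 13 of the 44 candidates survived in the study.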
Abstract:
New Global Positioning System (GPS) receivers now make it possible to measure a location on earth at high frequency (5 Hz) with centimetric precision using the phase differential positioning method. We studied whether this technique is accurate enough to retrieve basic parameters of human locomotion. Eight subjects walked on an athletics track at four different imposed step frequencies (70-130 steps/min) plus a run at free pace. Differential carrier-phase localization between a fixed base station and the mobile antenna mounted on the walking person was calculated. In parallel, a triaxial accelerometer attached to the low back recorded body accelerations. The different parameters were averaged over 150 consecutive steps of each run for each subject (6000 steps analyzed in total). We observed an almost perfect correlation between average step duration measured by accelerometer and by GPS (r = 0.9998, N = 40). Two important parameters for the calculation of the external work of walking were also analyzed, namely the vertical lift of the trunk and the velocity variation per step. For an average walking speed of 4.0 km/h, the average vertical lift and velocity variation were, respectively, 4.8 cm and 0.60 km/h. The average intra-individual step-to-step variability at a constant speed, which includes GPS errors and the biological variation of gait style, was found to be 24.5% (coefficient of variation) for vertical lift and 44.5% for velocity variation. It is concluded that the GPS technique can provide useful biomechanical parameters for the analysis of an unlimited number of strides in an unconstrained free-living environment.
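The step-to-step variability figures quoted above are coefficients of variation (CV = SD / mean × 100) computed over per-step values. A minimal sketch, with invented per-step vertical-lift values standing in for the study's measurements:

```python
# Minimal sketch: intra-individual step-to-step variability as a coefficient
# of variation, as reported for the vertical lift and per-step velocity
# variation. The per-step values below are invented for illustration.
from statistics import mean, stdev

def coeff_of_variation(values):
    """Coefficient of variation in percent (sample standard deviation)."""
    return stdev(values) / mean(values) * 100.0

vertical_lift_cm = [4.8, 3.6, 5.9, 4.1, 5.5, 4.3]  # one (fake) value per step
cv = coeff_of_variation(vertical_lift_cm)
```

The same function applied to the per-step velocity-variation series would yield the second CV; a step-to-step CV of ~24% for vertical lift thus already folds together GPS error and genuine gait variability, as the abstract notes.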
Abstract:
BACKGROUND: Practicing physicians are faced with many medical decisions daily. These are mainly influenced by personal experience but should also consider patient preferences and the scientific evidence reflected by a constantly increasing number of medical publications and guidelines. With the objective of optimal medical treatment, the concept of evidence-based medicine is founded on these three aspects. Without knowledge of the methodological background, however, there is a high risk of misinterpreting evidence, leading to medical errors and adverse effects. OBJECTIVES: This article explains the concept of systematic error (bias) and its importance. Causes and effects, as well as methods to minimize bias, are discussed. This information should impart a deeper understanding, leading to a better assessment of studies and the implementation of their recommendations in daily medical practice. CONCLUSION: Developed by the Cochrane Collaboration, the risk of bias (RoB) tool is an instrument for assessing the potential for bias in controlled trials. Easy handling, short processing time, high transparency of judgements and an easily comprehensible graphical presentation of findings are among its strengths. The German translation of the RoB tool is published as an attachment to this article. This should facilitate its use by non-experts and, moreover, support evidence-based medical decision-making.
Abstract:
The characterization and categorization of coarse aggregates for use in portland cement concrete (PCC) pavements is a highly refined process at the Iowa Department of Transportation. Over the past 10 to 15 years, much effort has been directed at pursuing direct testing schemes to supplement or replace existing physical testing schemes. Direct testing refers to the process of directly measuring the chemical and mineralogical properties of an aggregate and then attempting to correlate those measured properties with historical performance information (i.e., the field service record). This is in contrast to indirect measurement techniques, which generally attempt to extrapolate the performance of laboratory test specimens to expected field performance. The purpose of this research project was to investigate and refine the use of direct testing methods, such as X-ray analysis and thermal analysis techniques, to categorize carbonate aggregates for use in portland cement concrete. The results of this study indicated that the general testing methods currently used to obtain data for estimating service life tend to be very reliable and have good to excellent repeatability. Several changes to the current techniques were recommended to enhance the long-term reliability of the carbonate database. These changes can be summarized as follows: (a) More stringent limits need to be set on the maximum particle size in the samples subjected to testing. This should help to improve the reliability of all three of the test methods studied during this project. (b) X-ray diffraction testing needs to be refined to incorporate the use of an internal standard. This will help to minimize the influence of sample-positioning errors and will also allow the calculation of the concentrations of the various minerals present in the samples.
(c) Thermal analysis data needs to be corrected for moisture content and clay content prior to calculating the carbonate content of the sample.
Abstract:
This study identifies medication errors and assesses the degree to which these errors are reported by nursing staff in the Intensive Care Medicine Service (SMI) of the Hospital Universitario Doctor Josep Trueta. An observational, descriptive, cross-sectional study will be carried out in the referral hospital of the Girona region during 2013 and 2014. The study subjects will be the nursing professionals and the patients admitted to the unit. The main variables are, on the one hand, the medication error and, on the other, the reporting of the error. Data collection will be based on a self-administered questionnaire given to the nursing staff, consisting of six closed-ended questions, two of which may optionally be answered openly. The SPSS program will be used for the statistical analysis. The results will be obtained through a univariate descriptive analysis. The variable "medication error" will be expressed as the number of cases and per 1,000 patient-days. The remaining variables will be presented as frequencies.