Abstract:
Nanoindentation is a valuable tool for characterization of biomaterials due to its ability to measure local properties in heterogeneous, small or irregularly shaped samples. However, applying nanoindentation to compliant, hydrated biomaterials leads to many challenges including adhesion between the nanoindenter tip and the sample. Although adhesion leads to overestimation of the modulus of compliant samples when analyzing nanoindentation data using traditional analysis techniques, most studies of biomaterials have ignored its effects. This paper demonstrates two methods for managing adhesion in nanoindentation analysis, the nano-JKR force curve method and the surfactant method, through application to two biomedically-relevant compliant materials, poly(dimethyl siloxane) (PDMS) elastomers and poly(ethylene glycol) (PEG) hydrogels. The nano-JKR force curve method accounts for adhesion during data analysis using equations based on the Johnson-Kendall-Roberts (JKR) adhesion model, while the surfactant method eliminates adhesion during data collection, allowing data analysis using traditional techniques. In this study, indents performed in air or water resulted in adhesion between the tip and the sample, while testing the same materials submerged in Optifree Express® contact lens solution eliminated tip-sample adhesion in most samples. Modulus values from the two methods were within 7% of each other, despite different hydration conditions and evidence of adhesion. Using surfactant also did not significantly alter the properties of the tested material, allowed accurate modulus measurements using commercial software, and facilitated nanoindentation testing in fluids. This technique shows promise for more accurate and faster determination of modulus values from nanoindentation of compliant, hydrated biological samples. Copyright 2013 Elsevier Ltd. All rights reserved.
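The JKR relations underlying the nano-JKR force curve method can be sketched numerically. The following is an illustrative sketch, not the authors' analysis code; the tip radius `R`, reduced modulus `E_star`, work of adhesion `w`, and load `P` are hypothetical values of roughly PDMS-like magnitude:

```python
import math

def hertz_contact_radius(P, R, E_star):
    # Hertz (no adhesion): a^3 = 3 P R / (4 E*)
    return (3.0 * P * R / (4.0 * E_star)) ** (1.0 / 3.0)

def jkr_contact_radius(P, R, E_star, w):
    # JKR: a^3 = (3R / 4E*) [P + 3*pi*w*R + sqrt(6*pi*w*R*P + (3*pi*w*R)^2)]
    term = 3.0 * math.pi * w * R
    bracket = P + term + math.sqrt(6.0 * math.pi * w * R * P + term ** 2)
    return (3.0 * R / (4.0 * E_star) * bracket) ** (1.0 / 3.0)

def jkr_pull_off_force(R, w):
    # Tensile force at which the adhesive contact separates
    return -1.5 * math.pi * w * R

# Hypothetical compliant-sample parameters (SI units)
P, R, E_star, w = 1e-6, 5e-6, 2e6, 0.05
a_hertz = hertz_contact_radius(P, R, E_star)
a_jkr = jkr_contact_radius(P, R, E_star, w)
pull_off = jkr_pull_off_force(R, w)
```

At the same load, adhesion enlarges the contact beyond the Hertz prediction, which is why adhesion-blind analysis misestimates the modulus of compliant samples.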
Abstract:
Whether the use of mobile phones is a risk factor for brain tumors in adolescents is currently being studied. Case-control studies investigating this possible relationship are prone to recall error and selection bias. We assessed the potential impact of random and systematic recall error and selection bias on odds ratios (ORs) by performing simulations based on real data from an ongoing case-control study of mobile phones and brain tumor risk in children and adolescents (CEFALO study). Simulations were conducted for two mobile phone exposure categories: regular and heavy use. Our choice of levels of recall error was guided by a validation study that compared objective network operator data with the self-reported amount of mobile phone use in CEFALO. In our validation study, cases overestimated their number of calls by 9% on average and controls by 34%. Cases also overestimated their duration of calls by 52% on average and controls by 163%. The participation rates in CEFALO were 83% for cases and 71% for controls. In a variety of scenarios, the combined impact of recall error and selection bias on the estimated ORs was complex. These simulations are useful for the interpretation of previous case-control studies on brain tumor and mobile phone use in adults as well as for the interpretation of future studies on adolescents.
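The mechanics of such a simulation can be sketched as follows. This is a hypothetical toy model, not the CEFALO simulation code: the log-normal true-use distribution, the heavy-use cutoff, and the assumption that participation is unrelated to exposure are all illustrative; the multiplicative recall factors (52% and 163% overestimation of call duration) and participation rates come from the abstract:

```python
import random

def simulate_or(n=100000, recall_factor_cases=1.52, recall_factor_controls=2.63,
                participation_cases=0.83, participation_controls=0.71,
                heavy_cutoff=100.0, seed=1):
    # Toy model: true monthly call minutes are log-normal, there is no true
    # effect (true OR = 1), cases and controls overestimate their use
    # multiplicatively, and participation is unrelated to exposure.
    rng = random.Random(seed)
    table = {("case", True): 0, ("case", False): 0,
             ("control", True): 0, ("control", False): 0}
    for _ in range(n):
        status = "case" if rng.random() < 0.5 else "control"
        p_participate = participation_cases if status == "case" else participation_controls
        if rng.random() > p_participate:
            continue                       # selection: non-participant
        true_minutes = rng.lognormvariate(4.0, 1.0)
        factor = recall_factor_cases if status == "case" else recall_factor_controls
        reported = true_minutes * factor   # systematic recall error
        table[(status, reported > heavy_cutoff)] += 1
    a, b = table[("case", True)], table[("case", False)]
    c, d = table[("control", True)], table[("control", False)]
    return (a * d) / (b * c)

or_hat = simulate_or()
```

In this toy setup the controls inflate their reported use more than the cases do, so reported heavy use is relatively more common among controls and the estimated OR is biased below 1 even though the true OR is 1.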
Abstract:
OBJECTIVES: To analyse the frequency of and identify risk factors for patient-reported medical errors in Switzerland. The joint effect of risk factors on error-reporting probability was modelled for hypothetical patients. METHODS: A representative population sample of Swiss citizens (n = 1306) was surveyed as part of the Commonwealth Fund's 2010 International Survey of the General Public's Views of their Health Care System's Performance in Eleven Countries. Data on personal background, utilisation of health care, coordination-of-care problems and reported errors were assessed. Logistic regression analysis was conducted to identify risk factors for patients' reports of medical mistakes and medication errors. RESULTS: 11.4% of participants reported at least one error in their care in the previous two years (8% medical errors, 5.3% medication errors). Poor care coordination experiences were frequent: 7.8% reported that test results or medical records had not been available, 17.2% had received conflicting information from care providers, and 11.5% reported that tests were ordered although they had already been done. Age (OR = 0.98, p = 0.014), poor health (OR = 2.95, p = 0.007), utilisation of emergency care (OR = 2.45, p = 0.003), inpatient stay (OR = 2.31, p = 0.010) and poor care coordination (OR = 5.43, p < 0.001) were important predictors of error reporting. For high utilisers of care who combine multiple risk factors, the predicted probability of reporting an error rises to 0.8. CONCLUSIONS: Patient safety remains a major challenge for the Swiss health care system. Beyond its health-related and economic burden, the widespread experience of medical error in some subpopulations also has the potential to erode trust in the health care system as a whole.
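The joint effect quoted for high utilisers follows from combining odds ratios multiplicatively on the odds scale, as in a main-effects logistic model. A sketch; note the 11.4% overall reporting rate is used here as a hypothetical stand-in for the model's fitted intercept, so the resulting number is only illustrative:

```python
def report_probability(baseline_p, odds_ratios):
    # Combine odds ratios multiplicatively on the odds scale, as in a
    # main-effects logistic regression model (no interactions).
    odds = baseline_p / (1.0 - baseline_p)
    for or_i in odds_ratios:
        odds *= or_i
    return odds / (1.0 + odds)

# Hypothetical high utiliser: poor health, emergency care, inpatient stay and
# poor care coordination (ORs from the study), applied to the overall 11.4%
# reporting rate as an illustrative baseline.
p = report_probability(0.114, [2.95, 2.45, 2.31, 5.43])
```

The study's own model (with its fitted intercept and age term) reports probabilities up to about 0.8; this sketch only shows how the joint effect compounds on the odds scale.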
Abstract:
For the development of meniscal substitutes and related finite element models it is necessary to know the mechanical properties of the meniscus and its attachments. Measurement errors can distort the determination of material properties. Therefore, the impact of metrological and geometrical measurement errors on the determination of the linear modulus of human meniscal attachments was investigated. After total differentiation, the errors of the force (+0.10%), attachment deformation (−0.16%) and fibre length (+0.11%) measurements almost cancelled each other. The error of the cross-sectional area determination ranged from 0.00%, gathered from histological slides, up to 14.22%, obtained from digital calliper measurements. Hence, the total measurement error ranged from +0.05% to −14.17%, predominantly driven by the cross-sectional area determination error. Further investigations revealed that the entire cross-section was significantly larger than the load-carrying collagen fibre area. This overestimation of the cross-sectional area led to an underestimation of the linear modulus of up to −36.7%. Additionally, the cross-sections of the collagen-fibre area of the attachments varied significantly, by up to +90%, along their longitudinal axis. The resulting ratio between the collagen fibre area and the histologically determined cross-sectional area ranged between 0.61 for the posterolateral and 0.69 for the posteromedial ligament. The linear modulus of human meniscal attachments can thus be significantly underestimated depending on the method and location of cross-sectional area determination. Hence, we suggest assessing the load-carrying collagen fibre area histologically or, alternatively, using the correction factors proposed in this study.
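The error budget can be reproduced arithmetically. Assuming a linear modulus of the form E = F·L/(A·ΔL), total differentiation gives δE/E = δF/F + δL/L − δA/A − δΔL/ΔL; the sign convention below is inferred from the text:

```python
# Relative errors in percent, as reported in the study. Errors that inflate
# the modulus are positive, those that deflate it are negative.
err_force = +0.10          # force measurement
err_deformation = -0.16    # attachment deformation measurement
err_fibre_length = +0.11   # fibre length measurement

metrological = err_force + err_deformation + err_fibre_length  # -> +0.05 %

err_area_hist = 0.00       # cross-sectional area, histological slides
err_area_calliper = 14.22  # cross-sectional area, digital calliper

total_best = metrological - err_area_hist       # -> +0.05 %
total_worst = metrological - err_area_calliper  # -> -14.17 %
```

The three metrological terms nearly cancel, so the reported range of +0.05% to −14.17% is almost entirely the cross-sectional area term.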
Abstract:
The purpose of this study was (1) to determine the frequency and types of medication errors (MEs), (2) to assess the number of MEs prevented by registered nurses, (3) to assess the consequences of MEs for patients, and (4) to compare the number of MEs reported by a newly developed medication error self-reporting tool with the number reported by the traditional incident reporting system. We conducted a cross-sectional study on MEs in the Cardiovascular Surgery Department of Bern University Hospital in Switzerland. Eligible registered nurses (n = 119) involved in the medication process were included. Data on MEs were collected using an investigator-developed medication error self-reporting tool (MESRT) that asked about the occurrence and characteristics of MEs. Registered nurses were instructed to complete a MESRT at the end of each shift even if there was no ME. All MESRTs were completed anonymously. During the one-month study period, a total of 987 MESRTs were returned. Of the 987 completed MESRTs, 288 (29%) indicated that there had been an ME. Registered nurses reported preventing 49 (5%) MEs. Overall, eight (2.8%) MEs had patient consequences. The high response rate suggests that this new method may be a very effective approach to detect, report, and describe MEs in hospitals.
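The reported percentages are consistent with the following denominators; this is a reading of the abstract (29% and 5% against all returned MESRTs, 2.8% against shifts with an ME), not something it states explicitly:

```python
# Denominators implied by the reported percentages (inferred, not stated)
total_mesrts = 987       # one MESRT per shift
shifts_with_me = 288     # MESRTs indicating an ME
prevented = 49           # MEs reported as prevented by nurses
with_consequences = 8    # MEs with patient consequences

pct_shifts_with_me = 100 * shifts_with_me / total_mesrts   # ~29.2% -> "29%"
pct_prevented = 100 * prevented / total_mesrts             # ~5.0%  -> "5%"
pct_harm = 100 * with_consequences / shifts_with_me        # ~2.8%  -> "2.8%"
```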
Abstract:
Mr. Kubon's project was inspired by the growing need for an automatic, syntactic analyser (parser) of Czech, which could be used in the syntactic processing of large amounts of texts. Mr. Kubon notes that such a tool would be very useful, especially in the field of corpus linguistics, where creating a large-scale "tree bank" (a collection of syntactic representations of natural language sentences) is a very important step towards the investigation of the properties of a given language. The work involved in syntactically parsing a whole corpus in order to get a representative set of syntactic structures would be almost inconceivable without the help of some kind of robust (semi)automatic parser. The need for the automatic natural language parser to be robust increases with the size of the linguistic data in the corpus or in any other kind of text which is going to be parsed. Practical experience shows that apart from syntactically correct sentences, there are many sentences which contain a "real" grammatical error. These sentences may be corrected in small-scale texts, but not generally in the whole corpus. In order to be able to complete the overall project, it was necessary to address a number of smaller problems. These were: 1. the adaptation of a suitable formalism able to describe the formal grammar of the system; 2. the definition of the structure of the system's dictionary containing all relevant lexico-syntactic information, and the development of a formal grammar able to robustly parse Czech sentences from the test suite; 3. filling the syntactic dictionary with sample data allowing the system to be tested and debugged during its development (about 1000 words); 4. the development of a set of sample sentences containing a reasonable amount of grammatical and ungrammatical phenomena covering some of the most typical syntactic constructions being used in Czech. Of these, the development of the formal grammar (task 2) was the main task of the project.
The grammar is of course far from complete (Mr. Kubon notes that it is debatable whether any formal grammar describing a natural language may ever be complete), but it covers the most frequent syntactic phenomena, allowing for the representation of a syntactic structure of simple clauses and also the structure of certain types of complex sentences. The stress was not so much on building a wide-coverage grammar as on the description and demonstration of a method. This method uses a similar approach to that of grammar-based grammar checking. The problem of reconstructing the "correct" form of the syntactic representation of a sentence is closely related to the problem of localisation and identification of syntactic errors. Without a precise knowledge of the nature and location of syntactic errors it is not possible to build a reliable estimation of a "correct" syntactic tree. The incremental way of building the grammar used in this project is also an important methodological issue. Experience from previous projects showed that building a grammar by creating a huge block of metarules is more complicated than the incremental method, which begins with metarules covering the most common syntactic phenomena and adds less important ones later; this is especially advantageous for testing and debugging the grammar. The sample of the syntactic dictionary containing lexico-syntactic information (task 3) now has slightly more than 1000 lexical items representing all classes of words. During the creation of the dictionary it turned out that the task of assigning complete and correct lexico-syntactic information to verbs is a very complicated and time-consuming process which would itself be worth a separate project. The final task undertaken in this project was the development of a method allowing effective testing and debugging of the grammar during the process of its development.
Keeping new and modified rules of the formal grammar consistent with the existing rules is one of the crucial problems of every project aiming at the development of a large-scale formal grammar of a natural language. This method allows for the detection of any discrepancy or inconsistency of the grammar with respect to a test-bed of sentences containing all syntactic phenomena covered by the grammar. This is not only the first robust parser of Czech, but also one of the first robust parsers of a Slavic language. Since Slavic languages display a wide range of common features, it is reasonable to claim that this system may serve as a pattern for similar systems in other languages. To transfer the system to another language it is only necessary to revise the grammar and to change the data contained in the dictionary (but not necessarily the structure of primary lexico-syntactic information). The formalism and methods used in this project can be used in other Slavic languages without substantial changes.
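The grammar-based error-localisation idea can be illustrated with a deliberately tiny toy: check strict agreement constraints, and when they fail, report which constraint was violated, localising the error. This sketch is purely illustrative and reflects nothing of the project's actual formalism or Czech grammar; the feature names and lexicon entries are placeholders:

```python
# Agreement features a strict grammar would enforce (placeholder set)
FEATURES = ("number", "person")

def agreement_violations(subject, verb):
    """Features on which subject and verb disagree. An empty list means the
    strict grammar accepts the pair; a non-empty list localises the error."""
    return [f for f in FEATURES if subject[f] != verb[f]]

# Hypothetical lexicon entries (placeholder lemmas, not real Czech words)
subj = {"lemma": "N1", "number": "pl", "person": 3}
verb = {"lemma": "V1", "number": "sg", "person": 3}  # number clash with subj

errors = agreement_violations(subj, verb)
```

A robust parser built this way can both accept a sentence with a "real" grammatical error and say where the error is, which is the prerequisite the abstract names for estimating the "correct" syntactic tree.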
Abstract:
The first outcome of this project was a synchronous description of the most widely spoken Romani dialect in the Czech and Slovak Republics, aimed at teachers and lecturers of the Romani language. This is intended to serve as a methodological guide for the demonstration of various grammatical phenomena, but may also assist people who want a basic knowledge of the linguistic structure of this neo-Indian language. The grammatical material is divided into 23 chapters, in a sequence which may be followed in teaching or studying. The book includes examples of the grammatical elements, but not exercises or articles. The second work produced was a textbook of Slovak Romani, which is the most detailed in the Czech or Slovak Republics to date. It is aimed at all those interested in active use of the Romani language: high school and university students, people working with the Roma, and Roma who speak little or nothing of the language of their forebears. The book includes 34 lessons, each containing relevant Romani texts (articles and dialogues), a short vocabulary list, grammatical explanations, exercises and examples of Romani written or oral expression. The textbook also contains a considerable amount of ethno-cultural information and notes on the life and traditions of the Roma, as well as pointing out some differences between dialects. A brief Romani-Czech phrase book is included as an appendix.
Abstract:
High-throughput SNP arrays provide genotype estimates for up to one million loci and are often used in genome-wide association studies. While these estimates are typically very accurate, genotyping errors do occur, and they can influence in particular the most extreme test statistics and p-values. Estimates of the genotype uncertainties are also available, although they are typically ignored. In this manuscript, we develop a framework to incorporate these genotype uncertainties in case-control studies for any genetic model. We verify that the assumption of a "local alternative" in the score test is very reasonable for effect sizes typically seen in SNP association studies, and show that the power of the score test is simply a function of the correlation of the genotype probabilities with the true genotypes. We demonstrate that the power to detect a true association can be substantially increased for difficult-to-call genotypes, resulting in improved inference in association studies.
Abstract:
Despite the widespread popularity of linear models for correlated outcomes (e.g. linear mixed models and time series models), distribution diagnostic methodology remains relatively underdeveloped in this context. In this paper we present an easy-to-implement approach that lends itself to graphical displays of model fit. Our approach involves multiplying the estimated marginal residual vector by the Cholesky decomposition of the inverse of the estimated marginal variance matrix. The resulting "rotated" residuals are used to construct an empirical cumulative distribution function and pointwise standard errors. The theoretical framework, including conditions and asymptotic properties, involves technical details that are motivated by Lange and Ryan (1989), Pierce (1982), and Randles (1982). Our method appears to work well in a variety of circumstances, including models having independent units of sampling (clustered data) and models for which all observations are correlated (e.g., a single time series). Our methods can produce satisfactory results even for models that do not satisfy all of the technical conditions stated in our theory.
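The rotation step can be sketched in a few lines. Multiplying by the Cholesky factor of V⁻¹ is implemented here by forward-solving with the Cholesky factor of V, which is algebraically equivalent and numerically more stable; the AR(1) covariance, its parameters, and the assumption that the marginal mean is zero and V is known (rather than estimated) are all illustrative:

```python
import math
import random

def cholesky(V):
    """Lower-triangular L with L L^T = V (plain-Python Cholesky)."""
    n = len(V)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(V[i][i] - s) if i == j else (V[i][j] - s) / L[j][j]
    return L

def rotate(V, r):
    """Rotated residuals: forward-solve L x = r with L = chol(V), equivalent
    to multiplying r by the Cholesky factor of V^{-1}."""
    L = cholesky(V)
    x = []
    for i in range(len(r)):
        x.append((r[i] - sum(L[i][k] * x[k] for k in range(i))) / L[i][i])
    return x

# Hypothetical single AR(1) time series (all observations correlated)
rng = random.Random(3)
n, rho = 200, 0.6
y = [rng.gauss(0.0, 1.0)]
for _ in range(n - 1):
    y.append(rho * y[-1] + math.sqrt(1.0 - rho ** 2) * rng.gauss(0.0, 1.0))
V = [[rho ** abs(i - j) for j in range(n)] for i in range(n)]
rotated = rotate(V, y)   # approximately i.i.d. N(0, 1) when the model fits
```

An empirical CDF of `rotated`, compared against the standard normal CDF with pointwise standard-error bands, gives the graphical diagnostic the paper describes.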
Abstract:
Multi-site time series studies of air pollution and mortality and morbidity have figured prominently in the literature as comprehensive approaches for estimating acute effects of air pollution on health. Hierarchical models are generally used to combine site-specific information and estimate pooled air pollution effects taking into account both within-site statistical uncertainty and across-site heterogeneity. Within a site, characteristics of time series data of air pollution and health (small pollution effects, missing data, highly correlated predictors, non-linear confounding, etc.) make modelling all sources of uncertainty challenging. One potential consequence is underestimation of the statistical variance of the site-specific effects to be combined. In this paper we investigate the impact of variance underestimation on the pooled relative rate estimate. We focus on two-stage normal-normal hierarchical models and on underestimation of the statistical variance at the first stage. By mathematical considerations and simulation studies, we found that variance underestimation does not affect the pooled estimate substantially. However, some sensitivity of the pooled estimate to variance underestimation is observed when the number of sites is small and underestimation is severe. These simulation results are applicable to any two-stage normal-normal hierarchical model for combining information of site-specific results, and they can be easily extended to more general hierarchical formulations. We also examined the impact of variance underestimation on the national average relative rate estimate from the National Morbidity, Mortality, and Air Pollution Study and found that variance underestimation of as much as 40% has little effect on the national average.
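The sensitivity claim is easy to probe with the second-stage estimator itself. The site-specific estimates, first-stage variances, and τ² below are hypothetical:

```python
def pooled_estimate(beta, var, tau2):
    # Second stage of a normal-normal hierarchical model:
    # inverse-variance weights 1 / (v_i + tau^2).
    w = [1.0 / (v + tau2) for v in var]
    return sum(wi * bi for wi, bi in zip(w, beta)) / sum(w)

# Hypothetical site-specific log relative rates and first-stage variances
beta = [1.0, 1.2, 0.8, 1.1, 0.9]
var = [0.04, 0.09, 0.01, 0.16, 0.04]
tau2 = 0.01

correct = pooled_estimate(beta, var, tau2)
shrunk = pooled_estimate(beta, [0.6 * v for v in var], tau2)  # 40% underestimation
```

Even with a 40% first-stage variance underestimation the pooled value moves only slightly, because the weights shift roughly proportionally; in practice τ² is estimated and would partly absorb the underestimation, pushing the two pooled values closer still.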
Abstract:
Medical errors originating in health care facilities are a significant source of preventable morbidity, mortality, and healthcare costs. Voluntary error report systems that collect information on the causes and contributing factors of medical errors regardless of the resulting harm may be useful for developing effective harm prevention strategies. Some patient safety experts question the utility of data from errors that did not lead to harm to the patient, also called near misses. A near miss (a.k.a. close call) is an unplanned event that did not result in injury to the patient; only a fortunate break in the chain of events prevented injury. We use data from a large voluntary reporting system of 836,174 medication errors from 1999 to 2005 to provide evidence that the causes and contributing factors of errors that result in harm are similar to the causes and contributing factors of near misses. We develop Bayesian hierarchical models for estimating the log odds of selecting a given cause (or contributing factor) of error given that harm has occurred and the log odds of selecting the same cause given that harm did not occur. The posterior distribution of the correlation between these two vectors of log odds is used as a measure of the evidence supporting the use of data from near misses and their causes and contributing factors to prevent medical errors. In addition, we identify the causes and contributing factors that have the highest or lowest log-odds ratio of harm versus no harm. These causes and contributing factors should also be a focus in the design of prevention strategies. This paper provides important evidence on the utility of data from near misses, which constitute the vast majority of errors in our data.
Abstract:
The synchronization of dynamic multileaf collimator (DMLC) response with respiratory motion is critical to ensure the accuracy of DMLC-based four-dimensional (4D) radiation delivery. In practice, however, a finite time delay (response time) between the acquisition of tumor position and multileaf collimator response necessitates predictive models of respiratory tumor motion to synchronize radiation delivery. Predicting a complex process such as respiratory motion introduces geometric errors, which have been reported in several publications. However, the dosimetric effect of such errors on 4D radiation delivery has not yet been investigated. Thus, our aim in this work was to quantify the dosimetric effects of geometric error due to prediction under several different conditions. Conformal and intensity modulated radiation therapy (IMRT) plans for a lung patient were generated for anterior-posterior/posterior-anterior (AP/PA) beam arrangements at 6 and 18 MV energies to provide planned dose distributions. Respiratory motion data were obtained from 60 diaphragm-motion fluoroscopy recordings from five patients. A linear adaptive filter was employed to predict the tumor position. The geometric error of prediction was defined as the absolute difference between predicted and actual positions at each diaphragm position. Distributions of geometric error of prediction were obtained for all of the respiratory motion data. Planned dose distributions were then convolved with distributions for the geometric error of prediction to obtain convolved dose distributions. The dosimetric effect of such geometric errors was determined as a function of several variables: response time (0-0.6 s), beam energy (6/18 MV), treatment delivery (3D/4D), treatment type (conformal/IMRT), beam direction (AP/PA), and breathing training type (free breathing/audio instruction/visual feedback). Dose difference and distance-to-agreement analysis was employed to quantify results.
Based on our data, the dosimetric impact of prediction (a) increased with response time, (b) was larger for 3D radiation therapy as compared with 4D radiation therapy, (c) was relatively insensitive to change in beam energy and beam direction, (d) was greater for IMRT distributions as compared with conformal distributions, (e) was smaller than the dosimetric impact of latency, and (f) was greatest for respiratory motion with audio instructions, followed by visual feedback and free breathing. Geometric errors of prediction that occur during 4D radiation delivery introduce dosimetric errors that are dependent on several factors, such as response time, treatment-delivery type, and beam energy. Even for relatively small response times of 0.6 s into the future, dosimetric errors due to prediction could approach delivery errors when respiratory motion is not accounted for at all. To reduce the dosimetric impact, better predictive models and/or shorter response times are required.
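The convolution step that produces the "convolved dose distributions" can be sketched in one dimension. The dose profile, grid spacing, and Gaussian error model below are hypothetical (the study convolved with empirical prediction-error distributions, not a Gaussian):

```python
import math

def gaussian_kernel(sigma, step, half_width=4.0):
    # Discrete Gaussian, normalised so the blurred profile conserves dose
    m = int(half_width * sigma / step)
    k = [math.exp(-0.5 * ((i * step) / sigma) ** 2) for i in range(-m, m + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve_dose(dose, kernel):
    # 1-D convolution of a planned dose profile with the distribution of
    # geometric prediction error (kernel is symmetric, so correlation and
    # convolution coincide); edges are simply truncated.
    half = len(kernel) // 2
    out = []
    for i in range(len(dose)):
        acc = 0.0
        for j, kj in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(dose):
                acc += dose[idx] * kj
        out.append(acc)
    return out

# Hypothetical 1-D profile: flat 2 Gy target with sharp penumbra, 1 mm grid
dose = [0.0] * 20 + [2.0] * 40 + [0.0] * 20
blurred = convolve_dose(dose, gaussian_kernel(sigma=3.0, step=1.0))
```

Blurring leaves the plateau of a broad target intact but degrades the penumbra, which is exactly where dose-difference and distance-to-agreement metrics register the effect, and a wider error distribution (longer response time) blurs more.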