979 results for knowledge testing
Abstract:
Background. There is emerging evidence that context is important for successful transfer of research knowledge into health care practice. The Alberta Context Tool (ACT) is a Canadian-developed, research-based instrument that assesses 10 modifiable concepts of organizational context considered important for health care professionals’ use of evidence. Swedish and Canadian health care are similar in organizational and professional respects, suggesting that the ACT could be used for measuring context in Sweden. This paper reports on the translation of the ACT into Swedish and preliminary testing of its validity, acceptability and reliability in Swedish elder care. Methods. The ACT was translated into Swedish and back-translated into English before being pilot tested in ten elder care facilities for response process validity, acceptability and reliability (Cronbach’s alpha). Subsequently, further modification was performed. Results. In the pilot test, the nurses found the questions easy to respond to (52%) and relevant (65%), yet the questions’ clarity was mainly rated ‘neither clear nor unclear’ (52%). Missing data varied between 0 (0%) and 19 (12%) per item, the most common being 1 missing case per item (15 items). Internal consistency (Cronbach’s alpha > .70) was reached for 5 out of 8 contextual concepts. Translation and back-translation identified 21 linguistic and semantic issues and 3 context-related deviations, which were resolved by the developers and translators. Conclusion. Modifying an instrument is a detailed process that requires time, attention to the linguistic and semantic aspects of the instrument, and an understanding of the context in which the instrument was developed and that in which it is to be applied. A team including the instrument’s developers, translators, and researchers is necessary to ensure a valid translation. This study provides preliminary validity, reliability and acceptability evidence for the ACT when used with nurses in Swedish elder care.
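For reference, the internal-consistency statistic reported above, Cronbach's alpha, can be computed as follows. This is a minimal sketch, not taken from the study: the response matrix and the use of the 0.70 cut-off are purely illustrative.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative data only (not from the study): 6 nurses answering a
# 4-item contextual concept on a 5-point scale.
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 4, 3, 3],
])
alpha = cronbach_alpha(scores)
print(f"alpha = {alpha:.2f}", "acceptable" if alpha > 0.70 else "below 0.70")
```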
Abstract:
A system built in terms of autonomous agents may require even greater correctness assurance than one that merely reacts to the immediate control of its users. Agents make substantial decisions for themselves, so thorough testing is an important consideration. However, autonomy also makes testing harder: by their nature, autonomous agents may react in different ways to the same inputs over time, because, for instance, they have changing goals and knowledge. For this reason, we argue that testing autonomous agents requires a procedure that caters for a wide range of test case contexts and that can search for the most demanding of these test cases, even when they are not apparent to the agents’ developers. In this paper, we address this problem by introducing and evaluating an approach to testing autonomous agents that uses evolutionary optimization to generate demanding test cases.
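The abstract does not describe the implementation, so the following is only a rough sketch of what an evolutionary search for demanding test cases can look like. `run_agent` (a simulation returning a "how badly did the agent perform" score), the parameter encoding, and all settings are hypothetical assumptions, not the authors' method.

```python
import random

def evolve_test_cases(run_agent, n_params, pop_size=30, generations=50,
                      mutation_rate=0.2):
    """Sketch only: search for demanding test cases. Each test case is a
    vector of environment parameters; fitness is how badly the agent does."""
    pop = [[random.uniform(0, 1) for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        # Higher fitness = more demanding scenario (e.g. goal misses, timeouts).
        scored = sorted(pop, key=run_agent, reverse=True)
        parents = scored[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_params)        # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_params):                  # per-gene mutation
                if random.random() < mutation_rate:
                    child[i] = random.uniform(0, 1)
            children.append(child)
        pop = parents + children
    return max(pop, key=run_agent)   # the most demanding test case found
```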
Abstract:
Background. Recent advances in medical and biological technology have stimulated the development of new testing systems that generate huge and varied amounts of molecular and clinical data. Growing data volumes pose significant challenges for information processing systems in research centers. Additionally, the routines of a genomics laboratory are typically characterized by a high degree of parallelism in testing and by constant procedure changes. Results. This paper describes a formal approach to addressing this challenge through the implementation of a genetic testing management system applied to a human genome laboratory. We introduce the Human Genome Research Center Information System (CEGH) in Brazil, a system that is able to support constant changes in human genome testing and can provide patients with updated results based on the most recent validated genetic knowledge. Our approach uses a common repository for process planning to ensure reusability, specification, instantiation, monitoring, and execution of processes, which are defined using a relational database and rigorous control-flow specifications based on process algebra (ACP). The main difference between our approach and related work is that we join two important aspects: 1) process scalability, achieved through the relational database implementation, and 2) process correctness, ensured through process algebra. Furthermore, the software allows end users to define genetic tests without requiring any knowledge of business process notation or process algebra. Conclusions. This paper presents the CEGH information system, a Laboratory Information Management System (LIMS) based on a formal framework to support genetic testing management for Mendelian disorder studies. We have demonstrated the feasibility and shown the usability benefits of a rigorous approach that can specify, validate, and perform genetic testing through simple end-user interfaces.
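As a loose illustration of validating an execution against a stored process plan (this is not the CEGH implementation, which relies on process algebra (ACP) and a relational repository with far richer control-flow operators), the sketch below checks that an execution trace performs the planned steps in order; the step names are invented.

```python
# Minimal sketch, not the CEGH system: a workflow is stored as an ordered list
# of step names, mirroring rows of a hypothetical relational "process plan"
# table, and an execution trace is checked against it.

def conforms(plan: list[str], trace: list[str]) -> bool:
    """True if the trace performs the planned steps in order,
    possibly interleaved with steps from other parallel tests."""
    it = iter(trace)
    return all(step in it for step in plan)   # classic subsequence check

plan = ["extract_dna", "amplify", "sequence", "report"]
trace = ["extract_dna", "qc_check", "amplify", "sequence", "report"]
print(conforms(plan, trace))  # True: planned steps appear in order
```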
Abstract:
In this work we propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMRs) collected over many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated through individual-level epidemiological studies, which avoid the bias carried by aggregated analyses. Starting from the collected disease counts and the expected counts calculated from reference population disease rates, an SMR is derived in each area as the MLE under a Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the small population underlying the area or because of the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classical and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method focused on multiple testing control, without leaving the preliminary-study perspective that an analysis of SMR indicators is meant to serve. We implement control of the false discovery rate (FDR), a quantity widely used to address multiple comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value provide weak power in small areas, where the expected number of disease cases is small. Moreover, tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous. The Bayesian paradigm offers a way to overcome the inappropriateness of p-value-based methods. Another peculiarity of the present work is the proposal of a hierarchical, fully Bayesian model for FDR estimation when testing many null hypotheses of absence of risk. We use concepts from Bayesian disease mapping models, referring in particular to the Besag, York and Mollié model (1991), often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood typical of a hierarchical Bayesian model has the advantage of evaluating a single test (i.e. a test in a single area) by means of all observations in the map under study, rather than just the single observation. This improves the power of the test in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model estimates the FDR by means of the MCMC-estimated posterior probabilities b_i of the null hypothesis (absence of risk) for each area. An estimate of the expected FDR conditional on the data (the estimated FDR) can be calculated for any set of areas declared at high risk (where the null hypothesis is rejected) by averaging the corresponding b_i values. The estimated FDR provides a simple decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that the estimated FDR does not exceed a prefixed value; we call these estimated-FDR-based decision (or selection) rules.
The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, while under-estimation produces a loss of specificity. Moreover, our model retains the interesting feature of providing estimates of the relative risk values, as in the Besag, York and Mollié model (1991). A simulation study was set up to evaluate the model's performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and goodness of the relative risk estimates. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the area sizes, the number of areas where the null hypothesis is true and the risk level in the remaining areas. In summarizing the simulation results we always consider FDR estimation over the sets constituted by all areas whose b_i falls below a threshold t. We show graphs of the estimated FDR and the true FDR (known by simulation) plotted against the threshold t to assess FDR estimation. By varying the threshold we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (from the closeness between the estimated and the true FDR). By plotting the sensitivity and specificity (both known by simulation) against the estimated FDR we can check the sensitivity and specificity of the corresponding estimated-FDR-based decision rules. To investigate the degree of over-smoothing of the relative risk estimates, we compare box plots of such estimates in the high-risk areas (known by simulation) obtained by both our model and the classic Besag, York and Mollié model. All the summary tools are worked out for all simulated scenarios (54 scenarios in total). Results show that the FDR is well estimated (in the worst case we obtain an over-estimation, hence conservative FDR control) in the scenarios with small areas, low risk levels and spatially correlated risks, which are our primary aim. In such scenarios we obtain good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of estimated-FDR-based decision rules is generally low, but specificity is high. In these scenarios selection rules based on an estimated FDR of 0.05 or 0.10 can be suggested. In cases where the number of true alternative hypotheses (the number of truly high-risk areas) is small, FDR values up to 0.15 are also well estimated, and decision rules based on an estimated FDR of 0.15 gain power while maintaining high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels the FDR is under-estimated except at very small values (much lower than 0.05), resulting in a loss of specificity for a rule based on an estimated FDR of 0.05. In such scenarios decision rules based on an estimated FDR of 0.05 or, even worse, 0.10 cannot be suggested, because the true FDR is actually much higher. As regards relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both the estimation of relative risk values and FDR control, except in scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
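In symbols (our reconstruction of the notation described above, not quoted from the thesis): with observed count O_i and expected count E_i in area i, and b_i the MCMC-estimated posterior probability of the null hypothesis of no excess risk,

```latex
\[
  \widehat{\mathrm{SMR}}_i = \frac{O_i}{E_i}, \qquad
  b_i = \Pr(H_{0i} \mid \text{data}) \ \text{(estimated by MCMC)},
\]
\[
  \widehat{\mathrm{FDR}}(R) = \frac{1}{|R|} \sum_{i \in R} b_i ,
  \qquad R = \{\, i : b_i \le t \,\}.
\]
% Decision rule: choose the largest threshold t such that the estimated
% FDR of the selected set R does not exceed a prefixed level (e.g. 0.05 or 0.10).
```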
Abstract:
The Székesfehérvár Ruin Garden is a unique assemblage of monuments belonging to the cultural heritage of Hungary owing to its important role in the Middle Ages as the coronation and burial church of the kings of the Hungarian Christian Kingdom. It has been nominated as a “National Monument” and, as a consequence, its protection in the present and future is required. Moreover, it was reconstructed and expanded several times throughout Hungarian history. A quick overview of the current state of the monument reveals several lithotypes among the remaining building and decorative stones. Research on the materials is therefore crucial not only for the conservation of this specific monument but also for other historic structures in Central Europe. The current research is divided into three main parts: i) description of the lithologies and their provenance, ii) testing of the physical properties of the historic material, and iii) durability tests on analogous stones obtained from active quarries. The survey of the National Monument of Székesfehérvár focuses on the historical importance and architecture of the monument, the different construction periods, and the identification of the different building stones and their distribution in the remaining parts of the monument; it also included provenance analyses. The second part comprised in situ and laboratory testing of the physical properties of the historic material. In the final phase, samples were taken from local quarries with physical and mineralogical characteristics similar to those of the stones used in the monument. The three studied lithologies are a fine oolitic limestone, a coarse oolitic limestone and a red compact limestone. These stones were used for rock mechanical and durability tests under laboratory conditions. The following techniques were used: a) in situ: Schmidt hammer values, moisture content measurements, DRMS, and mapping (construction ages, lithotypes, weathering forms); b) laboratory: petrographic analysis, XRD, determination of real density by helium pycnometer and bulk density by mercury pycnometer, pore size distribution by mercury intrusion porosimetry and nitrogen adsorption, water absorption, determination of open porosity, DRMS, frost resistance, ultrasonic pulse velocity testing, uniaxial compressive strength testing and dynamic modulus of elasticity. The results show that initial uniaxial compressive strength is not necessarily a clear indicator of stone durability. Bedding and other lithological heterogeneities can influence the strength and durability of individual specimens. In addition, long-term behaviour is influenced by the exposure conditions, the fabric and, especially, the pore size distribution of each sample. Therefore, a statistical evaluation of the results is highly recommended, and they should be evaluated in combination with other investigations of the internal structure and micro-scale heterogeneities of the material, such as petrographic observation, ultrasonic pulse velocity and porosimetry. Laboratory tests used to estimate the durability of natural stone may give good guidance on its short-term performance, but they should not be taken as an ultimate indication of its long-term behaviour. The interdisciplinary study of the results confirms that the stones in the monument show deterioration in terms of mineralogy, fabric and physical properties compared with the quarried stones. Moreover, stone testing proves the compatibility between the quarried and the historic stones. Good correlation is observed between the non-destructive techniques and the laboratory test results, which allows us to minimize sampling while assessing the condition of the materials. In conclusion, this research can contribute to the diagnostic knowledge needed for further studies to evaluate the effect of recent and future protective measures.
Abstract:
Despite the many proposed advantages of nanotechnology, there are increasing concerns about the potential adverse human health and environmental effects that the production of, and subsequent exposure to, nanoparticles (NPs) might pose. In regard to human health, these concerns are founded upon the wealth of knowledge gained from research on the effects observed following exposure to environmental air pollution. It is known that increased exposure to environmental air pollution can reduce respiratory health, as well as exacerbate pre-existing conditions such as cardiovascular disease and chronic obstructive pulmonary disease. Such disease states have also been associated with exposure to the NP component of environmental air pollution, raising concerns about the effects of NP exposure. It is not only exposure to accidentally produced NPs, however, that should be approached with caution. Over the past decades, NPs have been specifically engineered for a wide range of consumer, industrial and technological applications. Because the use of NPs in such applications makes human exposure inevitable, it is imperative to gain an understanding of how NPs interact with the human body. In vivo research provides a beneficial model for gaining immediate and direct knowledge of human exposure to such xenobiotics, but this research approach has numerous limitations. Research using in vitro models has therefore increased, as these models provide an inexpensive, high-throughput alternative to in vivo research strategies. Despite such advantages, in vitro research also has various restrictions. The aim of this review, in addition to providing a short perspective on the field of nanotoxicology, is therefore to discuss (1) the advantages and disadvantages of in vitro research and (2) how in vitro research may provide essential information on the human health risks posed by NP exposure.
Abstract:
When reengineering legacy systems, it is crucial to assess whether the legacy behavior has been preserved or how it has changed due to the reengineering effort. Ideally, if a legacy system is covered by tests, running the tests on the new version can identify potential differences or discrepancies. However, writing tests for an unknown and large system is difficult due to the lack of internal knowledge; it is especially difficult to bring the system into an appropriate state. Our solution is based on the acknowledgment that one of the few trustworthy pieces of information available when approaching a legacy system is the running system itself. Our approach reifies the execution traces and uses logic programming to express tests on them. It thereby eliminates the need to programmatically bring the system into a particular state, and hands the test writer a high-level abstraction mechanism for querying the trace. The resulting system, called TESTLOG, was used on several real-world case studies to validate our claims.
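TESTLOG's logic-programming notation is not reproduced in the abstract. Purely as an illustration of the underlying idea (reify the execution trace as data, then express a test as a declarative query over it), here is a toy Python sketch with invented event and message names.

```python
# Not TESTLOG itself: a toy illustration of expressing a test as a query over
# a reified execution trace instead of driving the system into a particular
# state programmatically. Event and message names are hypothetical.
from dataclasses import dataclass

@dataclass
class Event:
    receiver: str     # object that received the message
    message: str      # method invoked
    result: object    # returned value

trace = [
    Event("Cart", "addItem", None),
    Event("Cart", "total", 42),
    Event("Printer", "printInvoice", True),
]

def query(trace, **conditions):
    """Return all events matching every given attribute=value condition."""
    return [e for e in trace
            if all(getattr(e, k) == v for k, v in conditions.items())]

# "Test": every invoice print must be preceded by a computed total.
assert query(trace, message="printInvoice"), "no invoice printed in trace"
assert query(trace, receiver="Cart", message="total"), "total never computed"
```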
Abstract:
Due to the widespread development of anthelmintic resistance in equine parasites, recommendations for their control are currently undergoing marked changes, with a shift of emphasis toward more coprological surveillance and reduced treatment intensity. Denmark was the first nation to introduce prescription-only restrictions on anthelmintic drugs in 1999, but other European countries have implemented similar legislation over recent years. A questionnaire survey was performed in 2008 among Danish horse owners to provide a current status of practices and perceptions relating to parasite control. Questions aimed at describing the current use of coprological surveillance and the resulting anthelmintic treatment intensities, evaluating knowledge and perceptions about the importance of various attributes of parasite control, and assessing respondents' willingness to pay for advice and parasite surveillance services from their veterinarians. A total of 1060 respondents completed the questionnaire. A large majority of respondents (71.9%) were familiar with the concept of selective therapy. Results illustrated that the respondents' self-evaluation of their knowledge about parasites and their control was significantly associated with their level of interest in the topic and their type of education (P<0.0001). The large majority of respondents dewormed their horses twice a year and/or performed two fecal egg counts per horse per year. This approach was almost equally pronounced in foals, horses aged 1-3 years, and adult horses. The respondents rated prevention of parasitic disease and prevention of drug resistance as the most important attributes, while cost and frequent fecal testing were rated least important. Respondents' actual spending on parasite control per horse in the previous year correlated significantly with the amount they declared themselves willing to spend (P<0.0001). However, 44.4% declared themselves willing to pay more than what they were spending. Altogether, the results indicate that respondents were generally familiar with equine parasites and the concept of selective therapy, although there was some confusion over the terms small and large strongyles. They used a large degree of fecal surveillance in all age groups, with a majority of respondents sampling and/or treating around twice a year. Finally, respondents appeared willing to spend money on parasite control for their horses. It is of concern that the survey suggested that foals and young horses are treated in a manner very similar to adult horses, which goes against current recommendations. Thus, the survey illustrates the importance of clear communication of guidelines for equine parasite control.
Abstract:
Students arrive at classes with varying social situations and course subject knowledge. Blackboard is a web-based course delivery program that permits testing of students before they arrive at the first class. A pretest was used to assess preexisting subject knowledge (S), and a survey was used to assess non-subject (N) factors that might impact the student’s final grade. A posttest was administered after all content was delivered and used to assess the change in S. [See PDF for complete abstract]
Abstract:
We provide a novel search technique which uses a hierarchical model and a mutual information gain heuristic to efficiently prune the search space when localizing faces in images. We show exponential gains in computation over traditional sliding window approaches, while keeping similar performance levels.
Abstract:
OBJECTIVE To systematically review evidence on genetic risk factors for carbamazepine (CBZ)-induced hypersensitivity reactions (HSRs) and provide practice recommendations addressing the key questions: (1) Should genetic testing for HLA-B*15:02 and HLA-A*31:01 be performed in patients with an indication for CBZ therapy to reduce the occurrence of CBZ-induced HSRs? (2) Are there subgroups of patients who may benefit more from genetic testing for HLA-B*15:02 or HLA-A*31:01 than others? (3) How should patients with an indication for CBZ therapy be managed based on their genetic test results? METHODS A systematic literature search was performed for HLA-B*15:02 and HLA-A*31:01 and their association with CBZ-induced HSRs. Evidence was critically appraised, and clinical practice recommendations were developed based on expert group consensus. RESULTS Patients carrying HLA-B*15:02 are at strongly increased risk for CBZ-induced Stevens-Johnson syndrome/toxic epidermal necrolysis (SJS/TEN) in populations where HLA-B*15:02 is common, but not for CBZ-induced hypersensitivity syndrome (HSS) or maculopapular exanthema (MPE). HLA-B*15:02-positive patients with CBZ-SJS/TEN have been reported from Asian countries only, including China, Thailand, Malaysia, and India. HLA-B*15:02 is rare among Caucasians and Japanese; no HLA-B*15:02-positive patients with CBZ-SJS/TEN have been reported so far in these groups. HLA-A*31:01-positive patients are at increased risk for CBZ-induced HSS and MPE, and possibly for SJS/TEN and acute generalized exanthematous pustulosis (AGEP). This association has been shown in Caucasian, Japanese, Korean, and Chinese patients, as well as in patients of mixed origin; however, HLA-A*31:01 is common in most ethnic groups. Not all patients carrying either risk variant develop an HSR, resulting in a relatively low positive predictive value of the genetic tests. SIGNIFICANCE This review provides the latest update on genetic markers for CBZ HSRs, offers clinical practice recommendations as a basis for informed decision making regarding the use of HLA-B*15:02 and HLA-A*31:01 genetic testing in patients with an indication for CBZ therapy, and identifies knowledge gaps to guide future research. A PowerPoint slide summarizing this article is available for download in the Supporting Information section.
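Purely as an illustration of how the associations summarized above map test results to risk signals (this is not a clinical algorithm and must not be used for patient management), a toy sketch:

```python
# Illustrative only, NOT a clinical decision tool: a toy mapping of HLA test
# results to the risk signals the review associates with them. Any real
# recommendation must follow the published clinical guidance, not this sketch.
def cbz_risk_flags(hla_b_1502_positive: bool, hla_a_3101_positive: bool) -> list[str]:
    flags = []
    if hla_b_1502_positive:
        flags.append("strongly increased risk of CBZ-induced SJS/TEN")
    if hla_a_3101_positive:
        flags.append("increased risk of CBZ-induced HSS/MPE (possibly SJS/TEN, AGEP)")
    if not flags:
        flags.append("no risk allele detected; hypersensitivity not excluded")
    return flags

print(cbz_risk_flags(hla_b_1502_positive=True, hla_a_3101_positive=False))
```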
Abstract:
Software architecture consists of a set of design choices that can be partially expressed in the form of rules that the implementation must conform to. Architectural rules are intended to ensure properties that fulfill fundamental non-functional requirements. Verifying architectural rules is often a non-trivial activity: available tools are often not very usable and support only a narrow subset of the rules that are commonly specified by practitioners. In this paper we present a new, highly readable declarative language for specifying architectural rules. With our approach, users can specify a wide variety of rules using a single uniform notation. Rules can then be tested by third-party tools by conforming to pre-defined specification templates. Practitioners can thus take advantage of the capabilities of a growing number of testing tools without dealing with them directly.
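The paper's declarative rule language is not shown in the abstract. As a rough, hypothetical illustration of the kind of architectural rule such languages capture, the sketch below checks a "domain layer must not depend on the UI layer" rule against an invented dependency map.

```python
# Toy illustration (not the paper's notation): one common architectural rule
# checked against a dependency map extracted from a code base. Module names
# and the dependency map are hypothetical.
dependencies = {
    "ui.orders_view":   ["domain.orders", "ui.widgets"],
    "domain.orders":    ["domain.customers"],
    "domain.customers": [],
}

def violations(deps, source_prefix, forbidden_prefix):
    """Modules under source_prefix that depend on modules under forbidden_prefix."""
    return [(m, d) for m, targets in deps.items() if m.startswith(source_prefix)
            for d in targets if d.startswith(forbidden_prefix)]

assert not violations(dependencies, "domain.", "ui."), "domain layer depends on UI"
```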
Abstract:
OBJECTIVE To systematically review evidence on genetic variants influencing outcomes during warfarin therapy and provide practice recommendations addressing the key questions: (1) Should genetic testing be performed in patients with an indication for warfarin therapy to improve achievement of stable anticoagulation and reduce adverse effects? (2) Are there subgroups of patients who may benefit more from genetic testing compared with others? (3) How should patients with an indication for warfarin therapy be managed based on their genetic test results? METHODS A systematic literature search was performed for VKORC1 and CYP2C9 and their association with warfarin therapy. Evidence was critically appraised, and clinical practice recommendations were developed based on expert group consensus. RESULTS Testing of VKORC1 (-1639G>A), CYP2C9*2, and CYP2C9*3 should be considered for all patients, including pediatric patients, within the first 2 weeks of therapy or after a bleeding event. Testing for CYP2C9*5, *6, *8, or *11 and CYP4F2 (V433M) is currently not recommended. Testing should also be considered for all patients who are at increased risk of bleeding complications, who consistently show out-of-range international normalized ratios, or suffer adverse events while receiving warfarin. Genotyping results should be interpreted using a pharmacogenetic dosing algorithm to estimate the required dose. SIGNIFICANCE This review provides the latest update on genetic markers for warfarin therapy, clinical practice recommendations as a basis for informed decision making regarding the use of genotype-guided dosing in patients with an indication for warfarin therapy, and identifies knowledge gaps to guide future research.
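As a purely structural illustration of what a pharmacogenetic dosing algorithm looks like (a regression-style estimate of the weekly dose from clinical covariates and VKORC1/CYP2C9 genotype), here is a sketch with placeholder coefficients; they are not the published values and must not be used clinically.

```python
# Illustrative structure only: placeholder coefficients, NOT a published
# pharmacogenetic algorithm and NOT for clinical use. Real dosing must follow
# a validated algorithm and clinical guidance.
def estimated_weekly_dose_mg(age_years: float, weight_kg: float,
                             vkorc1_1639_a_alleles: int,   # 0, 1 or 2 copies of -1639A
                             cyp2c9_variant_alleles: int   # 0, 1 or 2 of *2/*3
                             ) -> float:
    sqrt_dose = (
        6.0                              # baseline (placeholder)
        - 0.02 * age_years               # dose tends to decrease with age (placeholder)
        + 0.01 * weight_kg               # dose tends to increase with body size (placeholder)
        - 0.9 * vkorc1_1639_a_alleles    # reduced dose per VKORC1 -1639A allele (placeholder)
        - 0.7 * cyp2c9_variant_alleles   # reduced dose per CYP2C9 variant allele (placeholder)
    )
    return max(sqrt_dose, 0.0) ** 2      # model is linear on the square-root-of-dose scale

print(round(estimated_weekly_dose_mg(65, 80, vkorc1_1639_a_alleles=1,
                                     cyp2c9_variant_alleles=0), 1))
```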