963 results for knowledge testing
Abstract:
I test for the presence of hidden information and hidden action in the automobile insurance market using a data set from several Colombian insurers. To identify the presence of hidden information, I find a common-knowledge variable that provides information on the policyholder's risk type, is related to both experienced risk and insurance demand, and was excluded from the pricing mechanism. This unused variable is the record of the policyholder's traffic offenses. I find evidence of adverse selection in six of the nine insurance companies for which the test is performed. From the point of view of hidden action, I develop a dynamic model of effort in accident prevention given an insurance contract with a bonus experience-rating scheme, and I show that individual accident probability decreases with previous accidents. This result yields a testable implication for the empirical identification of hidden action, and based on it I estimate an econometric model of the time spans between the purchase of the insurance and the first claim, between the first claim and the second, and so on. I find strong evidence of unobserved heterogeneity that confounds the testable implication. Once unobserved heterogeneity is controlled for, I find conclusive statistical grounds supporting the presence of moral hazard in the Colombian insurance market.
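As a hedged illustration of the kind of duration specification such a claim-spell test can build on (the notation below is an assumption for exposition, not necessarily the author's model), a mixed proportional hazard with an individual frailty term might be written as:

```latex
% Illustrative only: hazard of the k-th claim spell of policyholder i,
% with an individual frailty term capturing unobserved heterogeneity.
\lambda_{ik}(t) = \theta_i \, \lambda_0(t) \, \exp\!\bigl( x_i'\beta + \gamma\,(k-1) \bigr)
```

Here θ_i captures unobserved heterogeneity, x_i are observed policyholder characteristics, and γ measures occurrence dependence across spells; a negative γ once θ_i is controlled for is the kind of pattern consistent with moral hazard under experience rating, whereas ignoring θ_i pushes the estimate toward spurious positive dependence.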
Abstract:
Recent national developments in the teaching of literacy in the early years in the UK mean that teachers need explicit, fluent knowledge of the sound structure of the language and its relationship to orthography in order to teach reading effectively. In this study, a group of 38 graduate trainee primary teachers were given a pencil-and-paper test of phonological awareness as part of a course on teaching literacy. Results from the test were used as the basis of teaching about the sound structure of words. The test was repeated six months later. The results showed that the trainees did not use a consistent system for segmenting words into component sounds. Although there was substantial improvement on the second testing, many trainees still showed no evidence that they had yet developed sufficient insight into the sound structure of words to be able to teach children about phonemes with certainty. It is argued that student teachers need substantial explicit training and practice in manipulating the sound structure of words to enable them to teach this aspect of language confidently.
Abstract:
Within generative L2 acquisition research there is a longstanding debate as to what underlies observable differences in L1/L2 knowledge/performance. On the one hand, Full Accessibility approaches maintain that target L2 syntactic representations (new functional categories and features) are acquirable (e.g., Schwartz & Sprouse, 1996). Conversely, Partial Accessibility approaches claim that L2 variability and/or optionality, even at advanced levels, results from inevitable deficits in L2 narrow syntax and is conditioned upon a maturational failure in adulthood to acquire (some) new functional features (e.g., Beck, 1998; Hawkins & Chan, 1997; Hawkins & Hattori, 2006; Tsimpli & Dimitrakopoulou, 2007). The present study tests the predictions of these two sets of approaches with advanced English learners of L2 Brazilian Portuguese (n = 21) in the domain of inflected infinitives. These advanced L2 learners reliably differentiate syntactically between finite verbs, uninflected infinitives and inflected infinitives, which, we argue, supports only Full Accessibility approaches. Moreover, we discuss how testing the domain of inflected infinitives is especially interesting in light of recent proposals that colloquial dialects of Brazilian Portuguese no longer actively instantiate them (Lightfoot, 1991; Pires, 2002, 2006; Pires & Rothman, 2009; Rothman, 2007).
Abstract:
Background. There is emerging evidence that context is important for the successful transfer of research knowledge into health care practice. The Alberta Context Tool (ACT) is a Canadian-developed, research-based instrument that assesses 10 modifiable concepts of organizational context considered important for health care professionals' use of evidence. Swedish and Canadian health care have similarities in organizational and professional respects, suggesting that the ACT could be used for measuring context in Sweden. This paper reports on the translation of the ACT into Swedish and testing of preliminary aspects of its validity, acceptability and reliability in Swedish elder care. Methods. The ACT was translated into Swedish and back-translated into English before being pilot tested in ten elder care facilities for response process validity, acceptability and reliability (Cronbach's alpha). Subsequently, further modification was performed. Results. In the pilot test, the nurses found the questions easy to respond to (52%) and relevant (65%), yet the questions' clarity was mostly rated 'neither clear nor unclear' (52%). Missing data varied between 0 (0%) and 19 (12%) per item, the most common being 1 missing case per item (15 items). Internal consistency (Cronbach's alpha > .70) was reached for 5 out of 8 contextual concepts. Translation and back-translation identified 21 linguistic and semantic issues and 3 context-related deviations, which were resolved by the developers and translators. Conclusion. Modifying an instrument is a detailed process, requiring time, consideration of the linguistic and semantic aspects of the instrument, and an understanding of the context where the instrument was developed and where it is to be applied. A team including the instrument's developers, translators, and researchers is necessary to ensure a valid translation. This study provides preliminary evidence of validity, reliability and acceptability for the ACT when used with nurses in Swedish elder care.
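Since the reliability testing above relies on Cronbach's alpha, a minimal sketch of how that statistic is computed may help; the scale size and the data below are hypothetical, not the ACT's actual items or responses.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 10 nurses answering a 4-item concept scale on 1-5 response options
rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(10, 4))
print(round(cronbach_alpha(scores), 2))  # the study used alpha >= .70 as its benchmark
```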
Abstract:
A system built in terms of autonomous agents may require even greater correctness assurance than one that merely reacts to the immediate control of its users. Agents make substantial decisions for themselves, so thorough testing is an important consideration. However, autonomy also makes testing harder: by their nature, autonomous agents may react in different ways to the same inputs over time because, for instance, they have changing goals and knowledge. For this reason, we argue that testing autonomous agents requires a procedure that caters for a wide range of test-case contexts and that can search for the most demanding of these test cases, even when they are not apparent to the agents' developers. In this paper we address this problem, introducing and evaluating an approach to testing autonomous agents that uses evolutionary optimization to generate demanding test cases.
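To make the idea of searching for demanding test cases concrete, here is a minimal evolutionary-optimization sketch; the test-case encoding and the fitness function are placeholders invented for illustration (in practice fitness would come from executing the agent in the generated scenario), not the paper's implementation.

```python
import random

# A test case here is a vector of scenario parameters in [0, 1] (e.g. obstacle density,
# goal distance); `fitness` is a stand-in for running the agent in the scenario and
# measuring how demanding it turned out to be (e.g. how close it came to failure).

def fitness(test_case):
    return sum(abs(p - 0.5) for p in test_case)   # placeholder "demandingness" score

def mutate(test_case, rate=0.2):
    return [min(1.0, max(0.0, p + random.gauss(0, 0.1))) if random.random() < rate else p
            for p in test_case]

def evolve(pop_size=30, n_params=5, generations=50):
    pop = [[random.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)                     # most demanding first
        parents = pop[: pop_size // 2]
        children = [mutate(random.choice(parents)) for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())  # the most demanding scenario found under the placeholder fitness
```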
Abstract:
Background: Recent advances in medical and biological technology have stimulated the development of new testing systems that provide huge and varied amounts of molecular and clinical data. Growing data volumes pose significant challenges for information processing systems in research centers. Additionally, the routines of a genomics laboratory are typically characterized by high parallelism in testing and constant procedure changes. Results: This paper describes a formal approach to address this challenge through the implementation of a genetic testing management system applied to a human genome laboratory. We introduce the Human Genome Research Center Information System (CEGH) in Brazil, a system that is able to support constant changes in human genome testing and can provide patients with updated results based on the most recent and validated genetic knowledge. Our approach uses a common repository for process planning to ensure reusability, specification, instantiation, monitoring, and execution of processes, which are defined using a relational database and rigorous control-flow specifications based on process algebra (ACP). The main difference between our approach and related work is that we join two important aspects: 1) process scalability, achieved through the relational database implementation, and 2) correctness of processes, ensured using process algebra. Furthermore, the software allows end users to define genetic tests without requiring any knowledge of business process notation or process algebra. Conclusions: This paper presents the CEGH information system, a Laboratory Information Management System (LIMS) based on a formal framework to support genetic testing management for Mendelian disorder studies. We have demonstrated the feasibility and shown the usability benefits of a rigorous approach that can specify, validate, and perform genetic testing through simple end-user interfaces.
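As a rough illustration of keeping process definitions as data in a relational repository, the following sketch stores a hypothetical genetic test as ordered steps in SQLite and executes them in order; the schema and step names are invented for illustration, and the ACP-based control-flow validation used by CEGH is not reproduced here.

```python
import sqlite3

# Hypothetical schema: a genetic test is a named process whose steps are stored
# relationally with an execution order, so new tests are defined as data rather
# than code.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE process (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE step (
    process_id INTEGER REFERENCES process(id),
    position   INTEGER,
    action     TEXT
);
""")
conn.execute("INSERT INTO process VALUES (1, 'sequencing_panel')")
conn.executemany("INSERT INTO step VALUES (1, ?, ?)",
                 [(1, "DNA extraction"), (2, "library preparation"),
                  (3, "sequencing"), (4, "variant report")])

# Instantiate the process: fetch its steps in declared order and execute them.
for position, action in conn.execute(
        "SELECT position, action FROM step WHERE process_id = 1 ORDER BY position"):
    print(f"step {position}: {action}")
```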
Abstract:
In this work we propose a new approach for preliminary epidemiological studies of Standardized Mortality Ratios (SMRs) collected in many spatial regions. A preliminary study of SMRs aims to formulate hypotheses to be investigated via individual-level epidemiological studies that avoid the bias carried by aggregated analyses. Starting from observed disease counts and expected disease counts calculated from reference population disease rates, in each area an SMR is derived as the MLE under a Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the small population underlying the area or the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classical and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method that focuses on multiple testing control, without, however, abandoning the preliminary-study perspective that an analysis of SMR indicators is expected to take. We implement control of the FDR, a quantity widely used to address multiple comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-area issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value provide weak power in small areas, where the expected number of disease cases is small. Moreover, the tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous. The Bayesian paradigm offers a way to overcome the inappropriateness of p-value-based methods. Another feature of the present work is the proposal of a fully Bayesian hierarchical model for FDR estimation when testing many null hypotheses of absence of risk. We use concepts from Bayesian disease mapping models, referring in particular to the Besag, York and Mollié model (1991), often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood typical of a hierarchical Bayesian model has the advantage of evaluating a single test (i.e. a test in a single area) by means of all observations in the map under study, rather than just the single observation. This improves test power in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model estimates the FDR by means of MCMC estimates of the posterior probabilities b_i of the null hypothesis (absence of risk) in each area. An estimate of the expected FDR conditional on the data (the estimated FDR) can be calculated for any set of areas declared high-risk (where the null hypothesis is rejected) by averaging the corresponding b_i's. The estimated FDR provides a simple decision rule for selecting high-risk areas: select as many areas as possible such that the estimated FDR does not exceed a pre-fixed value; we call these estimated-FDR-based decision (or selection) rules.
The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, while under-estimation produces a loss of specificity. Moreover, our model has the interesting feature of still being able to provide estimates of the relative risk values, as in the Besag, York and Mollié model (1991). A simulation study was set up to evaluate the model's performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and goodness of the relative risk estimates. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the area sizes, the number of areas where the null hypothesis is true, and the risk level in the remaining (high-risk) areas. In summarizing the simulation results we always consider FDR estimation in sets constituted by all areas whose b_i falls below a threshold t. We show graphs of the estimated FDR and the true FDR (known by simulation) plotted against the threshold t to assess FDR estimation. By varying the threshold we can learn which FDR values can be accurately estimated by a practitioner applying the model (from the closeness between the estimated and the true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against the estimated FDR, we can check the sensitivity and specificity of the corresponding estimated-FDR-based decision rules. To investigate the degree of over-smoothing of the relative risk estimates, we compare box-plots of these estimates in high-risk areas (known by simulation) obtained with both our model and the classic Besag, York and Mollié model. All the summary tools are worked out for all simulated scenarios (54 scenarios in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence conservative FDR control) in scenarios with small areas, low risk levels and spatially correlated risks, which are our primary focus. In such scenarios we obtain good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of estimated-FDR-based decision rules is generally low, but specificity is high. In such scenarios, selection rules based on an estimated FDR of 0.05 or 0.10 can be recommended. When the number of true alternative hypotheses (true high-risk areas) is small, FDR values up to 0.15 are also well estimated, and decision rules based on an estimated FDR of 0.15 gain power while maintaining high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels, the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity for a decision rule based on an estimated FDR of 0.05. In such scenarios, decision rules based on an estimated FDR of 0.05 or, even worse, 0.10 cannot be recommended because the true FDR is actually much higher. As regards relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both relative risk estimation and FDR control, except in scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
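A minimal sketch of the estimated-FDR decision rule described above (illustrative only; the b_i values below are made up, whereas in the actual model they would be MCMC posterior probabilities of absence of risk):

```python
import numpy as np

def select_high_risk(b, target=0.10):
    """Select areas so the estimated FDR of the selected set does not exceed `target`.

    `b` holds the posterior probabilities b_i of the null hypothesis (absence of risk);
    the estimated FDR of a rejection set is the average of the b_i it contains."""
    b = np.asarray(b, dtype=float)
    order = np.argsort(b)                                  # most likely high-risk first
    running_fdr = np.cumsum(b[order]) / np.arange(1, len(b) + 1)
    n_reject = int(np.sum(running_fdr <= target))          # running_fdr is non-decreasing
    return order[:n_reject], (running_fdr[n_reject - 1] if n_reject else 0.0)

# Hypothetical posterior null probabilities for 8 areas (in practice, MCMC output)
b_i = [0.01, 0.03, 0.20, 0.45, 0.60, 0.02, 0.08, 0.90]
areas, est_fdr = select_high_risk(b_i, target=0.10)
print(areas, round(est_fdr, 3))   # indices of selected areas and their estimated FDR
```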
Abstract:
The Székesfehérvár Ruin Garden is a unique assemblage of monuments belonging to the cultural heritage of Hungary, owing to its important role in the Middle Ages as the coronation and burial church of the kings of the Hungarian Christian Kingdom. It has been nominated as a "National Monument" and, as a consequence, its protection in the present and the future is required. Moreover, it was reconstructed and expanded several times throughout Hungarian history. A quick overview of the current state of the monument reveals the presence of several lithotypes among the remaining building and decorative stones. Research on these materials is therefore crucial not only for the conservation of this specific monument but also for other historic structures in Central Europe. The current research is divided into three main parts: i) description of the lithologies and their provenance, ii) testing of the physical properties of the historic material, and iii) durability tests of analogous stones obtained from active quarries. The survey of the National Monument of Székesfehérvár focuses on the historical importance and architecture of the monument, the different construction periods, and the identification of the different building stones and their distribution in the remaining parts of the monument; it also includes provenance analyses. The second part comprises in situ and laboratory testing of the physical properties of the historic material. In the final phase, samples were taken from local quarries with physical and mineralogical characteristics similar to those of the stones used in the monument. The three studied lithologies are a fine oolitic limestone, a coarse oolitic limestone and a red compact limestone. These stones were subjected to rock mechanical and durability tests under laboratory conditions. The following techniques were used: a) in situ: Schmidt hammer values, moisture content measurements, DRMS, and mapping (construction ages, lithotypes, weathering forms); b) laboratory: petrographic analysis, XRD, determination of real density by helium pycnometer and bulk density by mercury pycnometer, pore size distribution by mercury intrusion porosimetry and nitrogen adsorption, water absorption, determination of open porosity, DRMS, frost resistance, ultrasonic pulse velocity test, uniaxial compressive strength test and dynamic modulus of elasticity. The results show that initial uniaxial compressive strength is not necessarily a clear indicator of stone durability. Bedding and other lithological heterogeneities can influence the strength and durability of individual specimens. In addition, long-term behaviour is influenced by exposure conditions, fabric and, especially, the pore size distribution of each sample. Therefore, a statistical evaluation of the results is highly recommended, and they should be evaluated in combination with other investigations of the internal structure and micro-scale heterogeneities of the material, such as petrographic observation, ultrasonic pulse velocity and porosimetry. Laboratory tests used to estimate the durability of natural stone may give good guidance on its short-term performance, but they should not be taken as an ultimate indication of the long-term behaviour of the stone. The interdisciplinary study of the results confirms that stones in the monument show deterioration in terms of mineralogy, fabric and physical properties in comparison with the quarried stones. Moreover, the stone testing demonstrates compatibility between the quarried and the historical stones.
Good correlation is observed between the non-destructive techniques and the laboratory test results, which allows us to minimize sampling while assessing the condition of the materials. In conclusion, this research can contribute diagnostic knowledge to the further studies needed to evaluate the effect of recent and future protective measures.
Abstract:
Despite the many proposed advantages of nanotechnology, there are increasing concerns about the potential adverse human health and environmental effects that the production of, and subsequent exposure to, nanoparticles (NPs) might pose. In regard to human health, these concerns are founded upon the plethora of knowledge gained from research into the effects observed following exposure to environmental air pollution. It is known that increased exposure to environmental air pollution can reduce respiratory health, as well as exacerbate pre-existing conditions such as cardiovascular disease and chronic obstructive pulmonary disease. Such disease states have also been associated with exposure to the NP component of environmental air pollution, raising concerns about the effects of NP exposure. It is not only exposure to accidentally produced NPs, however, that should be approached with caution. Over the past decades, NPs have been specifically engineered for a wide range of consumer, industrial and technological applications. Due to the inevitable exposure of humans to NPs through their use in such applications, it is imperative to gain an understanding of how NPs interact with the human body. In vivo research provides a valuable model for gaining immediate and direct knowledge of human exposure to such xenobiotics. This research approach, however, has numerous limitations. Increased research using in vitro models has therefore been performed, as these models provide an inexpensive and high-throughput alternative to in vivo research strategies. Despite such advantages, in vitro research also has various restrictions. Therefore, the aim of this review, in addition to providing a short perspective on the field of nanotoxicology, is to discuss (1) the advantages and disadvantages of in vitro research and (2) how in vitro research may provide essential information pertaining to the human health risks posed by NP exposure.
Abstract:
When reengineering legacy systems, it is crucial to assess whether the legacy behavior has been preserved or how it changed due to the reengineering effort. Ideally, if a legacy system is covered by tests, running the tests on the new version can identify potential differences or discrepancies. However, writing tests for an unknown and large system is difficult because of the lack of internal knowledge. It is especially difficult to bring the system into an appropriate state. Our solution is based on the observation that one of the few trustworthy pieces of information available when approaching a legacy system is the running system itself. Our approach reifies the execution traces and uses logic programming to express tests on them. It thereby eliminates the need to programmatically bring the system into a particular state, and hands the test writer a high-level abstraction mechanism for querying the trace. The resulting system, called TESTLOG, was used on several real-world case studies to validate our claims.
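To illustrate the general idea of reifying an execution trace and expressing tests as queries over it, consider the sketch below; TESTLOG itself expresses such queries in logic programming, and the event fields, trace contents and query helper here are hypothetical.

```python
from dataclasses import dataclass

# Sketch: record execution events of the running legacy system (a reified trace) and
# write tests as declarative queries over those events, instead of setting up program
# state by hand.

@dataclass
class Event:
    receiver: str
    message: str
    arguments: tuple
    result: object

trace = [
    Event("InvoiceService", "open", ("INV-1",), "draft"),
    Event("InvoiceService", "addItem", ("INV-1", "widget", 3), None),
    Event("InvoiceService", "close", ("INV-1",), "closed"),
]

def query(trace, **conditions):
    """Return all trace events whose fields equal the given values."""
    return [e for e in trace
            if all(getattr(e, field) == value for field, value in conditions.items())]

# A test expressed on the trace: every invoice that was closed must have been opened.
for closed in query(trace, message="close"):
    invoice = closed.arguments[0]
    assert query(trace, message="open", arguments=(invoice,)), f"{invoice} closed before open"
```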
Abstract:
Due to the widespread development of anthelmintic resistance in equine parasites, recommendations for their control are currently undergoing marked changes, with a shift of emphasis toward more coprological surveillance and reduced treatment intensity. Denmark was the first nation to introduce prescription-only restrictions on anthelmintic drugs in 1999, but other European countries have implemented similar legislation over recent years. A questionnaire survey was performed in 2008 among Danish horse owners to provide a current status of practices and perceptions relating to parasite control. Questions aimed at describing the current use of coprological surveillance and resulting anthelmintic treatment intensities, evaluating knowledge and perceptions of the importance of various attributes of parasite control, and assessing respondents' willingness to pay for advice and parasite surveillance services from their veterinarians. A total of 1060 respondents completed the questionnaire. A large majority of respondents (71.9%) were familiar with the concept of selective therapy. Results illustrated that the respondents' self-evaluation of their knowledge about parasites and their control was significantly associated with their level of interest in the topic and their type of education (P<0.0001). The large majority of respondents dewormed their horses twice a year and/or performed two fecal egg counts per horse per year. This approach was almost equally pronounced in foals, horses aged 1-3 years, and adult horses. The respondents rated prevention of parasitic disease and prevention of drug resistance as the most important attributes, while cost and frequent fecal testing were rated least important. Respondents' actual spending on parasite control per horse in the previous year correlated significantly with the amount they declared themselves willing to spend (P<0.0001). However, 44.4% declared themselves willing to pay more than they were currently spending. Altogether, the results indicate that respondents were generally familiar with equine parasites and the concept of selective therapy, although there was some confusion over the terms small and large strongyles. They used a large degree of fecal surveillance in all age groups, with a majority of respondents sampling and/or treating around twice a year. Finally, respondents appeared willing to spend money on parasite control for their horses. It is of concern that the survey suggested that foals and young horses are treated in a manner very similar to adult horses, which is against current recommendations. Thus, the survey illustrates the importance of clear communication of guidelines for equine parasite control.
Abstract:
Students arrive at classes with varying social situations and course subject knowledge. Blackboard is a web-based course delivery program that permits testing of students before they arrive at the first class. A pretest was used to assess preexisting subject knowledge (S) and a survey was used to assess non-subject (N) factors that might impact the student's final grade. A posttest was administered after all content was delivered and was used to assess change in S. [See PDF for complete abstract]
Abstract:
We provide a novel search technique which uses a hierarchical model and a mutual information gain heuristic to efficiently prune the search space when localizing faces in images. We show exponential gains in computation over traditional sliding window approaches, while keeping similar performance levels.
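For orientation only, here is a generic coarse-to-fine window search that prunes low-scoring coarse regions before refining; the scoring function is a placeholder, and the paper's hierarchical model and mutual-information-gain heuristic for deciding which regions to expand are not reproduced.

```python
import numpy as np

# Generic coarse-to-fine window search: score coarse cells first and refine only the
# most promising ones instead of sliding a window over every position.

def face_score(image, x, y, size):
    return image[y:y + size, x:x + size].mean()        # placeholder detector response

def coarse_to_fine(image, coarse=64, fine=16, keep=2):
    h, w = image.shape
    coarse_cells = [(x, y, face_score(image, x, y, coarse))
                    for y in range(0, h - coarse + 1, coarse)
                    for x in range(0, w - coarse + 1, coarse)]
    best_cells = sorted(coarse_cells, key=lambda c: c[2], reverse=True)[:keep]
    candidates = []
    for cx, cy, _ in best_cells:                        # refine only surviving regions
        for y in range(cy, cy + coarse - fine + 1, fine):
            for x in range(cx, cx + coarse - fine + 1, fine):
                candidates.append((x, y, face_score(image, x, y, fine)))
    return max(candidates, key=lambda c: c[2])          # best fine window (x, y, score)

image = np.random.default_rng(1).random((256, 256))
print(coarse_to_fine(image))
```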