870 results for assessment data
Abstract:
Background: Non-long terminal repeat (non-LTR) retrotransposons have contributed to shaping the structure and function of genomes. In silico and experimental approaches have been used to identify the non-LTR elements of the urochordate Ciona intestinalis. Knowledge of the types and abundance of non-LTR elements in urochordates is a key step in understanding their contribution to the structure and function of vertebrate genomes. Results: Consensus elements phylogenetically related to the I, LINE1, LINE2, LOA and R2 elements, representing five of the 14 eukaryotic non-LTR clades, are described from C. intestinalis. The ascidian elements showed conservation of both the reverse transcriptase coding sequence and the overall structural organization seen in each clade. The apurinic/apyrimidinic endonuclease and nucleic-acid-binding domains encoded upstream of the reverse transcriptase, and the RNase H and restriction enzyme-like endonuclease motifs encoded downstream of the reverse transcriptase, were identified in the corresponding Ciona families. Conclusions: The genome of C. intestinalis harbors representatives of at least five clades of non-LTR retrotransposons. The copy number per haploid genome of each element is low (fewer than 100), far below the values reported for vertebrate counterparts but within the range for protostomes. Genomic and sequence analysis shows that the ascidian non-LTR elements are unmethylated and flanked by genomic segments with a gene density lower than average for the genome. The analysis provides valuable data for understanding the evolution of early chordate genomes and broadens the view of the distribution of non-LTR retrotransposons in eukaryotes.
Abstract:
This article has been written as a comment on Dr Thomas and Dr Baker's article "Teaching an adult brain new tricks: A critical review of evidence for training-dependent structural plasticity in humans". We deliberately expand on the key question about the biological substrates underlying use-dependent brain plasticity rather than reiterating the authors' main points of criticism, which have already been addressed in a more general way by previous publications in the field. The focus here is on the following main issues: i) controversial brain plasticity findings in voxel-based morphometry studies are partially due to the strong dependency of the widely used T1-weighted imaging protocol on varying magnetic resonance contrast contributions; ii) novel concepts in statistical analysis allow one to directly infer the topological specificity of structural brain changes associated with plasticity. We conclude that iii) voxel-based quantification of relaxometry-derived parameter maps could provide a new perspective on use-dependent plasticity by characterising changes in brain tissue properties beyond the estimation of volume and cortical thickness changes. In the relevant sections we respond to the concerns raised by Dr Thomas and Dr Baker from the perspective of the proposed data acquisition and analysis strategy.
Abstract:
BACKGROUND: Human speech is greatly influenced by the speaker's affective state, such as sadness, happiness, grief, guilt, fear, anger, aggression, faintheartedness, shame, sexual arousal, love, amongst others. Attentive listeners discover a lot about the affective state of their dialog partners with no great effort, and without having to talk about it explicitly during a conversation or on the phone. On the other hand, speech dysfunctions, such as slow, delayed or monotonous speech, are prominent features of affective disorders. METHODS: This project comprised four studies with healthy volunteers from Bristol (English: n = 117), Lausanne (French: n = 128), Zurich (German: n = 208), and Valencia (Spanish: n = 124). All samples were stratified according to gender, age, and education. The specific study design, with different types of spoken text along with repeated assessments at 14-day intervals, allowed us to estimate the 'natural' variation of speech parameters over time and to analyze the sensitivity of speech parameters with respect to the form and content of spoken text. Additionally, our project included a longitudinal self-assessment study with university students from Zurich (n = 18) and unemployed adults from Valencia (n = 18) in order to test the feasibility of the speech analysis method in home environments. RESULTS: The normative data showed that speaking behavior and voice sound characteristics can be quantified in a reproducible and language-independent way. The high resolution of the method was verified by a computerized assignment of speech parameter patterns to languages at a success rate of 90%, while the rate of correct assignment to texts was 70%. In the longitudinal self-assessment study we calculated individual 'baselines' for each test person, along with deviations thereof. The significance of such deviations was assessed through the normative reference data. CONCLUSIONS: Our data provided gender-, age-, and language-specific thresholds that allow one to reliably distinguish between 'natural fluctuations' and 'significant changes'. The longitudinal self-assessment study with repeated assessments at 1-day intervals over 14 days demonstrated the feasibility and efficiency of the speech analysis method in home environments, thus clearing the way to a broader range of applications in psychiatry. © 2014 S. Karger AG, Basel.
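The individual 'baseline and deviation' idea from the longitudinal self-assessment study can be sketched in a few lines. The following Python fragment is an illustration only (the speech parameter, the three-session baseline, and the 2-SD cutoff are assumptions, not the authors' published procedure): a personal baseline is estimated from the first sessions, and later sessions are flagged when they deviate from it by more than a normative threshold.

```python
# Illustrative sketch (not the authors' implementation): flag deviations of a
# speech parameter from an individual's baseline using a normative threshold.
# Parameter names and the 2-SD cutoff are assumptions for demonstration.
import numpy as np

def flag_deviations(sessions, normative_sd, n_baseline=3, z_cutoff=2.0):
    """sessions: repeated measurements of one speech parameter (e.g., speaking
    rate) for one speaker; the first n_baseline sessions define the baseline."""
    sessions = np.asarray(sessions, dtype=float)
    baseline = sessions[:n_baseline].mean()
    z = (sessions - baseline) / normative_sd    # deviation in normative SD units
    return baseline, z, np.abs(z) > z_cutoff    # True where change is 'significant'

# Example: 14 daily recordings of speaking rate (syllables/s), normative SD = 0.4
baseline, z, flags = flag_deviations(
    [4.1, 4.0, 4.2, 4.1, 3.9, 3.2, 3.1, 4.0, 4.1, 4.3, 4.2, 4.0, 3.0, 4.1],
    normative_sd=0.4)
print(baseline, np.round(z, 2), flags)
```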
Abstract:
The safe and responsible development of engineered nanomaterials (ENMs), nanotechnology-based materials and products, together with the definition of regulatory measures and implementation of "nano"-legislation in Europe, requires a widely supported scientific basis and sufficient high-quality data upon which to base decisions. At the very core of such a scientific basis is a general agreement on key issues related to risk assessment of ENMs, which encompass the key parameters to characterise ENMs, appropriate methods of analysis, and the best approach to express the effect of ENMs in widely accepted dose-response toxicity tests. The following major conclusions were drawn: Due to the high batch variability in the characteristics of commercially available and, to a lesser degree, laboratory-made ENMs, it is not possible to make general statements regarding the toxicity resulting from exposure to ENMs. 1) Concomitant with using the OECD priority list of ENMs, other criteria for the selection of ENMs, such as relevance for mechanistic (scientific) studies or risk-assessment-based studies, widespread availability (and thus high expected volumes of use), or consumer concern (route of consumer exposure depending on application), could be helpful. The OECD priority list focuses on the validity of OECD tests; therefore, source material will be first in scope for testing. For risk assessment, however, it is much more relevant to have toxicity data from the material as present in the products/matrices to which humans and the environment are exposed. 2) For most, if not all, characteristics of ENMs, standardized analytical methods, though not necessarily validated, are available. Generally these methods are only able to determine one single characteristic, and some of them can be rather expensive. Practically, it is currently not feasible to fully characterise ENMs. Many techniques that are available to measure the same nanomaterial characteristic produce contrasting results (e.g. reported sizes of ENMs). It was recommended that at least two complementary techniques should be employed to determine a metric of ENMs. The first great challenge is to prioritise metrics which are relevant in the assessment of biological dose-response relations and to develop analytical methods for characterising ENMs in biological matrices. It was generally agreed that one metric is not sufficient to describe ENMs fully. 3) Characterisation of ENMs in biological matrices starts with sample preparation. It was concluded that there is currently no standard approach/protocol for sample preparation to control agglomeration/aggregation and (re)dispersion. It was recommended that harmonisation should be initiated and that exchange of protocols should take place. The precise methods used to disperse ENMs should be specifically, yet succinctly, described within the experimental section of a publication. 4) ENMs need to be characterised in the matrix as it is presented to the test system (in vitro/in vivo). 5) Alternative approaches (e.g. biological or in silico systems) for the characterisation of ENMs are simply not possible with the current knowledge. Contributors: Iseult Lynch, Hans Marvin, Kenneth Dawson, Markus Berges, Diane Braguer, Hugh J. Byrne, Alan Casey, Gordon Chambers, Martin Clift, Giuliano Elia, Teresa F. Fernandes, Lise Fjellsbø, Peter Hatto, Lucienne Juillerat, Christoph Klein, Wolfgang Kreyling, Carmen Nickel, and Vicki Stone.
Abstract:
Purpose: To investigate the differences in viscoelastic properties between normal and pathologic Achilles tendons (ATs) by using real-time shear-wave elastography (SWE). Materials and Methods: The institutional review board approved this study, and written informed consent was obtained from 25 symptomatic patients and 80 volunteers. One hundred eighty ultrasonographic (US) and SWE studies of ATs without tendonopathy and 30 studies of the middle portion of the AT in patients with tendonopathy were assessed prospectively. Each study included data sets acquired at B-mode US (tendon morphology and cross-sectional area) and SWE (axial and sagittal mean velocity and relative anisotropic coefficient) for two passively mobilized ankle positions. The presence of AT tears at B-mode US and signal-void areas at SWE were noted. Results: Significantly lower mean velocity was shown in tendons with tendonopathy than in normal tendons in the relaxed position at axial SWE (P < .001) and in the stretched position at sagittal (P < .001) and axial (P = .0026) SWE. Tendon softening was a sign of tendonopathy in relaxed ATs when the mean velocity was less than or equal to 4.06 m · sec(-1) at axial SWE (sensitivity, 54.2%; 95% confidence interval [CI]: 32.8, 74.4; specificity, 91.5%; 95% CI: 86.3, 95.1) and less than or equal to 5.70 m · sec(-1) at sagittal SWE (sensitivity, 41.7%; 95% CI: 22.1, 63.3; specificity, 81.8%; 95% CI: 75.3, 87.2), and in stretched ATs when the mean velocity was less than or equal to 4.86 m · sec(-1) at axial SWE (sensitivity, 66.7%; 95% CI: 44.7, 84.3; specificity, 75.6%; 95% CI: 68.5, 81.7) and less than or equal to 14.58 m · sec(-1) at sagittal SWE (sensitivity, 58.3%; 95% CI: 36.7, 77.9; specificity, 83.5%; 95% CI: 77.2, 88.7). Anisotropic results were not significantly different between normal and pathologic ATs. Six of six (100%) partial-thickness tears appeared as signal-void areas at SWE. Conclusion: Whether the AT was relaxed or stretched, SWE helped to confirm and quantify pathologic tendon softening in patients with tendonopathy in the midportion of the AT and did not reveal modifications of viscoelastic anisotropy in the tendon. Tendon softening assessed by using SWE appeared to be highly specific, but sensitivity was relatively low. © RSNA, 2014.
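As a reading aid, the velocity cut-offs reported in the Results can be collected into a simple decision rule. The sketch below is not the authors' analysis code; the function and variable names are illustrative, and only the published cut-off values are taken from the abstract.

```python
# Minimal sketch of applying the velocity cut-offs reported in the abstract to
# flag tendon softening; names are illustrative, not the authors' code.
CUTOFFS_M_PER_S = {          # (ankle position, imaging plane) -> cut-off
    ("relaxed", "axial"): 4.06,
    ("relaxed", "sagittal"): 5.70,
    ("stretched", "axial"): 4.86,
    ("stretched", "sagittal"): 14.58,
}

def softening_suspected(mean_velocity, position, plane):
    """Return True if the SWE mean shear-wave velocity falls at or below the
    published cut-off for that position/plane (suggestive of tendonopathy)."""
    return mean_velocity <= CUTOFFS_M_PER_S[(position, plane)]

print(softening_suspected(3.8, "relaxed", "axial"))      # True
print(softening_suspected(6.2, "stretched", "axial"))    # False
```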
Abstract:
During 2011, the Iowa Department of Corrections analyzed the impact that attaining a GED has on recidivism – specifically, the three-year return-to-prison rate. For those inmates who have a low or moderate risk level (as measured by the LSI-R assessment tool), attaining a GED does not tend to reduce the return-to-prison rate. However, attaining a GED does tend to reduce the rate for higher-risk offenders.
Abstract:
OBJECTIVE: To assess the impact of introducing clinical practice guidelines for acute coronary syndrome without persistent ST-segment elevation (ACS) on patients' initial assessment. DESIGN: Prospective before-after evaluation over a 3-month period. SETTING: The emergency ward of a tertiary teaching hospital. PATIENTS: All consecutive patients with ACS evaluated in the emergency ward over the two 3-month periods. INTERVENTION: Implementation of the practice guidelines, and the addition of a cardiology consultant to the emergency team. MAIN OUTCOME MEASURES: Diagnosis, electrocardiogram interpretation, and risk stratification after the initial evaluation. RESULTS: The clinical characteristics of the 328 and 364 patients evaluated in the emergency ward for suspicion of ACS before and after guideline implementation were similar. Significantly more patients were classified as suffering from atypical chest pain after guideline implementation (39.6% versus 47.0%; P = 0.006). Guideline availability was associated with significantly more formal diagnoses (79.9% versus 92.9%; P < 0.0001) and risk stratification (53.7% versus 65.4%; P < 0.0001) at the end of the initial assessment. CONCLUSION: Guideline implementation, along with the availability of a cardiology consultant in the emergency room, had a positive impact on the initial assessment of patients evaluated for suspicion of ACS. It led to increased confidence in diagnosis and stratification by risk, which are the first steps in initiating effective treatment for this common condition.
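For readers who want to reproduce the kind of before-after comparison reported above, the sketch below runs a two-proportion z-test on the formal-diagnosis rates (79.9% of 328 patients versus 92.9% of 364). The abstract does not state which statistical test the authors used, so this is an assumed, illustrative analysis only.

```python
# Hedged illustration of the before-after comparison: a two-proportion z-test
# on the formal-diagnosis rate before vs after guideline implementation.
from math import sqrt, erfc

def two_proportion_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion
    z = (p2 - p1) / sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return z, erfc(abs(z) / sqrt(2))                # two-sided p-value

before = round(0.799 * 328)    # ~262 formal diagnoses before
after = round(0.929 * 364)     # ~338 formal diagnoses after
z, p = two_proportion_z(before, 328, after, 364)
print(f"z = {z:.2f}, p = {p:.1g}")                  # consistent with P < 0.0001
```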
Abstract:
Concrete will suffer frost damage when saturated and subjected to freezing temperatures. Frost-durable concrete can be produced if a specialized surfactant, also known as an air-entraining admixture (AEA), is added during mixing to stabilize microscopic air voids. Small and well-dispersed air voids are critical to produce frost-resistant concrete. Work completed by Klieger in 1952 established the minimum volume of air required to consistently ensure frost durability in a concrete mixture subjected to rapid freezing and thawing cycles. He suggested that frost durability was provided if 18 percent air was created in the paste. This remains the basis of current practice, despite the tests having been conducted on materials that are no longer available and with test methods that differ from those in use today. Based on the data presented, it was found that a minimum air content of 3.5 percent in the concrete and 11.0 percent in the paste should yield concrete that is durable in ASTM C 666 testing with modern AEAs and low or no lignosulfonate water reducers (WRs). Limited data suggest that mixtures with a higher dosage of lignosulfonate will need about 1 percent more air in the concrete, or 3 percent more air in the paste, for the materials and procedures used. A spacing factor of 0.008 in. was still found to be necessary to provide frost durability for the mixtures investigated.
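The recommended limits can be gathered into a small screening check. The following sketch is an assumed decision rule built only from the numbers quoted above (3.5 percent air in the concrete, 11.0 percent in the paste, a 0.008 in. spacing factor, and roughly 1 and 3 percent extra air for higher lignosulfonate dosages); it is not a published specification.

```python
# Simple sketch encoding the thresholds discussed above (assumed decision rule,
# not a standard): minimum air in concrete and paste, maximum spacing factor,
# with extra air when a high lignosulfonate WR dosage is used.
def frost_durable(air_concrete_pct, air_paste_pct, spacing_factor_in,
                  high_lignosulfonate=False):
    min_concrete, min_paste = 3.5, 11.0
    if high_lignosulfonate:
        min_concrete += 1.0        # ~1 percent more air in the concrete
        min_paste += 3.0           # ~3 percent more air in the paste
    return (air_concrete_pct >= min_concrete
            and air_paste_pct >= min_paste
            and spacing_factor_in <= 0.008)

print(frost_durable(4.0, 12.5, 0.0075))                             # True
print(frost_durable(4.0, 12.5, 0.0075, high_lignosulfonate=True))   # False
```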
Abstract:
Debris flows are among the most dangerous processes in mountainous areas due to their rapid rate of movement and long runout zones. Sudden and rather unexpected impacts not only damage buildings and infrastructure but also threaten human lives. Medium- to regional-scale susceptibility analyses allow the identification of the most endangered areas and suggest where further detailed studies have to be carried out. Since data availability for larger regions is usually the key limiting factor, empirical models with low data requirements are suitable for first overviews. In this study a susceptibility analysis was carried out for the Barcelonnette Basin, situated in the southern French Alps. By means of a methodology based on empirical rules for source identification and the empirical angle-of-reach concept for the 2-D runout computation, a worst-case scenario was first modelled. In a second step, scenarios for high-, medium- and low-frequency events were developed. A comparison with the footprints of a few mapped events indicates reasonable results but suggests a high dependency on the quality of the digital elevation model. This fact emphasises the need for careful interpretation of the results, while remaining conscious of the inherent assumptions of the model used and the quality of the input data.
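The empirical angle-of-reach concept used for the runout step can be illustrated with a short calculation: the reach angle alpha links the elevation drop H between the source and the distal deposit to the horizontal travel distance L through tan(alpha) = H / L. The angles in the example below are placeholders, not the values calibrated for the Barcelonnette Basin.

```python
# Minimal sketch of the angle-of-reach (Fahrboeschung) idea: the horizontal
# runout distance L implied by an elevation drop H and an assumed reach angle.
from math import tan, radians

def max_runout_length(elevation_drop_m, reach_angle_deg):
    """Horizontal runout distance from tan(alpha) = H / L."""
    return elevation_drop_m / tan(radians(reach_angle_deg))

for angle in (8, 11, 14):          # lower reach angle -> more mobile, longer runout
    print(angle, round(max_runout_length(500.0, angle)), "m")
```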
Abstract:
OBJECTIVES: This study aimed to assess the validity of COOP charts in a general population sample, to examine whether illustrations contribute to instrument validity, and to establish general population norms. METHODS: A general population mail survey was conducted among 20- to 79-year-old residents of the Swiss canton of Vaud. Participants were invited to complete the COOP charts and the SF-36 Health Survey; they also provided data on health service use in the previous month. Two thirds of the respondents received standard COOP charts, the rest received charts without illustrations. RESULTS: Overall, 1250 persons responded (54%). The presence of illustrations did not affect score distributions, except that the illustrated 'physical fitness' chart drew greater non-response (10 vs. 3%, p < 0.001). Validity tests were similar for illustrated and picture-less charts. Factor analysis yielded two principal components, corresponding to physical and mental health. Six COOP charts showed strong and nearly linear relationships with the corresponding SF-36 scores (all p < 0.001), demonstrating concurrent validity. Similarly, most COOP charts were associated with the use of medical services in the past month. Only the chart on 'social support' partly deviated from the construct validity hypotheses. Population norms revealed a generally lower health status in women and an age-related decline in physical health. CONCLUSIONS: COOP charts can be used to assess the health status of a general population. Their validity is good, with the possible exception of the 'social support' chart. The illustrations do not affect the properties of this instrument.
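A concurrent-validity check of the kind described above can be sketched as a correlation between a COOP chart and its corresponding SF-36 scale. The data in the example are synthetic, and the choice of a Spearman correlation is an assumption for illustration; the study's actual scoring and analysis may differ.

```python
# Hedged sketch of a concurrent-validity check: correlate a (synthetic) COOP
# chart score (1-5, higher = worse) with the corresponding SF-36 scale (0-100).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
sf36_physical = rng.uniform(0, 100, size=200)            # SF-36 scale scores
coop_physical = np.clip(np.round(5 - sf36_physical / 25  # mapped to a 1-5 chart
                                 + rng.normal(0, 0.7, 200)), 1, 5)
rho, p = spearmanr(coop_physical, sf36_physical)
print(f"Spearman rho = {rho:.2f}, p = {p:.2g}")           # strong negative association
```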
Abstract:
Radioactive soil-contamination mapping and risk assessment is a vital issue for decision makers. Traditional approaches for mapping the spatial concentration of radionuclides employ various regression-based models, which usually provide a single-value prediction realization accompanied (in some cases) by estimation error. Such approaches do not provide the capability for rigorous uncertainty quantification or probabilistic mapping. Machine learning is a recent and fast-developing approach based on learning patterns and information from data. Artificial neural networks for prediction mapping have been especially powerful in combination with spatial statistics. A data-driven approach provides the opportunity to integrate additional relevant information about spatial phenomena into a prediction model for more accurate spatial estimates and associated uncertainty. Machine-learning algorithms can also be used for a wider spectrum of problems than before: classification, probability density estimation, and so forth. Stochastic simulations are used to model spatial variability and uncertainty. Unlike regression models, they provide multiple realizations of a particular spatial pattern that allow uncertainty and risk quantification. This paper reviews the most recent methods of spatial data analysis, prediction, and risk mapping, based on machine learning and stochastic simulations in comparison with more traditional regression models. The radioactive fallout from the Chernobyl Nuclear Power Plant accident is used to illustrate the application of the models for prediction and classification problems. This fallout is a unique case study that provides the challenging task of analyzing huge amounts of data ('hard' direct measurements, as well as supplementary information and expert estimates) and solving particular decision-oriented problems.
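The advantage of stochastic simulations over a single regression estimate can be made concrete with a toy example: given an ensemble of simulated contamination maps, the probability of exceeding a regulatory threshold can be computed cell by cell. Everything below (the synthetic ensemble, the lognormal stand-in for 137Cs deposition, the 37 kBq/m^2 threshold) is illustrative and not taken from the paper's data.

```python
# Illustrative sketch (not the paper's code) of why multiple stochastic
# realizations allow risk quantification: per-cell exceedance probability
# computed over a synthetic ensemble of contamination maps.
import numpy as np

rng = np.random.default_rng(42)
n_realizations, grid = 100, (50, 50)
# Stand-in for conditional simulations of 137Cs deposition (kBq/m^2)
realizations = rng.lognormal(mean=3.0, sigma=0.8, size=(n_realizations, *grid))

threshold = 37.0                                  # e.g., 37 kBq/m^2 (1 Ci/km^2)
exceedance_prob = (realizations > threshold).mean(axis=0)   # per-cell probability
print(exceedance_prob.shape, exceedance_prob.min(), exceedance_prob.max())
```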
Abstract:
The aim of this paper is to analyse how learning assessment, particularly the Continuous Assessment (CA) system, has been defined in the Public Administration and Management Diploma Course of the University of Barcelona (Spain). This course was a pioneering experiment at this university in implementing the guidelines of the European Higher Education Area (EHEA), and thus represents a good case study for verifying whether one of the cornerstones of the EHEA has been accomplished with success. Using data obtained from the Teaching Plans elaborated by the lecturers of each subject, we are able to establish that the CA system has been progressively accepted, to such an extent that it is now the assessment formula used by practically all of the lecturers, conforming in this way to the protocols laid down by the Faculty of Law in which this diploma course is taught. Nevertheless, we find that there is high dispersion in how Continuous Assessment is actually defined. Indeed, there seems to be no unified view of how Continuous Assessment should be performed. This dispersion, however, seems to diminish over time and raises some questions about the advisability of agreeing on criteria, considering the potential that CA has as a pedagogical tool. Moreover, we find that the Unique Assessment system, which students may also apply for, is an option chosen only by a minority, with lecturers usually defining it as merely a theoretical and/or practical test offering little innovation in relation to traditional tests.
Abstract:
Several methods and algorithms have recently been proposed that allow for the systematic evaluation of simple neuron models from intracellular or extracellular recordings. Models built in this way generate good quantitative predictions of the future activity of neurons under temporally structured current injection. It is, however, difficult to compare the advantages of the various models and algorithms, since each model is designed for a different set of data. Here, we report on one of the first attempts to establish a benchmark test that permits a systematic comparison of methods and performances in predicting the activity of rat cortical pyramidal neurons. We present early submissions to the benchmark test and discuss implications for the design of future tests and simple neuron models.
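One simple way to score such predictions, shown below, is to count how many recorded spikes are matched by a predicted spike within a small time window. This is a minimal illustration of the idea behind the benchmark, not the competition's actual evaluation measure; the window width and spike times are placeholders.

```python
# Minimal sketch of a coincidence-based score between recorded and predicted
# spike trains (+/-2 ms window); illustrative, not the benchmark's measure.
import numpy as np

def coincidence_rate(recorded, predicted, window=0.002):
    """Fraction of recorded spikes matched by a predicted spike within +/-window (s)."""
    recorded, predicted = np.sort(recorded), np.sort(predicted)
    hits = sum(np.any(np.abs(predicted - t) <= window) for t in recorded)
    return hits / len(recorded) if len(recorded) else 0.0

recorded = [0.012, 0.045, 0.101, 0.150]       # spike times in seconds
predicted = [0.013, 0.047, 0.120, 0.151]
print(coincidence_rate(recorded, predicted))   # 0.75
```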
Abstract:
According to the Centers for Disease Control and Prevention (CDC), the number of adults reporting a disability is expected to increase, along with the need for appropriate medical and public health services. People with disabilities (PWD) face many barriers to good health, including having overall poorer health, less access to adequate health care, limited access to health insurance, skipping medical care because of cost, and engaging in risky health behaviors including smoking and physical inactivity. The goals of the Iowa public health needs assessment were to assess the burden of disability in Iowa counties, including health risk factors such as chronic conditions, to determine access to preventive health care, and to determine the effect of socioeconomic conditions. The state-level assessment was based on the 2009-2012 American Community Survey (ACS) and the publicly available Behavioral Risk Factor Surveillance System (BRFSS) 2011 survey. The 2001-2010 combined BRFSS data were used for the county-level assessment. The needs assessment led us to conclude that adult Iowans with disabilities face several challenges compared to non-disabled adults. They are more likely to suffer from debilitating chronic conditions and social disparities. Counties with higher levels of poverty were more likely to have PWD with higher levels of disability-related disparities.