765 results for Competencies assessment tool
Abstract:
Landslide hazard and risk are growing as a consequence of climate change and demographic pressure. Land‐use planning represents a powerful tool to manage this socio‐economic problem and build sustainable and landslide‐resilient communities. Landslide inventory maps are a cornerstone of land‐use planning and, consequently, their quality assessment represents a burning issue. This work aimed to define the quality parameters of a landslide inventory and to assess its spatial and temporal accuracy with regard to its possible applications to land‐use planning. To this end, I proceeded according to a two‐step approach. An overall assessment of the accuracy of data geographic positioning was performed on four case study sites located in the Italian Northern Apennines. The quantification of the overall spatial and temporal accuracy, instead, focused on the Dorgola Valley (Province of Reggio Emilia). The assessment of spatial accuracy involved a comparison between remotely sensed and field survey data, as well as an innovative fuzzy‐like analysis of a multi‐temporal landslide inventory map. Conversely, long‐ and short‐term landslide temporal persistence was appraised over a period of 60 years with the aid of 18 remotely sensed image sets. These results were eventually compared with the current Territorial Plan for Provincial Coordination (PTCP) of the Province of Reggio Emilia. The outcome of this work suggests that geomorphologically detected and mapped landslides are a significant approximation of a more complex reality. In order to convey this intrinsic uncertainty to end‐users, a new form of cartographic representation is needed; a fuzzy raster landslide map may be an option. With regard to land‐use planning, landslide inventory maps, if appropriately updated, were confirmed to be essential decision‐support tools.
This research, however, proved that their spatial and temporal uncertainty discourages any direct use as zoning maps, especially when zoning itself is associated with statutory or advisory regulations.
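To illustrate the fuzzy raster idea proposed above, the following minimal sketch (an assumption-laden example, not the thesis method) derives a per-cell fuzzy membership from a stack of hypothetical multi-temporal binary landslide masks: the membership value is the fraction of inventories in which each cell was mapped as landslide.

```python
import numpy as np

# Hypothetical stack of binary landslide masks from successive inventories
# (1 = mapped as landslide, 0 = stable); shape: (n_inventories, rows, cols).
inventories = np.array([
    [[1, 1, 0], [0, 1, 0]],
    [[1, 0, 0], [0, 1, 1]],
    [[1, 1, 0], [0, 0, 1]],
])

# Fuzzy membership: fraction of inventories in which each cell is a landslide.
# A value of 1.0 means the cell was mapped in every inventory; intermediate
# values express the intrinsic uncertainty of the mapped boundaries.
membership = inventories.mean(axis=0)
print(membership)
```

A raster of such membership values could then be rendered as a continuous-tone map instead of crisp landslide polygons.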
Abstract:
Dysfunction of the Autonomic Nervous System (ANS) is a typical feature of chronic heart failure and other cardiovascular diseases. As a simple non-invasive technology, heart rate variability (HRV) analysis provides reliable information on the autonomic modulation of heart rate. The aim of this thesis was to research and develop automatic methods, based on ANS assessment, for the evaluation of risk in cardiac patients. Several feature selection and machine learning algorithms were combined to achieve these goals. Automatic assessment of disease severity in Congestive Heart Failure (CHF) patients: a completely automatic method based on long-term HRV was proposed in order to automatically assess the severity of CHF, achieving a sensitivity of 93% and a specificity of 64% in discriminating severe versus mild patients. Automatic identification of hypertensive patients at high risk of vascular events: a completely automatic system was proposed to identify hypertensive patients at higher risk of developing vascular events in the 12 months following the electrocardiographic recordings, achieving a sensitivity of 71% and a specificity of 86% in identifying high-risk subjects among hypertensive patients. Automatic identification of hypertensive patients with a history of falls: it was explored whether an automatic identification of fallers among hypertensive patients based on HRV was feasible. The results obtained in this thesis could have implications both in clinical practice and in clinical research. The system was designed and developed to be clinically feasible. Moreover, since a 5-minute ECG recording is inexpensive, easy to acquire, and non-invasive, future research will focus on the clinical applicability of the system as a screening tool in non-specialized ambulatory settings, in order to identify high-risk patients to be shortlisted for more complex investigations.
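The sensitivity and specificity figures quoted above come from a standard confusion-matrix calculation. The sketch below shows that calculation with made-up labels (not the thesis data):

```python
def sensitivity_specificity(y_true, y_pred):
    """Compute sensitivity (true-positive rate) and specificity
    (true-negative rate) for binary labels
    (1 = severe/high-risk, 0 = mild/low-risk)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative labels only, not data from the thesis.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, spec)  # 0.75 0.75
```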
Abstract:
This thesis regards the study and development of new cognitive assessment and rehabilitation techniques for subjects with traumatic brain injury (TBI). In particular, this thesis i) provides an overview of the state of the art of these new assessment and rehabilitation technologies, ii) suggests new methods for assessment and rehabilitation, and iii) contributes to the explanation of the neurophysiological mechanisms involved in a rehabilitation treatment. Some chapters provide useful information to contextualize TBI and its outcome; they describe the methods used for its assessment/rehabilitation. The other chapters illustrate a series of experimental studies conducted in healthy subjects and TBI patients that suggest new approaches to assessment and rehabilitation. The proposed approaches have in common the use of electroencephalography (EEG). EEG was used in all the experimental studies with different purposes: as a diagnostic tool, as a signal to command a BCI system, as an outcome measure to evaluate the effects of a treatment, etc. The main results achieved concern: i) the study and development of a system for communication with patients with disorders of consciousness. It was possible to identify a paradigm of reliable activation during two imagery tasks using the EEG signal, or EEG and NIRS signals; ii) the study of the effects of a neuromodulation technique (tDCS) on EEG patterns. This topic is of great importance and interest. The findings that emerged showed that tDCS can manipulate cortical network activity and that, through the search for optimal stimulation parameters, it is possible to move the working point of a neural network and bring it into a condition of maximum learning. In this way it could be possible to improve the performance of a BCI system or the efficacy of a rehabilitation treatment, such as neurofeedback.
Abstract:
In the food industry, quality assurance requires low-cost methods for the rapid assessment of the parameters that affect product stability. Foodstuffs are complex in their structure, mainly composed of gaseous, liquid and solid phases which often coexist in the same product. Special attention is given to water, whether as a natural component of the main food product or as an ingredient added during a production process. In particular, water is structurally present in the matrix and not completely available. Water can thus be present in foodstuffs in many different states: as water of crystallization, bound to protein or starch molecules, entrapped in biopolymer networks, or adsorbed on the solid surfaces of porous food particles. The traditional techniques for the assessment of food quality give reliable information but are destructive, time-consuming and unsuitable for on-line application. The techniques proposed here address these time constraints and are able to characterize the main compositional parameters. The dielectric response is mainly related to water and can provide information not only on total water content but also on the degree of mobility of this ubiquitous molecule in different complex food matrices. The aim of this thesis is to answer this need: dielectric and electric tools can be used to describe the complex food matrix and predict food characteristics. The thesis is structured in three main parts. In the first, some theoretical tools are recalled to define the food parameters involved in quality and the techniques able to address the problems identified. The second part describes the research conducted and illustrates the experimental plans in detail. Finally, the last section is devoted to rapid methods easily implementable in an industrial process.
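A common single-relaxation description of the dielectric response of water, the Debye model, illustrates the kind of dielectric tool the thesis refers to. The sketch below uses approximate literature-style parameters for pure water at room temperature; all values are illustrative and not taken from the thesis:

```python
import math

def debye_permittivity(freq_hz, eps_static, eps_inf, tau_s):
    """Complex relative permittivity of a polar material (e.g. water)
    under the single-relaxation-time Debye model:
    eps*(w) = eps_inf + (eps_static - eps_inf) / (1 + j*w*tau)."""
    omega = 2 * math.pi * freq_hz
    eps = eps_inf + (eps_static - eps_inf) / complex(1, omega * tau_s)
    # Return the dielectric constant (real part) and the loss factor
    # (magnitude of the imaginary part).
    return eps.real, -eps.imag

# Approximate Debye parameters for pure water at ~25 degrees C
# (illustrative): static permittivity 78.4, high-frequency limit 5.2,
# relaxation time 8.3 ps; evaluated at 2.45 GHz.
real, loss = debye_permittivity(2.45e9, 78.4, 5.2, 8.3e-12)
```

Bound or entrapped water relaxes at lower frequencies than free water, which is why the dielectric spectrum can report on water mobility as well as total content.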
Abstract:
This study concerns teachers’ use of digital technologies in student assessment, and how the learning that is developed through the use of technology in mathematics can be evaluated. Nowadays math teachers use digital technologies in their teaching, but not in student assessment. The activities carried out with technology are seen as ‘extra-curricular’ (by both teachers and students), thus students do not learn what they can do in mathematics with digital technologies. I was interested in knowing the reasons why teachers do not use digital technology to assess students’ competencies, and what they would need to be able to design innovative and appropriate tasks to assess students’ learning through digital technology. This dissertation is built on two main components: teachers and task design. I analyze teachers’ practices involving digital technologies with Ruthven’s Structuring Features of Classroom Practice, and what relation these practices have to the types of assessment they use. I study the kinds of assessment tasks teachers design with a DGE (Dynamic Geometry Environment), using Laborde’s categorization of DGE tasks. I consider the competencies teachers aim to assess with these tasks, and how their goals relate to the learning outcomes of the curriculum. This study also develops new directions in finding how to design suitable tasks for student mathematical assessment in a DGE, and it is driven by the desire to know what kinds of questions teachers might be more interested in using. I investigate the kinds of technology-based assessment tasks teachers value, and the type of feedback they give to students. Finally, I point out that the curriculum should include a range of mathematical and technological competencies that involve the use of digital technologies in mathematics, and I evaluate the possibility of taking advantage of technology feedback to allow students to continue learning while they are taking a test.
Abstract:
Quantitative Risk Analysis (QRA) is a valuable tool for determining the risk associated with an industrial installation and for the subsequent implementation of emergency plans. However, its application to lay-out design requires a criterion for evaluating which arrangement is optimal in order to minimize risk. The numerous existing procedures, although effective, are rather laborious and time-consuming. In this work a simple and objective criterion is therefore proposed for comparing the results of QRAs applied to different designs. By evaluating the area enclosed by the iso-risk curves, the existing methodologies for studying the domino effect are first compared, and an integrated Quantitative Risk Domino Assessment procedure is then applied to the case of pressurized tanks. The results clearly demonstrate that the risk of an industrial activity can be considerably reduced by acting on the arrangement of the equipment, with the aim of limiting the effects of possible accident scenarios.
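The comparison criterion based on the area enclosed by iso-risk curves can be sketched numerically: approximating each contour as a closed polygon, the shoelace formula gives its area, and the lay-out with the smaller enclosed area is preferred. The lay-outs and coordinates below are hypothetical:

```python
def polygon_area(vertices):
    """Area enclosed by a closed planar curve approximated as a polygon
    (shoelace formula); here a proxy for the area inside an iso-risk contour."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]  # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Hypothetical iso-risk contours (coordinates in metres) around two
# candidate equipment lay-outs.
layout_a = [(0, 0), (40, 0), (40, 30), (0, 30)]   # encloses 1200 m^2
layout_b = [(0, 0), (30, 0), (30, 20), (0, 20)]   # encloses 600 m^2
print(polygon_area(layout_a) > polygon_area(layout_b))  # True: B is preferable
```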
Abstract:
BACKGROUND: Only a few standardized apraxia scales are available, and they do not cover all domains and semantic features of gesture production. Therefore, the objective of the present study was to evaluate the reliability and validity of a newly developed test of upper limb apraxia (TULIA), which is comprehensive yet short to administer. METHODS: The TULIA consists of 48 items covering the imitation and pantomime domains of non-symbolic (meaningless), intransitive (communicative) and transitive (tool-related) gestures, corresponding to 6 subtests. A 6-point scoring method (0-5) was used (score range 0-240). Performance was assessed by blinded raters based on videos in 133 stroke patients, 84 with left hemisphere damage (LHD) and 49 with right hemisphere damage (RHD), as well as 50 healthy subjects (HS). RESULTS: The clinimetric findings demonstrated mostly good to excellent internal consistency, inter- and intra-rater (test-retest) reliability, both at the level of the six subtests and at the individual item level. Criterion validity was evaluated by confirming hypotheses based on the literature. Construct validity was demonstrated by a high correlation (r = 0.82) with the De Renzi test. CONCLUSION: These results show that the TULIA is both a reliable and valid test to systematically assess gesture production. The test can be easily applied and is therefore useful for both research purposes and clinical practice.
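Based on the figures in the abstract (48 items, 6 subtests, each item scored 0-5, total range 0-240), the score aggregation can be sketched as follows; the even split of 8 items per subtest is inferred from 48/6, not stated in the abstract:

```python
# Inferred TULIA structure: 2 domains x 3 gesture types = 6 subtests,
# assumed 8 items each, every item scored 0-5, so totals span 0-240.
SUBTESTS = [
    "imitation-meaningless", "imitation-intransitive", "imitation-transitive",
    "pantomime-meaningless", "pantomime-intransitive", "pantomime-transitive",
]

def tulia_total(scores_by_subtest):
    """scores_by_subtest: dict mapping subtest name to a list of
    8 item scores, each in 0-5. Returns the total score (0-240)."""
    for name, items in scores_by_subtest.items():
        assert len(items) == 8 and all(0 <= s <= 5 for s in items), name
    return sum(sum(items) for items in scores_by_subtest.values())

# A hypothetical ceiling performance: every item scored 5.
perfect = {name: [5] * 8 for name in SUBTESTS}
print(tulia_total(perfect))  # 240
```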
Evaluation of perpendicular reflection intensity for assessment of caries lesion activity/inactivity
Abstract:
The aim of this study was to evaluate, using visual assessment, an experimental optical sensor measuring perpendicular reflection intensity (PRI) as an indicator of enamel caries lesion activity/inactivity. Forty teeth with either an active or an inactive enamel lesion were selected from a pool of extracted teeth. Each tooth was cut into halves, with a clinically sound half and a half with a non-cavitated enamel lesion. After gentle plaque removal, the teeth were kept moistened. The lesions were then photographed and a defined measuring site per lesion was chosen and indicated with an arrow on a printout. Independently, the chosen site was visually assessed for lesion activity, and its glossiness was measured with PRI assessment. Surface roughness (SR) was assessed with optical profilometry using a confocal microscope. Visual assessment and PRI were repeated after several weeks and a reliability analysis was performed. For enamel lesions visually scored as active versus inactive, significantly different values were obtained with both PRI and SR. PRI values of the clinically sound control surfaces were significantly different only from active lesions. Generally, inactive lesions had the same glossiness and the same roughness as the sound control surfaces. The reliabilities for visual assessment (κ = 0.89) and for PRI (ICC = 0.86) were high. It is concluded that, within the limits of this study, PRI can be regarded as a promising tool for quantitative enamel lesion activity assessment. There is scope and potential for the PRI device to be considerably improved for in vivo use.
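The reliability statistic reported for the repeated categorical visual assessments, Cohen's kappa, corrects observed agreement for chance agreement. A minimal sketch follows; the ratings are invented for illustration, not the study data:

```python
def cohens_kappa(rating1, rating2):
    """Cohen's kappa for two categorical rating sessions
    (e.g. repeated active/inactive scoring of the same lesions)."""
    assert len(rating1) == len(rating2)
    n = len(rating1)
    categories = set(rating1) | set(rating2)
    # Observed agreement: fraction of identical ratings.
    observed = sum(1 for a, b in zip(rating1, rating2) if a == b) / n
    # Chance agreement: product of each category's marginal frequencies.
    expected = sum((rating1.count(c) / n) * (rating2.count(c) / n)
                   for c in categories)
    return (observed - expected) / (1 - expected)

# Illustrative repeated ratings (1 = active, 0 = inactive).
first  = [1, 1, 1, 0, 0, 0, 1, 0]
second = [1, 1, 1, 0, 0, 1, 1, 0]
print(round(cohens_kappa(first, second), 2))  # 0.75
```

For the continuous PRI measurements, an intraclass correlation coefficient (ICC) plays the analogous role.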
Abstract:
Systematic reviews are not an assembly of anecdotes but a distillation of current best available evidence on a particular topic and as such have an important role to play in evidence-based healthcare. A substantial proportion of these systematic reviews focus on interventions, and are able to provide clinicians with the opportunity to understand and translate the best available evidence on the effects of these healthcare interventions into clinical practice. The importance of systematic reviews in summarising and identifying the gaps in evidence which might inform new research initiatives is also widely acknowledged. Their potential impact on practice and research makes their methodological quality especially important as it may directly influence their utility for clinicians, patients and policy makers. The objectives of this study were to identify systematic reviews of oral healthcare interventions published in the Journal of Applied Oral Science (JAOS) and to evaluate their methodological quality using the evaluation tool, AMSTAR.
Abstract:
Java Enterprise Applications (JEAs) are large systems that integrate multiple technologies and programming languages. To support the analysis of JEAs, we have developed MooseJEE, an extension of the Moose environment capable of modeling the typical elements of JEAs.
Abstract:
Although sustainable land management (SLM) is widely promoted to prevent and mitigate land degradation and desertification, its monitoring and assessment (M&A) has received much less attention. This paper compiles methodological approaches which to date have been little reported in the literature. It draws lessons from these experiences and identifies common elements and future pathways as a basis for a global approach. The paper starts with local level methods where the World Overview of Conservation Approaches and Technologies (WOCAT) framework catalogues SLM case studies. This tool has been included in the local level assessment of Land Degradation Assessment in Drylands (LADA) and in the EU-DESIRE project. Complementary site-based approaches can enhance an ecological process-based understanding of SLM variation. At national and sub-national levels, a joint WOCAT/LADA/DESIRE spatial assessment based on land use systems identifies the status and trends of degradation and SLM, including causes, drivers and impacts on ecosystem services. Expert consultation is combined with scientific evidence and enhanced where necessary with secondary data and indicator databases. At the global level, the Global Environment Facility (GEF) knowledge from the land (KM:Land) initiative uses indicators to demonstrate impacts of SLM investments. Key lessons learnt include the need for a multi-scale approach, making use of common indicators and a variety of information sources, including scientific data and local knowledge through participatory methods. Methodological consistencies allow cross-scale analyses, and findings are analysed and documented for use by decision-makers at various levels. Effective M&A of SLM [e.g. for United Nations Convention to Combat Desertification (UNCCD)] requires a comprehensive methodological framework agreed by the major players.
Abstract:
Noninvasive blood flow measurements based on Doppler ultrasound studies are the main clinical tool for studying the cardiovascular status of fetuses at risk for circulatory compromise. Usually, qualitative analysis of peripheral arteries and, in particular clinical situations such as severe growth restriction or volume overload, also of venous vessels close to the heart or of flow patterns in the heart is used to gauge the level of compensation in a fetus. However, quantitative assessment of the driving force of the fetal circulation, the cardiac output, remains an elusive goal in fetal medicine. This article reviews the methods for direct and indirect assessment of cardiac function and explains new clinical applications. Part 1 of this review describes the concept of cardiac function and cardiac output and the techniques that have been used to quantify output. Part 2 summarizes the use of arterial and venous Doppler studies in the fetus and gives a detailed description of indirect measurements of cardiac function (like indices derived from the duration of segments of the cardiac cycle) with current examples of their application.
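One standard Doppler-based route to quantifying output computes stroke volume as the velocity-time integral (VTI) times the cross-sectional area of the outflow tract, then multiplies by heart rate. The sketch below uses illustrative values that are not from the article:

```python
import math

def cardiac_output_ml_min(vti_cm, outflow_diameter_cm, heart_rate_bpm):
    """Doppler-derived cardiac output: stroke volume (ml) is the
    velocity-time integral (cm) times the outflow-tract cross-sectional
    area (cm^2); output is stroke volume times heart rate."""
    area_cm2 = math.pi * (outflow_diameter_cm / 2) ** 2
    stroke_volume_ml = vti_cm * area_cm2  # 1 cm^3 = 1 ml
    return stroke_volume_ml * heart_rate_bpm

# Illustrative fetal-scale values: 10 cm VTI, 0.8 cm outflow diameter,
# 140 beats per minute.
co = cardiac_output_ml_min(10, 0.8, 140)
```

The practical difficulty in the fetus lies in measuring the small outflow-tract diameter accurately, since the area term squares any measurement error.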
Abstract:
Purpose The accuracy, efficiency, and efficacy of four commonly recommended medication safety assessment methodologies were systematically reviewed. Methods Medical literature databases were systematically searched for any comparative study conducted between January 2000 and October 2009 in which at least two of the four methodologies—incident report review, direct observation, chart review, and trigger tool—were compared with one another. Any study that compared two or more methodologies for quantitative accuracy (adequacy of the assessment of medication errors and adverse drug events), efficiency (effort and cost), and efficacy, and that provided numerical data, was included in the analysis. Results Twenty-eight studies were included in this review. Of these, 22 compared two of the methodologies, and 6 compared three methods. Direct observation identified the greatest number of reports of drug-related problems (DRPs), while incident report review identified the fewest. However, incident report review generally showed a higher specificity compared to the other methods and most effectively captured severe DRPs. In contrast, the sensitivity of incident report review was lower when compared with trigger tool. While trigger tool was the least labor-intensive of the four methodologies, incident report review appeared to be the least expensive, but only when linked with concomitant automated reporting systems and targeted follow-up. Conclusion All four medication safety assessment techniques—incident report review, chart review, direct observation, and trigger tool—have different strengths and weaknesses. Overlap between different methods in identifying DRPs is minimal. While trigger tool appeared to be the most effective and labor-efficient method, incident report review best identified high-severity DRPs.
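The minimal overlap between methods in identifying DRPs can be quantified, for example, with a Jaccard index over the sets of DRPs each method flags. The identifiers below are hypothetical:

```python
def overlap_jaccard(method_a, method_b):
    """Jaccard overlap between the sets of drug-related problems (DRPs)
    identified by two assessment methods: |A & B| / |A | B|."""
    a, b = set(method_a), set(method_b)
    return len(a & b) / len(a | b)

# Hypothetical DRP identifiers flagged by each method.
observation = {"DRP01", "DRP02", "DRP03", "DRP04", "DRP05", "DRP06"}
incident_reports = {"DRP03", "DRP07"}
print(round(overlap_jaccard(observation, incident_reports), 3))  # 0.143
```

A value near zero, as in this toy example, is what "minimal overlap" means: the methods surface largely disjoint sets of problems, so they complement rather than replace one another.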
Abstract:
Early detection is a major goal in the management of malignant melanoma. Besides clinical assessment, many noninvasive technologies such as dermoscopy, digital dermoscopy and in vivo laser scanning microscopy are used as additional methods. Herein we tested a system that assesses lesional perfusion as a tool for early melanoma detection.