824 results for Computer-based assessment
Abstract:
Context: The value of preoperative chemotherapy in Wilms’ tumor patients, and for specific subgroups in particular, remains controversial. Objectives: To clarify these issues, this meta-analysis assesses the efficacy of preoperative chemotherapy in Wilms’ tumor patients and explores its true value for specific subgroups. Data Sources: A computer-based systematic search with “preoperative chemotherapy”, “neoadjuvant therapy” and “Wilms’ tumor” as search terms was performed through January 2013. Study Selection: No language restrictions were applied. Searches were limited to randomized clinical trials (RCTs) or retrospective studies in human participants under 18 years of age. A manual examination of references in selected articles was also performed. Data Extraction: Relative risks (RRs) and their 95% confidence intervals (CIs) for tumor shrinkage (TS), total tumor resection (TR) and event-free survival (EFS), together with details for subgroup analysis, were extracted. The meta-analysis was carried out with STATA 11.0. In total, four RCTs and 28 retrospective studies with 2,375 patients were included. Results: For preoperative chemotherapy vs. up-front surgery (PC vs. SU), the pooled RR was 9.109 for TS (95% CI: 5.109 - 16.241; P < 0.001), 1.291 for TR (95% CI: 1.124 - 1.483; P < 0.001) and 1.101 for EFS (95% CI: 0.980 - 1.238; P = 0.106). For the subgroup short course vs. long course (SC vs. LC), the pooled RR was 1.097 for TS (95% CI: 0.784 - 1.563; P = 0.587), 1.197 for TR (95% CI: 0.960 - 1.493; P = 0.110) and 1.006 for EFS (95% CI: 0.910 - 1.250; P = 0.430).
Conclusions: Short-course preoperative chemotherapy is as effective as long-course, and preoperative chemotherapy benefits Wilms’ tumor patients only in tumor shrinkage and resection, not in event-free survival.
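The pooled relative risks quoted above come from standard inverse-variance meta-analysis (run in STATA 11.0 in the study); a minimal Python sketch of the fixed-effect computation, using invented 2x2 study counts rather than the review's actual trial data:

```python
import math

# Hypothetical 2x2 counts per study: (events_tx, n_tx, events_ctrl, n_ctrl).
# These are illustrative values, not data from the review above.
studies = [(45, 60, 30, 58), (80, 120, 61, 115), (33, 50, 25, 49)]

def pooled_rr(studies):
    """Fixed-effect (inverse-variance) pooled relative risk on the log scale."""
    num = den = 0.0
    for a, n1, c, n2 in studies:
        log_rr = math.log((a / n1) / (c / n2))
        var = 1/a - 1/n1 + 1/c - 1/n2      # variance of log RR
        w = 1 / var                         # inverse-variance weight
        num += w * log_rr
        den += w
    log_pooled = num / den
    se = math.sqrt(1 / den)
    ci = (math.exp(log_pooled - 1.96 * se), math.exp(log_pooled + 1.96 * se))
    return math.exp(log_pooled), ci

rr, (lo, hi) = pooled_rr(studies)
```

A random-effects model (as often used when heterogeneity is present) would additionally inflate each study's variance by a between-study component before weighting.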
Abstract:
Presentation: Research on the Practicum and externships has a long history and involves important aspects for analysis. For example, recent changes in university degrees allot more credits to the Practicum course in all degrees, and Company-University collaboration has exposed the need to study new learning environments. The rise of ICT practices such as ePortfolios, which require technological solutions and methods supported by experimentation, study and research, deserves particular examination given the dynamic momentum of technological innovation. Tutoring the Practicum and externships requires remote monitoring and communication using ePortfolios, and competence-based assessment and students’ requirement to provide evidence of learning demand the best tutoring methods available with ePortfolios. Among the elements of ePortfolios, eRubrics emerge as a tool for design, communication and competence assessment. This project aims to consolidate a research line on eRubrics, already undertaken by a previous project -I+D+i [EDU2010-15432]-, in order to expand the network of researchers and centres of excellence in Spain and other countries: Harvard University in the USA, University of Cologne in Germany, University of Colima in Mexico, the Federal University of Paraná and the University of Santa Catarina in Brazil, and Stockholm University in Sweden(1). This new project [EDU2013-41974-P](2) examines the impact of eRubrics on tutoring and on assessing the Practicum course and externships. Through technology, distance tutoring grants an extra dimension to human communication. New forms of teaching with technological mediation are on the rise and are highly valuable, not only in formal education but especially in both public and private sectors of non-formal education, such as occupational training, education for the unemployed and public-servant training. Objectives: Obj. 1.
To analyse models of technology used in assessing learning in the Practicum of all degrees at Spanish Faculties of Education. Obj. 2. To study models of learning assessment mediated by eRubrics in the Practicum. Obj. 3. To analyse communication through eRubrics between students and their tutors at university and practice centres, focusing on students’ understanding of the competences and evidences to be assessed in the Practicum. Obj. 4. To design assessment services and products, in order to federate companies and practice centres with training institutions. Among many other features, CoRubric(3) offers the following functions: 1. The possibility to assess people, products or services by using rubrics. 2. Ipsative assessment. 3. Fully flexible rubric design. 4. Drafting reports and exporting results from eRubrics in a project. 5. Dialogue between students and teachers about the evaluation and the application of the criteria. Methodology, Methods, Research Instruments or Sources Used: The project uses techniques to collect and analyse data from two methodological approaches: 1. To meet the first objective, we conducted an initial exploratory descriptive study (Buendía Eisman, Colás Bravo & Hernández Pina, 1998), involving interviews with Practicum coordinators from all educational degrees across Spain, as well as content analysis of the teaching guides used in those degrees. 55 academic managers from about 10 faculties of education at public universities in Spain (20%) were interviewed, and 376 course guides from 36 public institutions in Spain (72%) were analysed. 2. To satisfy the second objective, seven universities were selected to implement two project instruments, aimed at practice-centre tutors and faculty tutors, respectively. All data-collection instruments were validated by experts using the Delphi method.
The selection of experts considered three aspects: years of professional experience; number and quality of publications in the field (Practicum, Educational Technology and Teacher Training); and self-rating of their knowledge. From these data the Coefficient of Competence (Kcomp) was calculated (Martínez, Zúñiga, Sala & Meléndez, 2012); in all cases results showed an average above 0.09 points. The two instruments for the first objective were validated during the first half of the 2014-15 academic year and their data collected during the second half; those for the second objective were validated during the first half of the 2015-16 year, with data collection in the second half. The four instruments (two for each of objectives 1 and 2) share the same dimensions across the sources (coordinators, course guides, practice-centre tutors and faculty tutors): a. institution-organization; b. nature of internships; c. relationship between agents; d. Practicum management; e. assessment; f. technological support; g. training; and h. assessment ethics. Conclusions, Expected Outcomes or Findings: The first results respond to Objective 1, where conclusions differ for each of the six dimensions. Regarding the internal regulations governing the organization and structure of the Practicum, most traditional degrees (Elementary and Primary) share common internal rules, in particular development methodology and criteria, as against other degrees (Pedagogy and Social Education), whose practice centres are very diverse and can be a public institution, a school, a company, a museum, etc. The final report (56.34%) and daily activity logs (43.67%) are the items most demanded of students in all degrees, followed by lesson plans (28.18%), portfolios (19.72%), didactic units (26.7%) and others (32.4%).
The technological supports most used were the university’s own platform (47.89%) and email (57.75%), followed by other services and tools (9.86%) and rubric platforms (1.41%). The assessment criteria are divided between formal aspects (12.38%), written expression (12.38%), treatment of the subject (14.45%), methodological rigour of the work (10.32%), and clarity of argument and relevance of conclusions (10.32%). In general terms, there is a trend and debate between formative assessment and accreditation-oriented assessment. There has not yet been sufficient time to study and confront the other dimensions and sources of information; we expect to provide further analysis and conclusions by the conference date.
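The Kcomp expert-competence coefficient used above to screen Delphi panellists is commonly computed as the mean of a self-rated knowledge coefficient and an argumentation coefficient; a hedged Python sketch with illustrative weights (the project's actual weighting scheme is not given in the abstract):

```python
# Sketch of the expert-competence coefficient (Kcomp):
# Kcomp = (kc + ka) / 2, where kc is the expert's self-rated knowledge
# (0-10 scale, normalised to 0-1) and ka aggregates the sources of
# that knowledge. The weights below are illustrative assumptions.

def kcomp(self_rating, source_weights):
    kc = self_rating / 10.0          # normalise the 0-10 self-rating
    ka = sum(source_weights)          # argumentation coefficient
    return (kc + ka) / 2.0

# Hypothetical weights for: theoretical analyses, professional
# experience, knowledge of national/international work, intuition.
score = kcomp(9, [0.3, 0.5, 0.05, 0.05])
accepted = score >= 0.8              # a commonly used acceptance cut-off
```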
Abstract:
This study took place at one of the intercultural universities (IUs) of Mexico that serve primarily indigenous students. The IUs are pioneers in higher education despite their numerous challenges (Bertely, 1998; Dietz, 2008; Pineda & Landorf, 2010; Schmelkes, 2009). To overcome educational inequalities among their students (Ahuja, Berumen, Casillas, Crispín, Delgado et al., 2004; Schmelkes, 2009), the IUs have embraced performance-based assessment (PBA; Casillas & Santini, 2006). PBA allows a shared model of power and control related to learning and evaluation (Anderson, 1998). While conducting a review of the IUs’ PBA strategies, the researcher did not find a PBA instrument with valid and reliable estimates. The purpose of this study was to develop a process to create a PBA instrument, an analytic general rubric, with acceptable validity and reliability estimates, to assess students’ attainment of competencies in one of the IU’s majors, Intercultural Development Management. The Human Capabilities Approach (HCA) was the theoretical framework, and a sequential mixed method (Creswell, 2003; Teddlie & Tashakkori, 2009) was the research design. IU participants created a rubric during two focus groups, and seven Spanish-speaking professors in Mexico and the US piloted it using students’ research projects. The evidence that demonstrates the attainment of competencies at the IU is a complex set of actual, potential and/or desired performances or achievements, also conceptualized as “functional capabilities” (FCs; Walker, 2008), that can be used to develop a rubric. Results indicate that the rubric’s validity and reliability reached acceptable estimates of 80% agreement, surpassing minimum requirements (Newman, Newman, & Newman, 2011). Implications for practice involve the use of PBA within a formative assessment framework, and the dynamic inclusion of constituencies.
Recommendations for further research include introducing this study’s instrument-development process to other IUs, conducting parallel mixed design studies exploring the intersection between HCA and assessment, and conducting a case study exploring assessment in intercultural settings. Education articulated through the HCA empowers students (Unterhalter & Brighouse, 2007; Walker, 2008). This study aimed to contribute to the quality of student learning assessment at the IUs by providing a participatory process to develop a PBA instrument.
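The 80% agreement figure reported for the rubric is a percent-agreement reliability estimate; a small Python sketch of one common way to compute it (mean pairwise exact agreement across raters, with invented ratings rather than the study's data):

```python
from itertools import combinations

# Hypothetical rubric levels (1-4) assigned by seven raters to five
# student projects; illustrative values only.
ratings = [
    [3, 3, 3, 4, 3, 3, 3],
    [2, 2, 2, 2, 3, 2, 2],
    [4, 4, 4, 4, 4, 3, 4],
    [1, 1, 2, 1, 1, 1, 1],
    [3, 3, 3, 3, 3, 3, 4],
]

def percent_agreement(ratings):
    """Mean pairwise exact agreement between raters, pooled over items."""
    agree = total = 0
    for item in ratings:
        for r1, r2 in combinations(item, 2):
            agree += (r1 == r2)
            total += 1
    return 100.0 * agree / total

pa = percent_agreement(ratings)
```

Chance-corrected indices (e.g. Cohen's or Fleiss' kappa) are stricter alternatives when rating categories are few.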
Abstract:
There is a growing societal need to address the increasing prevalence of behavioral health issues, such as obesity, alcohol or drug use, and general lack of treatment adherence for a variety of health problems. The statistics, worldwide and in the USA, are daunting. Excessive alcohol use is the third leading preventable cause of death in the United States (with 79,000 deaths annually), and is responsible for a wide range of health and social problems. On the positive side, though, these behavioral health issues (and associated possible diseases) can often be prevented with relatively simple lifestyle changes, such as losing weight through diet and/or physical exercise, or learning how to reduce alcohol consumption. Medicine has therefore started to move toward finding ways of preventively promoting wellness, rather than solely treating already established illness. Evidence-based, patient-centered Brief Motivational Interviewing (BMI) interventions have been found particularly effective in helping people find intrinsic motivation to change problem behaviors after short counseling sessions, and to maintain healthy lifestyles over the long term. A lack of locally available personnel well trained in BMI, however, often limits access to successful interventions for people in need. To fill this accessibility gap, Computer-Based Interventions (CBIs) have started to emerge. The success of CBIs, however, critically relies on ensuring the engagement and retention of CBI users so that they remain motivated to use these systems and come back to use them over the long term as necessary. Because of their text-only interfaces, current CBIs can only express limited empathy and rapport, which are the most important factors of health interventions. Fortunately, in the last decade, computer science research has progressed in the design of simulated human characters with anthropomorphic communicative abilities.
Virtual characters interact using humans’ innate communication modalities, such as facial expressions, body language, speech, and natural language understanding. By advancing research in Artificial Intelligence (AI), we can improve the ability of artificial agents to help us solve CBI problems. To facilitate successful communication and social interaction between artificial agents and human partners, it is essential that aspects of human social behavior, especially empathy and rapport, be considered when designing human-computer interfaces. Hence, the goal of the present dissertation is to provide a computational model of rapport to enhance an artificial agent’s social behavior, and to provide an experimental tool for the psychological theories shaping the model. Parts of this thesis were already published in [LYL+12, AYL12, AL13, ALYR13, LAYR13, YALR13, ALY14].
Abstract:
In knowledge technology work, as expressed by the scope of this conference, there are a number of communities, each uncovering new methods, theories, and practices. The Library and Information Science (LIS) community is one such community. This community, through tradition and innovation, theories and practice, organizes knowledge and develops knowledge technologies formed by iterative research hewn to the values of equal access and discovery for all. The Information Modeling community is another contributor to knowledge technologies. It concerns itself with the construction of symbolic models that capture the meaning of information and organize it in ways that are computer-based but human-understandable. A recent paper that examines certain assumptions in information modeling builds a bridge between these two communities, offering a forum for a discussion on common aims from a common perspective. In a June 2000 article, Parsons and Wand separate classes from instances in information modeling in order to free instances from what they call the “tyranny” of classes. They attribute a number of problems in information modeling to inherent classification – or the disregard for the fact that instances can be conceptualized independent of any class assignment. By faceting instances from classes, Parsons and Wand strike a sonorous chord with classification theory as understood in LIS. In the practice community and in the publications of LIS, faceted classification has shifted the paradigm of knowledge organization theory in the twentieth century. Here, with the proposal of inherent classification and the resulting layered information modeling, a clear line joins both the LIS classification theory community and the information modeling community. Both communities have their eyes turned toward networked resource discovery, and with this conceptual conjunction a new paradigmatic conversation can take place.
Parsons and Wand propose that the layered information model can facilitate schema integration, schema evolution, and interoperability. These three spheres of information modeling have their own connotations, but are not distant from the aims of classification research in LIS. In this new conceptual conjunction, established by Parsons and Wand, information modeling, through the layered information model, can expand the horizons of classification theory beyond LIS, promoting a cross-fertilization of ideas on the interoperability of subject access tools like classification schemes, thesauri, taxonomies, and ontologies. This paper examines the common ground between the layered information model and faceted classification, establishing a vocabulary and outlining some common principles. It then turns to the issue of schema, the horizons of conventional classification, and the differences between Information Modeling and Library and Information Science. Finally, a framework is proposed that deploys an interpretation of the layered information modeling approach in a knowledge technologies context. In order to design subject access systems that will integrate, evolve and interoperate in a networked environment, knowledge organization specialists must consider a semantic class independence of the kind Parsons and Wand propose for information modeling.
Abstract:
An efficient expert system for power transformer condition assessment is presented in this paper. Through the application of Duval's triangle and the gas-ratios method, a first assessment of the transformer condition is obtained in the form of a dissolved gas analysis (DGA) diagnosis according to IEC 60599. As a second step, a knowledge-mining procedure is performed by conducting surveys whose results are fed into a first Type-2 Fuzzy Logic System (T2-FLS), in order to initially evaluate the condition of the equipment taking only the dissolved gas analysis results into account. The output of this first T2-FLS is used as the input of a second T2-FLS, which additionally weighs up the condition of the paper-oil system. The output of this last T2-FLS is given in terms of words easily understandable by maintenance personnel. The proposed assessment methodology has been validated on several cases of transformers in service. (C) 2010 Elsevier Ltd. All rights reserved.
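The first-stage DGA diagnosis described above rests on gas ratios; a heavily simplified Python sketch in the spirit of IEC 60599, where the thresholds shown are an abbreviated, illustrative reading of the standard and not a replacement for it (nor for the paper's full two-stage T2-FLS system):

```python
# Simplified first-stage DGA screening using the three basic gas
# ratios. Inputs are dissolved-gas concentrations in ppm; the rule
# thresholds below are illustrative assumptions.

def basic_ratios(h2, ch4, c2h2, c2h4, c2h6):
    return (c2h2 / c2h4 if c2h4 else 0.0,   # R1: acetylene / ethylene
            ch4 / h2 if h2 else 0.0,        # R2: methane / hydrogen
            c2h4 / c2h6 if c2h6 else 0.0)   # R3: ethylene / ethane

def screen(h2, ch4, c2h2, c2h4, c2h6):
    r1, r2, r3 = basic_ratios(h2, ch4, c2h2, c2h4, c2h6)
    if r1 > 1.0 and r3 > 1.0:
        return "discharge suspected"
    if r2 > 1.0 and r3 > 1.0 and r1 < 0.1:
        return "thermal fault suspected"
    return "no single dominant fault pattern"

verdict = screen(h2=50, ch4=120, c2h2=2, c2h4=200, c2h6=60)
```

The paper's contribution is precisely to replace such crisp cut-offs with Type-2 fuzzy memberships elicited from expert surveys.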
Abstract:
The continuous improvement of Ethernet technologies is boosting the eagerness to extend their use to factory-floor distributed real-time applications. Indeed, a considerable amount of research work has been devoted to the timing analysis of Ethernet-based technologies in the past few years. The majority of those works, however, are restricted to the analysis of subsets of the overall computing and communication system, and thus do not address timeliness in a holistic fashion. To this end, we present a simulation-based approach for extracting temporal properties of commercial off-the-shelf (COTS) Ethernet-based factory-floor distributed systems. The framework is applied to a specific COTS technology, Ethernet/IP. We reason about the modeling and simulation of Ethernet/IP-based systems, and about the use of statistical analysis techniques to provide useful results on timeliness. The approach is part of a wider framework related to the research project INDEPTH (INDustrial-Ethernet ProTocols under Holistic analysis).
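One way to extract temporal properties by simulation, as the abstract describes, is to generate end-to-end message delays and summarize them statistically; a toy Python sketch, with invented traffic parameters and a model far simpler than the INDEPTH framework itself:

```python
import random
import statistics

# Toy model of end-to-end delays on a COTS Ethernet segment: a fixed
# transmission time plus random queueing jitter. Exponential jitter is
# a common first approximation; all parameters are illustrative.

random.seed(42)

def simulate_delays(n_msgs, base_us=120.0, jitter_mean_us=40.0):
    return [base_us + random.expovariate(1.0 / jitter_mean_us)
            for _ in range(n_msgs)]

delays = simulate_delays(10_000)
mean_delay = statistics.mean(delays)
p99 = statistics.quantiles(delays, n=100)[98]   # 99th-percentile delay
```

For real-time verdicts, the tail statistic (here the 99th percentile) matters far more than the mean, which is why statistical analysis of simulated traces is emphasized.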
Abstract:
Based on the report for the “Project III” unit of the PhD programme on Technology Assessment, under the supervision of Prof. António B. Moniz. This report was also discussed at the 2nd Winter School on Technology Assessment, held at Universidade Nova de Lisboa, Caparica Campus, Portugal, in December 2011.
Abstract:
To make a comprehensive evaluation of organ-specific out-of-field doses using Monte Carlo (MC) simulations for different breast cancer irradiation techniques and to compare results with a commercial treatment planning system (TPS). Three breast radiotherapy techniques using 6 MV tangential photon beams were compared: (a) 2DRT (open rectangular fields), (b) 3DCRT (conformal wedged fields), and (c) hybrid IMRT (open conformal + modulated fields). Over 35 organs were contoured in a whole-body CT scan, and organ-specific dose distributions were determined with MC and the TPS. Large differences in out-of-field doses were observed between MC and TPS calculations, even for organs close to the target volume such as the heart, the lungs and the contralateral breast (up to 70% difference). MC simulations showed that a large fraction of the out-of-field dose comes from the out-of-field head-scatter fluence (>40%), which is not adequately modeled by the TPS. Based on MC simulations, the 3DCRT technique using external wedges yielded significantly higher doses (up to a factor of 4-5 in the pelvis) than the 2DRT and hybrid IMRT techniques, which yielded similar out-of-field doses. In sharp contrast to popular belief, the IMRT technique investigated here does not increase the out-of-field dose compared to conventional techniques and may offer the best plan. The 3DCRT technique with external wedges yields the largest out-of-field doses. For accurate out-of-field dose assessment, a commercial TPS should not be used, even for organs near the target volume (contralateral breast, lungs, heart).
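The organ-wise MC-vs-TPS comparison above boils down to a relative dose deviation per organ, with MC as the reference; a minimal Python sketch using invented dose values, not the study's data:

```python
# Hypothetical mean organ doses (arbitrary units); MC is taken as the
# reference ("ground truth") against which the TPS is judged.
mc_dose = {"heart": 0.80, "contralateral_breast": 0.50, "pelvis": 0.020}
tps_dose = {"heart": 0.55, "contralateral_breast": 0.30, "pelvis": 0.004}

def percent_diff(mc, tps):
    """Relative underestimation of the TPS versus MC, per organ (%)."""
    return {organ: 100.0 * (mc[organ] - tps[organ]) / mc[organ]
            for organ in mc}

diffs = percent_diff(mc_dose, tps_dose)
```

The pattern sketched here (small absolute doses far from the field, but large relative errors) is exactly why the abstract warns against using a TPS for out-of-field dose assessment.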
Abstract:
The purpose of this work is to develop a web-based decision support system, based on fuzzy logic, to assess the motor state of Parkinson patients from their performance in on-screen motor tests in a test battery on a handheld computer. A set of well-defined rules, based on an expert’s knowledge, was made to diagnose the current state of the patient. At the end of a period, an overall score is calculated which represents the overall state of the patient during the period. Acceptability of the rules is based on the absolute difference between the patient’s own assessment of his condition and the diagnosed state. Any inconsistency can be tracked and highlighted as an alert in the system. Graphical presentation of data aims at enhanced analysis of the patient’s state and performance monitoring by the clinic staff. In general, the system is beneficial for clinic staff, patients, project managers and researchers.
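The rule idea described above can be illustrated with fuzzy membership functions over a motor-test score, combined into a state label; a minimal Python sketch in which the breakpoints, score scale and labels are illustrative assumptions, not the system's actual rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function: rises from a, peaks at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def assess(tapping_score):
    """Map a hypothetical 0-100 on-screen tapping score to fuzzy state degrees."""
    return {
        "poor": tri(tapping_score, -1, 0, 50),
        "fair": tri(tapping_score, 25, 50, 75),
        "good": tri(tapping_score, 50, 100, 101),
    }

degrees = assess(68)
state = max(degrees, key=degrees.get)   # defuzzify by the strongest label
```

A period-level overall score, as in the system above, would then aggregate such per-test outputs (e.g. by weighted averaging) before comparing against the patient's self-assessment.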
Abstract:
The rapid growth of urban areas has a significant impact on traffic and transportation systems. New management policies and planning strategies are clearly necessary to cope with the ever more limited capacity of existing road networks. The concept of the Intelligent Transportation System (ITS) arises in this scenario: rather than attempting to increase road capacity by means of physical modifications to the infrastructure, the premise of ITS relies on the use of advanced communication and computer technologies to handle today’s traffic and transportation facilities. Influencing users’ behaviour patterns is a challenge that has stimulated much research in the ITS field, where human factors are gaining great importance in modelling, simulating and assessing such an innovative approach. This work aims at using Multi-agent Systems (MAS) to represent traffic and transportation systems in the light of the new performance measures brought about by ITS technologies. Agent-based features are well suited to represent components of a system that are geographically and functionally distributed, as most components in traffic and transportation are. A BDI (beliefs, desires, and intentions) architecture is presented as an alternative to the traditional models used to represent driver behaviour within microscopic simulation, allowing for an explicit representation of users’ mental states. Basic concepts of ITS and MAS are presented, as well as some application examples related to the subject. This has motivated the extension of an existing microscopic simulation framework to incorporate MAS features and enhance the representation of drivers. In this way, demand is generated from a population of agents as the result of their daily decisions on route and departure time.
The extended simulation model, which now supports the interaction of BDI driver agents, was effectively implemented, and different experiments were performed to test this approach in commuter scenarios. MAS provides a process-driven approach that fosters the easy construction of modular, robust and scalable models, characteristics lacking in former result-driven approaches. Its abstraction premises allow for a closer association between the model and its practical implementation. Uncertainty and variability are addressed in a straightforward manner, as cognitive architectures such as the BDI approach used in this work provide an easier representation of humanlike behaviours within the driver structure. In this way, MAS extends microscopic traffic simulation to better address the complexity inherent in ITS technologies.
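The BDI decision on route and departure time can be sketched with a toy commuter agent: beliefs about expected travel time per departure slot, a desire to arrive by a deadline, and an intention formed by deliberation. All numbers and the deliberation rule are illustrative assumptions; the thesis' framework is far richer:

```python
# Toy BDI-flavoured commuter agent (illustrative only).

class CommuterAgent:
    def __init__(self, beliefs, deadline):
        self.beliefs = beliefs        # {departure_min: expected_travel_min}
        self.deadline = deadline      # arrive-by time, minutes past a reference
        self.intention = None

    def deliberate(self):
        """Form an intention: latest departure believed to meet the deadline."""
        feasible = [dep for dep, travel in self.beliefs.items()
                    if dep + travel <= self.deadline]
        self.intention = max(feasible) if feasible else min(self.beliefs)
        return self.intention

agent = CommuterAgent({0: 35, 15: 45, 30: 60, 45: 80}, deadline=75)
chosen = agent.deliberate()
```

In a day-to-day simulation, each agent would revise its beliefs from the previous day's experienced travel times, so aggregate demand emerges from individual deliberation.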
Abstract:
Purpose: To determine palpebral dimensions and development in Brazilian children using digital images. Methods: An observational study measured eyelid angles, palpebral fissure area and interpupillary distance in 220 children aged 4 to 72 months. Digital images were obtained with a Sony lithium movie camera (Sony DCR-TRV110, Brazil) in frontal view from awake children in the primary ocular position, with the object of observation located at pupil height. The images were saved to tape, transferred to a Macintosh G4 computer (Apple Computer Inc., USA) and processed using NIH 1.58 software (NTIS, 5285 Port Royal Rd., Springfield, VA 22161, USA). Data were submitted to statistical analysis. Results: All parameters studied increased with age. The outer palpebral angle was greater than the inner, and the palpebral fissure and angles showed the greatest changes between 4 and 5 months and at around 24 to 36 months. Conclusion: There are significant variations in palpebral dimensions in children under 72 months old, especially around 24 to 36 months. Copyright © 2006 Informa Healthcare.
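Measurements like those above are distances and angles computed from landmark coordinates in calibrated digital images; a minimal Python sketch of the interpupillary-distance step, with invented pupil coordinates and pixel scale (the study used NIH image-analysis software for this):

```python
import math

# Convert pupil-centre pixel coordinates to an interpupillary distance
# in millimetres via a known pixel scale. All values are illustrative.

def interpupillary_distance(left_px, right_px, mm_per_px):
    dx = right_px[0] - left_px[0]
    dy = right_px[1] - left_px[1]
    return math.hypot(dx, dy) * mm_per_px   # Euclidean distance, scaled

ipd_mm = interpupillary_distance((312, 240), (492, 244), mm_per_px=0.25)
```

Eyelid angles follow the same pattern, using `math.atan2` on vectors between canthal landmarks instead of a plain distance.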
Abstract:
The present study aimed at providing conditions for the assessment of color discrimination in children using a modified version of the Cambridge Colour Test (CCT, Cambridge Research Systems Ltd., Rochester, UK). Since the task of indicating the gap of the Landolt C used in that test proved counterintuitive and/or difficult for young children to understand, we changed the target stimulus to a patch of color approximately the size of the Landolt C gap (about 7 degrees of visual angle at 50 cm from the monitor). The modifications were performed for the CCT Trivector test, which measures color discrimination along the protan, deutan and tritan confusion lines. Experiment 1 sought to evaluate the correspondence between the CCT and the child-friendly adaptation with adult subjects (n = 29) with normal color vision. Results showed good agreement between the two test versions. Experiment 2 tested the child-friendly software with children 2 to 7 years old (n = 25), using operant training techniques for establishing and maintaining the subjects’ performance. Color discrimination thresholds were progressively lower as age increased within the age range tested (2 to 30 years old), and the data, including those obtained for children, fell within the range of thresholds previously obtained for adults with the CCT. Protan and deutan thresholds were consistently lower than tritan thresholds, a pattern repeatedly observed in adults tested with the CCT. The results demonstrate that the test is fit for the assessment of color discrimination in young children and may be a useful tool for establishing color vision thresholds during development.
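Discrimination thresholds of this kind are typically estimated with an adaptive staircase: the colour-difference step shrinks after a correct response and grows after an error. A hedged Python sketch with a simulated observer standing in for a child subject; the step rule, parameters and observer model are illustrative assumptions, not the CCT's actual procedure:

```python
import random

random.seed(7)

def staircase(true_threshold, start=0.10, factor=0.8, trials=60):
    """Simple 1-up/1-down staircase; returns a threshold estimate."""
    level, reversals, last = start, [], None
    for _ in range(trials):
        # Simulated observer: always correct above threshold,
        # guesses (p = 0.5) at or below it.
        correct = level > true_threshold or random.random() < 0.5
        if last is not None and correct != last:
            reversals.append(level)       # record a response reversal
        level = level * factor if correct else level / factor
        last = correct
    tail = reversals[-6:] or [level]      # mean of the last few reversals
    return sum(tail) / len(tail)

estimate = staircase(true_threshold=0.03)
```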
Abstract:
Background: Educational computer games are examples of computer-assisted learning objects, representing an educational strategy of growing interest. Given the changes in the digital world over the last decades, students of the current generation expect technology to be used in advancing their learning, requiring a shift from traditional passive learning methodologies to an active multisensory experimental learning methodology. The objective of this study was to compare a computer game-based learning method with a traditional learning method, regarding learning gains and knowledge retention, as means of teaching head and neck Anatomy and Physiology to Speech-Language and Hearing Pathology undergraduate students. Methods: Students were randomized to one of the learning methods, and the data analyst was blinded to which method each student had received. Students’ prior knowledge (i.e. before undergoing the learning method), short-term knowledge retention and long-term knowledge retention (i.e. six months after undergoing the learning method) were assessed with a multiple-choice questionnaire. Students’ performance was compared across the three assessment moments, both for the mean total score and for separate mean scores for the Anatomy questions and the Physiology questions. Results: Students who received the game-based method performed better in the post-test assessment only for the Anatomy questions section. Students who received the traditional lecture performed better in both the post-test and the long-term post-test for the Anatomy and Physiology questions. Conclusions: The game-based learning method is comparable to the traditional learning method overall and in short-term gains, while the traditional lecture still seems to be more effective for improving students’ short- and long-term knowledge retention.