774 results for Arts Assessment, Dance, ePortfolio, Digital Portfolios, Authentic Learning
Abstract:
My dissertation emphasizes a cognitive account of multimodality that explicitly integrates experiential knowledge work into the rhetorical pedagogy that informs so many composition and technical communication programs. In these disciplines, multimodality is widely conceived in terms of what Gunther Kress calls “social-semiotic” modes of communication shaped primarily by culture. In the cognitive and neurolinguistic theories of Vittorio Gallese and George Lakoff, however, multimodality is described as a key characteristic of our bodies’ sensory-motor systems, which link perception to action and action to meaning, grounding all communicative acts in knowledge shaped through body-engaged experience. I argue that this “situated” account of cognition – which closely approximates Maurice Merleau-Ponty’s phenomenology of perception, a major framework for my study – has a pedagogical precedent in the mimetic pedagogy that informed ancient Sophistic rhetorical training, and I reveal that training’s multimodal dimensions through a phenomenological exegesis of the concept of mimesis. Plato’s denigration of the mimetic tradition and his elevation of conceptual contemplation through reason, out of which developed the classic Cartesian separation of mind from body, resulted in a general degradation of experiential knowledge in Western education. But with the recent introduction into college classrooms of digital technologies and multimedia communication tools, renewed emphasis is being placed on the “hands-on” nature of inventive and productive praxis, necessitating a revision of methods of instruction and assessment that have traditionally privileged the acquisition of conceptual over experiential knowledge.
The model of multimodality I construct from Merleau-Ponty’s phenomenology, ancient Sophistic rhetorical pedagogy, and current neuroscientific accounts of situated cognition insists on recognizing the significant role that experientially acquired knowledge plays in our reading and writing, speaking and listening, discerning and designing practices.
Abstract:
Within academic institutions, writing centers are uniquely situated, socially rich sites for exploring learning and literacy. I examine the work of the Michigan Tech Writing Center's UN 1002 World Cultures study teams primarily because student participants and Writing Center coaches are actively engaged in structuring their own learning and meaning-making processes. My research reveals that learning is closely linked to identity formation and leading the teams is an important component of the coaches' educational experiences. I argue that supporting this type of learning requires an expanded understanding of literacy and significant changes to how learning environments are conceptualized and developed. This ethnographic study draws on data collected from recordings and observations of one semester of team sessions, my own experiences as a team coach and UN 1002 teaching assistant, and interviews with Center coaches prior to their graduation. I argue that traditional forms of assessment and analysis emerging from individualized instruction models of learning cannot fully account for the dense configurations of social interactions identified in the Center's program. Instead, I view the Center as an open system and employ social theories of learning and literacy to uncover how the negotiation of meaning in one context influences and is influenced by structures and interactions within as well as beyond its boundaries. I focus on the program design, its enaction in practice, and how engagement in this type of writing center work influences coaches' learning trajectories. I conclude that, viewed as participation in a community of practice, the learning theory informing the program design supports identity formation, a key aspect of learning as argued by Etienne Wenger (1998).
The findings of this study challenge misconceptions of peer learning both in writing centers and higher education that relegate peer tutoring to the role of support for individualized models of learning. Instead, this dissertation calls for consideration of new designs that incorporate peer learning as an integral component. Designing learning contexts that cultivate and support the formation of new identities is complex, involves a flexible and opportunistic design structure, and requires the availability of multiple forms of participation and connections across contexts.
Abstract:
Technology has an important role in children's lives and education. Based on several projects developed with ICT since 1997, both in Early Childhood Education (3-6 years old) and Primary Education (6-10 years old), the authors argue that research and educational practices need to "go outside", addressing ways to connect technology with outdoor education. The experience with these projects and initiatives supported a conceptual framework, developed and discussed with several partners throughout the years and theoretically informed. Three main principles or axes have emerged: strengthening Children's Participation, promoting Critical Citizenship and establishing strong Connections to Pedagogy and Curriculum. In this paper, those axes are presented and discussed in relation to the challenge posed by Outdoor Education to the way ICT in Early Childhood and Primary Education is understood, promoted and researched. The paper is exploratory, attempting to connect theoretical and conceptual contributions from Early Childhood Pedagogy with contributions from ICT in Education. The research-based knowledge available is still scarce, mostly based on studies developed with other purposes. The paper therefore focuses on the connections and interpellations between the concepts established through the theoretical framework and draws on almost 20 years of experience with large- and small-scale action-research projects on ICT in schools. The most recent of these is already testing the conceptual framework by supporting children in non-formal contexts to explore vineyards and the cycle of wine production with several ICT tools. Approaching Outdoor Education as an arena where pedagogical and cultural dimensions influence decisions and practices, the paper argues that the three axes are relevant in supporting a stronger connection between technology and the outdoors.
Abstract:
The current workplace demands new forms of literacy that go beyond the ability to decode print. These involve not only the competence to operate digital tools, but also the ability to create, represent, and share meaning in different modes and formats; to interact, collaborate, and communicate effectively using digital tools; and to engage critically with technology to develop one’s knowledge, skills, and full participation in civic, economic, and personal matters. This essay examines the application of the ecology of resources (EoR) model for delivering language learning outcomes (in this case, English) through blended classroom environments that use contextually available resources. The author proposes implementing the EoR model in blended learning environments to create authentic and sustainable learning environments for skilling courses. Applying the EoR model to Indian skilling instruction contexts, the article discusses how English language and technology literacy can be delivered using contextually available resources in a blended classroom environment. This would facilitate not only the acquisition of language and digital literacy outcomes, but also a degree of consequent content literacy gain, helping to ensure satisfactory achievement of communication/language literacy and technological literacy as well as active social participation, lifelong learning, and learner autonomy.
Abstract:
A set of slides used for the RAP SIG event on 19 Jan 2017
Abstract:
Problem. This study approaches the school environment in order to advance the understanding of adolescents' and teachers' imaginaries around the body, corporality, and physical activity (PA), as a relevant element in the design of effective programs and plans to promote the practice of PA. Objective. To analyze the social imaginaries of teachers and adolescents around the concepts of body, corporality, and PA. Methods. Qualitative, descriptive, and interpretive research. Semi-structured interviews were conducted with teachers and with students between 12 and 18 years old at a public school in Bogotá. Content analysis was performed, and student results were compared by age group and gender. Results. Teachers and students define the body in terms of biological characteristics, sexual differences, and vital functions. Students' definition of corporality is tied to image and physical appearance, whereas teachers understand it as the possibility of interacting with the environment and as the materialization of existence. Students associate PA with the practice of exercise and sport, while teachers understand it as a self-care practice that maintains health. Conclusions. To promote PA early on as a vital experience, it is necessary to intervene in school spaces. The body must be brought into formative processes in order to develop bodily autonomy, which implies changes to curricula.
Abstract:
Objectives: to evaluate the cognitive learning of nursing students in neonatal clinical evaluation from a blended course using computer and laboratory simulation; to compare the cognitive learning of students in a control and an experimental group testing the laboratory simulation; and to assess, according to the students, the extracurricular blended course offered on the clinical assessment of preterm infants. Method: a quasi-experimental study with 14 Portuguese students, comprising a pretest, midterm test and post-test. The technologies offered in the course were the serious game e-Baby, instructional software on semiology and semiotechnique, and laboratory simulation. Data collection tools developed for this study were used for the course evaluation and the characterization of the students. Nonparametric statistics were used: Mann-Whitney and Wilcoxon. Results: the use of validated digital technologies and laboratory simulation demonstrated a statistically significant difference (p = 0.001) in the participants' learning. The students evaluated the course as very satisfactory. The laboratory simulation alone did not produce a significant difference in learning. Conclusions: the cognitive learning of participants increased significantly. The use of technology may be partly responsible for the course's success, showing it to be an important teaching tool for innovation and motivation of learning in healthcare.
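The nonparametric tests mentioned above compare an independent control and experimental group (Mann-Whitney) and paired measurements such as pretest versus post-test (Wilcoxon). As a minimal sketch of the rank logic behind the Mann-Whitney U statistic only (not the study's actual analysis; the scores below are invented):

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for sample x versus sample y.

    U counts, over all pairs, how often a value in x exceeds a value
    in y (ties count one half). Values far from n1 * n2 / 2 suggest a
    shift between the two groups.
    """
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

# Hypothetical post-test scores for two independent groups:
control = [55, 60, 62, 58]
experimental = [70, 72, 68, 75]
u_exp = mann_whitney_u(experimental, control)  # all 16 pairs favor experimental
```

In practice one would use a statistical package that also converts U to a p-value; this sketch only shows what the statistic measures.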
Abstract:
The widespread use of computers for the automation of repetitive tasks has led to applications that can carry out activities which were previously not only time-consuming but also subject to errors inherent to human activity, with little or no human intervention. The research carried out within this thesis aims to develop a software application and algorithms that enable the assessment and classification of cheeses produced in the region of Évora through digital image processing. In the course of this research, algorithms and methodologies were developed to identify the cheese eyes and the dimensions of the cheese, the presence of texture on the outside of the cheese, and characteristics of its color, so that a classification and evaluation of the cheese can be conducted based on these parameters. The resulting software application is simple to use and requires no special computer knowledge: the photographs need only follow a simple set of rules, on the basis of which the processing and classification of the cheese is carried out.
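The eye-identification step described in the abstract above can be pictured as counting connected holes in a binarized image. A minimal sketch of that idea, a flood-fill count of connected components on a toy 0/1 grid (an illustration of the general technique, not the thesis's actual algorithm):

```python
def count_eyes(grid):
    """Count 4-connected components of 1-cells (candidate 'eyes')
    in a binary image given as a list of lists of 0/1."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    eyes = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and not seen[r][c]:
                eyes += 1
                stack = [(r, c)]  # flood-fill this component
                while stack:
                    i, j = stack.pop()
                    if (0 <= i < rows and 0 <= j < cols
                            and grid[i][j] == 1 and not seen[i][j]):
                        seen[i][j] = True
                        stack.extend([(i + 1, j), (i - 1, j),
                                      (i, j + 1), (i, j - 1)])
    return eyes

# Toy binarized cheese slice with two separate 'eyes':
slice_img = [
    [0, 1, 1, 0, 0],
    [0, 1, 0, 0, 1],
    [0, 0, 0, 1, 1],
]
print(count_eyes(slice_img))  # → 2
```

A real pipeline would first threshold the photograph and filter components by size and shape before counting them.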
Abstract:
This report presents the experience of the project "The School Pedro II in the Professional Decision of Secondary Education Students", which aimed to support students' career choices in preparing for higher education, technical training, or the job market, integrating the areas of knowledge with ICT. Guiding questions: How can a vocation for academic life be awakened in students? How can a connection be established between what students want to be in the future and their choice when it is not a university course? How can the factors that influence students' career decisions be taken into account so that they build their own knowledge about their chosen profession? The experiment was carried out at the State School of Elementary and Secondary Education D. Pedro II (Belém, Pará State, Brazil), based on the view that knowledge must be represented in a format that requires coordination among different forms of knowledge and the organization and use of technology. The results show that the tasks students performed for career choice provided information about themselves and the professional world. The concept map contributed as a mediating tool for teaching, learning and assessment, and fostered interest, autonomy and participation.
Abstract:
Pain is a highly complex phenomenon involving intricate neural systems, whose interactions with other physiological mechanisms are not fully understood. Standard pain assessment methods, relying on verbal communication, often fail to provide reliable and accurate information, which poses a critical challenge in the clinical context. In the era of ubiquitous and inexpensive physiological monitoring, coupled with the advancement of artificial intelligence, these new tools appear as the natural candidates to be tested to address such a challenge. This thesis aims to conduct experimental research to develop digital biomarkers for pain assessment. After providing an overview of the state-of-the-art regarding pain neurophysiology and assessment tools, methods for appropriately conditioning physiological signals and controlling confounding factors are presented. The thesis focuses on three different pain conditions: cancer pain, chronic low back pain, and pain experienced by patients undergoing neurorehabilitation. The approach presented in this thesis has shown promise, but further studies are needed to confirm and strengthen these results. Prior to developing any models, a preliminary signal quality check is essential, along with the inclusion of personal and health information in the models to limit their confounding effects. A multimodal approach is preferred for better performance, although unimodal analysis has revealed interesting aspects of the pain experience. This approach can enrich the routine clinical pain assessment procedure by enabling pain to be monitored when and where it is actually experienced, and without the involvement of explicit communication,. This would improve the characterization of the pain experience, aid in antalgic therapy personalization, and bring timely relief, with the ultimate goal of improving the quality of life of patients suffering from pain.
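Conditioning physiological signals, as mentioned above, typically begins with simple smoothing to suppress sensor noise before any features are extracted. A minimal sketch of one such step, a trailing moving-average filter applied to invented heart-rate-like samples (an illustration of the general idea, not one of the thesis's actual pipelines):

```python
def moving_average(signal, window=3):
    """Smooth a 1-D signal with a trailing moving average of the given
    window length (the window is shorter at the start of the signal,
    where fewer past samples exist)."""
    out = []
    for i in range(len(signal)):
        lo = max(0, i - window + 1)
        chunk = signal[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Invented noisy samples (beats per minute); the spike at index 3
# is damped by the filter:
raw = [72, 75, 71, 90, 73, 74]
smooth = moving_average(raw, window=3)
```

Real pipelines would combine such smoothing with band-pass filtering, artifact rejection, and the signal quality checks the thesis emphasizes.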
Abstract:
This thesis investigates the legal, ethical, technical, and psychological issues of general data processing and artificial intelligence practices and the explainability of AI systems. It consists of two main parts. In the first part, we provide a comprehensive overview of the big data processing ecosystem and the main challenges we face today. We then evaluate the GDPR’s data privacy framework in the European Union. The Trustworthy AI Framework proposed by the EU’s High-Level Expert Group on AI (AI HLEG) is examined in detail. The ethical principles for the foundation and realization of Trustworthy AI are analyzed along with the assessment list prepared by the AI HLEG. We then list the main big data challenges identified by European researchers and institutions and provide a literature review of the technical and organizational measures to address them. A quantitative analysis is conducted on the identified big data challenges and the corresponding measures, leading to practical recommendations for better data processing and AI practices in the EU. In the second part, we concentrate on the explainability of AI systems. We clarify the terminology and list the goals of explainable AI. We identify the reasons for the explainability-accuracy trade-off and how it can be addressed. We conduct a comparative cognitive analysis between human reasoning and machine-generated explanations, with the aim of understanding how explainable AI can contribute to human reasoning. We then focus on the technical and legal responses to the explainability problem. Here, the GDPR’s right-to-explanation framework and safeguards are analyzed in depth, along with their contribution to the realization of Trustworthy AI. Finally, we analyze the explanation techniques applicable at different stages of machine learning and propose several recommendations, in chronological order, for developing GDPR-compliant and Trustworthy XAI systems.
Abstract:
The rapid progression of biomedical research, coupled with the explosion of scientific literature, has generated an urgent need for efficient and reliable systems of knowledge extraction. This dissertation addresses that challenge through a focused investigation of digital health and Artificial Intelligence, specifically the potential of Machine Learning and Natural Language Processing (NLP) to expedite systematic literature reviews and refine the knowledge extraction process. The surge of COVID-19 complicated the efforts of scientists, policymakers, and medical professionals to identify pertinent articles and assess their scientific validity. This thesis presents a substantial solution in the form of the COKE ("COVID-19 Knowledge Extraction framework for next-generation discovery science") Project, an initiative that interlaces machine reading with the rigorous protocols of Evidence-Based Medicine to streamline knowledge extraction. Within this framework, the thesis aims to underscore the capacity of machine reading to create knowledge graphs from scientific texts. The project is notable for its use of NLP techniques such as a BERT + bi-LSTM language model, a combination employed to detect and categorize elements within medical abstracts, thereby enhancing the systematic literature review process. The COKE project's outcomes show that NLP, used in a judiciously structured manner, can significantly reduce the time and effort required to produce medical guidelines. These findings are particularly salient during medical emergencies, like the COVID-19 pandemic, when quick and accurate research results are critical.
Abstract:
In recent decades, two prominent trends have influenced the data modeling field, namely network analysis and machine learning. This thesis explores the practical applications of these techniques within the domain of drug research, unveiling their multifaceted potential for advancing our comprehension of complex biological systems. The research undertaken during this PhD program is situated at the intersection of network theory, computational methods, and drug research. Across six projects presented herein, there is a gradual increase in model complexity. These projects traverse a diverse range of topics, with a specific emphasis on drug repurposing and safety in the context of neurological diseases. The aim of these projects is to leverage existing biomedical knowledge to develop innovative approaches that bolster drug research. The investigations have produced practical solutions, not only providing insights into the intricacies of biological systems, but also allowing the creation of valuable tools for their analysis. In short, the achievements are:
• A novel computational algorithm to identify adverse events specific to fixed-dose drug combinations.
• A web application that tracks the clinical drug research response to SARS-CoV-2.
• A Python package for differential gene expression analysis and the identification of key regulatory "switch genes".
• The identification of pivotal events causing drug-induced impulse control disorders linked to specific medications.
• An automated pipeline for discovering potential drug repurposing opportunities.
• The creation of a comprehensive knowledge graph and development of a graph machine learning model for predictions.
Collectively, these projects illustrate diverse applications of data science and network-based methodologies, highlighting the profound impact they can have in supporting drug research activities.
Abstract:
Remotely sensed imagery has been widely used for land use/cover classification thanks to periodic data acquisition and the widespread use of digital image processing systems offering a wide range of classification algorithms. The aim of this work was to evaluate some of the most commonly used supervised and unsupervised classification algorithms under different landscape patterns found in Rondônia, including (1) areas of mid-size farms, (2) fish-bone settlements and (3) a gradient of forest and Cerrado (Brazilian savannah). Comparison with a reference map based on the kappa statistic yielded good to superior indicators (best results: K-means, k = 0.68, 0.77 and 0.64; MaxVer, k = 0.71, 0.89 and 0.70, respectively, for the three areas mentioned). Results show that choosing a specific algorithm requires taking into account both its capacity to discriminate among various spectral signatures under different landscape patterns and a cost/benefit analysis of the different steps performed by the operator producing a land cover/use map. It is suggested that a more systematic assessment of the several implementation options of a specific project is needed before beginning a land use/cover mapping job.
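The kappa values reported above measure agreement between each classified map and the reference map beyond what chance alone would produce. A minimal sketch of Cohen's kappa computed from a confusion matrix (the 2x2 matrix below is invented for illustration, not taken from the study):

```python
def cohens_kappa(matrix):
    """Cohen's kappa from a square confusion matrix (rows = reference,
    columns = classification): (p_o - p_e) / (1 - p_e), where p_o is
    the observed agreement (diagonal fraction) and p_e the agreement
    expected by chance from the marginal totals."""
    k = len(matrix)
    n = sum(sum(row) for row in matrix)
    p_o = sum(matrix[i][i] for i in range(k)) / n
    row_sums = [sum(row) for row in matrix]
    col_sums = [sum(matrix[i][j] for i in range(k)) for j in range(k)]
    p_e = sum(r * c for r, c in zip(row_sums, col_sums)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Invented reference-vs-classified pixel counts for two land-cover classes:
cm = [[20, 5],
      [10, 15]]
kappa = cohens_kappa(cm)  # p_o = 0.7, p_e = 0.5, so kappa ≈ 0.4
```

Values near 0.68-0.89, like those reported in the abstract, indicate substantial to almost-perfect agreement on common interpretation scales.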
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física