936 results for Learning Tools
Abstract:
This final-year project consisted of the design and implementation of a tool for managing and administering the training of athletes in individual sports. Until now, athletes had to manage their training through spreadsheets, spending time learning tools such as Microsoft Excel or OpenOffice Calc to customize templates and store data, using other tools such as Google Calendar to get a calendar view of completed workouts, or relying on programs built to measure for a single sport or even a single athlete. The main objective was to develop a tool that unified all of these tasks, offering the athlete template configuration, data recording, chart generation from the recorded data, and a training-calendar view in an agile, simple, and intuitive way, adapting to the needs of any sport or athlete. To reach this objective we surveyed athletes from a wide range of individual sports, identifying the particularities of each sport and analyzing the data they provided, so as to design a versatile tool usable regardless of which parameters one wishes to record for each workout. The resulting tool is written in Java, which makes it portable to any operating system that supports the language, with no prior installation required. It is a plug-and-play application that needs only its executable file to run; this lets the athlete keep all the information in very little space, approximately 6 megabytes, and carry it anywhere on a pen drive or in cloud storage.
In addition, the files in which the data are recorded are CSV (comma-separated values) files with a standardized format that allows export to other tools. In conclusion, the athlete saves time and effort on tasks unrelated to practicing the sport and gains a tool for analyzing each recorded parameter in different ways, tracking progress and helping to improve weak areas. ---ABSTRACT--- This final-year project consists of the design and implementation of a tool for the management and administration of training logs for individual athletes. Until now, athletes had to manage their workouts through spreadsheets, spending time learning tools such as Microsoft Excel or OpenOffice in order to save their data, using other tools such as Google Calendar to check their training plan, or buying programs designed for a specific sport or even a specific athlete. The main purpose of this project is to develop an intuitive and straightforward tool that unifies all of these tasks, offering athletes template setup, data recording, graph generation, and a training schedule. With this in mind, we interviewed athletes from a wide range of individual sports, identifying their requirements and analyzing the data provided in order to design a flexible tool that can register a multitude of training parameters. The tool is coded in Java, providing portability to any operating system that supports it, with no installation required. It is a plug-and-play application that only requires the executable file to start working. Accordingly, athletes can keep all the information in a relatively small space (approximately 6 megabytes) and store it on a pen drive or in the cloud. In addition, the files with the stored data are CSV (comma-separated values) files with a standardized format that allows exporting to other tools. Consequently, athletes save time and effort on tasks unrelated to the practice of their sport.
The new tool will enable them to analyze in detail all the existing data and improve in those areas with development opportunities.
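The standardized CSV interchange described above can be illustrated with a short sketch. The thesis tool itself is written in Java and its exact column schema is not given here, so the field names below are invented for illustration; the point is simply that a plain comma-separated file with a header row is readable by Excel, Calc, pandas, or any other CSV-aware tool.

```python
import csv
import io

# Hypothetical training-log schema; the thesis does not publish its exact
# column names, so these headers are illustrative only.
FIELDS = ["date", "sport", "duration_min", "distance_km", "heart_rate_avg"]

def write_log(rows, stream):
    """Write training entries as standard comma-separated values."""
    writer = csv.DictWriter(stream, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)

def read_log(stream):
    """Read the log back; any CSV-aware tool can do the same."""
    return list(csv.DictReader(stream))

buf = io.StringIO()
write_log([{"date": "2024-05-01", "sport": "running",
            "duration_min": "45", "distance_km": "10.2",
            "heart_rate_avg": "152"}], buf)
buf.seek(0)
entries = read_log(buf)
```

Because the header row travels with the data, the same file can be re-imported elsewhere without any out-of-band schema agreement, which is the export property the abstract highlights.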
Abstract:
This thesis studies the house "Tempe à pailla" (1932-1934), built by Eileen Gray for her own use in the French town of Castellar. The expression "a room of one's own" in the title of the work identifies this project with the search for a place for self-experimentation. "Tempe à pailla" is the result of the self-taught education Eileen Gray acquired through her contact with the protagonists of the modern movement in interwar France. Gray's craft experiences prior to architecture, together with her instruments of learning, help us understand the development of a critical mind that continually questioned what it observed. Therefore, to demonstrate the influence of the postulates of the contemporary movements on the evolution of her creations, and as a preamble to the analysis of "Tempe à pailla", the thesis surveys the techniques she experimented with and analyzes two of her first design exercises: the "Three-storey house" (1923) and the "Maison pour un ingénieur" (1926). Her training in analytical and technical drawing, together with her research in painting, carried over into lacquered furniture and into the design and weaving of tapestries, constitutes a set of practices that led first to interior design, where she rehearsed novel spatial compositions among her objects, and finally to the architectural project as a discipline capable of bringing together everything she had previously experienced. The pairing of Intuition and Method throughout these practical itineraries, combined with an attentive eye on the works, exhibitions, readings, and specialist journals of her time, shaped a complex personality that investigated progressively, first in the sphere of domesticity and, in her mature period, around minimal dwelling and collective spaces.
The purpose of this thesis is to discover how the social, artistic, and architectural aspects of the context, interwoven with the architect's own critical subjectivity, form the foundations of this house and condition the decisions of a mind that designs by copying, reworking, and discarding between the known and the learned. The choice of this house as the protagonist of the research seeks, first, to uncover its relationship with the discourses of its time, establishing it as a paradigmatic architectural object within a continuous, open dialogue; and second, to offer an evaluative synthesis of the coherence, or lack of it, in the project's objective decisions, confronting them with other examples. To achieve these two aims the house is dissected from five perspectives: its bond with the pre-existing site, its floor plan, removed from any standardized type, its vocabulary as a reflection of modernity, the spatio-temporal relationships it achieves, and the synergy established between domestic equipment and architecture. This analysis makes it possible to place "Tempe à pailla" as a turning point in Eileen Gray's architecture, and as an example in which she anticipated future revisions of the modern movement in aspects such as the fit and empathy of the built work with its site, the rejection of the connotations of the "machine à habiter" concept, and the pursuit of a comfort heightened by perception, experience, and even the psychological effects of the domestic interior. Rereading this house, framed within its author's practical trajectory, invites us to focus on an indispensable apprenticeship, prior to architecture, that unites the THEORY and the PLASTIC ARTS of the moment with essays materialized in PRACTICE, demonstrating that, in order to mature one's knowledge and design with critical judgment, the factor of TIME is essential.
ABSTRACT This thesis examines the house "Tempe à pailla" (1932-1934), built by Eileen Gray for her own use in the French village of Castellar. The expression "a room of one's own" in the title identifies this project with the search for a place for self-experimentation. "Tempe à pailla" is the result of the self-directed learning Gray acquired through her contact with the protagonists of the modern movement in interwar France. Gray's craft experience prior to architecture, along with her learning tools, allows us to understand the development of a critical mind that continuously questioned what it observed. Therefore, to demonstrate the influence of the postulates of the contemporary movements on the evolution of her creations, and as a preamble to the analysis of "Tempe à pailla", this thesis surveys the techniques she experimented with and studies two of her first design exercises: the "Three-storey house" (1923) and the "Maison pour un ingénieur" (1926). The lessons she learned through analytical and technical drawing, together with her research in painting, transferred to lacquered furniture and to the design and weaving of tapestries, constitute a set of craft experiences that led first to interior design, where she rehearsed novel spatial compositions among her objects, and finally to the architectural project as a discipline capable of combining everything she had learned before. The pairing of Intuition and Method in all of Gray's practical itineraries, combined with her attentive eye on the works, exhibitions, readings, and journals of her time, shaped a complex personality that innovated progressively, first in the context of domesticity and, in her maturity, around minimal dwelling and collective spaces.
The purpose of this thesis is to discover how the social, artistic, and architectural aspects of the context, interlaced with the architect's own critical subjectivity, shape the foundations of this house and determine the decisions of a mind that designs by copying, reworking, and rejecting among what is known and learned. The choice of this house as the protagonist of the thesis aims, first, to uncover its relationship with the discourses of its time, establishing it as a paradigmatic architectural object in a continued, open dialogue; and second, to establish an evaluative synthesis of the consistency, or lack of it, in the project's decisions, confronting them with other relevant examples. To achieve these two objectives the house has been dissected from five perspectives: its link with the pre-existing site, its floor plan, removed from any standard type, its vocabulary as a reflection of modernity, the spatio-temporal relations it achieves, and the synergy established between domestic equipment and architecture. This analysis makes it possible to place "Tempe à pailla" as a turning point in the architecture of Eileen Gray, and an example in which she was able to anticipate future revisions of the modern movement in aspects such as the adaptation and empathy of the architecture to its site, the rejection of the connotations of the concept of the "machine à habiter", and the pursuit of a comfort emphasized by perception, experience, and even the psychological effects of the domestic interior. The rereading of this singular, small house, framed within its author's practical trajectory, invites us to consider an indispensable apprenticeship, prior to architecture, that combines THEORY and the PLASTIC ARTS with trials materialized in PRACTICE, demonstrating that the essential factor for maturing knowledge and designing with critical judgment is TIME.
Abstract:
Virtual and remote laboratories (VRLs) are e-learning resources that enhance the accessibility of experimental setups, providing a distance-teaching framework that meets students' hands-on learning needs. In addition, online collaborative communication represents a practical, constructivist method of transmitting knowledge and experience from teacher to students, overcoming physical distance and isolation. This paper describes the extension of two open-source tools: (1) the learning management system Moodle, and (2) Easy Java Simulations (EJS), a tool for creating VRLs. Our extension provides: (1) synchronous collaborative support for any VRL developed with EJS (i.e., any existing VRL written in EJS can be automatically converted into a collaborative lab at no cost), and (2) support for deploying synchronous collaborative VRLs into Moodle. Using our approach, students and/or teachers can invite other users enrolled in a Moodle course to a real-time collaborative experimental session, sharing and/or supervising experiences while they practice and explore experiments using VRLs.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-04
Abstract:
The primary aim of this dissertation is to develop data mining tools for knowledge discovery in biomedical data when multiple (homogeneous or heterogeneous) sources of data are available. The central hypothesis is that, when information from multiple data sources is used appropriately and effectively, knowledge discovery can be achieved better than is possible from a single source. Recent advances in high-throughput technology have enabled biomedical researchers to generate large volumes of diverse types of data on a genome-wide scale. These data include DNA sequences, gene expression measurements, and much more; they provide the motivation for building analysis tools that elucidate the modular organization of the cell. The challenges include efficiently and accurately extracting information from the multiple data sources, representing the information effectively, developing analytical tools, and interpreting the results in the context of the domain. The first part considers the application of feature-level integration to design classifiers that discriminate between soil types. The machine learning tools SVM and KNN were used to successfully distinguish between several soil samples. The second part considers clustering using multiple heterogeneous data sources. The resulting Multi-Source Clustering (MSC) algorithm was shown to perform better than clustering methods that use only a single data source or a simple feature-level integration of heterogeneous sources. The third part proposes a new approach to effectively incorporate incomplete data into clustering analysis. Adapted from the K-means algorithm, the Generalized Constrained Clustering (GCC) algorithm makes use of incomplete data, in the form of constraints, to perform exploratory analysis. Novel approaches for extracting constraints were proposed. For sufficiently large constraint sets, the GCC algorithm outperformed the MSC algorithm.
The last part considers the problem of providing a theme-specific environment for mining multi-source biomedical data. The resulting database, PlasmoTFBM, focuses on gene regulation in Plasmodium falciparum; it contains diverse information and offers a simple interface that lets biologists explore the data. It also provided a framework for comparing different analytical tools for predicting regulatory elements and for designing useful data mining tools. The conclusion is that the experiments reported in this dissertation strongly support the central hypothesis.
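The idea of folding incomplete information into clustering as constraints can be sketched in a few lines. The snippet below is a toy constrained k-means in the spirit of the approach, not the dissertation's actual GCC algorithm: must-link pairs are simply forced into the same cluster after each assignment step.

```python
import numpy as np

def constrained_kmeans(X, k, must_link, max_iter=50, seed=0):
    """Toy constrained k-means: each must-link pair is forced into the same
    cluster. Illustrative of constraint-based clustering only; the GCC
    algorithm's actual constraint handling is more general."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(max_iter):
        # Assign each point to its nearest center.
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # Enforce must-link constraints by copying the first point's label.
        for a, b in must_link:
            labels[b] = labels[a]
        # Recompute centers (keep the old center if a cluster empties).
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = constrained_kmeans(X, 2, must_link=[(0, 1)])
```

Even this crude enforcement shows how partial knowledge (here, that two samples belong together) steers an otherwise unsupervised partition.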
Abstract:
The rapid growth of virtualized data centers and cloud hosting services is making the management of physical resources such as CPU, memory, and I/O bandwidth in data center servers increasingly important. Server management now involves dealing with multiple dissimilar applications with varying Service-Level Agreements (SLAs) and multiple resource dimensions. The multiplicity and diversity of resources and applications render administrative tasks more complex and challenging. This thesis aimed to develop a framework and techniques that would substantially reduce data center management complexity. We specifically addressed two crucial data center operations. First, we precisely estimated the capacity requirements of client virtual machines (VMs) when renting server space in a cloud environment. Second, we proposed a systematic process to efficiently allocate physical resources to the VMs hosted in a data center. To realize these dual objectives, accurately capturing the effects of resource allocations on application performance is vital. The benefits of accurate application performance modeling are manifold. Cloud users can size their VMs appropriately and pay only for the resources they need; service providers can offer a new charging model based on the VMs' performance instead of their configured sizes. As a result, clients pay exactly for the performance they actually experience, while administrators can maximize their total revenue by exploiting application performance models and SLAs. This thesis made the following contributions. First, we identified resource control parameters crucial for distributing physical resources and characterizing contention for virtualized applications in a shared hosting environment.
Second, we explored several modeling techniques and confirmed the suitability of two machine learning tools, Artificial Neural Networks and Support Vector Machines, for accurately modeling the performance of virtualized applications. Moreover, we suggested and evaluated modeling optimizations necessary to improve prediction accuracy when using these tools. Third, we presented an approach to optimal VM sizing that employs the performance models we created. Finally, we proposed a revenue-driven resource allocation algorithm that maximizes the SLA-generated revenue for a data center.
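The modeling-then-sizing pipeline described above can be sketched compactly. The snippet below substitutes an ordinary least-squares model for the thesis's ANN/SVM models and uses invented resource and latency numbers; it only illustrates the idea of fitting allocation-to-performance data and then picking the cheapest allocation that still meets an SLA.

```python
import numpy as np

# Training data (invented): (cpu_share, mem_gb) -> response_time_ms.
X = np.array([[0.25, 1.0], [0.5, 1.0], [0.5, 2.0], [1.0, 2.0], [1.0, 4.0]])
y = np.array([240.0, 150.0, 130.0, 90.0, 70.0])

# Fit response_time ~ w0 + w1*cpu + w2*mem by least squares; a stand-in
# for the ANN/SVM performance models used in the thesis.
A = np.column_stack([np.ones(len(X)), X])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

def predicted_latency(cpu, mem):
    return w[0] + w[1] * cpu + w[2] * mem

def smallest_vm(sla_ms, candidates):
    """Return the cheapest candidate allocation whose predicted latency
    meets the SLA (cost weights here are arbitrary)."""
    feasible = [c for c in candidates if predicted_latency(*c) <= sla_ms]
    return min(feasible, key=lambda c: c[0] + 0.1 * c[1]) if feasible else None

choice = smallest_vm(120.0, [(0.25, 1.0), (0.5, 2.0), (1.0, 2.0), (1.0, 4.0)])
```

The same loop generalizes directly: swap in a better regressor and a real pricing function, and the sizing step is unchanged.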
Abstract:
A number of studies in Biomedical Engineering and the Health Sciences have employed machine learning tools to develop methods capable of identifying patterns in different sets of data. Despite its eradication in many countries of the developed world, Hansen's disease still affects a large part of the population in countries such as India and Brazil. In this context, this research proposes to develop a method that makes it possible to understand how Hansen's disease affects the facial muscles. Using surface electromyography, a system was adapted to capture signals from the largest possible number of facial muscles. We first reviewed the literature to learn how researchers around the globe have been working with diseases that affect the peripheral nervous system and how electromyography has contributed to the understanding of these diseases. From these data, a protocol was proposed for collecting facial surface electromyographic (sEMG) signals with a high signal-to-noise ratio. After collecting the signals, we looked for a method of visualizing the information that could be shown to give satisfactory results. Having established the method's efficiency, we investigated which information could be extracted from the electromyographic signals representing the collected data. Since no studies could be found in the literature demonstrating which information would contribute to a better understanding of this pathology, amplitude, frequency, and entropy parameters were extracted from the signal, and feature selection was performed to find the features that best distinguish a healthy individual from a pathological one.
We then sought to identify the classifier that best discriminates between individuals from different groups, as well as the set of classifier parameters yielding the best outcome. The protocol proposed in this study, together with the adaptation using disposable electrodes available on the market, proved effective and suitable for use in other studies that collect facial electromyography data. The feature selection algorithm also showed that not all of the features extracted from the signal are significant for classification, some being more relevant than others. The Support Vector Machine (SVM) classifier proved efficient when an adequate kernel function was chosen for the muscle from which information was to be extracted: each muscle investigated gave different results with linear, radial, and polynomial kernel functions. Although we focused on Hansen's disease, the method applied here can be used to study facial electromyography in other pathologies.
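The amplitude, frequency, and entropy parameters mentioned above correspond to standard sEMG window features. The sketch below computes one common choice of each (RMS amplitude, median frequency of the power spectrum, and Shannon entropy of the amplitude histogram); the thesis's exact feature definitions are not reproduced here, so treat this as illustrative.

```python
import numpy as np

def emg_features(signal, fs):
    """Common amplitude, frequency, and entropy features for one sEMG
    window. Standard textbook choices, used here only to illustrate the
    kind of parameters extracted in the study."""
    # Amplitude: root mean square of the window.
    rms = np.sqrt(np.mean(signal ** 2))
    # Frequency: median frequency of the power spectrum.
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    cumulative = np.cumsum(spectrum)
    median_freq = freqs[np.searchsorted(cumulative, cumulative[-1] / 2)]
    # Entropy: Shannon entropy of the normalized amplitude histogram.
    hist, _ = np.histogram(signal, bins=16)
    p = hist[hist > 0] / hist.sum()
    entropy = -np.sum(p * np.log2(p))
    return {"rms": rms, "median_freq": median_freq, "entropy": entropy}

# Sanity check on a synthetic 100 Hz tone sampled at 1 kHz.
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
feats = emg_features(np.sin(2 * np.pi * 100 * t), fs)
```

A feature vector like this, computed per muscle and per window, is the kind of input a kernel classifier such as an SVM would then separate into healthy and pathological groups.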
Abstract:
Full paper presented at EC-TEL 2016
Abstract:
This study examines the role of visual literacy in learning biology. Biology teachers promote the use of digital images as a learning tool for two reasons: biology is the most visual of the sciences, and imagery is becoming increasingly important with the advent of bioinformatics; and studies indicate that the current generation of teenagers has a cognitive structure formed through exposure to digital media. On the other hand, there is concern that students are not sufficiently exposed to the traditional methods of processing biological information, which are thought to encourage left-brain, sequential thinking patterns. Theories of Embodied Cognition point to the importance of hand-drawing for the proper assimilation of knowledge, and theories of Multiple Intelligences suggest that some students may learn more easily using traditional pedagogical tools. To test the claim that digital learning tools enhance the acquisition of visual literacy in this generation of biology students, a learning intervention was carried out with 33 students enrolled in an introductory college biology course. The study compared learning outcomes following two types of learning tool: a traditional drawing activity, and an interactive digital activity carried out on a computer. The sample was divided into two random groups, and a crossover design was implemented with two separate interventions. In the first intervention, students learned how to draw and label a cell; Group 1 learned the material by computer and Group 2 by hand-drawing. In the second intervention, students learned how to draw the phases of mitosis, and the two groups were inverted. After each learning activity, students were given a quiz on the material they had learned. Students were also asked to self-evaluate their performance on each quiz, in an attempt to measure their level of metacognition.
At the end of the study, they were asked to fill out a questionnaire measuring the level of task engagement they felt towards the two types of learning activity. In the first testing phase, the students who learned the material by drawing had a significantly higher average grade on the associated quiz than those who learned the material by computer; the difference disappeared in the second, cross-over trial. For neither group was there a correlation between the grade students thought they had earned, through self-evaluation, and the grade they actually received. On the different measures of task engagement, there were no significant differences between the two groups. One finding of the study was a positive correlation between grade and self-reported time spent playing video games, and a negative correlation between grade and self-reported interest in drawing. This study provides little evidence to support claims that the use of digital tools enhances learning, but it does provide evidence that drawing by hand is beneficial for learning biological images. However, the small sample size, the limited number and type of learning tasks, and the indirect means of measuring metacognition and task engagement restrict the generalisation of these conclusions. Nevertheless, this study indicates that teachers should not use digital learning tools to the exclusion of traditional drawing activities; further studies on the effectiveness of these tools are warranted. Students in this study commented that the computer tool seemed more accurate and detailed, even though the two learning tools carried identical information. There was thus a mismatch between the perceived usefulness of computers as a learning tool and the reality, which again points to the need for an objective assessment of their usefulness.
Students should be given the opportunity to try out a variety of traditional and digital learning tools in order to address their different learning preferences.
Abstract:
Implemented in the context of Business Administration students enrolled in a college level three year technology program, this research investigated students’ perceptions and academic results concurrent with the implementation of an online web module designed to facilitate student self-study. The students involved in this research were enrolled in a program that, while offering a broad education in business disciplines, specialized in the field of accounting. As a result, students were enrolled in academically rigorous accounting courses in each of the six semesters of the program. The weighting of these accounting courses imposes a significant self-study component – typically matching or exceeding the time spent in class. In this context many of the students enrolled in the Business Administration Program have faced difficulties completing the self-study component of the course effectively as demonstrated in low homework completion rates, low homework grade averages and ultimately low success rates in the courses. In an attempt to address this situation this research studied the implementation of a web-based self-study module. Through this module students could access a number of learning tools that were designed to facilitate the self-study process under the premise that more effective self-study learning tools will help remove obstacles and provide more timely confirmation of learning during student self-study efforts. This research collected data from a single cohort of students drawn from the first three sequential accounting courses of the Business Administration Program. The web-based self-study module was implemented in the third of the three sequential accounting courses. The first two of these courses implemented a traditional manual self-study environment. Data collected from the three accounting courses included homework completion rates, homework, exam and final grades for the respective courses. 
In addition, the web-based study module automatically reported student usage of a number of specific online learning tools. To complement the academic data, students were surveyed to gain insight into their perceptions of the effectiveness of the web-based system. The research provided a number of interesting insights. First among these was a confirmation of the importance of the self-study process in the academic achievement of the learners. Regardless of the self-study environment, manual or web-enhanced, a significant positive correlation existed between the students' self-study results, demonstrated in both homework completion rates and homework averages, and the corresponding final grades. These results confirm the importance of self-study found generally in the prevailing academic literature on students enrolled in higher education. In addition, the web-enhanced learning environment implemented during the third accounting course coincided with significantly higher homework completion rates and corresponding homework averages: homework completion rates in particular increased from a combined average of 63% in the first two accounting courses to 93% in the web-enhanced context of the third. Moreover, the homework completion rates of the web-enhanced course were evenly distributed across the cohort of students. A quartile-based analysis was subsequently completed: quartiles were constructed by ranking students according to their combined average homework completion rates from the first two manual self-study courses, Accounting I and II. The quartile-based homework completion rates for these manual self-study courses were then compared to the results the same quartiles of students achieved in the web-based self-study environment of Accounting III.
While the first two courses demonstrated significantly uneven homework completion rates across the quartiles, ranging from 31% to 91%, the differences among the four quartiles within the web-enhanced module, with an average homework completion rate of 93%, were statistically insignificant. Congruent with the positive academic results observed in the third, web-enhanced course, surveyed students expressed strong support for the online self-study environment. This research was designed to add to the existing research on the implementation of learning in online settings. Specifically, it explored a middle ground of online learning, the web-enhanced course: a context that supplements the classroom experience rather than replacing it. The web-enhanced accounting course demonstrated strongly favorable results, both academically and in terms of students' perception of the system; these results suggest that a web-enhanced environment can provide learning tools that facilitate the self-study process while offering a structured learning environment that helps developing learners reach their potential.
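The quartile comparison described above can be sketched with pandas. This is a minimal illustrative sketch, not the study's actual analysis: the student-level completion rates below are synthetic stand-ins (the abstract reports only the aggregate figures), and the column names are hypothetical.

```python
import numpy as np
import pandas as pd

# Synthetic student data (illustrative only): completion rates as fractions
# for the two manual courses (Accounting I and II) and the web-enhanced
# course (Accounting III).
rng = np.random.default_rng(0)
n_students = 40
df = pd.DataFrame({
    "acc1_completion": rng.uniform(0.2, 1.0, n_students),
    "acc2_completion": rng.uniform(0.2, 1.0, n_students),
    "acc3_completion": rng.uniform(0.85, 1.0, n_students),  # web-enhanced
})

# Rank students by their combined manual-course completion average and
# split them into quartiles, as described in the abstract.
df["manual_avg"] = df[["acc1_completion", "acc2_completion"]].mean(axis=1)
df["quartile"] = pd.qcut(df["manual_avg"], 4, labels=["Q1", "Q2", "Q3", "Q4"])

# Compare each quartile's manual vs web-enhanced completion rates.
summary = df.groupby("quartile", observed=True)[
    ["manual_avg", "acc3_completion"]
].mean()
print(summary)
```

With real data, the manual-course column would show the wide 31%–91% spread across quartiles while the web-enhanced column clusters near 93%, which is the pattern the study reports.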
Abstract:
Student engagement in a course is an important precursor of academic success. Within the discipline of accounting, successful completion of the self-study component of a course is a critical aspect of student engagement and success. Web-enhanced learning offers an opportunity to provide a structured learning environment, with improved access to learning tools and immediate feedback, that can improve completion rates of self-study activities. This study evaluated student perceptions and academic results relating to the implementation of a web-enhanced study module in an introductory accounting course in the Business Administration department at John Abbott College. The results of this study indicate both a strongly favourable student perception of the web-enhanced study module and improved homework completion rates and academic results, particularly among students who had previously performed poorly within a traditional, non-web-enhanced self-study environment.
Abstract:
Hematological cancers are a heterogeneous family of diseases that can be divided into leukemias, lymphomas, and myelomas, often called "liquid tumors". Since they cannot be surgically removed, chemotherapy represents the mainstay of their treatment. However, chemotherapy still faces several challenges, such as drug resistance and low response rates, and the need for new anticancer agents is compelling. The drug discovery process is lengthy, costly, and prone to high failure rates. With the rapid expansion of biological and chemical "big data", computational techniques such as machine learning tools have been increasingly employed to speed up and economize the whole process. Machine learning algorithms can create complex models that determine the biological activity of compounds against several targets, based on their chemical properties. These models are known as multi-target Quantitative Structure-Activity Relationship (mt-QSAR) models and can be used to virtually screen small and large chemical libraries for the identification of new molecules with anticancer activity. The aim of my Ph.D. project was to employ machine learning techniques to build an mt-QSAR classification model for the prediction of cytotoxic drugs simultaneously active against 43 hematological cancer cell lines. For this purpose, I first constructed a large and diversified dataset of molecules extracted from the ChEMBL database. I then compared the performance of different ML classification algorithms, identifying Random Forest as the one returning the best predictions. Finally, I used different approaches to maximize the performance of the model, which achieved an accuracy of 88% by correctly classifying 93% of inactive molecules and 72% of active molecules in a validation set. The model was further applied to the virtual screening of a small dataset of molecules tested in our laboratory, where it correctly classified all molecules (100% accuracy).
This result was consistent with our previous in vitro experiments.
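The classification workflow described in this abstract (descriptor-based features, algorithm comparison settling on Random Forest, accuracy reported separately for active and inactive molecules) can be sketched with scikit-learn. This is a hedged illustration only: the synthetic dataset from `make_classification` stands in for the real ChEMBL-derived molecular descriptors, and the parameters are assumptions, not the author's actual pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the mt-QSAR data: 50 "descriptors" per molecule,
# with an imbalanced active/inactive split (class 1 = active, assumed).
X, y = make_classification(n_samples=2000, n_features=50, n_informative=10,
                           weights=[0.7, 0.3], random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42)

# Random Forest, the algorithm the abstract identifies as best-performing.
clf = RandomForestClassifier(n_estimators=300, random_state=42)
clf.fit(X_train, y_train)
pred = clf.predict(X_val)

# Overall accuracy plus per-class recall, mirroring how the abstract
# reports results (93% of inactives, 72% of actives correctly classified).
acc = accuracy_score(y_val, pred)
inactive_recall = recall_score(y_val, pred, pos_label=0)
active_recall = recall_score(y_val, pred, pos_label=1)
print(f"accuracy={acc:.2f} "
      f"inactive recall={inactive_recall:.2f} "
      f"active recall={active_recall:.2f}")
```

Reporting per-class recall alongside overall accuracy matters here because the data is imbalanced: a model that labeled everything inactive would already score 70% accuracy while missing every active compound.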
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, for the degree of Master in Computer Engineering.