569 results for Learning development
Abstract:
Deep learning methods are extremely promising machine learning tools for analyzing neuroimaging data. However, their potential use in clinical settings is limited by the challenges of applying these methods to neuroimaging data. In this study, a type of data leakage caused by a slice-level data split, introduced during the training and validation of a 2D CNN, is first surveyed, and a quantitative assessment of the resulting overestimation of model performance is presented. Second, an interpretable, leakage-free deep learning software package, written in Python with a wide range of options, has been developed to conduct both classification and regression analyses. The software was applied to the study of mild cognitive impairment (MCI) in patients with small vessel disease (SVD) using multi-parametric MRI data, where the cognitive performance of 58 patients, measured by five neuropsychological tests, is predicted using a multi-input CNN model that takes brain images and demographic data as input. Each of the cognitive test scores was predicted using different MRI-derived features. As MCI due to SVD has been hypothesized to result from white matter damage, the DTI-derived features MD and FA produced the best prediction of the TMT-A score, which is consistent with the existing literature. In a second study, an interpretable deep learning system is developed that aims to 1) classify Alzheimer's disease patients and healthy subjects, 2) examine the neural correlates of the disease that cause cognitive decline in AD patients using CNN visualization tools, and 3) highlight the potential of interpretability techniques to detect a biased deep learning model. Structural magnetic resonance imaging (MRI) data from 200 subjects were used by the proposed CNN model, which was trained using a transfer learning-based approach and produced a balanced accuracy of 71.6%. Brain regions in the frontal and parietal lobes showing cerebral cortex atrophy were highlighted by the visualization tools.
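The slice-level leakage described above arises when 2D slices from the same subject land in both the training and validation sets. A minimal sketch of the leakage-free alternative, splitting at the subject level with scikit-learn's `GroupShuffleSplit` (the toy data, shapes, and counts here are illustrative, not the thesis's actual pipeline):

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical data: 6 subjects, 10 axial slices each.
# A naive slice-level split can place slices from the same subject in both
# train and validation sets, inflating validation accuracy (data leakage).
rng = np.random.default_rng(0)
n_subjects, n_slices = 6, 10
subject_ids = np.repeat(np.arange(n_subjects), n_slices)
X = rng.normal(size=(n_subjects * n_slices, 32, 32))        # toy "slices"
y = np.repeat(rng.integers(0, 2, n_subjects), n_slices)     # one label per subject

# Leakage-free alternative: split on subject IDs, not on individual slices.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.33, random_state=0)
train_idx, val_idx = next(splitter.split(X, y, groups=subject_ids))

# No subject contributes slices to both sets.
assert set(subject_ids[train_idx]).isdisjoint(subject_ids[val_idx])
```

The grouping guarantees that a model is always validated on subjects it has never seen, which is what makes the validation score an honest estimate.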
Abstract:
In recent years we have witnessed important changes: the Second Quantum Revolution is in the spotlight of many countries, and it is creating a new generation of technologies. To unlock its potential, several countries have launched strategic plans and research programs that finance and set the pace of research and development of these new technologies (such as the Quantum Flagship and the National Quantum Initiative Act). The increasing pace of technological change also challenges science education and institutional systems, requiring them to help prepare new generations of experts. This work is situated within physics education research and contributes to this challenge by developing an approach to, and a course about, the Second Quantum Revolution. The aims are to promote quantum literacy and, in particular, to value the Second Quantum Revolution from a cultural and educational perspective. The dissertation is articulated in two parts. In the first, we unpack the Second Quantum Revolution from a cultural perspective and shed light on its main revolutionary aspects, which are elevated to the rank of principles and implemented in the design of a course for secondary school students and for prospective and in-service teachers. The design process and the educational reconstruction of the activities are presented, as well as the results of a pilot study conducted to investigate the impact of the approach on students' understanding and to gather feedback to refine and improve the instructional materials. The second part consists of exploring the Second Quantum Revolution as a context for introducing some basic concepts of quantum physics. We present the results of an implementation with secondary school students to investigate if, and to what extent, external representations can promote students' understanding and acceptance of quantum physics as a personally reliable description of the world.
Abstract:
Machine learning makes computers capable of performing tasks that typically require human intelligence. One domain where it is having a considerable impact is the life sciences, where it enables new biological analysis protocols, faster and more efficient development of patient treatments, and reduced healthcare costs. This thesis presents new machine learning methods and pipelines for the life sciences, focusing on the unsupervised field. At the methodological level, two methods are presented. The first is an "Ab Initio Local Principal Path", a revised and improved version of a pre-existing algorithm in the manifold learning realm. The second contribution is an improvement over the Import Vector Domain Description (one-class learning) through the Kullback-Leibler divergence. It hybridizes kernel methods with deep learning, obtaining a scalable solution, an improved probabilistic model, and state-of-the-art performance. Both methods are tested through several experiments, with a central focus on their relevance to the life sciences. Results show that they improve on the performance of their previous versions. At the applicative level, two pipelines are presented. The first is for the analysis of RNA-Seq datasets, both transcriptomic and single-cell data, and aims to identify genes that may be involved in biological processes (e.g., the transition of tissues from normal to cancer). In this project, an R package was released on CRAN to make the pipeline accessible to the bioinformatics community through high-level APIs. The second pipeline is in the drug discovery domain and is useful for identifying druggable pockets, namely regions of a protein with a high probability of accepting a small molecule (a drug). Both pipelines achieve remarkable results. Lastly, a detour application is developed to identify the strengths and limitations of the "Principal Path" algorithm by analyzing vector spaces induced by Convolutional Neural Networks. This application is conducted in the music and visual arts domains.
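The Kullback-Leibler divergence used in the one-class learning contribution above measures how one probability distribution diverges from another. A generic sketch for discrete distributions (an illustration of the divergence itself, not the thesis's IVDD-specific formulation):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions given as probability vectors.

    Inputs are normalized to sum to 1; eps guards against log(0).
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Identical distributions have zero divergence; KL is asymmetric in general.
p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
print(kl_divergence(p, p))                      # ~0.0
print(kl_divergence(p, q), kl_divergence(q, p))  # generally unequal
```

Because KL(p||q) ≠ KL(q||p), which direction is minimized is itself a modeling choice when the divergence is used as a training objective.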
Abstract:
In medicine, innovation depends on better knowledge of the human body, which is a complex system of multi-scale constituents. Unraveling the complexity underlying diseases proves challenging: a deep understanding of the inner workings requires dealing with much heterogeneous information. Exploring the molecular status and the organization of genes, proteins, and metabolites provides insight into what drives a disease, from aggressiveness to curability. Molecular constituents, however, are only the building blocks of the human body and cannot currently tell the whole story of disease. This is why attention is now growing towards the joint exploitation of multi-scale information, and holistic methods are drawing interest for the problem of integrating heterogeneous data. The heterogeneity may derive from the diversity across data types and from the diversity within diseases. Here, four studies conducted data integration using custom-designed workflows that implement novel methods and views to tackle the heterogeneous characterization of diseases. The first study was devoted to determining shared gene regulatory signatures in onco-hematology, and it showed partial co-regulation across blood-related diseases. The second study focused on Acute Myeloid Leukemia and refined the unsupervised integration of genomic alterations, which turned out to better resemble clinical practice. In the third study, network integration for atherosclerosis demonstrated, as a proof of concept, the impact of network intelligibility when modeling heterogeneous data, which was shown to accelerate the identification of new potential pharmaceutical targets. Lastly, the fourth study introduced a new method to integrate multiple data types in a unique latent heterogeneous representation that facilitated the selection of important data types for predicting the tumour stage of invasive ductal carcinoma. The results of these four studies lay the groundwork for easing the detection of new biomarkers, ultimately beneficial to medical practice and to the ever-growing field of Personalized Medicine.
Abstract:
There is an urgent need to make drug discovery cheaper and faster. This will enable the development of treatments for diseases currently neglected for economic reasons, such as tropical and orphan diseases, and will generally increase the supply of new drugs. Here, we report the Robot Scientist 'Eve', designed to make drug discovery more economical. A Robot Scientist is a laboratory automation system that uses artificial intelligence (AI) techniques to discover scientific knowledge through cycles of experimentation. Eve integrates and automates library screening, hit confirmation, and lead generation through cycles of quantitative structure-activity relationship learning and testing. Using econometric modelling, we demonstrate that using AI to select compounds economically outperforms standard drug screening. For further efficiency, Eve uses a standardized form of assay to compute Boolean functions of compound properties. These assays can be quickly and cheaply engineered using synthetic biology, enabling more targets to be assayed for a given budget. Eve has repositioned several drugs against specific targets in parasites that cause tropical diseases. One validated discovery is that the anti-cancer compound TNP-470 is a potent inhibitor of dihydrofolate reductase from the malaria-causing parasite Plasmodium vivax.
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
Rangel EM, Mendes IA, Carnio EC, Marchi Alves LM, Godoy S, Crispim JA. Development, implementation, and assessment of a distance module in endocrine physiology. Adv Physiol Educ 34: 70-74, 2010; doi: 10.1152/advan.00070.2009. This study aimed to develop, implement, and assess a distance module in endocrine physiology in TelEduc for undergraduate nursing students from a public university in Brazil, with a sample of 44 students. Stage 1 consisted of the development of the module, following the process of creating a distance course on the Web. Stage 2 was the planning of the module's practical functioning, and stage 3 was the planning of student evaluations. In the experts' assessment, the module complied with pedagogical and technical requirements most of the time. In the practical functioning stage, 10 h were dedicated to on-site activities and 10 h to distance activities. Most students (93.2%) were women, and most (75%) were between 19 and 23 yr of age. The internet was the means most used to stay up to date, cited by 23 students (59.0%), and 30 students (68.2%) accessed it from the teaching institution. A personal computer was used by 23 students (56.1%), and most of them (58.1%) learned to use it on their own. Access to the forum was more dispersed (variation coefficient: 86.80%) than access to the chat (variation coefficient: 65.14%). Average participation was 30 students in the forums and 22 students in the chat. Students' final grades in the module averaged 8.5 (SD: 1.2). TelEduc was shown to be efficient in supporting the teaching-learning process of endocrine physiology.
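The variation coefficients reported above follow the standard definition CV = standard deviation / mean × 100, where a higher CV indicates more dispersed access across students. A small sketch of the computation (the per-student access counts below are hypothetical; the study reports only the resulting coefficients):

```python
import statistics

def coefficient_of_variation(values):
    """CV (%) = sample standard deviation / mean * 100."""
    mean = statistics.mean(values)
    return statistics.stdev(values) / mean * 100

# Hypothetical access counts per student for two course activities:
# a few heavy forum users alongside many light ones yields a high CV,
# while evenly spread chat access yields a lower one.
forum_access = [1, 2, 10, 40, 3, 5]
chat_access = [4, 6, 8, 10, 5, 7]
print(round(coefficient_of_variation(forum_access), 2))
print(round(coefficient_of_variation(chat_access), 2))
```

Because the CV is scale-free, it lets the study compare dispersion between the forum and the chat even though the two activities had different average participation.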
Abstract:
Three main models of parameter setting have been proposed: the Variational model proposed by Yang (2002; 2004), the Structured Acquisition model endorsed by Baker (2001; 2005), and the Very Early Parameter Setting (VEPS) model advanced by Wexler (1998). The VEPS model contends that parameters are set early. The Variational model supposes that children employ statistical learning mechanisms to decide among competing parameter values, so this model anticipates delays in parameter setting when critical input is sparse, and gradual setting of parameters. On the Structured Acquisition model, delays occur because parameters form a hierarchy, with higher-level parameters set before lower-level parameters. Assuming that children freely choose the initial value, children will sometimes mis-set parameters. However, when that happens, the input is expected to trigger a precipitous rise in one parameter value and a corresponding decline in the other value. We point to the kind of child language data needed to adjudicate among these competing models.
Abstract:
When English-learning children begin using words, the majority of their early utterances (around 80%) are nouns. Compared to nouns, there is a paucity of verbs and non-verb relational words, such as 'up' meaning 'pick me up'. The primary explanations for these differences in use either argue in support of a 'cognitive account', which claims that verbs entail more cognitive complexity than nouns, or provide evidence challenging this account. In this paper I propose an additional explanation for children's noun/verb asymmetry. Presenting a 'multi-modal account' of word learning based on children's gesture and word combinations, I show that at the one-word stage English-learning children use gestures to express verb-like elements, which leaves their words free to express noun-like elements.
Abstract:
Student attitudes towards a subject affect their learning. For students in physics service courses, relevance is emphasised by vocational applications. A similar strategy is being used for students who aspire to continued study of physics, in an introduction to fundamental skills in experimental physics: the concepts, computational tools and practical skills involved in appropriately obtaining and interpreting measurement data. An educational module is being developed that aims to enhance the student experience by embedding learning of these skills in the practising physicist's activity of doing an experiment (gravity estimation using a rolling pendulum). The group concentrates on particular skills prompted by challenges such as:
• How can we get an answer to our question?
• How good is our answer?
• How can it be improved?
This explicitly provides students the opportunity to consider and construct their own ideas. It gives them time to discuss, digest and practise without undue stress, thereby assisting them to internalise core skills. Design of the learning activity is approached in an iterative manner, via theoretical and practical considerations, with input from a range of teaching staff, and subject to trials of prototypes.
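The gravity-estimation experiment above can be illustrated with the simple-pendulum relation T = 2π√(L/g), rearranged to g = 4π²L/T². The readings below are hypothetical, and the module's rolling pendulum may well use a different, apparatus-specific model; this sketch only shows the "how good is our answer?" step of turning repeated measurements into an estimate with an uncertainty:

```python
import math
import statistics

def estimate_g(length_m, periods_s):
    """Estimate g from a pendulum of known length and repeated period
    measurements, via T = 2*pi*sqrt(L/g)  =>  g = 4*pi^2 * L / T^2."""
    t_mean = statistics.mean(periods_s)
    g = 4 * math.pi**2 * length_m / t_mean**2
    # Uncertainty from the spread of the readings: standard error of the
    # mean period, propagated through g ∝ T^-2 (so dg/g = 2 * dT/T).
    t_sem = statistics.stdev(periods_s) / math.sqrt(len(periods_s))
    dg = g * 2 * t_sem / t_mean
    return g, dg

# Hypothetical repeated timings for a 1.00 m pendulum.
g, dg = estimate_g(1.00, [2.01, 2.00, 2.02, 1.99, 2.01])
print(f"g = {g:.2f} +/- {dg:.2f} m/s^2")
```

Taking more readings shrinks the standard error of the mean, which directly answers the module's "how can it be improved?" prompt.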
Abstract:
“Closing the gap in curriculum development leadership” is a Carrick-funded University of Queensland project which is designed to address two related gaps in current knowledge and in existing professional development programs for academic staff. The first gap is in our knowledge of curriculum and pedagogical issues as they arise in relation to multi-year sequences of study, such as majors in generalist degrees, or core programs in more structured degrees. While there is considerable knowledge of curriculum and pedagogy at the course or individual unit of study level (e.g. Philosophy I), there is very little properly conceptualised, empirically informed knowledge about student learning (and teaching) over, say, a three-year major sequence in a traditional Arts or Sciences subject. The Carrick-funded project aims to (begin to) fill this gap through bottom-up curriculum development projects across the range of UQ’s offerings. The second gap is in our professional development programs and, indeed, in our recognition and support for the people who are in charge of such multi-year sequences of study. The major convener or program coordinator is not as well supported, in Australian and overseas professional development programs, as the lecturer in charge of a single course (or unit of study). Nor is her work likely to be taken account of in workload calculations or for the purposes of promotion and career advancement more generally. The Carrick-funded project aims to fill this gap by developing, in consultation with crucial stakeholders, amendments to existing university policies and practices. The attached documents provide a useful introduction to the project. For more information, please contact Fred D’Agostino at f.dagostino@uq.edu.au.
Abstract:
Our AUTC Biotechnology study (Phases 1 and 2) identified a range of areas that could benefit from a common approach by universities nationally. A national network of biotechnology educators needs to be solidified through, for example, more regular communication, biennial meetings, and the development of methods for sharing effective teaching practices and industry placement strategies. Our aims in this proposed study are to: a. Revisit the state of undergraduate biotechnology degree programs nationally to determine their rate of change in content, growth or shrinkage in student numbers (as the biotech industry has had its ups and downs in recent years), and sustainability within their institutions in light of career movements of key personnel, tightening budgets, and governmental funding priorities. b. Explore the feasibility of a range of initiatives to benefit university biotechnology education, determining factors such as how practical each one is, how much buy-in could be gained from potentially participating universities and industry counterparts, and how sustainable such efforts would be. One such initiative arising in our AUTC Biotech study was a national register of industry placements for final-year students. c. Involve, during the scoping and feasibility study, our colleagues who teach in biotechnology and its contributing disciplines. Their involvement is meant not only to yield meaningful insight into how to strengthen biotechnology teaching and learning but also to generate 'buy-in' on any initiatives that result from this effort.
Abstract:
Except for a few large scale projects, language planners have tended to talk and argue among themselves rather than to see language policy development as an inherently political process. A comparison with a social policy example, taken from the United States, suggests that it is important to understand the problem and to develop solutions in the context of the political process, as this is where decisions will ultimately be made.
Abstract:
NMDA receptors are well known to play an important role in synaptic development and plasticity. Functional NMDA receptors are heteromultimers thought to contain two NR1 subunits and two or three NR2 subunits. In central neurons, NMDA receptors at immature glutamatergic synapses contain NR2B subunits, which are largely replaced by NR2A subunits with development. At mature synapses, NMDA receptors are thought to be multimers that contain either NR1/NR2A or NR1/NR2A/NR2B subunits, whereas receptors that contain only NR1/NR2B subunits are extrasynaptic. Here, we have studied the properties of NMDA receptors at glutamatergic synapses in the lateral and central amygdala. We find that NMDA receptor-mediated synaptic currents in the central amygdala at both immature and mature synapses have slow kinetics and are substantially blocked by the NR2B-selective antagonists (1S,2S)-1-(4-hydroxyphenyl)-2-(4-hydroxy-4-phenylpiperidino)-1-propanol and ifenprodil, indicating that there is no developmental change in subunit composition. In contrast, at synapses on pyramidal neurons in the lateral amygdala, whereas NMDA EPSCs at immature synapses are slow and blocked by NR2B-selective antagonists, at mature synapses their kinetics are faster and markedly less sensitive to NR2B-selective antagonists, consistent with a change from NR2B to NR2A subunits. Using real-time PCR and Western blotting, we show that in adults the ratio of NR2B to NR2A subunit levels is greater in the central amygdala than in the lateral amygdala. These results show that the subunit compositions of synaptic NMDA receptors in the lateral and central amygdala undergo distinct developmental changes.