685 results for Computer Based Learning System
Abstract:
There is a growing societal need to address the increasing prevalence of behavioral health issues, such as obesity, alcohol or drug use, and general lack of treatment adherence for a variety of health problems. The statistics, worldwide and in the USA, are daunting. Excessive alcohol use is the third leading preventable cause of death in the United States (with 79,000 deaths annually), and is responsible for a wide range of health and social problems. On the positive side, though, these behavioral health issues (and associated possible diseases) can often be prevented with relatively simple lifestyle changes, such as losing weight through diet and/or physical exercise, or learning how to reduce alcohol consumption. Medicine has therefore started to move toward finding ways of preventively promoting wellness, rather than solely treating already established illness. Evidence-based patient-centered Brief Motivational Interviewing (BMI) interventions have been found particularly effective in helping people find intrinsic motivation to change problem behaviors after short counseling sessions, and to maintain healthy lifestyles over the long term. Lack of locally available personnel well-trained in BMI, however, often limits access to successful interventions for people in need. To fill this accessibility gap, Computer-Based Interventions (CBIs) have started to emerge. Success of CBIs, however, critically relies on ensuring engagement and retention of CBI users so that they remain motivated to use these systems and come back to use them over the long term as necessary. Because of their text-only interfaces, however, current CBIs can only express limited empathy and rapport, which are among the most important factors in health interventions. Fortunately, in the last decade, computer science research has progressed in the design of simulated human characters with anthropomorphic communicative abilities.
Virtual characters interact using humans' innate communication modalities, such as facial expressions, body language, speech, and natural language understanding. By advancing research in Artificial Intelligence (AI), we can improve the ability of artificial agents to help us solve CBI problems. To facilitate successful communication and social interaction between artificial agents and human partners, it is essential that aspects of human social behavior, especially empathy and rapport, be considered when designing human-computer interfaces. Hence, the goal of the present dissertation is to provide a computational model of rapport to enhance an artificial agent's social behavior, and to provide an experimental tool for the psychological theories shaping the model. Parts of this thesis were already published in [LYL+12, AYL12, AL13, ALYR13, LAYR13, YALR13, ALY14].
Abstract:
The first report commissioned by Ufi Charitable Trust. It investigates opportunities for and barriers to the application of digital technology to adult learning. It focuses on possible ways to transform the UK's vocational education and training system, identifying three main priorities for funding by the Ufi Charitable Trust:
* increasing the capability of those involved in running the vocational learning system
* exploiting networks to bring together learners, learning content and learning professionals
* harnessing computers to support individualised and differentiated learning.
Abstract:
Deep learning methods are extremely promising machine learning tools for analyzing neuroimaging data. However, their potential use in clinical settings is limited because of the existing challenges of applying these methods to neuroimaging data. In this study, first, a type of data leakage caused by slice-level data splitting during training and validation of a 2D CNN is surveyed, and a quantitative assessment of the resulting overestimation of the model's performance is presented. Second, an interpretable, leakage-free deep learning software package, written in Python with a wide range of options, has been developed to conduct both classification and regression analysis. The software was applied to the study of mild cognitive impairment (MCI) in patients with small vessel disease (SVD) using multi-parametric MRI data, where the cognitive performance of 58 patients, measured by five neuropsychological tests, is predicted using a multi-input CNN model that takes brain images and demographic data as input. Each of the cognitive test scores was predicted using different MRI-derived features. As MCI due to SVD has been hypothesized to be the effect of white matter damage, the DTI-derived features MD and FA produced the best prediction of the TMT-A score, which is consistent with the existing literature. In a second study, an interpretable deep learning system is developed that aims to 1) classify Alzheimer's disease patients and healthy subjects, 2) examine the neural correlates of the disease that causes cognitive decline in AD patients using CNN visualization tools, and 3) highlight the potential of interpretability techniques to detect a biased deep learning model. Structural magnetic resonance imaging (MRI) data of 200 subjects was used by the proposed CNN model, which was trained using a transfer learning-based approach and produced a balanced accuracy of 71.6%. Brain regions in the frontal and parietal lobes showing cerebral cortex atrophy were highlighted by the visualization tools.
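The slice-level leakage this abstract describes is easy to reproduce and to avoid: when every subject contributes many 2D slices, the train/test split must be made over subjects, not slices. The sketch below illustrates the general idea only; it is not taken from the software described in the abstract, and the function name and record layout are assumptions.

```python
import random

def subject_level_split(slice_records, test_fraction=0.3, seed=0):
    """Split 2D slices into train/test sets by subject, never by slice.

    A slice-level split leaks data: near-identical slices from the same
    brain land in both sets, so validation accuracy is overestimated.
    `slice_records` is a list of (subject_id, slice_data) pairs.
    """
    subjects = sorted({sid for sid, _ in slice_records})
    random.Random(seed).shuffle(subjects)
    n_test = max(1, int(len(subjects) * test_fraction))
    held_out = set(subjects[:n_test])
    train = [r for r in slice_records if r[0] not in held_out]
    test = [r for r in slice_records if r[0] in held_out]
    return train, test

# 4 subjects x 3 slices each; whole subjects end up on one side only.
records = [(subj, f"slice_{subj}_{i}") for subj in range(4) for i in range(3)]
train_set, test_set = subject_level_split(records)
```

Holding out whole subjects is what makes the reported performance an honest estimate of generalization to new patients.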
Abstract:
Recent technological advancements have played a key role in seamlessly integrating cloud, edge, and Internet of Things (IoT) technologies, giving rise to the Cloud-to-Thing Continuum paradigm. This cloud model connects many heterogeneous resources that generate a large amount of data and collaborate to deliver next-generation services. While it has the potential to reshape several application domains, the number of connected entities markedly broadens the security attack surface. One of the main problems is the lack of security measures able to adapt to the dynamic and evolving conditions of the Cloud-to-Thing Continuum. To address this challenge, this dissertation proposes novel adaptable security mechanisms. Adaptable security is the capability of security controls, systems, and protocols to dynamically adjust to changing conditions and scenarios. However, since the design and development of novel security mechanisms can be explored from different perspectives and levels, we place our attention on threat modeling and access control. The contributions of the thesis can be summarized as follows. First, we introduce a model-based methodology that secures the design of edge and cyber-physical systems. This solution identifies threats, security controls, and moving target defense techniques based on system features. Then, we focus on access control management. Since access control policies are subject to modification, we evaluate how they can be efficiently shared among distributed areas, highlighting the effectiveness of distributed ledger technologies. Furthermore, we propose a risk-based authorization middleware, adjusting permissions based on real-time data, and a federated learning framework that enhances trustworthiness by weighting each client's contributions according to the quality of their partial models.
Finally, since authorization revocation is another critical concern, we present an efficient revocation scheme for verifiable credentials in IoT networks that is decentralized and demands minimal storage and computing capabilities. All the mechanisms have been evaluated under different conditions, proving their adaptability to the Cloud-to-Thing Continuum landscape.
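The risk-based authorization idea mentioned in this abstract, permissions that tighten as observed risk rises, can be sketched as a simple policy decision function. This is a generic illustration, not the thesis's actual middleware; the names, thresholds, and decision labels are assumptions.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    subject: str
    resource: str
    risk_score: float  # fused from real-time signals, normalized to [0, 1]

def authorize(req: AccessRequest, deny_above: float = 0.8,
              stepup_above: float = 0.5) -> str:
    """Risk-adaptive decision: the same request yields different outcomes
    depending on the real-time risk estimate attached to it."""
    if req.risk_score >= deny_above:
        return "deny"
    if req.risk_score >= stepup_above:
        return "require_step_up"  # e.g. demand a second authentication factor
    return "permit"
```

The design point is that the policy itself stays fixed while the decision adapts, because risk is re-evaluated on every request.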
Abstract:
Recent experiments have revealed the fundamental importance of neuromodulatory action on activity-dependent synaptic plasticity underlying behavioral learning and spatial memory formation. Neuromodulators affect synaptic plasticity through the modification of the dynamics of receptors on the synaptic membrane. However, chemical substances other than neuromodulators, such as receptor co-agonists, can influence receptor dynamics and thus participate in determining plasticity. Here we focus on D-serine, which has been observed to affect the activity thresholds of synaptic plasticity by co-activating NMDA receptors. We use a computational model for spatial value learning with plasticity between two place cell layers. The D-serine release is CB1R-mediated, and the model reproduces the impairment of spatial memory due to the astrocytic CB1R knockout for a mouse navigating in the Morris water maze. The addition of path-constraining obstacles shows how performance impairment depends on the environment's topology. The model can explain the experimental evidence and produce useful testable predictions to increase our understanding of the complex mechanisms underlying learning.
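The mechanism this abstract describes, a co-agonist such as D-serine shifting the activity thresholds of synaptic plasticity, can be illustrated with a toy update rule. This is a generic threshold-modulated Hebbian sketch, not the place-cell model of the cited study; all names and constants are illustrative assumptions.

```python
def weight_update(w, pre, post, co_agonist, lr=0.01, base_theta=0.5):
    """Hebbian-style update with a plasticity threshold that shifts with
    a co-agonist level (e.g. D-serine acting at NMDA receptors).

    Higher co-agonist availability lowers the threshold `theta`, so the
    same pre/post activity more easily produces potentiation.
    """
    theta = base_theta * (1.0 - co_agonist)   # co-agonist lowers the threshold
    dw = lr * pre * (post - theta)            # potentiate above, depress below
    return max(0.0, w + dw)                   # keep the weight non-negative
```

With identical pre- and post-synaptic activity, a higher co-agonist level yields a larger weight increase, which is the qualitative effect the model attributes to D-serine.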
Abstract:
Objective: We carry out a systematic assessment of a suite of kernel-based learning machines on the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. Methods and materials: The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of the criteria of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. Results: We first quantitatively assess the impact of the choice of the wavelet basis on the quality of the features extracted. Four wavelet basis functions were considered in this study. Then, we provide the average accuracy values (estimated via cross-validation) delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, whereby one can visually inspect their levels of sensitivity to the type of feature and to the kernel function/parameter value.
Conclusions: Overall, the results indicate that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value, as well as the choice of the feature extractor, are critical decisions, although the choice of the wavelet family seems not to be so relevant. Also, the statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile emerged among all types of machines, involving regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality). (C) 2011 Elsevier B.V. All rights reserved.
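The two kernel functions whose radius parameter the study sweeps have simple closed forms, and the sensitivity to the radius is easy to see directly. The sketch below is a generic implementation of these standard kernels, not the authors' code:

```python
import math

def gaussian_rbf(x, y, radius):
    """Gaussian RBF: k(x, y) = exp(-||x - y||^2 / (2 * radius^2))."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq_dist / (2.0 * radius ** 2))

def exponential_rbf(x, y, radius):
    """Exponential RBF: k(x, y) = exp(-||x - y|| / radius)."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    return math.exp(-dist / radius)

# Sensitivity to the radius: a small radius makes similarity decay
# sharply with distance, a large radius makes it nearly flat.
k_narrow = gaussian_rbf([0.0], [1.0], radius=0.5)  # exp(-2)
k_wide = gaussian_rbf([0.0], [1.0], radius=2.0)    # exp(-1/8)
```

This steep dependence on the radius is why the study observes zones of sharp variation separated by regions of stability across the 26 parameter values.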
Abstract:
A new digital computer mock circulatory system has been developed to replicate the physiologic and pathophysiologic characteristics of the human cardiovascular system. The computer performs the acquisition of pressure, flow, and temperature in an open-loop system. A computer program was developed in the LabVIEW programming environment to evaluate all these physical parameters. The acquisition system was composed of pressure, flow, and temperature sensors, as well as signal conditioning modules. In this study, results for flow, cardiac frequency, pressure, and temperature were evaluated according to physiologic ventricular states, and the results were compared with literature data. In future work, performance investigations will be conducted on a ventricular assist device and an endoprosthesis. The device should also allow for the evaluation of several kinds of vascular diseases.
Abstract:
Conferences that deliver interactive sessions designed to enhance physician participation, such as role play, small discussion groups, workshops, hands-on training, problem- or case-based learning and individualised training sessions, are effective for physician education.
Abstract:
This study describes a coding system developed to operationalize the sociolinguistic strategies proposed by communication accommodation theory (CAT) in an academic context. Fifty interactions between two students (of Australian or Chinese ethnic background) or a student and a faculty member were videotaped. A turn- and episode-based coding system was developed, focusing on verbal and nonverbal behavior. The development of this system is described in detail, before results are presented. Results indicated that status was the main influence on the choice of strategies, particularly the extent and type of discourse management and interpersonal control. Participants' sex and ethnicity also played a role: male participants made more use of interpretability (largely questions), whereas female participants used discourse management to develop a shared perspective. The results make clear that there is no automatic correspondence between behaviors and the strategies they constitute, and they point to the appropriateness of conceptualizing behavior and strategies separately in CAT.
Abstract:
A course with a large student enrolment puts a heavy load on instructors in both the presentation and assessment areas. In the School of Economics at the University of Queensland, this is the case for the quantitative analysis subjects. Assessment for many years has been through mid-semester and end-of-semester exams, as well as Computer Managed Learning (CML) assignments. In 2000 it was decided to incorporate a system of flexible assessment in which neither the CML assignments nor the mid-semester exam was compulsory. The outcomes are assessed and the advantages and disadvantages discussed.
Abstract:
The personal computer revolution has resulted in the widespread availability of low-cost image analysis hardware. At the same time, new graphic file formats have made it possible to handle and display images at resolutions beyond the capability of the human eye. Consequently, there has been a significant research effort in recent years aimed at making use of these hardware and software technologies for flotation plant monitoring. Computer-based vision technology is now moving out of the research laboratory and into the plant to become a useful means of monitoring and controlling flotation performance at the cell level. This paper discusses the metallurgical parameters that influence surface froth appearance and examines the progress that has been made in image analysis of flotation froths. The texture spectrum and pixel tracing techniques developed at the Julius Kruttschnitt Mineral Research Centre are described in detail. The commercial implementation, JKFrothCam, is one of a number of froth image analysis systems now reaching maturity. In plants where it is installed, JKFrothCam has shown a number of performance benefits. Flotation runs more consistently, meeting product specifications while maintaining high recoveries. The system has also shown secondary benefits in that reagent costs have been significantly reduced as a result of improved flotation control. (C) 2002 Elsevier Science B.V. All rights reserved.
Abstract:
Incursions of Japanese encephalitis (JE) virus into northern Queensland are currently monitored using sentinel pigs. However, the maintenance of these pigs is expensive, and because pigs are the major amplifying hosts of the virus, they may contribute to JE transmission. Therefore, we evaluated a mosquito-based detection system to potentially replace the sentinel pigs. Single, inactivated JE-infected Culex annulirostris Skuse and C. sitiens Wiedemann were placed into pools of uninfected mosquitoes that were housed in a Mosquito Magnet Pro (MM) trap set under wet season field conditions in Cairns, Queensland for 0, 7, or 14 d. JE viral RNA was detected (cycling threshold [CT] = 40) in 11/12, 10/14, and 2/5 pools containing 200, 1,000, and 5,000 mosquitoes, respectively, using a TaqMan real-time reverse transcription-polymerase chain reaction (RT-PCR). The ability to detect virus was not affected by the length of time pools were maintained under field conditions, although the CT score tended to increase with field exposure time. Furthermore, JE viral RNA was detected in three pools of 1,000 mosquitoes collected from Badu Island using a MM trap. These results indicated that a mosquito trap system employing self-powered traps, such as the Mosquito Magnet, and a real-time PCR system could be used to monitor for JE in remote areas.
Abstract:
A questionnaire on lectures was completed by 351 students (84% response) and 35 staff (76% response) from all five years of the veterinary course at the University of Queensland. Staff and students in all five years offered limited support for a reduction in the number of lectures in the course and the majority supported a reduction in the number of lectures in the clinical years. Students in the clinical years only and appropriate staff agreed that the number of lectures in fifth year should be reduced but were divided as to whether lectures in fifth year should be abolished. There was limited support for replacement of some lectures by computer assisted learning (CAL) programs, but strong support for replacement of some lectures by subject-based problem based learning (PBL) and strong support for more self-directed learning by students. Staff and students strongly supported the inclusion of more clinical problem solving in lectures in the clinical years and wanted these lectures to be more interactive. There was little support for lectures in the clinical years to be of the same type as in the preclinical years.
Abstract:
Purpose: Precise needle puncture of the renal collecting system is an essential but challenging step for successful percutaneous nephrolithotomy. We evaluated the efficiency of a new real-time electromagnetic tracking system for in vivo kidney puncture. Materials and Methods: Six anesthetized female pigs underwent ureterorenoscopy to place a catheter with an electromagnetic tracking sensor into the desired puncture site and ascertain puncture success. A tracked needle with a similar electromagnetic tracking sensor was subsequently navigated into the sensor in the catheter. Four punctures were performed by each of 2 surgeons in each pig, including 1 each in the kidney, middle ureter, and right and left sides. Outcome measurements were the number of attempts and the time needed to evaluate the virtual trajectory and perform percutaneous puncture. Results: A total of 24 punctures were easily performed without complication. Surgeons required more time to evaluate the trajectory during ureteral than kidney puncture (median 15 seconds, range 14 to 18 vs 13, range 11 to 16, p = 0.1). Median renal and ureteral puncture time was 19 (range 14 to 45) and 51 seconds (range 45 to 67), respectively (p = 0.003). Two attempts were needed to achieve a successful ureteral puncture. The technique requires the presence of a renal stone for testing. Conclusions: The proposed electromagnetic tracking solution for renal collecting system puncture proved to be highly accurate, simple and quick. This method might represent a paradigm shift in percutaneous kidney access techniques.
Abstract:
Master's degree in Radiation Applied to Health Technologies. Area of specialization: Radiation Protection