885 results for Multiple Instance Dictionary Learning
Abstract:
Students with specific learning disabilities (SLD) typically learn less history content than their peers without disabilities and show fewer learning gains. Even when they are provided with the same instructional strategies, many students with SLD struggle to grasp complex historical concepts and content area vocabulary. Many strategies involving technology have been used in the past to enhance learning for students with SLD in history classrooms. However, very few studies have explored the effectiveness of emerging mobile technology in K-12 history classrooms. This study investigated the effects of mobile devices (iPads) as an active student response (ASR) system on the acquisition of U.S. history content of middle school students with SLD. An alternating treatments single subject design was used to compare the effects of two interventions. There were two conditions and a series of pretest probes in this study. The conditions were: (a) direct instruction and studying from handwritten notes using the interactive notebook strategy and (b) direct instruction and studying using the Quizlet App on the iPad. There were three dependent variables in this study: (a) percent correct on tests, (b) rate of correct responses per minute, and (c) rate of errors per minute. A comparative analysis suggested that both interventions (studying from interactive notes and studying using Quizlet on the iPad) had varying degrees of effectiveness in increasing the learning gains of students with SLD. In most cases, both interventions were equally effective. During both interventions, all of the participants increased their percentage correct and their rate of correct responses. Most of the participants decreased their rate of errors.
The results of this study suggest that teachers of students with SLD should consider a post-lesson review, using either mobile devices as an ASR system or studying from handwritten notes, paired with existing evidence-based practices to facilitate students' knowledge of U.S. history. Future research should focus on the use of other interactive applications on various mobile operating platforms, on other social studies subjects, and should explore various testing formats such as oral question-answer and multiple choice.
Abstract:
This study investigated the effects of repeated readings on the reading abilities of four third-, fourth-, and fifth-grade English language learners (ELLs) with specific learning disabilities (SLD). A multiple baseline probe design across subjects was used to explore the effects of repeated readings on four dependent variables: reading fluency (words read correctly per minute; wpm), number of errors per minute (epm), types of errors per minute, and answers to literal comprehension questions. Data were collected and analyzed during baseline, intervention, generalization probes, and maintenance probes. Throughout the baseline and intervention phases, participants read a passage aloud and received error correction feedback. During baseline, this was followed by fluency and literal comprehension question assessments. During intervention, this was followed by two oral repeated readings of the passage. Then the fluency and literal comprehension question assessments were administered. Generalization probes followed approximately 25% of all sessions and consisted of a single reading of a new passage at the same readability level. Maintenance sessions occurred 2, 4, and 6 weeks after the intervention ended. The results of this study indicated that repeated readings had a positive effect on the reading abilities of ELLs with SLD. Participants read more wpm, made fewer epm, and answered more literal comprehension questions correctly. Additionally, on average, generalization scores were higher in intervention than in baseline. Maintenance scores varied when compared to the last day of intervention; however, with the exception of the number of hesitations committed per minute, maintenance scores were higher than baseline means. This study demonstrated that repeated readings improved the reading abilities of ELLs with SLD and that gains were generalized to untaught passages.
Maintenance probes 2, 4, and 6 weeks following intervention indicated that mean reading fluency, errors per minute, and correct answers to literal comprehension questions remained above baseline levels. Future research should investigate the use of repeated readings with ELLs with SLD at various stages of reading acquisition. Further, future investigations may examine how repeated readings can be integrated into classroom instruction and assessments.
Abstract:
Many culturally and linguistically diverse (CLD) students with specific learning disabilities (SLD) struggle with the writing process. Particularly, they have difficulties developing and expanding ideas, organizing and elaborating sentences, and revising and editing their compositions (Graham, Harris, & Larsen, 2001; Myles, 2002). Computer graphic organizers offer a possible solution to assist them in their writing. This study investigated the effects of a computer graphic organizer on the persuasive writing compositions of Hispanic middle school students with SLD. A multiple baseline design across subjects was used to examine its effects on six dependent variables: number of arguments and supporting details, number and percentage of transferred arguments and supporting details, planning time, writing fluency, syntactical maturity (measured by T-units, the shortest grammatical sentence without fragments), and overall organization. Data were collected and analyzed throughout baseline and intervention. Participants were taught persuasive writing and the writing process prior to baseline. During baseline, participants were given a prompt and asked to use paper and pencil to plan their compositions. A computer was used for typing and editing. Intervention required participants to use a computer graphic organizer for planning and then a computer for typing and editing. The planning sheets and written composition were printed and analyzed daily along with the time each participant spent on planning. The use of computer graphic organizers had a positive effect on the planning and persuasive writing compositions. Increases were noted in the number of supporting details planned, percentage of supporting details transferred, planning time, writing fluency, syntactical maturity in number of T-units, and overall organization of the composition. Minimal to negligible increases were noted in the mean number of arguments planned and written. 
Varying effects were noted in the percentage of transferred arguments, and there was a decrease in the mean T-unit length. This study extends the limited literature on the effects of computer graphic organizers as a prewriting strategy for Hispanic students with SLD. In order to fully gauge the potential of this intervention, future research should investigate the use of different features of computer graphic organizer programs, their effects with other writing genres, and their effects with different populations.
Abstract:
Through this project, conducted in English, we have created a file of radiographic images that will be used by third-year dental students to improve the practical teaching component of the subject of Oral Medicine, essentially by incorporating these files into the Virtual Campus. We selected the most representative radiopaque radiographic images studied in the pathology lectures given. We have prepared a file with 59 radiopaque radiographic images. These lesions have been divided, according to their number and relationship with the tooth, into the following groups: “Anatomic radiopacities”, “Periapical radiopacities”, “Solitary radiopacities not necessarily contacting teeth”, “Multiple separate radiopacities”, and “Generalized radiopacities”. We created 4 flowcharts synthesizing the major explanatory bases of each pathological process in relation to other pathologies within each location. We have focused primarily on those clinical and radiographic features that can help differentiate one pathology from another. We believe that giving the student a knowledge base through each flowchart, as well as providing clinical cases, will spark their curiosity to seek new cases on the Internet or to look for images that we have not been able to locate due to their low frequency. In addition, as this project has been done in English, it will provide students with the necessary tools to do a literature search, as most of the medical and dental literature is in English, thus providing the student with the material necessary to perform appropriate searches using keywords in English.
Abstract:
This paper investigates the use of web-based textbook supplementary teaching and learning materials, which include multiple choice test banks, animated demonstrations, simulations, quizzes and electronic versions of the text. To gauge their experience of the web-based material, students were asked to score the main elements of the material in terms of usefulness. In general, it was found that while the electronic text provides a flexible platform for presentation of material, there is a need for continued monitoring of student use of this material, as the literature suggests that digital viewing habits may mean little time is spent evaluating information for relevance, accuracy or authority. From a lecturer perspective, these materials may provide an effective and efficient way of presenting teaching and learning materials to students in a variety of multimedia formats, but at this stage they do not overcome the need for a VLE such as Blackboard™.
Abstract:
To perform at the highest level, athletes must possess above-average perceptual-cognitive abilities. This faculty, reflected on the field by athletes' vision and game intelligence, allows them to extract the key information from the visual scene. Sport science has long observed perceptual-cognitive expertise within athletes' own sporting environment. Recently, studies have reported that this expertise can also manifest outside that context, for example during everyday activities. Moreover, recent theories about the brain's plasticity have led researchers to develop tools to train athletes' perceptual-cognitive abilities in order to make them more effective on the field. These methods are usually contextual to the targeted discipline. However, a new perceptual-cognitive training tool, called 3-Dimensional Multiple Object Tracking (3D-MOT) and devoid of sporting context, has recently emerged and has been the focus of our research. One of our objectives was to highlight both sport-specific and non-specific perceptual-cognitive expertise in athletes within a single study. We assessed the perception of biological motion in soccer players and non-athletes in a virtual reality room. The athletes were consistently more accurate and faster than novices at discriminating the direction of biological motion, both during a soccer-specific action (a kick) and during an everyday action (walking). These results indicate that athletes have a superior ability to perceive the biological movements performed by others. Playing soccer therefore appears to confer a fundamental advantage that extends beyond sport-specific functions.
These findings parallel athletes' exceptional performance in processing dynamic visual scenes that are likewise devoid of sporting context. Soccer players outperformed novices on the 3D-MOT test, which consists of tracking moving targets and engages perceptual-cognitive abilities. Their visual tracking speed and their learning capacity were superior. These results confirmed data previously obtained with athletes. 3D-MOT is an attentional tracking test that engages the active processing of dynamic visual information, in particular selective, dynamic, and sustained attention as well as working memory. This tool can be used to train athletes' perceptual-cognitive functions. Soccer players trained with 3D-MOT for 30 sessions showed a 15% improvement in passing decision-making on the field compared with players in control groups. These data demonstrate, for the first time, perceptual-cognitive transfer from the laboratory to the field following perceptual-cognitive training that was non-contextual to the targeted athlete's sport. Our research helps explain athletes' expertise through both the specific and non-specific approaches, and also presents perceptual-cognitive training tools, in particular 3D-MOT, for improving performance in elite sport.
Abstract:
Although people frequently pursue multiple goals simultaneously, these goals often conflict with each other. For instance, consumers may have both a healthy eating goal and a goal to have an enjoyable eating experience. In this dissertation, I focus on two sources of enjoyment in eating experiences that may conflict with healthy eating: consuming tasty food (Essay 1) and affiliating with indulging dining companions (Essay 2). In both essays, I examine solutions and strategies that decrease the conflict between healthy eating and these aspects of enjoyment in the eating experience, thereby enabling consumers to resolve such goal conflicts.
Essay 1 focuses on the well-established conflict between having healthy food and having tasty food and introduces a novel product offering (“vice-virtue bundles”) that can help consumers simultaneously address both health and taste goals. Through several experiments, I demonstrate that consumers often choose vice-virtue bundles with small proportions (¼) of vice and that they view such bundles as healthier than but equally tasty as bundles with larger vice proportions, indicating that “healthier” does not always have to equal “less tasty.”
Essay 2 focuses on a conflict between healthy eating and affiliation with indulging dining companions. The first set of experiments provides evidence of this conflict and examines why it arises (Studies 1 to 3). Based on this conflict's origins, the second set of experiments tests strategies that consumers can use to decrease the conflict between healthy eating and affiliation with an indulging dining companion (Studies 4 and 5), such that they can make healthy food choices while still being liked by an indulging dining companion. Thus, Essay 2 broadens the existing picture of goals that conflict with the healthy eating goal and, together with Essay 1, identifies solutions to such goal conflicts.
Abstract:
Constant technology advances have caused a data explosion in recent years. Accordingly, modern statistical and machine learning methods must be adapted to deal with complex and heterogeneous data types. This phenomenon is particularly true for analyzing biological data. For example, DNA sequence data can be viewed as categorical variables, with each nucleotide taking four different categories. Gene expression data, depending on the quantitative technology, could be continuous numbers or counts. With the advancement of high-throughput technology, the abundance of such data becomes unprecedentedly rich. Therefore, efficient statistical approaches are crucial in this big data era.
Previous statistical methods for big data often aim to find low dimensional structures in the observed data. For example, in a factor analysis model, a latent Gaussian distributed multivariate vector is assumed. With this assumption, a factor model produces a low rank estimation of the covariance of the observed variables. Another example is the latent Dirichlet allocation model for documents, in which the mixture proportions of topics, represented by a Dirichlet distributed variable, are assumed. This dissertation proposes several novel extensions to these statistical methods, developed to address challenges in big data. The novel methods are applied in multiple real-world applications, including construction of condition-specific gene co-expression networks, estimating shared topics among newsgroups, analysis of promoter sequences, analysis of political-economic risk data, and estimating population structure from genotype data.
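The low-rank covariance structure mentioned above can be made concrete. The sketch below is illustrative only (not code from the dissertation, and all dimensions and values are made-up assumptions): under a factor model x = Λz + ε, the implied covariance is ΛΛᵀ + Ψ, which the empirical covariance approaches as the sample grows.

```python
import numpy as np

rng = np.random.default_rng(0)
p, k, n = 20, 3, 20000          # observed dim, latent dim, sample size

Lam = rng.normal(size=(p, k))   # factor loadings (made-up values)
psi = 0.1 * np.ones(p)          # idiosyncratic noise variances

# Simulate from the factor model: x = Lam z + eps
z = rng.normal(size=(n, k))
eps = rng.normal(scale=np.sqrt(psi), size=(n, p))
x = z @ Lam.T + eps

# Model-implied covariance is low rank plus diagonal
implied = Lam @ Lam.T + np.diag(psi)
empirical = np.cov(x, rowvar=False)
print(np.abs(implied - empirical).max())   # shrinks as n grows
```

The point of the sketch is the rank structure: a p×p covariance is summarized by p×k loadings plus p noise terms, which is the kind of low-dimensional representation the abstract refers to.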
Abstract:
Subspaces and manifolds are two powerful models for high dimensional signals. Subspaces model linear correlation and are a good fit to signals generated by physical systems, such as frontal images of human faces and multiple sources impinging at an antenna array. Manifolds model sources that are not linearly correlated, but where signals are determined by a small number of parameters. Examples are images of human faces under different poses or expressions, and handwritten digits with varying styles. However, there will always be some degree of model mismatch between the subspace or manifold model and the true statistics of the source. This dissertation exploits subspace and manifold models as prior information in various signal processing and machine learning tasks.
A near-low-rank Gaussian mixture model measures proximity to a union of linear or affine subspaces. This simple model can effectively capture the signal distribution when each class is near a subspace. This dissertation studies how the pairwise geometry between these subspaces affects classification performance. When model mismatch is vanishingly small, the probability of misclassification is determined by the product of the sines of the principal angles between subspaces. When the model mismatch is more significant, the probability of misclassification is determined by the sum of the squares of the sines of the principal angles. Reliability of classification is derived in terms of the distribution of signal energy across principal vectors. Larger principal angles lead to smaller classification error, motivating a linear transform that optimizes principal angles. This linear transformation, termed TRAIT, also preserves some specific features in each class, being complementary to a recently developed Low Rank Transform (LRT). Moreover, when the model mismatch is more significant, TRAIT shows superior performance compared to LRT.
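As a hedged illustration of the geometric quantity discussed above (the dissertation's own definitions may differ in detail), the principal angles between two subspaces can be computed from the singular values of the product of orthonormal bases:

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles (radians) between the column spaces of A and B."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    # Singular values of Qa^T Qb are the cosines of the principal angles
    s = np.clip(np.linalg.svd(Qa.T @ Qb, compute_uv=False), -1.0, 1.0)
    return np.arccos(s)

# Two 2-D subspaces of R^4 sharing exactly one direction
A = np.array([[1., 0.], [0., 1.], [0., 0.], [0., 0.]])
B = np.array([[1., 0.], [0., 0.], [0., 1.], [0., 0.]])
theta = principal_angles(A, B)
print(theta)                   # one angle 0 (shared direction), one pi/2

# Product of sines of the angles: the quantity the abstract ties to
# misclassification probability in the small-mismatch regime
print(np.prod(np.sin(theta)))  # 0.0 here, since one direction is shared
```

A shared direction drives the product of sines to zero, matching the intuition that overlapping subspaces are hard to discriminate, while larger angles make the classes easier to separate.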
The manifold model enforces a constraint on the freedom of data variation. Learning features that are robust to data variation is very important, especially when the size of the training set is small. A learning machine with a large number of parameters, e.g., a deep neural network, can describe a very complicated data distribution well. However, it is also more likely to be sensitive to small perturbations of the data, and to suffer from degraded performance when generalizing to unseen (test) data.
From the perspective of the complexity of function classes, such a learning machine has a huge capacity (complexity), which tends to overfit. The manifold model provides us with a way of regularizing the learning machine so as to reduce the generalization error, thereby mitigating overfitting. Two different overfitting-preventing approaches are proposed, one from the perspective of data variation, the other from capacity/complexity control. In the first approach, the learning machine is encouraged to make decisions that vary smoothly for data points in local neighborhoods on the manifold. In the second approach, a graph adjacency matrix is derived for the manifold, and the learned features are encouraged to be aligned with the principal components of this adjacency matrix. Experimental results on benchmark datasets are presented, showing a clear advantage of the proposed approaches when the training set is small.
Stochastic optimization makes it possible to track a slowly varying subspace underlying streaming data. By approximating local neighborhoods using affine subspaces, a slowly varying manifold can be efficiently tracked as well, even with corrupted and noisy data. The more local neighborhoods, the better the approximation, but the higher the computational complexity. A multiscale approximation scheme is proposed, where the local approximating subspaces are organized in a tree structure. Splitting and merging of the tree nodes then allows efficient control of the number of neighborhoods. Deviation (of each datum) from the learned model is estimated, yielding a series of statistics for anomaly detection. This framework extends the classical changepoint detection technique, which only works for one dimensional signals. Simulations and experiments highlight the robustness and efficacy of the proposed approach in detecting an abrupt change in an otherwise slowly varying low-dimensional manifold.
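The per-datum deviation statistic described above can be sketched as follows. This is a toy version with a single fixed subspace standing in for a learned local model, not the multiscale tracker itself; all dimensions and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# A fixed 2-D subspace of R^10 standing in for a locally learned model
U, _ = np.linalg.qr(rng.normal(size=(10, 2)))

def deviation(x, U):
    """Norm of the residual of x after projection onto span(U)."""
    return np.linalg.norm(x - U @ (U.T @ x))

# In-model point: lies in span(U) up to small noise -> small deviation
inlier = U @ rng.normal(size=2) + 0.01 * rng.normal(size=10)
# Abrupt change: a point far from the subspace -> large deviation
outlier = 5.0 * rng.normal(size=10)

print(deviation(inlier, U) < deviation(outlier, U))   # True
```

Thresholding this residual norm over a stream gives a simple anomaly statistic of the kind the framework generalizes to tracked, multiscale models.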
Abstract:
This paper is a case study that describes the design and delivery of national PhD lectures with 40 PhD candidates in Digital Arts and Humanities in Ireland, delivered simultaneously to four remote locations: Trinity College Dublin, University College Cork, NUI Maynooth and NUI Galway. Blended learning approaches were utilized to augment traditional teaching practices, combining face-to-face engagement; video-conferencing to multiple sites; social media support for lecture delivery (a live blog and micro-blogging); and a shared, open student web presence online. Techniques for creating an effective, active learning environment were discerned via a range of learning options offered to students through student surveys after semester one. Students rejected the traditional lecture format, even though a novel delivery method, via video link to a number of national academic institutions, was employed. Students also rejected the use of a moderated forum as a means of creating engagement across the various institutions involved. Students preferred a mix of approaches for this online national engagement. The paper discusses successful methods used to promote interactive teaching and learning, including peer-to-peer learning, workshop-style delivery, and social media. The lecture became a national, synchronous workshop. The paper describes how, when students are allowed a voice in the virtual classroom, they become animated and engaged in an open culture of shared experience and scholarship, create networks beyond their institutions, and cross disciplinary boundaries. We offer an analysis of our experiences to assist other educators in their course design, with a particular emphasis on social media engagement.
Abstract:
This work explores the use of statistical methods in describing and estimating camera poses, as well as the information feedback loop between camera pose and object detection. Surging development in robotics and computer vision has pushed the need for algorithms that infer, understand, and utilize information about the position and orientation of the sensor platforms when observing and/or interacting with their environment.
The first contribution of this thesis is the development of a set of statistical tools for representing and estimating the uncertainty in object poses. A distribution for representing the joint uncertainty over multiple object positions and orientations is described, called the mirrored normal-Bingham distribution. This distribution generalizes both the normal distribution in Euclidean space, and the Bingham distribution on the unit hypersphere. It is shown to inherit many of the convenient properties of these special cases: it is the maximum-entropy distribution with fixed second moment, and there is a generalized Laplace approximation whose result is the mirrored normal-Bingham distribution. This distribution and approximation method are demonstrated by deriving the analytical approximation to the wrapped-normal distribution. Further, it is shown how these tools can be used to represent the uncertainty in the result of a bundle adjustment problem.
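For orientation, the classical one-dimensional Laplace approximation, a special case of the generalized construction described above, can be sketched as follows. The target density here is an arbitrary illustrative choice, not one from the thesis:

```python
import numpy as np

def log_target(x):
    # Illustrative unnormalized log-density: log(x^2 e^{-x}), a Gamma(3,1) shape
    return 2 * np.log(x) - x

# Mode: d/dx (2 log x - x) = 2/x - 1 = 0  ->  x* = 2
x_star = 2.0

# Curvature at the mode via a central second difference
h = 1e-4
curv = (log_target(x_star + h) - 2 * log_target(x_star) + log_target(x_star - h)) / h**2

# Laplace approximation: Gaussian with mean x*, variance -1/curvature
mean, var = x_star, -1.0 / curv
print(mean, round(var, 3))   # N(2, 2): analytic curvature is -2/x*^2 = -0.5
```

The generalized version replaces the Gaussian with the mirrored normal-Bingham family, so the same mode-plus-curvature recipe can be applied to joint position-orientation uncertainty.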
Another application of these methods is illustrated as part of a novel camera pose estimation algorithm based on object detections. The autocalibration task is formulated as a bundle adjustment problem using prior distributions over the 3D points to enforce the objects' structure and their relationship with the scene geometry. This framework is very flexible and enables the use of off-the-shelf computational tools to solve specialized autocalibration problems. Its performance is evaluated using a pedestrian detector to provide head and foot location observations, and it proves much faster and potentially more accurate than existing methods.
Finally, the information feedback loop between object detection and camera pose estimation is closed by utilizing camera pose information to improve object detection in scenarios with significant perspective warping. Methods are presented that allow the inverse perspective mapping traditionally applied to images to be applied instead to features computed from those images. For the special case of HOG-like features, which are used by many modern object detection systems, these methods are shown to provide substantial performance benefits over unadapted detectors while achieving real-time frame rates, orders of magnitude faster than comparable image warping methods.
The statistical tools and algorithms presented here are especially promising for mobile cameras, providing the ability to autocalibrate and adapt to the camera pose in real time. In addition, these methods have wide-ranging potential applications in diverse areas of computer vision, robotics, and imaging.
Abstract:
This dissertation contributes to the rapidly growing empirical research area in the field of operations management. It contains two essays, tackling two different sets of operations management questions which are motivated by and built on field data sets from two very different industries: air cargo logistics and retailing.
The first essay, based on the data set obtained from a world leading third-party logistics company, develops a novel and general Bayesian hierarchical learning framework for estimating customers' spillover learning, that is, customers' learning about the quality of a service (or product) from their previous experiences with similar yet not identical services. We then apply our model to the data set to study how customers' experiences from shipping on a particular route affect their future decisions about shipping not only on that route, but also on other routes serviced by the same logistics company. We find that customers indeed borrow experiences from similar but different services to update their quality beliefs that determine future purchase decisions. Also, service quality beliefs have a significant impact on their future purchasing decisions. Moreover, customers are risk averse; they are averse to not only experience variability but also belief uncertainty (i.e., customer's uncertainty about their beliefs). Finally, belief uncertainty affects customers' utilities more compared to experience variability.
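The spillover-learning idea above rests on Bayesian belief updating. The following is a deliberately simplified sketch, not the essay's hierarchical model: a single-route, conjugate normal-normal update in which each experience shifts the customer's quality belief and shrinks its uncertainty. All numbers are illustrative assumptions.

```python
# Simplified sketch (not the essay's model): normal-normal Bayesian updating
# of a customer's belief about one route's service quality.
mu, tau2 = 0.0, 1.0        # prior mean and variance of the quality belief
sigma2 = 0.5               # variance of a single service experience

for outcome in [0.8, 1.1, 0.9]:   # observed experiences on the route
    post_var = 1.0 / (1.0 / tau2 + 1.0 / sigma2)    # conjugate update
    mu = post_var * (mu / tau2 + outcome / sigma2)
    tau2 = post_var

print(round(mu, 3), round(tau2, 3))   # 0.8 0.143: mean rises, uncertainty shrinks
```

The hierarchical framework in the essay goes further by letting experiences on one route inform beliefs about similar routes; the shrinking posterior variance here corresponds to the "belief uncertainty" that the essay finds customers are averse to.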
The second essay is based on a data set obtained from a large Chinese supermarket chain, which contains sales as well as both wholesale and retail prices of unpackaged perishable vegetables. Recognizing the special characteristics of this particular product category, we develop a structural estimation model in a discrete-continuous choice framework. Building on this framework, we then study an optimization model for joint pricing and inventory management of multiple products, which aims at improving the company's profit from direct sales while reducing food waste and thus improving social welfare.
Collectively, the studies in this dissertation provide useful modeling ideas, decision tools, insights, and guidance for firms to utilize vast sales and operations data to devise more effective business strategies.
Abstract:
Bayesian methods offer a flexible and convenient probabilistic learning framework to extract interpretable knowledge from complex and structured data. Such methods can characterize dependencies among multiple levels of hidden variables and share statistical strength across heterogeneous sources. In the first part of this dissertation, we develop two dependent variational inference methods for full posterior approximation in non-conjugate Bayesian models through hierarchical mixture- and copula-based variational proposals, respectively. The proposed methods move beyond the widely used factorized approximation to the posterior and provide generic applicability to a broad class of probabilistic models with minimal model-specific derivations. In the second part of this dissertation, we design probabilistic graphical models to accommodate multimodal data, describe dynamical behaviors and account for task heterogeneity. In particular, the sparse latent factor model is able to reveal common low-dimensional structures from high-dimensional data. We demonstrate the effectiveness of the proposed statistical learning methods on both synthetic and real-world data.
Abstract:
Bioscience subjects require a significant amount of training in laboratory techniques to produce highly skilled science graduates. Many techniques currently used in diagnostic, research and industrial laboratories require expensive single-user equipment; examples include next generation sequencing, quantitative PCR, mass spectrometry and other analytical techniques. The cost of the machines and reagents, together with limited access, frequently precludes undergraduate students from using such cutting edge techniques. In addition to cost and availability, the time taken for analytical runs on equipment such as High Performance Liquid Chromatography (HPLC) does not necessarily fit within the limitations of timetabling. Understanding the theory underlying these techniques without the accompanying practical classes can be unexciting for students. One alternative to wet laboratory provision is to use virtual simulations of such practicals, which enable students to see the machines and interact with them to generate data. The Faculty of Science and Technology at the University of Westminster has provided all second and third year undergraduate students with iPads so that these students all have access to a mobile device to assist with learning. We have purchased licences from Labster to access a range of virtual laboratory simulations. These virtual laboratories are fully equipped and require student responses to multiple answer questions in order to progress through the experiment. In a pilot study to look at the feasibility of the Labster virtual laboratory simulations on the iPad devices, second year Biological Science students (n=36) worked through the Labster HPLC simulation on iPads. The virtual HPLC simulation enabled students to optimise the conditions for the separation of drugs. Answers to multiple choice questions were necessary to progress through the simulation; these focussed on the underlying principles of the HPLC technique.
Following the virtual laboratory simulation, students went to a real HPLC in the analytical suite in order to separate aspirin, caffeine and paracetamol. In a survey, 100% of students (n=36) in this cohort agreed that the Labster virtual simulation had helped them to understand HPLC. In free text responses, one student commented that "The terminology is very clear and I enjoyed using Labster very much”. One member of staff commented that “there was a very good knowledge interaction with the virtual practical”.
Abstract:
Over the last couple of years there has been a lot of attention for MOOCs, and more and more universities have started offering them. Although the open dimension of MOOCs suggests that they are open in every aspect, in most cases a MOOC is a course with a structure and a timeline within which learning activities are positioned. There is a contradiction there: the open aspect places MOOCs in the non-formal professional learning domain, while the course structure takes them into the formal, traditional education domain. Accordingly, there is no consensus yet on solid pedagogical approaches for MOOCs. Something similar can be said for learning analytics, another emerging concept that is receiving a lot of attention. Given its nature, learning analytics offers large potential to support learners, in particular in MOOCs. Learning analytics should then be applied to assist learners and teachers in understanding the learning process; it could predict learning, provide opportunities for proactive feedback, and result in interventions aimed at improving progress. This paper illustrates pedagogical and learning analytics approaches based on practices developed in formal online and distance teaching university education that have been fine-tuned for MOOCs and piloted in the context of the EU-funded MOOC projects ECO (Elearning, Communication, Open-Data: http://ecolearning.eu) and EMMA (European Multiple MOOC Aggregator: http://platform.europeanmoocs.eu).