797 results for Statistical Learning Theory.
Abstract:
A significant number of adults in adult literacy programs in Ontario have specific learning difficulties. This study examined the holistic factors that contributed to these learners achieving their goals. Through a case study design, the data revealed that a combination of specific learning methods and strategies, particular characteristics of the instructor, participant, and class, and evidence of self-transformation all seemed to contribute to the participant's success in the program. Instructor-directed teaching and cooperative learning were the main learning methods used in the class. General learning strategies included the use of core curriculum and authentic documents, phonics, repetition, assistive resources, and activities that appealed to various learning styles. The instructor had both professional development in the area of learning disabilities and experience working with learners who had specific learning difficulties. There also seemed to be a goodness of fit between the participant and the instructor. Several characteristics of the participant seemed to aid in his success: his positive self-esteem, self-advocacy skills, self-determination, self-awareness, and the fact that he enjoyed learning. The size (3-5 people) and type of class (small group) also seemed to have an impact. Finally, evidence that the participant went through a self-transformation seemed to contribute to a positive learner identity. These results have implications for practice, theory, and further research in adult education.
Abstract:
This paper captured our joint journey to create a living educational theory of knowledge translation (KT). The failure to translate research knowledge to practice is identified as a significant issue in the nursing profession. Our research story takes a critical view of KT related to the philosophical inconsistency between what is espoused in the knowledge related to the discipline of nursing and what is done in practice. Our inquiry revealed “us” as “living contradictions” as our practice was not aligned with our values. In this study, we specifically explored our unique personal KT process in order to understand the many challenges and barriers to KT we encountered in our professional practice as nurse educators. Our unique collaborative action research approach involved cycles of action, reflection, and revision which used our values as standards of judgment in an effort to practice authentically. Our data analysis revealed key elements of collaborative reflective dialogue that evoke multiple ways of knowing, inspire authenticity, and improve learning as the basis of improving practice related to KT. We validated our findings through personal and social validation procedures. Our contribution to a culture of inquiry allowed for co-construction of knowledge to reframe our understanding of KT as a holistic, active process which reflects the essence of who we are and what we do.
Abstract:
As institutions of higher education struggle to stay relevant, competitive, accessible, and flexible, they are scrambling to attend to a shift in focus for new students. This shift involves experiential learning. The purpose of this major research paper was to examine the existing structures, to seek gaps in the experiential learning programs, and to devise a framework to move forward. The specific focus was on experiential learning at Brock University in the Faculty of Applied Health Sciences. The methodology was underscored with cognitive constructivism and appreciative theory. Data collection followed the content analysis steps established by Krippendorff (2004) and Weber (1985). Data analysis involved the four dimensions of reflection designed by LaBoskey: purpose, context, content, and procedures. The results developed an understanding of the state of formal processes and pathways within service learning. A tool kit was generated that defines service learning and offers an overview of the types of service learning typically employed. The tool kit acts as a reference guide for those interested in implementing experiential learning courses. Importantly, the results also provided 10 key points for experiential learning courses, developed by Emily Allan. A flow chart illustrates the connections among the 10 points, which are then described in full to establish a strategy for the way forward in experiential learning.
Abstract:
Employing critical pedagogy and transformative theory as a theoretical framework, I examined a learning process associated with building capacity in community-based organizations (CBOs) through an investigation of the Institutional Capacity Building Program (ICBP) initiated by a Foundation. The study sought to: (a) examine the importance of institutional capacity building for individual and community development; (b) investigate elements of a process associated with a program and characteristics of a learning process for building capacity in CBOs; and (c) analyze the Foundation’s approach to synthesizing, systematizing, and sharing learning. The study used a narrative research design that included 3 one-on-one, hour-long interviews with 2 women having unique vantage points in ICBP: one is a program facilitator working at the Foundation and the other runs a CBO supported by the Foundation. The interviews’ semistructured questions allowed interviewees to share stories regarding their experience with the learning process of ICB and enabled themes to emerge from their day-to-day experience. Through the analysis of this learning process for institutional capacity building, a few lessons can be drawn from the experience of the Foundation.
Abstract:
This qualitative, phenomenological study investigated first generation students’ perceptions of the challenges they experienced in the process of accessing higher education and the type of school-based support that was received. Particular emphasis was placed on the impact of parental education level on access to postsecondary education (PSE) and how differences in support at the primary and secondary levels of schooling influenced access. Purposeful, homogenous sampling was used to select 6 first generation students attending a postsecondary institution located in Ontario. Analysis of the data revealed that several interrelated factors impact first generation students’ access to postsecondary education. These include familial experiences and expectations, school streaming practices, secondary school teachers’ and guidance counselors’ representations of postsecondary education, and the nature of school-based support that participants received. The implications for theory, research, and practice are discussed and recommendations for enhancing school-based support to ensure equitable access to postsecondary education for first generation students are provided.
Abstract:
This study examined the practice and implementation of undergraduate student internships in Ontario, Canada. A literature review revealed that implementation of internships at the undergraduate level in Ontario varies within campuses by faculty and department and also across the university spectrum, partly due to a lack of consistency and structure guiding internship practice in Ontario. Moreover, a lack of general consensus among participating stakeholders concerning the philosophy and approach to internship further complicates and varies its practice. While some departments and universities have started to embrace and implement more experiential learning opportunities into their curriculum, the practice of undergraduate internships is struggling to gain acceptance and validity in others. Using the theory of experiential learning as presented by Dewey (1938) and Kolb (1984) as theoretical frameworks, this research project developed an internship implementation strategy to provide structure and guidance to the practice of internships in Ontario’s undergraduate university curriculum.
Abstract:
This study investigated instructor perceptions of motivators and barriers that exist with respect to participation in educational development in the postsecondary context. Eight instructors from a mid-size, research intensive university in south-western Ontario participated in semistructured interviews to explore this particular issue. Data were analyzed using a qualitative approach. Motivation theory was used as a conceptual framework in this study, referring primarily to the work of Ryan and Deci (2000), Deci and Ryan (1985), and Pink (2009). The identified motivators and barriers spanned all 3 levels of postsecondary institutions: the micro (i.e., the individual), the meso (i.e., the department or Faculty), and the macro (i.e., the institution). Significant motivators to participation in educational development included desire to improve one’s teaching (micro), feedback from students (meso), and tenure and promotion (macro). Significant barriers to participation included lack of time (micro), the perception that an investment towards one’s research was more important than an investment to enhancing teaching (meso), and the impression that quality teaching was not valued by the institution (macro). The study identifies connections between the micro, meso, macro framework and motivation theory, and offers recommendations for practice.
Abstract:
Feature selection plays an important role in knowledge discovery and data mining. In traditional rough set theory, feature selection using a reduct (a minimal set of attributes that preserves discernibility) is an important area. Nevertheless, the original definition of a reduct is restrictive, so previous research proposed taking into account not only the horizontal reduction of information by feature selection, but also a vertical reduction that considers suitable subsets of the original set of objects. Following that work, a new approach to generating bireducts using a multi-objective genetic algorithm was proposed. Although genetic algorithms have been used to calculate reducts in earlier work, we did not find any work in which genetic algorithms were adopted to calculate bireducts. Compared to prior work in this area, the proposed method has less randomness in generating bireducts. The genetic algorithm estimated the quality of each bireduct with two objective functions as evolution progressed, so a set of bireducts with optimized values of these objectives was obtained. Different fitness evaluation methods and genetic operators, such as crossover and mutation, were applied, and the resulting prediction accuracies were compared. Five datasets were used to test the proposed method and two datasets were used for a comparison study. Statistical analysis using a one-way ANOVA test was performed to determine whether the differences between the results were significant. The experiments showed that the proposed method was able to reduce the number of bireducts necessary to achieve good prediction accuracy. The influence of different genetic operators and fitness evaluation strategies on prediction accuracy was also analyzed. The prediction accuracies of the proposed method are comparable with the best results in the machine learning literature, and some of them outperform those results.
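As a rough illustration of the bireduct idea (not the authors' algorithm), the sketch below evolves (attribute subset, object subset) pairs over a toy decision table, scoring each pair by two objectives: cover many objects and use few attributes. A pair is feasible only if the kept attributes discern every kept pair of objects with different decisions. The table, the mutation operator, and the lexicographic ranking of the two objectives are hypothetical simplifications; the abstract's method uses a genuine multi-objective genetic algorithm with crossover and several fitness evaluation strategies.

```python
import random

# Toy decision table: each row is an object; the last entry is the decision.
TABLE = [
    (0, 1, 0, "yes"),
    (1, 1, 0, "no"),
    (0, 0, 1, "yes"),
    (1, 0, 1, "no"),
    (1, 1, 1, "yes"),
]
N_ATTRS = 3

def consistent(attrs, objs):
    """(attrs, objs) is feasible if every pair of kept objects with
    different decisions is discerned by at least one kept attribute."""
    objs = sorted(objs)
    for i in objs:
        for j in objs:
            if i < j and TABLE[i][-1] != TABLE[j][-1]:
                if all(TABLE[i][a] == TABLE[j][a] for a in attrs):
                    return False
    return True

def fitness(ind):
    """Two objectives, ranked lexicographically: first maximize covered
    objects, then minimize the number of attributes."""
    attrs, objs = ind
    if not consistent(attrs, objs):
        return (-1, -1)  # infeasible
    return (len(objs), -len(attrs))

def mutate(ind):
    """Flip one random attribute or one random object in or out."""
    attrs, objs = set(ind[0]), set(ind[1])
    if random.random() < 0.5:
        attrs ^= {random.randrange(N_ATTRS)}
    else:
        objs ^= {random.randrange(len(TABLE))}
    return (frozenset(attrs), frozenset(objs))

def evolve(pop_size=20, gens=60, seed=0):
    """Keep the best feasible individuals; return surviving bireducts."""
    random.seed(seed)
    full = (frozenset(range(N_ATTRS)), frozenset(range(len(TABLE))))
    pop = [full] * pop_size
    for _ in range(gens):
        children = [mutate(p) for p in pop]
        pop = sorted(set(pop) | set(children),
                     key=fitness, reverse=True)[:pop_size]
    return [p for p in pop if fitness(p) != (-1, -1)]
```

On this toy table no two attributes discern all five objects, so the top-ranked survivor keeps all three attributes; bireducts that drop an object in exchange for fewer attributes appear further down the ranking.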
Abstract:
This case study traces the evolution of library assignments for biological science students from paper-based workbooks in a blended (hands-on) workshop to blended learning workshops using online assignments to online active learning modules which are stand-alone without any face-to-face instruction. As the assignments evolved to adapt to online learning supporting materials in the form of PDFs (portable document format), screen captures and screencasting were embedded into the questions as teaching moments to replace face-to-face instruction. Many aspects of the evolution of the assignment were based on student feedback from evaluations, input from senior lab demonstrators and teaching assistants, and statistical analysis of the students’ performance on the assignment. Advantages and disadvantages of paper-based and online assignments are discussed. An important factor for successful online learning may be the ability to get assistance.
Abstract:
It is well known that standard asymptotic theory is not valid, or is extremely unreliable, in models with identification problems or weak instruments [Dufour (1997, Econometrica), Staiger and Stock (1997, Econometrica), Wang and Zivot (1998, Econometrica), Stock and Wright (2000, Econometrica), Dufour and Jasiak (2001, International Economic Review)]. One possible way out is to use a variant of the Anderson-Rubin (1949, Ann. Math. Stat.) procedure. The latter, however, allows one to build exact tests and confidence sets only for the full vector of the coefficients of the endogenous explanatory variables in a structural equation, and so does not in general yield inference on individual coefficients. This problem may in principle be overcome by using projection techniques [Dufour (1997, Econometrica), Dufour and Jasiak (2001, International Economic Review)]. AR-type procedures are emphasized because they are robust to both weak instruments and instrument exclusion. However, these techniques could previously be implemented only through costly numerical methods. In this paper, we provide a complete analytic solution to the problem of building projection-based confidence sets from Anderson-Rubin-type confidence sets. The solution exploits the geometric properties of "quadrics" and can be viewed as an extension of the usual confidence intervals and ellipsoids. Only least squares techniques are required to build the confidence intervals. We also study by simulation how "conservative" projection-based confidence sets are. Finally, we illustrate the proposed methods by applying them to three examples: the relationship between trade and growth in a cross-section of countries, returns to education, and a study of production functions in the U.S. economy.
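The Anderson-Rubin construction and the projection step can be written out as follows (a standard textbook form of the statistic, not the paper's exact notation; a structural equation y = Yβ + u with a T × k instrument matrix Z is assumed):

```latex
% P_Z = Z(Z'Z)^{-1}Z', M_Z = I_T - P_Z. Under H_0 : \beta = \beta_0
% (with Gaussian errors) the AR statistic is exactly F-distributed,
% whatever the instrument strength, because Y\beta_0 is known:
\[
  AR(\beta_0) =
    \frac{(y - Y\beta_0)^{\top} P_Z \,(y - Y\beta_0)/k}
         {(y - Y\beta_0)^{\top} M_Z \,(y - Y\beta_0)/(T - k)}
    \sim F(k,\, T - k).
\]
% Inverting the test yields a joint confidence set, which is a quadric:
\[
  C_\beta(\alpha) = \{\beta_0 : AR(\beta_0) \le F_{\alpha}(k,\, T-k)\}
                  = \{\beta_0 : \beta_0^{\top} A\,\beta_0
                                + b^{\top}\beta_0 + c \le 0\}.
\]
% Projection: for any transformation g, the image of the quadric is a
% confidence set of level at least 1-\alpha; for a single coefficient,
\[
  g[C_\beta(\alpha)] = \{\, g(\beta_0) : \beta_0 \in C_\beta(\alpha) \,\},
  \qquad
  CI(\beta_i) = \Big[\min_{\beta_0 \in C_\beta(\alpha)} \beta_{0,i},\;
                     \max_{\beta_0 \in C_\beta(\alpha)} \beta_{0,i}\Big].
\]
```

The paper's contribution is that, because C_β(α) is a quadric, the min and max above admit closed forms computable by least squares alone.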
Abstract:
We discuss statistical inference problems associated with identification and testability in econometrics, and we emphasize the common nature of the two issues. After reviewing the relevant statistical notions, we consider in turn inference in nonparametric models and recent developments on weakly identified models (or weak instruments). We point out that many hypotheses, for which test procedures are commonly proposed, are not testable at all, while some frequently used econometric methods are fundamentally inappropriate for the models considered. Such situations lead to ill-defined statistical problems and are often associated with a misguided use of asymptotic distributional results. Concerning nonparametric hypotheses, we discuss three basic problems for which such difficulties occur: (1) testing a mean (or a moment) under (too) weak distributional assumptions; (2) inference under heteroskedasticity of unknown form; (3) inference in dynamic models with an unlimited number of parameters. Concerning weakly identified models, we stress that valid inference should be based on proper pivotal functions (a condition not satisfied by standard Wald-type methods based on standard errors), and we discuss recent developments in this field, mainly from the viewpoint of building valid tests and confidence sets. The techniques discussed include alternative proposed statistics, bounds, projection, split-sampling, conditioning, and Monte Carlo tests. The possibility of deriving a finite-sample distributional theory, robustness to the presence of weak instruments, and robustness to the specification of a model for endogenous explanatory variables are stressed as important criteria for assessing alternative procedures.
Abstract:
This paper presents a new theory of random consumer demand. The primitive is a collection of probability distributions, rather than a binary preference. Various assumptions constrain these distributions, including analogues of common assumptions about preferences such as transitivity, monotonicity and convexity. Two results establish a complete representation of theoretically consistent random demand. The purpose of this theory of random consumer demand is application to empirical consumer demand problems. To this end, the theory has several desirable properties. It is intrinsically stochastic, so the econometrician can apply it directly without adding extrinsic randomness in the form of residuals. Random demand is parsimoniously represented by a single function on the consumption set. Finally, we have a practical method for statistical inference based on the theory, described in McCausland (2004), a companion paper.
Abstract:
McCausland (2004a) describes a new theory of random consumer demand. Theoretically consistent random demand can be represented by a "regular" "L-utility" function on the consumption set X. The present paper is about Bayesian inference for regular L-utility functions. We express prior and posterior uncertainty in terms of distributions over the infinite-dimensional parameter set of a flexible functional form. We propose a class of proper priors on the parameter set. The priors are flexible, in the sense that they put positive probability in the neighborhood of any L-utility function that is regular on a large subset X̄ of X; and regular, in the sense that they assign zero probability to the set of L-utility functions that are irregular on X̄. We propose methods of Bayesian inference for an environment with indivisible goods, leaving the more difficult case of infinitely divisible goods for another paper. We analyse individual choice data from a consumer experiment described in Harbaugh et al. (2001).
Abstract:
The attached file was created with Scientific WorkPlace (LaTeX).
Abstract:
Semantic role annotation is a task that assigns role labels such as Agent, Patient, Instrument, Location, Destination, etc. to the various participants, actants or circumstants (arguments or adjuncts), of a predicative lexical unit. This task requires rich lexical resources or large corpora of sentences annotated manually by linguists, on which certain automation approaches (statistical or machine learning) can rely. Previous work in this field has focused mainly on English, which has rich resources such as PropBank, VerbNet, and FrameNet that have fed automated annotation systems. Annotation in other languages, for which no manually annotated corpus is available, often relies on the English FrameNet. A resource like the English FrameNet is indispensable for automated annotation systems, and the manual annotation of thousands of sentences by linguists is a tedious and time-consuming task. In this thesis, we propose an automatic system to assist linguists in this task, so that they could limit themselves to validating the annotations proposed by the system. In our work, we consider only verbs, which are more likely than nouns to be accompanied by actants realized in sentences. These verbs are specialized terms from the domains of computing and the Internet (e.g., accéder, configurer, naviguer, télécharger) whose actantial structure has been manually enriched with semantic roles. The actantial structure of the verbal lexical units is described according to the principles of Mel’čuk’s Explanatory Combinatorial Lexicology (ECL) and draws partially (as far as semantic roles are concerned) on the notion of Frame Element as described in Fillmore’s Frame Semantics (FS) theory.
These two theories have in common that they both lead to the construction of dictionaries different from those produced by traditional approaches. The verbal lexical units from computing and the Internet that were manually annotated in several contexts constitute our specialized corpus. Our system, which automatically assigns semantic roles to actants, is based on rules or on classifiers trained on more than 2,300 contexts. We are limited to a restricted list of roles because some roles in our corpus do not have enough manually annotated examples. In our system, we handle only the roles Patient, Agent, and Destination, each of which has more than 300 examples. We created a class named Other, in which we grouped the remaining roles, each with fewer than 100 annotated examples. We subdivided the annotation task into subtasks: identifying the participants (actants and circumstants), and assigning semantic roles only to the actants that contribute to the meaning of the verbal lexical unit. We submitted the sentences of our corpus to the Syntex parser to extract the syntactic information describing the various participants of a verbal lexical unit in a sentence. This information served as features in our learning model. We proposed two techniques for participant identification: a rule-based technique, for which we extracted about thirty rules, and a machine learning technique. The same techniques were used for the task of distinguishing actants from circumstants. For the task of assigning semantic roles to actants, we proposed a semi-supervised clustering method, which we compared to a semantic role classification method. We used CHAMÉLÉON, an agglomerative hierarchical clustering algorithm.
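The rule-based side of a pipeline like the one described above can be caricatured in a few lines. This is a toy stand-in, not the thesis system: the dependency labels, the preposition list, and the three-roles-plus-Other scheme are assumed simplifications of what Syntex-style features might look like.

```python
# Toy rule-based role assignment for the participants of a verbal lexie.
# Feature names ("subj", "obj", "prep_obj") and the preposition set are
# hypothetical simplifications, not the thesis's actual ~30 rules.

def classify_role(dep, prep=None):
    """Map one participant's parser features to a semantic role."""
    if dep == "subj":
        return "Agent"
    if dep == "obj":
        return "Patient"
    if dep == "prep_obj" and prep in {"vers", "sur", "dans", "à"}:
        return "Destination"
    return "Other"  # roles too rare to classify individually

def annotate(participants):
    """Annotate a list of (token, dependency, preposition) triples."""
    return [(tok, classify_role(dep, prep)) for tok, dep, prep in participants]

# e.g. "l'utilisateur télécharge le fichier sur le serveur"
sentence = [
    ("utilisateur", "subj", None),
    ("fichier", "obj", None),
    ("serveur", "prep_obj", "sur"),
]
```

In the thesis, the same feature representation also feeds the machine learning alternatives (classification and semi-supervised clustering), so the rule set above corresponds to only one of the two techniques compared.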