881 results for Model transformation learning
Abstract:
The weaknesses of 'traditional' modes of instruction in accounting education have been widely discussed. Many contend that the traditional approach limits opportunities for students to raise their competency level and to apply knowledge and skills in professional problem-solving situations. However, the recent body of literature suggests that accounting educators are indeed actively experimenting with 'non-traditional' and 'innovative' instructional approaches, with some authors clearly favouring one approach over another. But can one instructional approach alone meet the necessary conditions for different learning objectives? Taking into account the ever-changing landscape of not only business environments but also the higher education sector, the premise guiding the collaborators in this research is that it is perhaps counterproductive to promote competing dichotomous views of 'traditional' and 'non-traditional' instructional approaches to accounting education, and that the notion of 'blended learning' might provide a useful framework for enhancing the learning and teaching of accounting. This paper reports on the first cycle of a longitudinal study, which explores the possibility of using blended learning in first-year accounting at one campus of a large regional university. The critical elements of blended learning which emerged in the study are discussed and, consistent with the design-based research framework, the paper also identifies key design modifications for successive cycles of the research.
Abstract:
This paper deals with the problem of using data mining models in a real-world situation where the user cannot provide all the inputs with which the predictive model is built. A learning system framework, the Query Based Learning System (QBLS), is developed for improving the performance of predictive models in practice, where not all inputs are available for querying the system. An automatic feature selection algorithm, Query Based Feature Selection (QBFS), is developed for selecting features to obtain a balance between the relative minimum subset of features and the relative maximum classification accuracy. The performance of the QBLS system and the QBFS algorithm is successfully demonstrated with a real-world application.
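A minimal sketch of the trade-off this abstract describes, using a greedy forward selection over the queryable features that stops once accuracy gains become marginal. The scikit-learn cross-validation scoring, the min_gain threshold and the function names are illustrative assumptions, not the QBFS algorithm itself.

```python
# Illustrative only: greedy selection over features the user can actually supply,
# balancing a small feature subset against classification accuracy.
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def greedy_feature_selection(X, y, queryable, min_gain=0.005):
    """X: (n_samples, n_features) array; queryable: indices of features the user can supply."""
    selected, best_score = [], 0.0
    remaining = list(queryable)
    while remaining:
        # Score each candidate feature when added to the current subset.
        scored = []
        for f in remaining:
            cols = selected + [f]
            clf = DecisionTreeClassifier(random_state=0)
            scored.append((cross_val_score(clf, X[:, cols], y, cv=5).mean(), f))
        score, f = max(scored)
        if score - best_score < min_gain:   # stop when the gain no longer justifies another query
            break
        selected.append(f)
        remaining.remove(f)
        best_score = score
    return selected, best_score
```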
Abstract:
This paper will report on the evaluation of a new undergraduate legal workplace unit, LWB421 Learning in Professional Practice. LWB421 was developed in response to QUT's strategic planning and a growing view that work experience is essential to developing the skills that law graduates need in order to be effective legal practitioners (Stuckey, 2007). Work integrated learning provides a context for students to develop their skills, to see the link between theory and practice, and to support students in making the transition from university to practice (Shirley, 2006). The literature in Australian legal education has given little consideration to the design of legal internship subjects (as distinct from legal clinic programs). Accordingly, the design of placement subjects needs to be carefully considered to ensure alignment of learning objectives, learning tasks and assessment. Legal placements offer students the opportunity to develop their professional skills in practice, reflect on their own learning and job performance, and take responsibility for their career development and planning. This paper will examine the literature relating to the design of placement subjects, particularly in a legal context. It will propose a collaborative model to facilitate learning and assessment in legal work placement subjects. The basis of the model is a negotiated learning contract between the student, the workplace supervisor and the academic supervisor. Finally, the paper will evaluate the model in the context of LWB421. The evaluation will be based on data from surveys of students and supervisors and on focus group sessions.
Abstract:
In this study, we investigate the qualitative and quantitative effects of an R&D subsidy for a clean technology and a Pigouvian tax on a dirty technology on environmental R&D when it is uncertain how long the research takes to complete. The model is formulated as an optimal stopping problem, in which the number of successes required to complete the R&D project is finite and learning about the probability of success is incorporated. We show that the optimal R&D subsidy with the consideration of learning is higher than that without it. We also find that an R&D subsidy performs better than a Pigouvian tax unless suppliers have sufficient incentives to continue cost-reduction efforts after the new technology successfully replaces the old one. Moreover, by using a two-project model, we show that a uniform subsidy is better than a selective subsidy.
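As a schematic of the learning ingredient such a model typically contains (the functional forms below are illustrative assumptions, not taken from the paper), a Beta-Bernoulli update of the unknown success probability can be embedded in the continue-or-stop comparison:

```latex
% Illustrative only: Beta-Bernoulli learning about the R&D success probability
% inside a continue-or-stop comparison; not the paper's exact model.
\[
  p \sim \mathrm{Beta}(\alpha,\beta), \qquad
  \mathbb{E}\!\left[p \mid s \text{ successes in } n \text{ trials}\right]
    = \frac{\alpha + s}{\alpha + \beta + n},
\]
\[
  V(s,n) \;=\; \max\!\left\{\, 0,\; -c + \sigma + \delta\,
      \mathbb{E}\!\left[ V(s',\, n+1) \mid s, n \right] \right\},
\]
% where c is the per-period R&D cost, sigma the subsidy, delta the discount
% factor, and stopping (value 0) becomes optimal once continuation is not worthwhile.
```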
Abstract:
Cross domain and cross-modal matching has many applications in the field of computer vision and pattern recognition. A few examples are heterogeneous face recognition, cross view action recognition, etc. This is a very challenging task since the data in two domains can differ significantly. In this work, we propose a coupled dictionary and transformation learning approach that models the relationship between the data in both domains. The approach learns a pair of transformation matrices that map the data in the two domains in such a manner that they share common sparse representations with respect to their own dictionaries in the transformed space. The dictionaries for the two domains are learnt in a coupled manner with an additional discriminative term to ensure improved recognition performance. The dictionaries and the transformation matrices are jointly updated in an iterative manner. The applicability of the proposed approach is illustrated by evaluating its performance on different challenging tasks: face recognition across pose, illumination and resolution, heterogeneous face recognition and cross view action recognition. Extensive experiments on five datasets, namely CMU-PIE, Multi-PIE, ChokePoint, HFB and IXMAS, and comparisons with several state-of-the-art approaches show the effectiveness of the proposed approach. (C) 2015 Elsevier B.V. All rights reserved.
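A minimal sketch of the alternating updates the abstract outlines: transform each domain with its own matrix, compute a shared sparse code against the stacked dictionaries, then refit the transformation matrices. The least-squares and soft-thresholding steps, the ridge term and all names are illustrative assumptions rather than the authors' exact objective or solver.

```python
# Illustrative sketch of coupled transformation learning with a shared sparse code.
import numpy as np

def soft_threshold(Z, lam):
    """Elementwise shrinkage used here as a crude sparse-coding step."""
    return np.sign(Z) * np.maximum(np.abs(Z) - lam, 0.0)

def coupled_transform_learning(X1, X2, D1, D2, lam=0.1, iters=20, ridge=1e-3):
    """X1: (d1, n), X2: (d2, n) paired samples; D1, D2: (m, k) dictionaries.
    Returns transformation matrices T1, T2 and the shared sparse codes A."""
    m = D1.shape[0]
    rng = np.random.default_rng(0)
    T1 = 0.01 * rng.standard_normal((m, X1.shape[0]))
    T2 = 0.01 * rng.standard_normal((m, X2.shape[0]))
    for _ in range(iters):
        # Shared sparse code: least squares against the stacked dictionaries,
        # followed by shrinkage to enforce sparsity.
        D = np.vstack([D1, D2])
        Y = np.vstack([T1 @ X1, T2 @ X2])
        A = soft_threshold(np.linalg.lstsq(D, Y, rcond=None)[0], lam)
        # Refit each transformation so that T_i X_i stays close to D_i A.
        T1 = (D1 @ A) @ X1.T @ np.linalg.inv(X1 @ X1.T + ridge * np.eye(X1.shape[0]))
        T2 = (D2 @ A) @ X2.T @ np.linalg.inv(X2 @ X2.T + ridge * np.eye(X2.shape[0]))
    return T1, T2, A
```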
Abstract:
Although blogs have existed since the beginning of the Internet, their use has increased considerably in the last decade. Nowadays, they are ready to be used by a broad range of people: from teenagers to multinationals, everyone can have a global communication space. Companies know that blogs are a valuable publicity tool for sharing information with participants, and they recognise the importance of creating consumer communities around them: participants come together to exchange ideas, review and recommend new products, and even support each other. Companies can also use blogs for different purposes, such as a content management system to manage the content of websites, a bulletin board to support communication and document sharing in teams, an instrument in marketing to communicate with Internet users, or a knowledge management tool. However, an increasing amount of blog content does not find its source in the personal experiences of the writer. The information may currently be kept in the user's desktop documents, in the companies' catalogues, or in other blogs. Although the gap between blog and data source can be traversed by manual coding, this is a cumbersome task that defeats the blog's easiness principle. Moreover, depending on the quantity of information and its characterisation (i.e., structured content, unstructured content, etc.), an automatic approach can be more effective. Based on these observations, the aim of this dissertation is to assist blog publication through annotation, model transformation and crossblogging techniques. These techniques have been implemented to give rise to Blogouse, Catablog, and BlogUnion. These tools strive to improve the publication process considering the aforementioned data sources.
Abstract:
The unscented Kalman filter (UKF) is a widely used method in control and time series applications. The UKF suffers from arbitrary parameters necessary for a step known as sigma point placement, causing it to perform poorly in nonlinear problems. We show how to treat sigma point placement in a UKF as a learning problem in a model-based view. We demonstrate that learning to place the sigma points correctly from data can make sigma point collapse much less likely. Learning can result in a significant increase in predictive performance over default settings of the parameters in the UKF and other filters designed to avoid the problems of the UKF, such as the GP-ADF. At the same time, we maintain a lower computational complexity than the other methods. We call our method UKF-L. ©2010 IEEE.
Abstract:
The unscented Kalman filter (UKF) is a widely used method in control and time series applications. The UKF suffers from arbitrary parameters necessary for sigma point placement, potentially causing it to perform poorly in nonlinear problems. We show how to treat sigma point placement in a UKF as a learning problem in a model-based view. We demonstrate that learning to place the sigma points correctly from data can make sigma point collapse much less likely. Learning can result in a significant increase in predictive performance over default settings of the parameters in the UKF and other filters designed to avoid the problems of the UKF, such as the GP-ADF. At the same time, we maintain a lower computational complexity than the other methods. We call our method UKF-L. © 2011 Elsevier B.V.
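For context, the sigma point placement whose free parameters these two papers propose to learn looks roughly as follows; the Merwe-style scaled unscented transform shown here is a common formulation chosen for illustration, and the learning loop itself (e.g., fitting alpha, beta, kappa to maximise predictive likelihood) is omitted.

```python
# Standard scaled unscented-transform sigma point placement; its free parameters
# (alpha, beta, kappa) are the kind of arbitrary settings UKF-L learns from data.
import numpy as np

def sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    """Return 2n+1 sigma points and their mean/covariance weights."""
    n = mean.shape[0]
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)          # matrix square root
    pts = np.vstack([mean, mean + S.T, mean - S.T])  # (2n+1, n)
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    return pts, wm, wc

# A learned placement would, for example, choose (alpha, beta, kappa) that
# maximise the filter's one-step predictive likelihood on training sequences.
```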
Abstract:
Annotation of programs using embedded Domain-Specific Languages (embedded DSLs), such as the program annotation facility of the Java programming language, is a well-known practice in computer science. In this paper we argue for and propose a specialized approach for using embedded Domain-Specific Modelling Languages (embedded DSMLs) in Model-Driven Engineering (MDE) processes that, in particular, supports automated many-step model transformation chains. Information defined at some point using an embedded DSML may not be required in the next immediate transformation step, but only in a later one. We propose a new approach to model annotation that enables flexible many-step transformation chains. The approach utilizes a combination of embedded DSMLs, trace models and a megamodel. We demonstrate our approach on an example MDE process and an industrial case study.
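A hypothetical sketch of the deferred-annotation idea: an annotation attached to an element early in the chain remains reachable in a later step by walking trace models recorded in a megamodel. The classes and lookup below are assumptions made for illustration, not the authors' tooling.

```python
# Illustrative only: retrieving an early annotation several transformation steps
# later by following trace links kept in a megamodel.
from dataclasses import dataclass, field

@dataclass
class TraceModel:
    """Links elements of a step's target model back to its source model."""
    links: dict = field(default_factory=dict)   # target element id -> source element id

    def origin_of(self, element_id):
        return self.links.get(element_id)

@dataclass
class Megamodel:
    annotations: dict = field(default_factory=dict)   # originally annotated element id -> annotation
    traces: list = field(default_factory=list)        # one TraceModel per executed step

    def annotation_for(self, element_id):
        """Walk the trace chain backwards to the originally annotated element."""
        current = element_id
        for trace in reversed(self.traces):
            current = trace.origin_of(current) or current
        return self.annotations.get(current)

# Usage: an annotation placed on element "a1" in the first model is still
# reachable from element "c7" produced two transformation steps later.
mm = Megamodel(annotations={"a1": {"persistence": "table"}})
mm.traces.append(TraceModel(links={"b3": "a1"}))   # step 1: a1 -> b3
mm.traces.append(TraceModel(links={"c7": "b3"}))   # step 2: b3 -> c7
print(mm.annotation_for("c7"))                     # {'persistence': 'table'}
```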
Abstract:
Model transformations are an integral part of model-driven development. Incremental updates are a key execution scenario for transformations in model-based systems, and are especially important for the evolution of such systems. This paper presents a strategy for the incremental maintenance of declarative, rule-based transformation executions. The strategy involves recording dependencies of the transformation execution on information from source models and from the transformation definition. Changes to the source models or the transformation itself can then be directly mapped to their effects on transformation execution, allowing changes to target models to be computed efficiently. This particular approach has many benefits. It supports changes to both source models and transformation definitions, it can be applied to incomplete transformation executions, and a priori knowledge of volatility can be used to further increase the efficiency of change propagation.
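A toy sketch of the dependency-recording strategy: each rule application logs the source elements it read, so a source change maps directly to the rule applications whose target elements must be recomputed. The engine structure and names are illustrative, not the paper's implementation or its change-propagation algorithm.

```python
# Illustrative only: record which source elements each rule application read,
# then re-execute just the affected applications when a source element changes.
from collections import defaultdict

class IncrementalEngine:
    def __init__(self, rules):
        self.rules = rules                       # rule name -> function(source element) -> target
        self.deps = defaultdict(set)             # source element id -> {(rule name, element id)}
        self.targets = {}                        # (rule name, element id) -> target element

    def execute(self, source_elements):
        for elem in source_elements:
            for name, rule in self.rules.items():
                self._apply(name, rule, elem)
        return self.targets

    def _apply(self, name, rule, elem):
        self.deps[elem["id"]].add((name, elem["id"]))   # record the dependency
        self.targets[(name, elem["id"])] = rule(elem)

    def propagate(self, changed_elem):
        """Re-run only the rule applications that read the changed element."""
        for name, elem_id in self.deps.get(changed_elem["id"], set()):
            self.targets[(name, elem_id)] = self.rules[name](changed_elem)
        return self.targets

# Usage: classes map to tables; changing one class touches only its own table.
engine = IncrementalEngine({"class2table": lambda c: {"table": c["name"].lower()}})
engine.execute([{"id": 1, "name": "Person"}, {"id": 2, "name": "Order"}])
print(engine.propagate({"id": 1, "name": "Customer"}))
```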