845 results for Learning Models


Relevance: 30.00%

Abstract:

The rapid growth of virtualized data centers and cloud hosting services is making the management of physical resources such as CPU, memory, and I/O bandwidth in data center servers increasingly important. Server management now involves dealing with multiple dissimilar applications with varying Service-Level Agreements (SLAs) and multiple resource dimensions. The multiplicity and diversity of resources and applications are rendering administrative tasks more complex and challenging. This thesis aimed to develop a framework and techniques that would help substantially reduce data center management complexity.

We specifically addressed two crucial data center operations. First, we precisely estimated the capacity requirements of client virtual machines (VMs) when renting server space in a cloud environment. Second, we proposed a systematic process to efficiently allocate physical resources to hosted VMs in a data center. To realize these dual objectives, accurately capturing the effects of resource allocations on application performance is vital. The benefits of accurate application performance modeling are manifold. Cloud users can size their VMs appropriately and pay only for the resources that they need; service providers can also offer a new charging model based on the VMs' performance instead of their configured sizes. As a result, clients will pay exactly for the performance they actually experience; administrators, in turn, will be able to maximize their total revenue by exploiting application performance models and SLAs.

This thesis made the following contributions. First, we identified resource control parameters crucial for distributing physical resources and characterizing contention for virtualized applications in a shared hosting environment.
Second, we explored several modeling techniques and confirmed the suitability of two machine learning tools, Artificial Neural Network and Support Vector Machine, to accurately model the performance of virtualized applications. Moreover, we suggested and evaluated modeling optimizations necessary to improve prediction accuracy when using these modeling tools. Third, we presented an approach to optimal VM sizing by employing the performance models we created. Finally, we proposed a revenue-driven resource allocation algorithm which maximizes the SLA-generated revenue for a data center.^
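The thesis evaluates Artificial Neural Networks and Support Vector Machines on traces from real virtualized applications; those data and models are not reproduced here. The sketch below is only a hedged illustration of the ANN side: a one-hidden-layer network in plain numpy, fitted to a synthetic, invented response-time-style curve. Every constant and variable name in it is an assumption, not material from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the thesis data: inputs are normalised CPU and
# memory shares; the target is an invented response-time-like curve that
# improves (falls) as the allocated resources grow.
X = rng.uniform(0.1, 1.0, size=(200, 2))
y = 1.0 / (X[:, 0] * X[:, 1] + 0.2)

# One-hidden-layer neural network trained with full-batch gradient descent.
W1 = rng.standard_normal((2, 16)) * 0.5
b1 = np.zeros(16)
W2 = rng.standard_normal(16) * 0.5
b2 = 0.0

def forward(X):
    h = np.tanh(X @ W1 + b1)        # hidden activations
    return h @ W2 + b2, h

losses = []
lr = 0.05
for _ in range(3000):
    pred, h = forward(X)
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation of the squared-error loss through the two layers.
    gW2 = h.T @ err / len(X)
    gb2 = err.mean()
    gh = np.outer(err, W2) * (1.0 - h ** 2)
    gW1 = X.T @ gh / len(X)
    gb1 = gh.mean(axis=0)
    W2 = W2 - lr * gW2
    b2 = b2 - lr * gb2
    W1 = W1 - lr * gW1
    b1 = b1 - lr * gb1
```

In the thesis's setting the inputs would be the resource control parameters identified in the first contribution, and the target an SLA-relevant performance metric measured on the hosted application.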

Relevance: 30.00%

Abstract:

Gesture recognition is a research topic that has been gaining popularity, especially in recent years, thanks to technological advances in embedded devices and sensors. The goal of this thesis is to use machine learning techniques to build a system capable of recognizing and classifying hand gestures in real time, starting from the myoelectric (EMG) signals produced by the muscles. In addition, to allow the recognition of complex spatial movements, inertial signals from an Inertial Measurement Unit (IMU) equipped with an accelerometer, a gyroscope, and a magnetometer will also be processed. The first part of the thesis, besides offering an overview of wearable devices and sensors, analyzes several techniques for classifying temporal sequences, highlighting their advantages and disadvantages. In particular, approaches based on Dynamic Time Warping (DTW), Hidden Markov Models (HMM), and recurrent neural networks (RNN) of the Long Short-Term Memory (LSTM) type, one of the latest developments in deep learning, are considered. The second part covers the project itself. The Myo wearable device by Thalmic Labs is used as a case study, and the DTW- and HMM-based techniques are applied in detail to design and build a framework capable of real-time gesture recognition. The final chapter presents the results obtained (including a comparison between the techniques analyzed), both for the classification of isolated gestures and for real-time recognition.
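This thesis applies Dynamic Time Warping, among other techniques, to streams of EMG and inertial samples. As an illustration only (not code from the thesis), here is a minimal DTW distance for 1-D sequences; the example signals below are invented.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences.

    Fills a cumulative-cost matrix where each cell holds the cheapest
    alignment cost ending at that pair of samples.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A template gesture and a time-stretched version of it should score closer
# than an unrelated signal, which is exactly why DTW suits gestures
# performed at varying speeds.
template = np.sin(np.linspace(0, 2 * np.pi, 40))
stretched = np.sin(np.linspace(0, 2 * np.pi, 60))
unrelated = np.cos(np.linspace(0, 6 * np.pi, 60))
```

A real-time recognizer along these lines would compare each incoming signal window against a set of stored gesture templates and pick the nearest one.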

Relevance: 30.00%

Abstract:

Acknowledgements One of us (T. B.) acknowledges many interesting discussions on coupled maps with Professor C. Tsallis. We are also grateful to the anonymous referees for their constructive feedback that helped us improve the manuscript and to the HPCS Laboratory of the TEI of Western Greece for providing the computer facilities where all our simulations were performed. C. G. A. was partially supported by the “EPSRC EP/I032606/1” grant of the University of Aberdeen. This research has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program “Education and Lifelong Learning” of the National Strategic Reference Framework (NSRF) - Research Funding Program: THALES - Investing in knowledge society through the European Social Fund.

Relevance: 30.00%

Abstract:

Subspaces and manifolds are two powerful models for high-dimensional signals. Subspaces model linear correlation and are a good fit for signals generated by physical systems, such as frontal images of human faces and multiple sources impinging on an antenna array. Manifolds model sources that are not linearly correlated, but where signals are determined by a small number of parameters. Examples are images of human faces under different poses or expressions, and handwritten digits with varying styles. However, there will always be some degree of model mismatch between the subspace or manifold model and the true statistics of the source. This dissertation exploits subspace and manifold models as prior information in various signal processing and machine learning tasks.

A near-low-rank Gaussian mixture model measures proximity to a union of linear or affine subspaces. This simple model can effectively capture the signal distribution when each class is near a subspace. This dissertation studies how the pairwise geometry between these subspaces affects classification performance. When model mismatch is vanishingly small, the probability of misclassification is determined by the product of the sines of the principal angles between subspaces. When the model mismatch is more significant, the probability of misclassification is determined by the sum of the squares of the sines of the principal angles. Reliability of classification is derived in terms of the distribution of signal energy across principal vectors. Larger principal angles lead to smaller classification error, motivating a linear transform that optimizes principal angles. This linear transformation, termed TRAIT, also preserves some specific features in each class, being complementary to a recently developed Low Rank Transform (LRT). Moreover, when the model mismatch is more significant, TRAIT shows superior performance compared to LRT.
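The classification behavior described above is driven by the principal angles between class subspaces. A minimal numpy sketch of computing the sines of those angles follows; the two subspaces are toy examples, not data from the dissertation.

```python
import numpy as np

def principal_angle_sines(A, B):
    """Sines of the principal angles between the column spans of A and B.

    Orthonormalise both bases with QR; the singular values of Qa.T @ Qb
    are then the cosines of the principal angles.
    """
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    cosines = np.clip(np.linalg.svd(Qa.T @ Qb, compute_uv=False), 0.0, 1.0)
    return np.sqrt(1.0 - cosines ** 2)

# Two planes in R^3 sharing one direction: one principal angle is 0,
# the other is 90 degrees (between the non-shared directions).
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # span{e1, e2}
B = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])  # span{e1, e3}
sines = principal_angle_sines(A, B)
```

In the small-mismatch regime the abstract describes, `np.prod(sines)` over the nonzero angles would be the product-of-sines quantity governing misclassification; in the larger-mismatch regime, `np.sum(sines ** 2)` plays that role.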

The manifold model enforces a constraint on the freedom of data variation. Learning features that are robust to data variation is very important, especially when the size of the training set is small. A learning machine with a large number of parameters, e.g., a deep neural network, can describe a very complicated data distribution well. However, it is also more likely to be sensitive to small perturbations of the data, and to suffer degraded performance when generalizing to unseen (test) data.

From the perspective of the complexity of function classes, such a learning machine has a huge capacity (complexity), which tends to overfit. The manifold model provides us with a way of regularizing the learning machine so as to reduce the generalization error and therefore mitigate overfitting. Two different overfitting-prevention approaches are proposed, one from the perspective of data variation, the other from capacity/complexity control. In the first approach, the learning machine is encouraged to make decisions that vary smoothly for data points in local neighborhoods on the manifold. In the second approach, a graph adjacency matrix is derived for the manifold, and the learned features are encouraged to be aligned with the principal components of this adjacency matrix. Experimental results on benchmark datasets are presented, showing a clear advantage for the proposed approaches when the training set is small.
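The second approach derives a graph adjacency matrix for the manifold and aligns learned features with its principal components. A toy sketch under assumed choices (a k-nearest-neighbour graph over a noisy circle standing in for the manifold; the alignment penalty itself is omitted):

```python
import numpy as np

rng = np.random.default_rng(2)

# Points sampled near a 1-D manifold (a circle) embedded in R^2.
t = rng.uniform(0, 2 * np.pi, 80)
X = np.stack([np.cos(t), np.sin(t)], axis=1) + 0.01 * rng.standard_normal((80, 2))

# Symmetric k-nearest-neighbour adjacency matrix for the sampled manifold.
k = 5
d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
A = np.zeros_like(d)
for i in range(len(X)):
    for j in np.argsort(d[i])[1:k + 1]:   # skip index 0: the point itself
        A[i, j] = A[j, i] = 1.0

# Principal components (leading eigenvectors) of the adjacency matrix:
# the directions learned features would be encouraged to align with.
eigvals, eigvecs = np.linalg.eigh(A)
top = eigvecs[:, -3:]
```

A training objective in this spirit would add a regularization term rewarding correlation between the learned features and the columns of `top`.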

Stochastic optimization makes it possible to track a slowly varying subspace underlying streaming data. By approximating local neighborhoods using affine subspaces, a slowly varying manifold can be efficiently tracked as well, even with corrupted and noisy data. The more local neighborhoods are used, the better the approximation, but the higher the computational complexity. A multiscale approximation scheme is proposed, where the local approximating subspaces are organized in a tree structure. Splitting and merging of the tree nodes then allow efficient control of the number of neighborhoods. The deviation of each datum from the learned model is estimated, yielding a series of statistics for anomaly detection. This framework extends the classical changepoint detection technique, which only works for one-dimensional signals. Simulations and experiments highlight the robustness and efficacy of the proposed approach in detecting an abrupt change in an otherwise slowly varying low-dimensional manifold.
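The per-datum deviation statistic described above can be illustrated with a fixed, rather than tracked, subspace: the score is the norm of the residual after projecting a datum onto the learned subspace. The dimensions and random data below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the learned model: an orthonormal basis U of a 2-D subspace
# of R^10. In the dissertation this basis would be tracked over time.
M = rng.standard_normal((10, 2))
U, _ = np.linalg.qr(M)

def deviation(x, U):
    """Norm of the component of x outside span(U): the anomaly statistic."""
    residual = x - U @ (U.T @ x)
    return np.linalg.norm(residual)

normal = U @ rng.standard_normal(2)   # lies exactly on the subspace
anomaly = rng.standard_normal(10)     # generic point, far from any 2-D plane
```

In the streaming setting, a sustained jump in this statistic across consecutive data would signal the abrupt change the framework is designed to detect.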

Relevance: 30.00%

Abstract:

An important aspect of globalisation/Americanisation is, prima facie, the global export of televisual products such as Sesame Street, Barney, etc. that are explicitly concerned with cultivating elementary forms of organisational life. Thus, it is surprising that organization studies has been virtually silent on childhood and pedagogy. This lacuna needs filling, especially because the development of a post-national, cosmopolitan society problematises existing pedagogical models. In this paper we argue that cosmopolitanism requires a pedagogy that is centred on the Lack and the mythic figure of the Trickster. We explore this through an analysis of children’s stories, including Benjamin’s radio broadcasts for children, Sesame Street and Dr Seuss.

Relevance: 30.00%

Abstract:

Bayesian methods offer a flexible and convenient probabilistic learning framework to extract interpretable knowledge from complex and structured data. Such methods can characterize dependencies among multiple levels of hidden variables and share statistical strength across heterogeneous sources. In the first part of this dissertation, we develop two dependent variational inference methods for full posterior approximation in non-conjugate Bayesian models through hierarchical mixture- and copula-based variational proposals, respectively. The proposed methods move beyond the widely used factorized approximation to the posterior and provide generic applicability to a broad class of probabilistic models with minimal model-specific derivations. In the second part of this dissertation, we design probabilistic graphical models to accommodate multimodal data, describe dynamical behaviors and account for task heterogeneity. In particular, the sparse latent factor model is able to reveal common low-dimensional structures from high-dimensional data. We demonstrate the effectiveness of the proposed statistical learning methods on both synthetic and real-world data.

Relevance: 30.00%

Abstract:

To provide biological insights into transcriptional regulation, a couple of groups have recently presented models relating the transcription factors (TFs) bound to promoter DNA to the downstream gene’s mean transcript level or transcript production rate over time. However, transcript production is dynamic, responding to changes in TF concentrations over time. Also, TFs are not the only factors binding to promoters; other DNA-binding factors (DBFs) bind as well, especially nucleosomes, resulting in competition between DBFs for binding at the same genomic location. Additionally, elements other than TFs regulate transcription. Within the core promoter, various regulatory elements influence RNAPII recruitment, PIC formation, RNAPII searching for the TSS, and RNAPII initiating transcription. Moreover, it has been proposed that, downstream from the TSS, nucleosomes resist RNAPII elongation.

Here, we provide a machine learning framework to predict transcript production rates from DNA sequences. We applied this framework in the yeast S. cerevisiae for two scenarios: a) to predict the dynamic transcript production rate during the cell cycle for native promoters; b) to predict the mean transcript production rate over time for synthetic promoters. As far as we know, our framework is the first successful attempt at a model that can predict dynamic transcript production rates from DNA sequences alone: on the cell cycle data set, we obtained a Pearson correlation coefficient Cp = 0.751 and a coefficient of determination r2 = 0.564 on the test set when predicting the dynamic transcript production rate over time. Also, for the DREAM6 Gene Promoter Expression Prediction challenge, our fitted model outperformed all participating teams, as well as a model combining the best team’s k-mer-based sequence features with another paper’s biologically mechanistic features, on all scoring metrics.

Moreover, our framework shows its capability of identifying generalizable features by interpreting the highly predictive models, and thereby provides support for associated hypothesized mechanisms of transcriptional regulation. With the learned sparse linear models, we obtained results supporting the following biological insights: a) TFs govern the probability of RNAPII recruitment and initiation, possibly through interactions with PIC components and transcription cofactors; b) the core promoter amplifies transcript production, probably by influencing PIC formation, RNAPII recruitment, DNA melting, RNAPII searching for and selecting the TSS, releasing RNAPII from general transcription factors, and thereby initiation; c) there is strong transcriptional synergy between TFs and core promoter elements; d) the regulatory elements within the core promoter region are more than the TATA box and the nucleosome-free region, suggesting the existence of still unidentified TAF-dependent and cofactor-dependent core promoter elements in the yeast S. cerevisiae; e) nucleosome occupancy is helpful for representing the regulatory roles of the +1 and -1 nucleosomes on transcription.
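The k-mer sequence features mentioned in connection with the DREAM6 comparison are straightforward to compute. A hedged sketch follows (the example sequence and the choice k = 3 are arbitrary; the sparse linear model trained on top of such features, e.g. L1-regularised regression, is omitted):

```python
from itertools import product

import numpy as np

def kmer_counts(seq, k=3):
    """Count occurrences of every length-k DNA word in `seq`.

    Returns a fixed-length vector (4**k entries) usable as features
    for a linear model of transcript production rate.
    """
    alphabet = "ACGT"
    index = {"".join(p): i for i, p in enumerate(product(alphabet, repeat=k))}
    counts = np.zeros(len(index))
    for i in range(len(seq) - k + 1):
        word = seq[i:i + k]
        if word in index:              # skip ambiguous bases such as 'N'
            counts[index[word]] += 1
    return counts

features = kmer_counts("TATAAAGGC", k=3)   # 7 overlapping 3-mers
```

Stacking such vectors for many promoters yields the design matrix for sparse linear models whose nonzero weights are then interpreted as above.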

Relevance: 30.00%

Abstract:

The paper reports on a study of design studio culture from a student perspective. Learning in design studio culture has been theorised variously as a signature pedagogy emulating professional practice models, as a community of practice and as a form of problem-based learning, all largely based on the study of teaching events in studio. The focus of this research extended beyond formally recognised activities to encompass the students’ experience of their social and community networks, working places and study set-ups, to examine how these have contributed to studio culture and how they have been supported by studio teaching. Semi-structured interviews with final-year undergraduate students of architecture formed the basis of the study, using an interpretivist approach informed by Actor-network theory, with studio culture as the focal actor, enrolling students and engaging with other actors, together constituting an actor-network of studio culture. The other actors included social community patterns and activities; the numerous working spaces (including but not limited to the studio space itself); the equipment, tools of trade and material prerequisites for working; the portfolio, enrolling the other actors to produce work for it; and the various formal and informal events associated with the course itself. Studio culture is a highly charged social arena: the question is how, and in particular which aspects of it, support learning. Theoretical models of situated learning and communities of practice informed the analysis, with Bourdieu’s theory of practice, and his interrelated concepts of habitus, field and capital, providing a means of relating individually acquired habits and modes of working to social contexts. Bourdieu’s model of habitus involves the externalisation through the social realm of habits and knowledge previously internalised.
It is therefore a useful model for considering individual learning activities as a whole, alongside shared repertoires and practices located in the social realm. The social milieu of the studio provides a scene for the exercise and display of ‘practicing’ and the accumulation of a form of ‘practicing-capital’. This capital is a property of the social milieu rather than the space, so working or practicing in the company of others (in space and through social media) becomes a more valued aspect of studio than space or facilities alone. This practicing-capital involves the acquisition of a habitus of studio culture, with the transformation of physical practices or habits into social dispositions, acquiring social capital (driving the social milieu) and cultural capital (practicing-knowledge) in the process. The research drew on students’ experiences, and on their practicing ‘getting a feel for the game’ by exploring the limits or boundaries of the field of studio culture. The research demonstrated that a notional studio community was in effect a social context for supporting learning: a range of settings in which to explore and test out newly internalised knowledge, and to demonstrate or display ideas, modes of thinking and practicing. The study presents a nuanced interpretation of how students relate to a studio culture that involves a notional community, and a developing habitus within a field of practicing that extends beyond teaching scenarios.

Relevance: 30.00%

Abstract:

This study is guided by the hypothesis that framing the same educational objective as either cooperative or collaborative learning in university teaching does not affect students’ perceptions of the learning model. It analyses the reflections of two groups of engineering students that shared the same educational goals, implemented through two different active learning strategies: simulation as a cooperative learning strategy and problem-based learning as a collaborative one. Despite the different number of participants per group (eighty-five and sixty-five, respectively) and the use of two different active learning strategies, collaborative and cooperative, the results showed no differences from a qualitative perspective.

Relevance: 30.00%

Abstract:

Video games have become one of the largest entertainment industries, and their power to capture the attention of players worldwide soon prompted the idea of using games to improve education. However, these educational games, commonly referred to as serious games, face different challenges when brought into the classroom, ranging from pragmatic issues (e.g. a high development cost) to deeper educational issues, including a lack of understanding of how the students interact with the games and how the learning process actually occurs. This chapter explores the potential of data-driven approaches to improve the practical applicability of serious games. Existing work done by the entertainment and learning industries helps to build a conceptual model of the tasks required to analyze player interactions in serious games (gaming learning analytics or GLA). The chapter also describes the main ongoing initiatives to create reference GLA infrastructures and their connection to new emerging specifications from the educational technology field. Finally, it explores how this data-driven GLA will help in the development of a new generation of more effective educational games and new business models that will support their expansion. This results in additional ethical implications, which are discussed at the end of the chapter.

Relevance: 30.00%

Abstract:

For a structural engineer, effective communication and interaction with architects is a key skill for success throughout their professional career, and its importance cannot be overestimated. Structural engineers and architects have to share a common language and an understanding of each other in order to achieve the most desirable architectural and structural designs. This interaction and engagement develops during their professional careers but needs to be nurtured during their undergraduate studies. The objective of this paper is to present the strategies employed to engage higher-order thinking in structural engineering students in order to help them solve complex problem-based learning (PBL) design scenarios presented by architecture students. The strategies were applied in the experimental setting of an undergraduate module in structural engineering at Queen’s University Belfast in the UK. They comprised active learning to engage with content knowledge, the use of physical conceptual structural models to reinforce key concepts and, finally, reinforcing the need for hand sketching of ideas to promote higher-order problem-solving. The strategies were evaluated through a student survey, student feedback and module facilitator (this author’s) reflection. They were qualitatively assessed by the tutor and quantitatively evaluated by students in a cross-sectional study as helping interaction with the architecture students, aiding interdisciplinary learning and helping students solve problems creatively (through higher-order thinking). The students clearly enjoyed this module, and in particular interacting with structural engineering tutors and students from another discipline.

Relevance: 30.00%

Abstract:

Thesis (Master's)--University of Washington, 2016-08

Relevance: 30.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance: 30.00%

Abstract:

The continuous advancement in computing, together with the decline in its cost, has resulted in technology becoming ubiquitous (Arbaugh, 2008; Gros, 2007). Technology is growing and is part of our lives in almost every respect, including the way we learn. Technology helps to collapse time and space in learning. For example, technology allows learners to engage with their instructors synchronously, in real time, and also asynchronously, by enabling sessions to be recorded. Space and distance are no longer an issue provided there is adequate bandwidth, which determines the most appropriate format, such as text, audio or video. Technology has revolutionised, and continues to revolutionise, the way learners learn, courses are designed and ‘lessons’ are delivered. The learning process can be made vastly more efficient, as learners have knowledge at their fingertips, and unfamiliar concepts can be searched for and an explanation found in seconds. Technology has also enabled learning to be more flexible, as learners can learn anywhere, at any time, and using different formats, e.g. text or audio. From the perspective of instructors and L&D providers, technology offers these same advantages, plus easy scalability. Administratively, preparatory work can be undertaken more quickly even as student numbers grow. Learners from far-off and new locations can be easily accommodated. In addition, many technologies can be easily scaled to accommodate new functionality and/or other new technologies. ‘Designing and Developing Digital and Blended Learning Solutions’ (5DBS) has been developed in recognition of the growing importance of technology in L&D. This unit contains four learning outcomes, each with two assessment criteria, as in all other units, except Learning Outcome 3, which has three assessment criteria.
The four learning outcomes in this unit are:
• Learning Outcome 1: Understand current digital technologies and their contribution to learning and development solutions;
• Learning Outcome 2: Be able to design blended learning solutions that make appropriate use of new technologies alongside more traditional approaches;
• Learning Outcome 3: Know about the processes involved in designing and developing digital learning content efficiently and what makes for engaging and effective digital learning content;
• Learning Outcome 4: Understand the issues involved in the successful implementation of digital and blended learning solutions.
Each learning outcome has its own chapter, and each assessment criterion is allocated its own section within the respective chapter. This first chapter addresses the first learning outcome, which has two assessment criteria: summarise the range of currently available learning technologies; and critically assess a learning requirement to determine the contribution that could be made through the use of learning technologies. The introduction to Chapter 1 is in Section 1.0. Chapter 2 discusses the design of blended learning solutions, considering how digital learning technologies may support face-to-face and online delivery. Three sets of learning theory (behaviourism, cognitivism, constructivism) are introduced, and the implications of each for instructional design for blended learning are discussed. Chapter 3 centres on how relevant digital learning content may be created. This chapter includes a review of the key roles, tools and processes involved in developing digital learning content. Finally, Chapter 4 concerns the delivery and implementation of digital and blended learning solutions. This chapter surveys the key formats and models used to inform the configuration of virtual learning environment software platforms.
In addition, various software technologies which may be important in creating a VLE ecosystem that helps to enhance the learning experience are outlined. We introduce the notion of the personal learning environment (PLE), which has emerged from the democratisation of learning. We also review the roles, tools, standards and processes that L&D practitioners need to consider in the delivery and implementation of digital and blended learning solutions.