817 results for flexible learning space
Abstract:
The theoretical framework of this study starts from the evidence that work occupies an important place in people's lives: most people work and spend a large part of their time inside organizations. However, the relationship between people and work has become increasingly strained, as employees have begun to complain about work routines, stress, the underuse of their potential, and inadequate working conditions, as observed in the studies of Dejours (1994). Thus, as a contribution to the literature on quality of work life (QWL), the research developed here aimed to characterize the quality of work life of the public employees of EMATER-RN, taking as reference a research instrument synthesized from the academic literature on the subject. The synthesis of a broader instrument is a need not yet met by the literature, but one already noted by several studies, such as Moraes et al. (1990); Rodrigues (1989); Siqueira & Coleta (1989); Moraes et al. (1992); Carvalho & Souza (2003); El-Aouar & Souza (2003); Mourão, Kilimnick & Fernandes (2005); and Adorno, Marques & Borges (2005), among others. These studies point out weaknesses of the existing models in the QWL literature and recommend the elaboration of a more flexible model that takes Brazilian cultural characteristics into account and covers all the variables studied in the main existing models. To reach this objective, the methodology adopted was a case study with data collected in both qualitative and quantitative form. Questionnaires and observations were used as sources of evidence. The evidence was tabulated with the statistical package SPSS (Statistical Package for the Social Sciences), and the main multivariate technique used was factor analysis. As for the results, the quality of work life indicators grouped into 11 factors: work execution, individual accomplishment, work equity, relation between individual and organization, work organization, adequacy of remuneration, relation between supervisor and subordinate, effectiveness of communication and learning, relation between work and personal life, participation, and effectiveness of work processes. Regarding the characterization of quality of work life at EMATER-RN, it became clear that as the evaluation of satisfaction with QWL in the organization moves from intrinsic to extrinsic factors, the level of satisfaction decreases, which points to the importance of improving these extrinsic factors in the institution. In summary, it is possible to conclude that the organization studied offers a significant set of variables related to the quality of work life of the individual.
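As an illustrative aside, not taken from the study above: the abstract describes an exploratory factor analysis of questionnaire indicators run in SPSS; a minimal sketch of the same kind of analysis in Python with scikit-learn, using a synthetic placeholder response matrix, might look as follows (all data and dimensions are assumptions).

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical (respondents x indicators) matrix of Likert-scale answers (1-5).
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(120, 40)).astype(float)

# Standardize items so no single indicator dominates because of its scale.
z = (responses - responses.mean(axis=0)) / responses.std(axis=0)

# Extract 11 factors, mirroring the number of factors reported in the abstract.
fa = FactorAnalysis(n_components=11, rotation="varimax", random_state=0)
scores = fa.fit_transform(z)          # factor scores per respondent
loadings = fa.components_.T           # (indicators x factors) loading matrix

# Inspect which indicators load most strongly on each factor.
for k in range(loadings.shape[1]):
    top = np.argsort(-np.abs(loadings[:, k]))[:3]
    print(f"factor {k + 1}: strongest indicators {top.tolist()}")
```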
Abstract:
The size of online image datasets is constantly increasing. Considering an image dataset with millions of images, image retrieval becomes a seemingly intractable problem for exhaustive similarity search algorithms. Hashing methods, which encode high-dimensional descriptors into compact binary strings, have become very popular because of their high efficiency in both search and storage. In the first part, we propose a multimodal retrieval method based on latent feature models. The procedure consists of a nonparametric Bayesian framework for learning underlying semantically meaningful abstract features in a multimodal dataset, a probabilistic retrieval model that allows cross-modal queries, and an extension model for relevance feedback. In the second part, we focus on supervised hashing with kernels. We describe a flexible hashing procedure that treats binary codes and pairwise semantic similarity as latent and observed variables, respectively, in a probabilistic model based on Gaussian processes for binary classification. We present a scalable inference algorithm with the sparse pseudo-input Gaussian process (SPGP) model and distributed computing. In the last part, we define an incremental hashing strategy for dynamic databases where new images are added to the databases frequently. The method is based on a two-stage classification framework using binary and multi-class SVMs. The proposed method also enforces balance in the binary codes through an imbalance penalty, to obtain higher-quality codes. We learn hash functions with an efficient algorithm in which the NP-hard problem of finding optimal binary codes is solved via cyclic coordinate descent and the SVMs are trained in a parallelized, incremental manner. For modifications such as adding images from an unseen class, we propose an incremental procedure for effective and efficient updates to the previous hash functions. Experiments on three large-scale image datasets demonstrate that the incremental strategy is capable of efficiently updating the hash functions to reach the same retrieval performance as hashing from scratch.
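For orientation only, and not the GP-based or SVM-based method of this thesis: the general hashing idea described above (compact binary codes searched by Hamming distance) can be illustrated with a plain random-projection (LSH) baseline; all names and parameters below are placeholders.

```python
import numpy as np

def fit_lsh(descriptors, n_bits, seed=0):
    """Fit a random-projection (LSH) hash: one random hyperplane per bit."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((descriptors.shape[1], n_bits))  # projections
    b = -descriptors.mean(axis=0) @ W                        # centre the data
    return W, b

def encode(descriptors, W, b):
    """Binary codes: sign of each projection, one uint8 per bit."""
    return (descriptors @ W + b > 0).astype(np.uint8)

def hamming_rank(query_code, db_codes):
    """Rank database items by Hamming distance to the query code."""
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(dists)

# Example: rank 10,000 synthetic 128-D descriptors against one query.
db = np.random.default_rng(1).standard_normal((10_000, 128))
W, b = fit_lsh(db, n_bits=64)
codes = encode(db, W, b)
ranking = hamming_rank(encode(db[:1], W, b)[0], codes)
```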
Abstract:
Nowadays, the new generation of computers provides a level of performance that makes it possible to build computationally expensive computer vision applications for mobile robotics. Building a map of the environment is a common task for a robot and an essential step in allowing robots to move through these environments. Traditionally, mobile robots have used a combination of several sensors based on different technologies. Lasers, sonars, and contact sensors have typically been used in mobile robotic architectures; however, color cameras are an important sensor because we want robots to use the same information that humans use to sense and move through different environments. Color cameras are cheap and flexible, but a lot of work needs to be done to give robots sufficient visual understanding of the scenes. Computer vision algorithms are computationally complex, but nowadays robots have access to different and powerful architectures that can be used for mobile robotics purposes. The advent of low-cost RGB-D sensors like the Microsoft Kinect, which provide 3D colored point clouds at high frame rates, made computer vision even more relevant in the mobile robotics field. The combination of visual and 3D data allows systems to use both computer vision and 3D processing and therefore to be aware of more details of the surrounding environment. The research described in this thesis was motivated by the need for scene mapping. Being aware of the surrounding environment is a key feature in many mobile robotics applications, from simple robotic navigation to complex surveillance applications. In addition, the acquisition of a 3D model of a scene is useful in many areas, such as scene modeling for video games, where well-known places are reconstructed and added to the game, or advertising, where, once the 3D model of a room is obtained, the system can add furniture using augmented reality techniques. In this thesis we perform an experimental study of state-of-the-art registration methods to find which one best fits our scene mapping purposes. Different methods are tested and analyzed on scenes with different distributions of visual and geometric appearance. In addition, this thesis proposes two methods for 3D data compression and representation of 3D maps. Our 3D representation proposal is based on the Growing Neural Gas (GNG) method. This kind of Self-Organizing Map (SOM) has been successfully used for clustering, pattern recognition, and topology representation of various kinds of data. Until now, Self-Organizing Maps have been computed primarily offline, and their application to 3D data has mainly focused on noise-free models without considering time constraints. Self-organising neural models have the ability to provide a good representation of the input space. In particular, the Growing Neural Gas (GNG) is a suitable model because of its flexibility, rapid adaptation, and excellent quality of representation. However, this type of learning is time consuming, especially for high-dimensional input data. Since real applications often work under time constraints, it is necessary to adapt the learning process in order to complete it within a predefined time. This thesis proposes a hardware implementation that leverages the computing power of modern GPUs, taking advantage of the paradigm known as General-Purpose Computing on Graphics Processing Units (GPGPU). Our proposed geometric 3D compression method seeks to reduce the 3D information by using plane detection as the basic structure for compressing the data. This is because our target environments are man-made and therefore contain many points that belong to planar surfaces. Our proposed method achieves good compression results in such man-made scenarios. The detected and compressed planes can also be used in other applications, such as surface reconstruction or plane-based registration algorithms. Finally, we have also demonstrated the benefits of GPU technology by obtaining a high-performance implementation of a common CAD/CAM technique called virtual digitizing.
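As a hedged sketch of the plane-detection idea underlying the geometric compression (not the thesis implementation), a basic RANSAC plane fit over a point cloud could look like the following; thresholds, iteration counts, and the compression comment are assumptions for illustration.

```python
import numpy as np

def ransac_plane(points, n_iters=200, dist_thresh=0.02, seed=0):
    """Fit a dominant plane n.x + d = 0 to an (N, 3) point cloud with RANSAC."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        p = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ p[0]
        inliers = np.abs(points @ n + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers

# Compression idea (assumed): replace the inlier points by the plane parameters
# plus a 2D boundary, keeping only the off-plane points at full precision.
```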
Abstract:
We present a new radiation scheme for the Oxford Planetary Unified Model System for Venus, suitable for the solar and thermal bands. This new and fast radiative parameterization uses a different approach in the two main radiative wavelength bands: solar radiation (0.1–5.5 μm) and thermal radiation (1.7–260 μm). The solar radiation calculation is based on the delta-Eddington approximation (two-stream-type) with an adding-layer method. For the thermal radiation case, a code based on an absorptivity/emissivity formulation is used. The new radiative transfer formulation is intended to be computationally light, to allow its incorporation in 3D global circulation models, while still allowing for the calculation of the effect of atmospheric conditions on radiative fluxes. This will allow us to investigate dynamical-radiative-microphysical feedbacks. The model's flexibility can also be used to explore uncertainties in the Venus atmosphere, such as the optical properties of the deep atmosphere or the cloud amount. Results for radiative cooling and heating rates and for the global-mean radiative-convective equilibrium temperature profiles under different atmospheric conditions are presented and discussed. This new scheme works on an atmospheric column and can be easily implemented in 3D Venus global circulation models. (C) 2014 Elsevier Ltd. All rights reserved.
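For reference, and not taken from this abstract: the delta-Eddington approximation it mentions is conventionally the scaling of Joseph, Wiscombe & Weinman (1976), in which the optical depth τ, single-scattering albedo ω, and asymmetry factor g are rescaled after removing the forward-scattering peak fraction f = g².

```latex
\tau' = (1 - \omega f)\,\tau, \qquad
\omega' = \frac{(1 - f)\,\omega}{1 - \omega f}, \qquad
g' = \frac{g - f}{1 - f}, \qquad f = g^{2}.
```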
Abstract:
This study highlights the importance of cognition-affect interaction pathways in the construction of mathematical knowledge. Scientific output demands further research on the conceptual structure underlying such interaction, aimed at coping with the high complexity of its interpretation. The paper discusses the effectiveness of using a dynamic model such as that outlined in the Mathematical Working Spaces (MWS) framework to describe the interplay between cognition and affect in the transitions from instrumental to discursive geneses in geometrical reasoning. The results, based on empirical data from a teaching experiment at a middle school, show that the use of dynamic geometry software favours students' attitudinal and volitional dimensions and helps them maintain productive affective pathways, affording greater intellectual independence in mathematical work and in interaction with the context, which impacts learning opportunities in geometric proofs. The reflective and heuristic dimensions of teacher mediation in students' learning are crucial in the transition from instrumental to discursive genesis and for working stability in the Instrumental-Discursive plane of MWS.
Abstract:
Within academic institutions, writing centers are uniquely situated, socially rich sites for exploring learning and literacy. I examine the work of the Michigan Tech Writing Center's UN 1002 World Cultures study teams primarily because student participants and Writing Center coaches are actively engaged in structuring their own learning and meaning-making processes. My research reveals that learning is closely linked to identity formation and leading the teams is an important component of the coaches' educational experiences. I argue that supporting this type of learning requires an expanded understanding of literacy and significant changes to how learning environments are conceptualized and developed. This ethnographic study draws on data collected from recordings and observations of one semester of team sessions, my own experiences as a team coach and UN 1002 teaching assistant, and interviews with Center coaches prior to their graduation. I argue that traditional forms of assessment and analysis emerging from individualized instruction models of learning cannot fully account for the dense configurations of social interactions identified in the Center's program. Instead, I view the Center as an open system and employ social theories of learning and literacy to uncover how the negotiation of meaning in one context influences and is influenced by structures and interactions within as well as beyond its boundaries. I focus on the program design, its enaction in practice, and how engagement in this type of writing center work influences coaches' learning trajectories. I conclude that, viewed as participation in a community of practice, the learning theory informing the program design supports identity formation, a key aspect of learning as argued by Etienne Wenger (1998). The findings of this study challenge misconceptions of peer learning both in writing centers and higher education that relegate peer tutoring to the role of support for individualized models of learning. Instead, this dissertation calls for consideration of new designs that incorporate peer learning as an integral component. Designing learning contexts that cultivate and support the formation of new identities is complex, involves a flexible and opportunistic design structure, and requires the availability of multiple forms of participation and connections across contexts.
Abstract:
A recommender system is a specific type of intelligent system that exploits historical user ratings on items and/or auxiliary information to make recommendations on items to users. It plays a critical role in a wide range of online shopping, e-commerce, and social networking applications. Collaborative filtering (CF) is the most popular approach used for recommender systems, but it suffers from the complete cold start (CCS) problem, where no rating records are available, and the incomplete cold start (ICS) problem, where only a small number of rating records are available for some new items or users in the system. In this paper, we propose two recommendation models to solve the CCS and ICS problems for new items, based on a framework that tightly couples a CF approach with a deep learning neural network. A specific deep neural network, SADE, is used to extract the content features of the items. The state-of-the-art CF model timeSVD++, which models and utilizes the temporal dynamics of user preferences and item features, is modified to take the content features into account when predicting ratings for cold start items. Extensive experiments on a large Netflix rating dataset of movies show that our proposed recommendation models largely outperform the baseline models for rating prediction of cold start items. The two proposed recommendation models are also evaluated and compared on ICS items, and a flexible scheme of model retraining and switching is proposed to deal with the transition of items from cold start to non-cold start status. The experimental results on Netflix movie recommendation show that the tight coupling of a CF approach and a deep learning neural network is feasible and very effective for cold start item recommendation. The design is general and can be applied to many other recommender systems for online shopping and social networking applications. Solving the cold start item problem can largely improve user experience and trust in recommender systems, and effectively promote cold start items.
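As a rough, hypothetical sketch only (the abstract's retraining-and-switching scheme and the SADE/timeSVD++ coupling are more elaborate), one simple way to handle the transition from cold start to non-cold start is to blend a content-based score with a CF score as ratings accumulate; every name, API, and parameter below is assumed for illustration.

```python
import numpy as np

def predict_rating(user_vec, item_id, item_content_feats, cf_model,
                   n_ratings, cold_threshold=10):
    """Blend a content-based score with a CF score for cold start items.

    user_vec           : latent user preference vector (hypothetical, shape (k,))
    item_content_feats : content feature vector from an encoder (hypothetical, shape (k,))
    cf_model           : object with .predict(user_vec, item_id) -> float (hypothetical API)
    n_ratings          : number of ratings the item has received so far
    """
    content_score = float(np.dot(user_vec, item_content_feats))  # CCS fallback
    if n_ratings == 0:
        return content_score
    cf_score = cf_model.predict(user_vec, item_id)
    # ICS: weight the CF score more as ratings accumulate; switch fully past the threshold.
    w = min(n_ratings / cold_threshold, 1.0)
    return w * cf_score + (1.0 - w) * content_score
```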
Abstract:
Integration, inclusion, and equity constitute fundamental dimensions of democracy in post-World War II societies and their institutions. The study presented here reports upon the ways in which individuals and institutions both use and account for the roles that technologies, including ICT, play in disabling and enabling access for learning in higher education for all. Technological innovations during the 20th and 21st centuries, including ICT, have been heralded as holding significant promise for revolutionizing issues of access in societal institutions like schools, healthcare services, etc. (at least in the global North). Taking a socially oriented perspective, the study presented in this paper focuses on an ethnographically framed analysis of two datasets that critically explores the role that technologies, including ICT, play in higher education for individuals who are “differently abled” and who constitute a variation on a continuum of capabilities. Functionality as a dimension of everyday life in higher education in the 21st century is explored through the analysis of (i) case studies of two “differently abled” students in Sweden and (ii) current support services at universities in Sweden. The findings make visible the work that institutions and their members do through analyses of the organization of time and space and the use of technologies in institutional settings against the backdrop of individuals’ accountings and life trajectories. This study also highlights the relevance of multi-scale data analyses for revisiting the ways in which identity positions become framed or understood within higher education.
Abstract:
The "Program in Rural Education" is presented to the national and international community as a high-level graduate program whose purpose is to improve the quality of formal and informal educational processes in the area through an innovative pedagogical model that includes itinerancy along the isthmus, as well as on-site and distance learning with a virtual component. Furthermore, this program bases its practice on the development of a flexible model, on inclusiveness, and on a differentiated methodological and evaluative proposal that recovers the knowledge produced in every country around the issue of rural education.
Abstract:
The inclusion of online elements in learning environments is becoming commonplace in Post Compulsory Education. A variety of research into the value of such elements is available, and this study aims to add further evidence by looking specifically at the use of collaborative technologies such as online discussion forums and wikis to encourage higher order thinking and self-sufficient learning. In particular, the research examines existing pedagogical models including Salmon’s five-stage model, along with other relevant literature. A case study of adult learners in community-based learning centres forms the basis of the research, and as a result of the findings, an arrow model is suggested as a framework for online collaboration that emphasises the learner, mentions pre-course preparation and then includes three main phases of activity: post, interact and critique. This builds on Salmon’s five-stage model and has the benefit of being flexible and responsive, as well as allowing for further development beyond the model, particularly in a blended learning environment.
Abstract:
This paper presents a best-practice model for the redesign of virtual learning environments (VLEs) within creative arts to augment blended learning. In considering a blended learning best-practice model, three factors should be considered: the conscious and active human intervention, good learning design and pedagogical input, and the sensitive handling of the process by trained professionals. This study is based on a comprehensive VLE content analysis conducted across two academic schools within the creative arts at one Post-92 higher education (HE) institution. It was found that four main barriers affect the use of the VLE within creative arts: lack of flexibility in relation to navigation and interface, time in developing resources, competency level of tutors (confidence in developing online resources balanced against other flexible open resources) and factors affecting the engagement of ‘digital residents’. The experimental approach adopted in this study involved a partnership between the learning technology advisor and academic staff, which resulted in a VLE best-practice model that focused directly on improving aesthetics and navigation. The approach adopted in this study allowed a purposive sample of academic staff to engage as participants, stepping back cognitively from their routine practices in relation to their use of the VLE and questioning approaches to how they embed the VLE to support teaching and learning. The model presented in this paper identified a potential solution to overcome the challenges of integrating the VLE within creative arts. The findings of this study demonstrate positive impact on staff and student experience and provide a sustainable model of good practice for the redesign of the VLE within creative disciplines.
Abstract:
This article explores academics’ writing practices, focusing on the ways in which they use digital platforms in their processes of collaborative learning. It draws on interview data from a research project that has involved working closely with academics across different disciplines and institutions to explore their writing practices, understanding academic literacies as situated social practices. The article outlines the characteristics of academics’ ongoing professional learning, demonstrating the importance of collaborations on specific projects in generating learning in relation to using digital platforms and for sharing and collaborating on scholarly writing. A very wide range of digital platforms have been identified by these academics, enabling new kinds of collaboration across time and space on writing and research; but challenges around online learning are also identified, particularly the dangers of engaging in learning in public, the pressures of ‘always-on’-ness and the different values systems around publishing in different forums.
Abstract:
Embedding intelligence in extreme edge devices allows distilling raw data acquired from sensors into actionable information directly on IoT end-nodes. This computing paradigm, in which end-nodes no longer depend entirely on the Cloud, offers undeniable benefits and has driven a large research area (TinyML) aimed at deploying leading Machine Learning (ML) algorithms on microcontroller-class devices. To fit the limited memory storage of these tiny platforms, full-precision Deep Neural Networks (DNNs) are compressed by representing their data in byte and sub-byte integer formats, yielding Quantized Neural Networks (QNNs). However, the current generation of microcontroller systems can barely cope with the computing requirements of QNNs. This thesis tackles the challenge from many perspectives, presenting solutions at both the software and hardware levels and exploiting parallelism, heterogeneity, and software programmability to guarantee high flexibility and high energy-performance proportionality. The first contribution, PULP-NN, is an optimized software computing library for QNN inference on parallel ultra-low-power (PULP) clusters of RISC-V processors, showing an order of magnitude improvement in performance and energy efficiency compared to current State-of-the-Art (SoA) STM32 microcontroller systems (MCUs) based on ARM Cortex-M cores. The second contribution is XpulpNN, a set of RISC-V domain-specific instruction set architecture (ISA) extensions for sub-byte integer arithmetic. The solution, including the ISA extensions and the micro-architecture that supports them, achieves energy efficiency comparable with dedicated DNN accelerators and surpasses the efficiency of SoA ARM Cortex-M based MCUs, such as the low-end STM32M4 and the high-end STM32H7 devices, by up to three orders of magnitude. To overcome the Von Neumann bottleneck while guaranteeing the highest flexibility, the final contribution integrates an Analog In-Memory Computing accelerator into the PULP cluster, creating a fully programmable heterogeneous fabric that demonstrates end-to-end inference of SoA MobileNetV2 models, showing two orders of magnitude performance improvement over current SoA analog/digital solutions.
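As a language-agnostic illustration of the sub-byte integer arithmetic such QNN kernels rely on, and not the PULP-NN or XpulpNN code (which targets RISC-V in C/assembly), here is a small Python sketch of packing, unpacking, and dotting int4 vectors.

```python
import numpy as np

def pack_int4(values):
    """Pack signed 4-bit integers, two per byte, low nibble first."""
    v = (np.asarray(values, dtype=np.int8) & 0x0F).astype(np.uint8)
    if len(v) % 2:
        v = np.concatenate([v, np.zeros(1, dtype=v.dtype)])
    return (v[0::2] | (v[1::2] << 4)).astype(np.uint8)

def unpack_int4(packed, length):
    """Unpack to signed 4-bit integers in the range [-8, 7]."""
    lo = (packed & 0x0F).astype(np.int8)
    hi = ((packed >> 4) & 0x0F).astype(np.int8)
    v = np.empty(2 * len(packed), dtype=np.int8)
    v[0::2], v[1::2] = lo, hi
    v = np.where(v > 7, v - 16, v)          # sign-extend each nibble
    return v[:length]

def int4_dot(packed_a, packed_b, length):
    """Dot product of two int4-packed vectors, accumulated in int32."""
    a = unpack_int4(packed_a, length).astype(np.int32)
    b = unpack_int4(packed_b, length).astype(np.int32)
    return int(np.dot(a, b))

# Example: 3*1 + (-2)*5 + 7*(-4) + (-8)*2 + 1*(-6) = -57
a = [3, -2, 7, -8, 1]
b = [1,  5, -4, 2, -6]
print(int4_dot(pack_int4(a), pack_int4(b), len(a)))
```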
Abstract:
The study of random probability measures is a lively research topic that has attracted interest from different fields in recent years. In this thesis, we consider random probability measures in the context of Bayesian nonparametrics, where the law of a random probability measure is used as a prior distribution, and in the context of distributional data analysis, where the goal is to perform inference given a sample from the law of a random probability measure. The contributions in this thesis can be subdivided according to three topics: (i) the use of almost surely discrete repulsive random measures (i.e., whose support points are well separated) for Bayesian model-based clustering, (ii) the proposal of new laws for collections of random probability measures for Bayesian density estimation of partially exchangeable data subdivided into different groups, and (iii) the study of principal component analysis and regression models for probability distributions seen as elements of the 2-Wasserstein space. Specifically, for point (i) we propose an efficient Markov chain Monte Carlo algorithm for posterior inference that sidesteps the need for split-merge reversible jump moves, which are typically associated with poor performance; we propose a model for clustering high-dimensional data by introducing a novel class of anisotropic determinantal point processes; and we study the distributional properties of the repulsive measures, shedding light on important theoretical results that enable more principled prior elicitation and more efficient posterior simulation algorithms. For point (ii), we consider several models suitable for clustering homogeneous populations, inducing spatial dependence across groups of data, and extracting the characteristic traits common to all the data groups, and we propose a novel vector autoregressive model to study the growth curves of Singaporean children. Finally, for point (iii), we propose a novel class of projected statistical methods for distributional data analysis for measures on the real line and on the unit circle.
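A small illustrative aside related to point (iii), not code from the thesis: for measures on the real line the 2-Wasserstein distance reduces to an L2 distance between quantile functions, which can be estimated directly from samples; the grid size below is an assumption.

```python
import numpy as np

def wasserstein2_1d(sample_a, sample_b, n_grid=512):
    """2-Wasserstein distance between two 1D samples via empirical quantiles.

    On the real line, W2(mu, nu)^2 = int_0^1 (F_mu^{-1}(t) - F_nu^{-1}(t))^2 dt,
    so the distance is the L2 distance between the two quantile functions.
    """
    t = (np.arange(n_grid) + 0.5) / n_grid
    qa = np.quantile(sample_a, t)
    qb = np.quantile(sample_b, t)
    return float(np.sqrt(np.mean((qa - qb) ** 2)))

# Example with two synthetic samples.
rng = np.random.default_rng(0)
print(wasserstein2_1d(rng.normal(0, 1, 5000), rng.normal(1, 1, 5000)))
```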
Abstract:
The Cherenkov Telescope Array (CTA) will be the next-generation ground-based observatory for studying the universe in the very-high-energy domain. The observatory will rely on a Science Alert Generation (SAG) system to analyze the real-time data from the telescopes and generate science alerts. The SAG system will play a crucial role in the search for and follow-up of transients from external alerts, enabling multi-wavelength and multi-messenger collaborations. It will maximize the potential for the detection of the rarest phenomena, such as gamma-ray bursts (GRBs), which are the science case for this study. This study presents an anomaly detection method based on deep learning for detecting gamma-ray burst events in real time. The performance of the proposed method is evaluated and compared against the standard Li & Ma technique in two use cases, serendipitous discoveries and follow-up observations, using short exposure times. The method shows promising results in detecting GRBs and is flexible enough to allow real-time searches for transient events on multiple time scales. The method assumes neither a background model nor a source model and does not require a minimum number of photon counts to perform the analysis, making it well suited for real-time analysis. Future improvements involve further tests, relaxing some of the assumptions made in this study, and post-trials correction of the detection significance. Moreover, the ability to detect other transient classes in different scenarios must be investigated for completeness. The system can be integrated within the SAG system of CTA and deployed on the on-site computing clusters. This would provide valuable insights into the method's performance in a real-world setting and offer another valuable tool for discovering new transient events in real time. Overall, this study makes a significant contribution to the field of astrophysics by demonstrating the effectiveness of deep-learning-based anomaly detection techniques for real-time source detection in gamma-ray astronomy.
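For reference, the standard Li & Ma technique used above as the comparison baseline computes a detection significance from on/off counts (Li & Ma 1983, Eq. 17); a minimal Python version follows, with argument names chosen here for illustration.

```python
import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    """Li & Ma (1983, Eq. 17) significance for on/off counting observations.

    n_on  : counts in the on-source region
    n_off : counts in the off-source (background) region
    alpha : ratio of on-source to off-source exposure
    """
    term_on = n_on * np.log((1.0 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * np.log((1.0 + alpha) * n_off / (n_on + n_off))
    return float(np.sqrt(2.0 * (term_on + term_off)))

# Example: 130 on-source counts, 500 off-source counts, alpha = 0.2.
print(li_ma_significance(130, 500, 0.2))
```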