979 results for Hierarchical stochastic learning


Relevance:

30.00%

Publisher:

Abstract:

The main purpose of this dissertation is to assess the relation between municipal benchmarking and organisational learning, with a specific emphasis on benchlearning and performance within municipalities, and between groups of municipalities, in the building and housing sector in the Netherlands. The first and main conclusion is that this relation exists, but that the relative success of different approaches to dimensions of change and organisational learning is a key explanatory factor for differences in the success of benchlearning. Seven further conclusions could be derived from the empirical research. First, a combination of interpretative approaches at the group level with a mixture of hierarchical and network strategies positively influences benchlearning. Second, interaction among professionals at the inter-organisational level strengthens benchlearning. Third, stimulating supporting factors is a more effective strategy for strengthening benchlearning than pulling down barriers. Fourth, intrinsic motivation and communication skills matter for facilitating benchlearning, and are supported by a high level of cooperation (i.e., teamwork), a flat organisational structure and interactions between individuals. Fifth, benchlearning is facilitated by a strategy based on a balanced use of episodic (emergent) and systemic (deliberate) forms of power. Sixth, high levels of benchlearning are facilitated by an analyser or prospector strategic stance. Prospectors and analysers reach a different learning outcome than defenders and reactors: whereas analysers and prospectors are willing to change policies when they perceive it as necessary, the strategic stances of defenders and reactors result in narrow process improvements (i.e., single-loop learning). Seventh, performance improvement is influenced by functional perceptions of performance, and these perceptions ultimately influence the elements adopted. This research shows that efforts aimed at benchlearning, and ultimately at improved service delivery, should take a multi-level and multi-dimensional approach addressing the context, content and process of dimensions of change and organisational learning.

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this study was to explore whether the relationship between transformational leadership and innovative behaviour is explained via the mediating role of team learning, or whether team cohesion instead mediates this relationship. Using survey data from 158 professionals within 21 teams in the Dutch healthcare context, we used hierarchical regression analyses to test: (a) the relationship between transformational leadership and innovative behaviour; (b) whether team learning or cohesion mediates this relationship; and (c) the relationship between team learning and cohesion in relation to transformational leadership. Results showed that transformational leadership is positively related to innovative behaviour and that both cohesion and team learning mediate this relationship, with team learning being the strongest mediator. Addressing a neglected area, our study provides evidence that managers who enhance team learning are likely to maximise employees' scope for engaging in innovative behaviours. © 2012 Inderscience Enterprises Ltd.
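
As a minimal sketch of the mediation logic the study tests, the following Python code runs Baron and Kenny style regression steps on synthetic data. The variable names (leadership, team_learning, innovative) and effect sizes are illustrative placeholders, not the study's measures or results.

```python
# Minimal mediation sketch (Baron & Kenny style) on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 158                                  # sample size matching the study
leadership = rng.normal(size=n)          # predictor (transformational leadership)
team_learning = 0.5 * leadership + rng.normal(size=n)     # candidate mediator
innovative = 0.4 * team_learning + 0.2 * leadership + rng.normal(size=n)

# Step 1: total effect of leadership on innovative behaviour.
total = sm.OLS(innovative, sm.add_constant(leadership)).fit()

# Step 2: direct effect after adding the mediator; a substantial drop in
# the leadership coefficient indicates (partial) mediation.
X = sm.add_constant(np.column_stack([leadership, team_learning]))
direct = sm.OLS(innovative, X).fit()

print("total effect:", round(total.params[1], 3))
print("direct effect:", round(direct.params[1], 3))
```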

Relevance:

30.00%

Publisher:

Abstract:

Networked Learning, e-Learning and Technology Enhanced Learning have each been defined in different ways as people's understanding of technology in education has developed. Yet each could also be considered a terminology competing for a contested conceptual space. Theoretically this can be a ‘fertile trans-disciplinary ground for represented disciplines to affect and potentially be re-orientated by others’ (Parchoma and Keefer, 2012), as differing perspectives on terminology and subject disciplines yield new understandings. Yet when used in government policy texts to describe connections between humans, learning and technology, terms tend to become fixed in less fertile positions linguistically. A deceptively spacious policy discourse that suggests people are free to make choices conceals an economically based assumption that implementing new technologies, in itself, determines learning. It actually narrows the choices open to people, as one route is repeatedly foregrounded and humans are not visibly involved in it. An impression that the effective use of technology for endless improvement is inevitable cuts off critical social interactions and the new knowledge needed for multiple understandings of technology in people's lives. This paper explores findings from a corpus-based Critical Discourse Analysis of UK policy for educational technology during the last 15 years, to help illuminate the choices made. This is important when, through political economy, a hierarchical or dominant neoliberal logic promotes a single ‘universal model’ of technology in education without reference to a wider social context (Rustin, 2013). Discourse matters, because it can ‘mould identities’ (Massey, 2013) in narrow, objective, economically based terms which ‘colonise discourses of democracy and student-centredness’ (Greener and Perriton, 2005: 67). When humans are omitted, this undermines the subjective social, political, material and relational (Jones, 2012: 3) contexts of those learning. Critically confronting these structures is not considered a negative activity. Whilst deterministic discourse for educational technology may leave people unconsciously restricted, I argue that, through close analysis, it offers a deceptively spacious theoretical tool for debate about the wider social and economic context of educational technology. Methodologically it provides insights into the ways technology, language and learning intersect across disciplinary borders (Giroux, 1992) as powerful, mutually constitutive elements, ever-present in networked learning situations. In sharing a replicable approach to linguistic analysis of policy discourse I hope to contribute to visions others have for a broader theoretical underpinning for educational technology, as a developing field of networked knowledge and research (Conole and Oliver, 2002; Andrews, 2011).

Relevance:

30.00%

Publisher:

Abstract:

Results of numerical experiments are presented. The experiments were carried out by computer simulation of the olfactory bulb, with the aim of checking the conceptual model of thinking mechanisms introduced in [2]. The key role of quasisymbol neurons in pattern identification, the existence of a mental view, the function of cyclic connections between symbol and quasisymbol neurons as short-term memory, and the important role of synaptic plasticity in learning processes are confirmed numerically. The correctness of the fundamental ideas underlying the conceptual model is thus confirmed for the olfactory bulb at a quantitative level.
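
The abstract does not specify the model's equations, so the following Python sketch only illustrates, generically, two of the mechanisms it names: Hebbian synaptic plasticity and cyclic (recurrent) connections holding a pattern as short-term memory. All names and parameters are hypothetical.

```python
# Generic illustration: Hebbian plasticity + recurrent recall of a pattern.
import numpy as np

rng = np.random.default_rng(1)
n, eta = 32, 0.1
W = np.zeros((n, n))
pattern = np.sign(rng.normal(size=n))    # stored activity pattern

# Hebbian learning: strengthen synapses between co-active neurons.
W += eta * np.outer(pattern, pattern)
np.fill_diagonal(W, 0.0)

# Recurrent (cyclic) dynamics: a degraded cue is cleaned up and held,
# i.e., the loop acts as short-term memory for the learned pattern.
state = pattern.copy()
state[: n // 2] = 0.0                    # corrupt half of the cue
for _ in range(5):
    state = np.sign(W @ state)

print("recovered:", np.array_equal(state, pattern))
```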

Relevance:

30.00%

Publisher:

Abstract:

Formal grammars can be used to describe complex repeatable structures such as DNA sequences. In this paper, we describe the structural composition of DNA sequences using a context-free stochastic L-grammar. L-grammars are a special class of parallel grammars that can model the growth of living organisms, e.g. plant development, and the morphology of a variety of organisms. We believe that parallel grammars can also be used for modeling genetic mechanisms and sequences such as promoters. Promoters are short regulatory DNA sequences located upstream of a gene. Detection of promoters in DNA sequences is important for successful gene prediction. Promoters can be recognized by certain patterns that are conserved within a species, but there are many exceptions, which makes promoter recognition a complex problem. We replace the problem of promoter recognition with the induction of context-free stochastic L-grammar rules, which are later used for the structural analysis of promoter sequences. L-grammar rules are derived automatically from Drosophila and vertebrate promoter datasets using a genetic programming technique, and their fitness is evaluated using a Support Vector Machine (SVM) classifier. The artificial promoter sequences generated using the derived L-grammar rules are analyzed and compared with natural promoter sequences.
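
As a minimal sketch of the formalism, the following Python code applies a stochastic context-free L-grammar: every symbol is rewritten in parallel, with a production chosen at random according to its probability. The toy alphabet, rules and probabilities are invented for illustration; the paper's rules are induced from promoter data by genetic programming.

```python
# Minimal stochastic context-free L-grammar: parallel probabilistic rewriting.
import random

# symbol -> list of (probability, replacement) productions (toy rules)
rules = {
    "A": [(0.7, "AT"), (0.3, "A")],
    "T": [(0.5, "TA"), (0.5, "G")],
    "G": [(1.0, "GC")],
    "C": [(1.0, "C")],
}

def rewrite(word: str) -> str:
    """One parallel derivation step of the stochastic L-grammar."""
    out = []
    for sym in word:
        replacements = [repl for _, repl in rules[sym]]
        weights = [prob for prob, _ in rules[sym]]
        out.append(random.choices(replacements, weights=weights)[0])
    return "".join(out)

random.seed(42)
word = "AT"
for step in range(4):
    word = rewrite(word)
    print(step + 1, word)
```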

Relevance:

30.00%

Publisher:

Abstract:

Existing approaches to quality estimation of e-learning systems are analyzed. A "layered" approach to quality estimation of e-learning systems, enhanced with learning process modeling and simulation, is presented. A method of quality estimation using learning process modeling, together with quality criteria, is suggested. The learning process model, based on an extended colored stochastic Petri net, is described. The method has been implemented in "QuAdS", an automated system for quality estimation of e-learning systems. Results of evaluating the developed method and quality criteria are shown. We argue that using learning process modeling for quality estimation makes it easier for an expert to identify the shortcomings of an e-learning system.
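
For illustration, a minimal simulation of a plain stochastic Petri net (the base formalism of the learning process model; the paper's extended colored features are omitted) might look like the following Python sketch. Place names, transition names and rates are hypothetical.

```python
# Minimal stochastic Petri net simulation with race semantics.
import random

random.seed(7)
marking = {"not_started": 1, "studying": 0, "assessed": 0}

# transition: (input places, output places, exponential firing rate)
transitions = {
    "begin":  ({"not_started": 1}, {"studying": 1}, 2.0),
    "assess": ({"studying": 1},    {"assessed": 1}, 1.0),
}

def enabled(name):
    ins, _, _ = transitions[name]
    return all(marking[p] >= k for p, k in ins.items())

t_now = 0.0
while True:
    # race semantics: each enabled transition samples an exponential delay
    # and the earliest one fires.
    candidates = [(random.expovariate(rate), name)
                  for name, (_, _, rate) in transitions.items()
                  if enabled(name)]
    if not candidates:
        break
    delay, name = min(candidates)
    t_now += delay
    ins, outs, _ = transitions[name]
    for p, k in ins.items():
        marking[p] -= k
    for p, k in outs.items():
        marking[p] += k
    print(f"t={t_now:.2f} fired {name} -> {marking}")
```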

Relevance:

30.00%

Publisher:

Abstract:

Adaptive critic methods have common roots as generalizations of dynamic programming for neural reinforcement learning. Since they approximate dynamic programming solutions, they are potentially suitable for learning in noisy, nonlinear and nonstationary environments. In this study, a novel probabilistic dual heuristic programming (DHP) based adaptive critic controller is proposed. In contrast to current approaches, the proposed probabilistic DHP adaptive critic method takes uncertainties of the forward model and inverse controller into consideration. It is therefore suitable for deterministic and stochastic control problems characterized by functional uncertainty. The theoretical development of the proposed method is validated by analytically evaluating the correct value of the cost function, which satisfies the Bellman equation, in a linear quadratic control problem. The target value of the critic network is then calculated and shown to be equal to the analytically derived correct value.
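
For reference, the standard (non-probabilistic) DHP relations behind this setup can be written as follows; the notation (cost-to-go J, utility U, state x, control u, discount gamma, costate lambda approximated by the critic network) is ours, not necessarily the paper's.

```latex
% Standard DHP relations: Bellman equation and the costate recursion that
% defines the critic target.
\begin{align}
  J(x_t) &= U(x_t, u_t) + \gamma\, J(x_{t+1}),\\
  \lambda(x_t) \equiv \frac{\partial J(x_t)}{\partial x_t}
    &= \frac{\partial U(x_t,u_t)}{\partial x_t}
     + \left(\frac{\partial u_t}{\partial x_t}\right)^{\!\top}
       \frac{\partial U(x_t,u_t)}{\partial u_t}
     + \gamma \left(\frac{\partial x_{t+1}}{\partial x_t}
     + \frac{\partial x_{t+1}}{\partial u_t}\,
       \frac{\partial u_t}{\partial x_t}\right)^{\!\top}\!\lambda(x_{t+1}).
\end{align}
```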

Relevance:

30.00%

Publisher:

Abstract:

Technology discloses man's mode of dealing with Nature, the process of production by which he sustains his life, and thereby also lays bare the mode of formation of his social relations, and of the mental conceptions that flow from them (Marx, 1990: 372).

My thesis is a sociological analysis of UK policy discourse for educational technology during the last 15 years. My framework is a dialogue between the Marxist-based critical social theory of Lieras and a corpus-based Critical Discourse Analysis (CDA) of UK policy for Technology Enhanced Learning (TEL) in higher education. Embedded in TEL is a presupposition: a deterministic assumption that technology has enhanced learning. This conceals a necessary debate that reminds us it is humans that design learning, not technology. By omitting people, TEL provides a vehicle for strong hierarchical or neoliberal agendas to make simplified claims politically, in the name of technology. My research has two main aims. Firstly, I share a replicable, mixed methodological approach for linguistic analysis of the political discourse of TEL. Quantitatively, I examine patterns in my corpus to question forms of ‘use’ around technology that structure a rigid basic argument which ‘enframes’ educational technology (Heidegger, 1977: 38). In a qualitative analysis of findings, I ask to what extent policy discourse evaluates technology in one way, to support a Knowledge Based Economy (KBE) in a political economy of neoliberalism (Jessop, 2004; Fairclough, 2006). If technology is commodified as an external enhancement, it is expected to provide an ‘exchange value’ for learners (Marx, 1867). I therefore examine more closely what is prioritised and devalued in these texts. Secondly, I disclose a form of austerity in the discourse, where technology, as an abstract force, undertakes tasks usually ascribed to humans (Lieras, 1996; Brey, 2003: 2). This risks desubjectivisation and loss of power, and limits people's relationships with technology and with each other. A view of technology in political discourse as complete without people closes possibilities for broader dialectical (Fairclough, 2001, 2007) and ‘convivial’ (Illich, 1973) understandings of the intimate, material practice of engaging with technology in education. In opening the ‘black box’ of TEL via CDA I reveal talking points that are otherwise concealed. This allows me to be reflexive and self-critical through praxis, confronting my own assumptions about what the discourse conceals and what forms of resistance might be required. In so doing, I contribute to ongoing debates about networked learning, providing a context in which to explore educational technology as a technology, language and learning nexus.

Relevance:

30.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: 62P99, 68T50

Relevance:

30.00%

Publisher:

Abstract:

In global policy documents, the language of Technology-Enhanced Learning (TEL) now firmly structures a perception of educational technology which ‘subsumes’ terms like Networked Learning and e-Learning. Embedded in these three words, though, is a deterministic, economic assumption that technology has now enhanced learning, and will continue to do so. In a market-driven, capitalist society this is a ‘trouble-free’, economically focused discourse which suggests there is no need for further debate about what the use of technology achieves in learning. Yet this raises a problem too: if technology achieves goals for human beings, then in education we are now simply counting on the ‘use of technology’ to enhance learning. This closes the door on a necessary and ongoing critical pedagogical conversation that reminds us it is people that design learning, not technology. Furthermore, such discourse provides a vehicle for those with either strong hierarchical or neoliberal agendas to make simplified claims politically, in the name of technology. This chapter is a reflection on our use of language in the educational technology community through a corpus-based Critical Discourse Analysis (CDA). In analytical examples that are ‘loaded’ with economic expectation, we can notice how the policy discourse of TEL narrows conversational space for learning, so that people may struggle to recognise their own subjective being in this language. Through the lens of Lieras's externality, desubjectivisation and closure (Lieras, 1996) we might examine possible effects of this discourse and seek a more emancipatory approach. A return to discussing Networked Learning is suggested as a first step towards a more multi-directional conversation than TEL, one that acknowledges the interrelatedness of technology, language and learning in people's practice. Secondly, a reconsideration of how we write policy for educational technology is recommended, with a critical focus on how people learn, rather than on what technology is assumed to enhance.

Relevance:

30.00%

Publisher:

Abstract:

This study examined the construct validity of the Choices Questionnaire, which purports to support the theory of Learning Agility. Specifically, Learning Agility attempts to predict an individual's potential performance in new tasks. Construct validity was assessed by examining the convergent/discriminant validity of the Choices Questionnaire against a cognitive ability measure and two personality measures. The Choices Questionnaire did tap a construct distinct from cognitive ability and personality, suggesting that this measure may have considerable value in personnel selection. This study also examined the relationship of this new measure to job performance and job promotability. Results found that the Choices Questionnaire predicted job performance and job promotability above and beyond cognitive ability and personality. Data from 107 law enforcement officers, along with two of their co-workers and a supervisor, yielded a correlation of .08 between Learning Agility and cognitive ability. Learning Agility correlated .07 with Learning Goal Orientation and .17 with Performance Goal Orientation. Correlations with the Big Five personality factors ranged from −.06 (Conscientiousness) to .13 (Openness to Experience). Learning Agility correlated .40 with supervisory ratings of job promotability and .37 with supervisory ratings of overall job performance. Hierarchical regression analysis found incremental validity for Learning Agility over cognitive ability and the Big Five factors of personality for supervisory ratings of both promotability and overall job performance. A literature review was completed to integrate the Learning Agility construct into a nomological net of personnel selection research. Additionally, practical applications and future research directions are discussed.
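
As a minimal sketch of the incremental-validity test reported above, the following Python code compares the R-squared of nested regression models on synthetic data; predictor names and effect sizes are illustrative only.

```python
# Incremental validity via hierarchical (nested) regression on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 107                                   # sample size matching the study
cognitive = rng.normal(size=n)
big5 = rng.normal(size=(n, 5))            # stand-in Big Five scores
agility = rng.normal(size=n)
promotability = 0.4 * agility + 0.1 * cognitive + rng.normal(size=n)

# Step 1: cognitive ability and personality only.
step1 = sm.OLS(promotability,
               sm.add_constant(np.column_stack([cognitive, big5]))).fit()
# Step 2: add Learning Agility; the R^2 gain is the incremental validity.
step2 = sm.OLS(promotability,
               sm.add_constant(np.column_stack([cognitive, big5, agility]))).fit()

print(f"Delta R^2 from adding Learning Agility: "
      f"{step2.rsquared - step1.rsquared:.3f}")
```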

Relevance:

30.00%

Publisher:

Abstract:

Subspaces and manifolds are two powerful models for high dimensional signals. Subspaces model linear correlation and are a good fit to signals generated by physical systems, such as frontal images of human faces and multiple sources impinging at an antenna array. Manifolds model sources that are not linearly correlated, but where signals are determined by a small number of parameters. Examples are images of human faces under different poses or expressions, and handwritten digits with varying styles. However, there will always be some degree of model mismatch between the subspace or manifold model and the true statistics of the source. This dissertation exploits subspace and manifold models as prior information in various signal processing and machine learning tasks.

A near-low-rank Gaussian mixture model measures proximity to a union of linear or affine subspaces. This simple model can effectively capture the signal distribution when each class is near a subspace. This dissertation studies how the pairwise geometry between these subspaces affects classification performance. When model mismatch is vanishingly small, the probability of misclassification is determined by the product of the sines of the principal angles between subspaces. When the model mismatch is more significant, the probability of misclassification is determined by the sum of the squares of the sines of the principal angles. Reliability of classification is derived in terms of the distribution of signal energy across principal vectors. Larger principal angles lead to smaller classification error, motivating a linear transform that optimizes principal angles. This linear transformation, termed TRAIT, also preserves some specific features in each class, being complementary to a recently developed Low Rank Transform (LRT). Moreover, when the model mismatch is more significant, TRAIT shows superior performance compared to LRT.
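
For illustration, the principal angles underlying these error terms can be computed directly; the following Python sketch uses random subspaces as stand-ins for the class subspaces of the model.

```python
# Principal angles between subspaces and the product/sum-of-sines terms.
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(0)
A = np.linalg.qr(rng.normal(size=(50, 3)))[0]   # orthonormal basis, class 1
B = np.linalg.qr(rng.normal(size=(50, 3)))[0]   # orthonormal basis, class 2

theta = subspace_angles(A, B)                   # principal angles (radians)
print("product of sines (vanishing mismatch regime):", np.prod(np.sin(theta)))
print("sum of squared sines (larger mismatch regime):", np.sum(np.sin(theta) ** 2))
```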

The manifold model enforces a constraint on the freedom of data variation. Learning features that are robust to data variation is very important, especially when the size of the training set is small. A learning machine with a large number of parameters, e.g., a deep neural network, can describe a very complicated data distribution well. However, it is also more likely to be sensitive to small perturbations of the data, and to suffer from degraded performance when generalizing to unseen (test) data.

From the perspective of the complexity of function classes, such a learning machine has a huge capacity (complexity), which tends to overfit. The manifold model provides us with a way of regularizing the learning machine, so as to reduce the generalization error and therefore mitigate overfitting. Two different overfitting-prevention approaches are proposed, one from the perspective of data variation, the other from capacity/complexity control. In the first approach, the learning machine is encouraged to make decisions that vary smoothly for data points in local neighborhoods on the manifold. In the second approach, a graph adjacency matrix is derived for the manifold, and the learned features are encouraged to be aligned with the principal components of this adjacency matrix. Experimental results on benchmark datasets are demonstrated, showing an obvious advantage of the proposed approaches when the training set is small.
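
As a minimal sketch of the second regularizer, the following Python code builds a k-nearest-neighbor adjacency matrix and penalizes feature energy lying outside the span of its leading eigenvectors. The kernel, neighborhood size and penalty form are assumptions, not the dissertation's exact choices.

```python
# Align features with the principal components of a graph adjacency matrix.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                  # data (rows = points)

# symmetric kNN adjacency with Gaussian weights
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
k = 8
W = np.zeros_like(d2)
nn = np.argsort(d2, axis=1)[:, 1 : k + 1]       # skip self at index 0
for i, js in enumerate(nn):
    W[i, js] = np.exp(-d2[i, js] / d2[i, js].mean())
W = np.maximum(W, W.T)

# leading eigenvectors (principal components) of the adjacency matrix
evals, evecs = np.linalg.eigh(W)
U = evecs[:, -5:]                               # top-5 components

def alignment_penalty(F):
    """Penalize feature energy outside span(U): ||F - U U^T F||^2."""
    return np.linalg.norm(F - U @ (U.T @ F)) ** 2

F = rng.normal(size=(200, 3))                   # stand-in learned features
print("penalty:", alignment_penalty(F))
```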

Stochastic optimization makes it possible to track a slowly varying subspace underlying streaming data. By approximating local neighborhoods using affine subspaces, a slowly varying manifold can be efficiently tracked as well, even with corrupted and noisy data. The more local neighborhoods, the better the approximation, but the higher the computational complexity. A multiscale approximation scheme is proposed, where the local approximating subspaces are organized in a tree structure. Splitting and merging of the tree nodes then allows efficient control of the number of neighborhoods. Deviation (of each datum) from the learned model is estimated, yielding a series of statistics for anomaly detection. This framework extends the classical changepoint detection technique, which only works for one-dimensional signals. Simulations and experiments highlight the robustness and efficacy of the proposed approach in detecting an abrupt change in an otherwise slowly varying low-dimensional manifold.
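
As one concrete instance of such stochastic tracking (not the dissertation's multiscale tree algorithm), the following Python sketch follows a slowly rotating subspace with Oja-style updates and uses the residual of a fresh sample as a simple deviation statistic.

```python
# Streaming subspace tracking with Oja-style stochastic updates.
import numpy as np

rng = np.random.default_rng(0)
d, r, eta = 20, 3, 0.05
U = np.linalg.qr(rng.normal(size=(d, r)))[0]    # current subspace estimate

def true_basis(t):
    """Slowly rotating ground-truth subspace (for the simulation only)."""
    rng_t = np.random.default_rng(42)
    B = np.linalg.qr(rng_t.normal(size=(d, r)))[0]
    c, s = np.cos(1e-3 * t), np.sin(1e-3 * t)   # small rotation, grows with t
    R = np.eye(d)
    R[0, 0], R[0, 1], R[1, 0], R[1, 1] = c, -s, s, c
    return R @ B

for t in range(2000):
    x = true_basis(t) @ rng.normal(size=r) + 0.05 * rng.normal(size=d)
    U += eta * np.outer(x - U @ (U.T @ x), U.T @ x)   # Oja-style update
    U = np.linalg.qr(U)[0]                            # re-orthonormalize

# residual energy of a fresh sample outside the tracked subspace can flag
# an abrupt change (the deviation statistic mentioned above)
x = true_basis(2000) @ rng.normal(size=r)
print("residual:", np.linalg.norm(x - U @ (U.T @ x)))
```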

Relevance:

30.00%

Publisher:

Abstract:

This dissertation contributes to the rapidly growing empirical research area in the field of operations management. It contains two essays, tackling two different sets of operations management questions motivated by and built on field data sets from two very different industries: air cargo logistics and retailing.

The first essay, based on a data set obtained from a world-leading third-party logistics company, develops a novel and general Bayesian hierarchical learning framework for estimating customers' spillover learning, that is, customers' learning about the quality of a service (or product) from their previous experiences with similar yet not identical services. We then apply our model to the data set to study how customers' experiences from shipping on a particular route affect their future decisions about shipping not only on that route, but also on other routes serviced by the same logistics company. We find that customers indeed borrow experiences from similar but different services to update the quality beliefs that determine their future purchase decisions, and that these service quality beliefs have a significant impact on future purchasing decisions. Moreover, customers are risk averse: they are averse not only to experience variability but also to belief uncertainty (i.e., customers' uncertainty about their own beliefs). Finally, belief uncertainty affects customers' utilities more than experience variability does.
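
As a minimal sketch of spillover belief updating in this spirit, the following Python code uses a conjugate normal hierarchy: route qualities share a common mean, so experiences on one route shift the belief about untried routes. All parameters are illustrative; the essay's actual model is richer.

```python
# Hierarchical normal model: experiences on one route update beliefs
# about other routes through the shared mean (spillover learning).
import numpy as np

rng = np.random.default_rng(0)
n_routes = 3
tau2 = 1.0        # variance of route quality around the common mean
sigma2 = 0.5      # experience (observation) noise variance

mu0, s0 = 0.0, 2.0             # prior on the common mean
obs = {r: [] for r in range(n_routes)}

def posterior_common_mean():
    """Conjugate update of the shared mean from route-averaged evidence."""
    prec, mean_num = 1.0 / s0**2, mu0 / s0**2
    for r, xs in obs.items():
        if xs:
            v = tau2 + sigma2 / len(xs)      # variance of the route average
            prec += 1.0 / v
            mean_num += np.mean(xs) / v
    return mean_num / prec, 1.0 / prec

obs[0] += list(rng.normal(1.5, np.sqrt(sigma2), size=5))  # good trips, route 0
m, v = posterior_common_mean()
print(f"belief about an untried route after route-0 experiences: "
      f"mean={m:.2f}, var={v + tau2:.2f}")   # spillover: mean moved above 0
```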

The second essay is based on a data set obtained from a large Chinese supermarket chain, which contains sales as well as both wholesale and retail prices of unpackaged perishable vegetables. Recognizing the special characteristics of this particular product category, we develop a structural estimation model in a discrete-continuous choice model framework. Building on this framework, we then study an optimization model for joint pricing and inventory management of multiple products, which aims at improving the company's profit from direct sales while reducing food waste and thus improving social welfare.

Collectively, the studies in this dissertation provide useful modeling ideas, decision tools, insights, and guidance for firms to utilize vast sales and operations data to devise more effective business strategies.

Relevance:

30.00%

Publisher:

Abstract:

Bayesian methods offer a flexible and convenient probabilistic learning framework to extract interpretable knowledge from complex and structured data. Such methods can characterize dependencies among multiple levels of hidden variables and share statistical strength across heterogeneous sources. In the first part of this dissertation, we develop two dependent variational inference methods for full posterior approximation in non-conjugate Bayesian models through hierarchical mixture- and copula-based variational proposals, respectively. The proposed methods move beyond the widely used factorized approximation to the posterior and provide generic applicability to a broad class of probabilistic models with minimal model-specific derivations. In the second part of this dissertation, we design probabilistic graphical models to accommodate multimodal data, describe dynamical behaviors and account for task heterogeneity. In particular, the sparse latent factor model is able to reveal common low-dimensional structures from high-dimensional data. We demonstrate the effectiveness of the proposed statistical learning methods on both synthetic and real-world data.
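
For context, the quantities involved can be written in standard notation (ours, not necessarily the dissertation's): variational inference maximizes the evidence lower bound (ELBO), and a hierarchical mixture proposal enriches the usual factorized family.

```latex
% Standard variational inference objective and a mixture-based proposal.
\begin{align}
  \log p(x) &\;\ge\; \mathbb{E}_{q(\theta)}\big[\log p(x,\theta) - \log q(\theta)\big]
   \;=\; \mathrm{ELBO}(q),\\
  q(\theta) &\;=\; \sum_{k=1}^{K}\pi_k\, q_k(\theta)
   \quad\text{(mixture proposal, richer than the factorized }
   \textstyle\prod_j q_j(\theta_j)\text{)}.
\end{align}
```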

Relevance:

30.00%

Publisher:

Abstract:

While molecular and cellular processes are often modeled as stochastic processes, such as Brownian motion, chemical reaction networks and gene regulatory networks, there have been few attempts to program a molecular-scale process to physically implement stochastic processes. DNA has been used as a substrate for programming molecular interactions, but its applications have been restricted to deterministic functions, and unfavorable properties such as slow processing, thermal annealing, aqueous solvents and difficult readout limit them to proof-of-concept purposes. To date, whether there exists a molecular process that can be programmed to implement stochastic processes for practical applications remains unknown.

In this dissertation, a fully specified Resonance Energy Transfer (RET) network between chromophores is accurately fabricated via DNA self-assembly, and the exciton dynamics in the RET network physically implement a stochastic process, specifically a continuous-time Markov chain (CTMC), which has a direct mapping to the physical geometry of the chromophore network. Excited by a light source, a RET network generates random samples in the temporal domain in the form of fluorescence photons which can be detected by a photon detector. The intrinsic sampling distribution of a RET network is derived as a phase-type distribution configured by its CTMC model. The conclusion is that the exciton dynamics in a RET network implement a general and important class of stochastic processes that can be directly and accurately programmed and used for practical applications of photonics and optoelectronics. Different approaches to using RET networks exist with vast potential applications. As an entropy source that can directly generate samples from virtually arbitrary distributions, RET networks can benefit applications that rely on generating random samples such as 1) fluorescent taggants and 2) stochastic computing.
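
As a minimal sketch of this mapping, the following Python code simulates a small CTMC to absorption, so that hitting times follow a phase-type distribution; the generator entries are illustrative stand-ins for rates set by the RET network geometry.

```python
# Sample phase-type distributed hitting times from a small CTMC.
import numpy as np

rng = np.random.default_rng(0)

# generator over states {0, 1, absorbing=2}; each row sums to zero
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -4.0,  3.0],
              [ 0.0,  0.0,  0.0]])

def sample_hitting_time(start=0, absorbing=2):
    """One phase-type sample: time for the CTMC to reach absorption."""
    state, t = start, 0.0
    while state != absorbing:
        rate = -Q[state, state]
        t += rng.exponential(1.0 / rate)            # exponential holding time
        probs = Q[state].clip(min=0.0) / rate       # jump distribution
        state = rng.choice(len(Q), p=probs)
    return t

samples = [sample_hitting_time() for _ in range(10000)]
print("mean hitting time:", np.mean(samples))
```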

By using RET networks between chromophores to implement fluorescent taggants with temporally coded signatures, the taggant design is not constrained by resolvable dyes and has a significantly larger coding capacity than spectrally or lifetime coded fluorescent taggants. Meanwhile, the taggant detection process becomes highly efficient, and the Maximum Likelihood Estimation (MLE) based taggant identification guarantees high accuracy even with only a few hundred detected photons.

Meanwhile, RET-based sampling units (RSU) can be constructed to accelerate probabilistic algorithms for wide applications in machine learning and data analytics. Because probabilistic algorithms often rely on iteratively sampling from parameterized distributions, they can be inefficient in practice on the deterministic hardware traditional computers use, especially for high-dimensional and complex problems. As an efficient universal sampling unit, the proposed RSU can be integrated into a processor / GPU as specialized functional units or organized as a discrete accelerator to bring substantial speedups and power savings.