851 results for capability-based framework


Relevance: 80.00%

Abstract:

Spike timing dependent plasticity (STDP) is a phenomenon in which the precise timing of spikes affects the sign and magnitude of changes in synaptic strength. STDP is often interpreted as the comprehensive learning rule for a synapse - the "first law" of synaptic plasticity. This interpretation is made explicit in theoretical models in which the total plasticity produced by complex spike patterns results from a superposition of the effects of all spike pairs. Although such models are appealing for their simplicity, they can fail dramatically. For example, the measured single-spike learning rule between hippocampal CA3 and CA1 pyramidal neurons does not predict the existence of long-term potentiation, one of the best-known forms of synaptic plasticity. Layers of complexity have been added to the basic STDP model to repair predictive failures, but they have been outstripped by experimental data. We propose an alternate first law: neural activity triggers changes in key biochemical intermediates, which act as a more direct trigger of plasticity mechanisms. One particularly successful model uses intracellular calcium as the intermediate and can account for many observed properties of bidirectional plasticity. In this formulation, STDP is not itself the basis for explaining other forms of plasticity, but is instead a consequence of changes in the biochemical intermediate, calcium. Eventually, a mechanism-based framework for learning rules should include other messengers, discrete change at individual synapses, spread of plasticity among neighboring synapses, and priming of hidden processes that change a synapse's susceptibility to future change. Mechanism-based models provide a rich framework for the computational representation of synaptic plasticity.
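To make the calcium-intermediate idea concrete, here is a minimal sketch of a calcium-threshold plasticity rule in that spirit; the thresholds, learning rate, and function names are illustrative assumptions, not the authors' fitted model.

```python
# Minimal sketch of a calcium-threshold plasticity rule (illustrative, not the
# authors' model): the sign and size of the weight change are read off the
# calcium level, so spike pairings matter only through the calcium they produce.

def weight_update(w, ca, theta_d=0.35, theta_p=0.55, lr=0.01):
    """Return the new synaptic weight given calcium concentration `ca`.

    theta_d / theta_p: assumed depression / potentiation thresholds --
    calcium between them drives LTD; calcium above theta_p drives LTP.
    """
    if ca >= theta_p:          # high calcium -> potentiation
        return w + lr * (1.0 - w)
    elif ca >= theta_d:        # moderate calcium -> depression
        return w - lr * w
    return w                   # sub-threshold calcium -> no change

# A pre-before-post and a post-before-pre pairing enter this rule only through
# the calcium transient each produces, so an STDP curve falls out as a
# consequence of the rule rather than being assumed as the first law.
```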

Relevance: 80.00%

Abstract:

We propose an optimization-based framework to minimize the energy consumption in a sensor network when using an indoor localization system based on the combination of received signal strength (RSS) and pedestrian dead reckoning (PDR). The objective is to find the RSS localization frequency and the number of RSS measurements used at each localization round that jointly minimize the total consumed energy, while at the same time ensuring a desired accuracy in the localization result. The optimization approach leverages practical models to predict the localization error and the overall energy consumption for combined RSS-PDR localization systems. The performance of the proposed strategy is assessed through simulation, showing energy savings with respect to other approaches while guaranteeing a target accuracy.
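A hedged sketch of the optimization the abstract describes: grid-search the localization frequency and per-round measurement count for the cheapest setting that meets the accuracy bound. The energy and error models below are placeholder assumptions, not the paper's calibrated RSS-PDR models.

```python
# Choose the RSS localization frequency f (rounds per second) and the number
# of RSS measurements n per round that minimize energy subject to an accuracy
# bound. Model forms and constants are illustrative assumptions.

def total_energy(f, n, e_rss=1.0, e_pdr=0.05):
    return f * n * e_rss + e_pdr          # energy per second (assumed form)

def localization_error(f, n, drift=2.0, noise=1.5):
    return drift / f + noise / n ** 0.5   # PDR drift between rounds + RSS noise

def optimize(target_error, freqs, counts):
    feasible = [(total_energy(f, n), f, n)
                for f in freqs for n in counts
                if localization_error(f, n) <= target_error]
    return min(feasible) if feasible else None

best = optimize(target_error=1.0,
                freqs=[0.5, 1, 2, 5, 10],
                counts=range(1, 21))
print(best)  # (energy, f, n) of the cheapest setting that meets the bound
```

The trade-off mirrors the paper's setup: localizing less often saves energy but lets PDR drift accumulate, while fewer RSS samples per round save energy but raise measurement noise.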

Relevance: 80.00%

Abstract:

We address the problem of developing mechanisms for easily implementing modular extensions to modular (logic) languages. By (language) extensions we refer to different groups of syntactic definitions and translation rules that extend a language. Our use of the concept of modularity in this context is twofold. We would like these extensions to be modular, in the sense above, i.e., we should be able to develop different extensions mostly separately. At the same time, the sources and targets for the extensions are modular languages, i.e., such extensions may take as input separate pieces of code and also produce separate pieces of code. Dealing with this double requirement involves interesting challenges to ensure that modularity is not broken: first, combinations of extensions (as if they were a single extension) must be given a precise meaning. Also, the separate translation of multiple sources (as if they were a single source) must be feasible. We present a detailed description of a code expansion-based framework that proposes novel solutions for these problems. We argue that the approach, while implemented for Ciao, can be adapted for other Prolog-based systems and languages.

Relevance: 80.00%

Abstract:

Modularity allows the construction of complex designs from simpler, independent units that most of the time can be developed separately. In this paper we are concerned with developing mechanisms for easily implementing modular extensions to modular (logic) languages. By (language) extensions we refer to different groups of syntactic definitions and translation rules that extend a language. Our application of the concept of modularity in this context is twofold. We would like these extensions to be modular, in the above sense, i.e., we should be able to develop different extensions mostly separately. At the same time, the sources and targets for the extensions are modular languages, i.e., such extensions may take as input separate pieces of code and also produce separate pieces of code. Dealing with this double requirement involves interesting challenges to ensure that modularity is not broken: first, combinations of extensions (as if they were a single extension) must be given a precise meaning. Also, the separate translation of multiple sources (as if they were a single source) must be feasible. We present a detailed description of a code expansion-based framework that proposes novel solutions for these problems. We argue that the approach, while implemented for Ciao, can be adapted for other languages and Prolog-based systems.
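As a rough illustration of the code-expansion idea (one of many possible designs, not Ciao's actual machinery), the sketch below composes separately developed expansion passes into a single translator whose combination has one precise meaning: passes run in a fixed order, each seeing the previous pass's output.

```python
# Toy sketch of composing separately developed language extensions as code
# expansions (illustrative only). Each extension is an independent rewrite
# pass over a list of clauses.

def compose(extensions):
    """Build a single translator from independently written expansion passes."""
    def translate(clauses):
        for expand in extensions:            # each pass was developed separately
            clauses = [out for clause in clauses for out in expand(clause)]
        return clauses
    return translate

# Two tiny, separately written extensions over clauses held as plain strings:
dcg_pass = lambda clause: [clause.replace("-->", ":-")]   # mock DCG-style rewrite
trace_pass = lambda clause: ([clause, f"% trace point for: {clause}"]
                             if clause.startswith("main") else [clause])

translate = compose([dcg_pass, trace_pass])
print(translate(["main --> greet", "greet --> []"]))
# ['main :- greet', '% trace point for: main :- greet', 'greet :- []']
```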

Relevance: 80.00%

Abstract:

With the quick advance of web service technologies, end-users can conduct various on-line tasks, such as shopping on-line. Usually, end-users compose a set of services to accomplish a task, and need to enter values into the services to invoke the composite services. Quite often, users re-visit websites and use services to perform re-occurring tasks, and they are required to enter the same information into various web services each time. Repetitively typing the same information into services is a tedious job for end-users and can negatively impact the user experience. Recent studies have proposed several approaches to help users fill in values to services automatically. However, prior studies suffer mainly from the following drawbacks: (1) limited support for collecting and analyzing user inputs; (2) poor accuracy in filling values to services; (3) not being designed for service composition. To overcome these drawbacks, we need to maximize the reuse of previous user inputs across services and end-users. In this thesis, we introduce approaches that spare end-users from entering the same information for re-occurring on-line tasks. More specifically, we improve the process of filling out services in the following four aspects. First, we investigate the characteristics of input parameters and propose an ontology-based approach to automatically categorize parameters and fill values into the categorized input parameters. Second, we propose a comprehensive framework that leverages user contexts and usage patterns in the process of filling values into services. Third, we propose an approach for maximizing value propagation among services and end-users by linking together sets of semantically related parameters and similar end-users. Last, we propose a ranking-based framework that ranks a list of previous user inputs for an input parameter to save the user from unnecessary data entries. Our framework learns and analyzes the interactions between user inputs and input parameters to rank user inputs for input parameters under different contexts.
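The ranking-based contribution can be pictured with a small sketch: score each previously entered value for a parameter by its overall frequency and its fit to the current context. The features, weights, and data below are invented for illustration and are not the thesis's learned model.

```python
# Rank a user's previous inputs for a parameter category, best first,
# boosting values that were used in the same context before.

from collections import Counter

history = [  # (parameter_category, context, value) from past interactions
    ("email",   "shopping", "alice@example.com"),
    ("email",   "work",     "a.smith@corp.example"),
    ("email",   "shopping", "alice@example.com"),
    ("address", "shopping", "12 High St"),
]

def rank_inputs(category, context, w_freq=1.0, w_ctx=2.0):
    """Return candidate values for `category`, most relevant first."""
    freq = Counter(v for c, _, v in history if c == category)
    ctx  = Counter(v for c, k, v in history if c == category and k == context)
    return sorted(freq, key=lambda v: w_freq * freq[v] + w_ctx * ctx[v],
                  reverse=True)

print(rank_inputs("email", "shopping"))
# ['alice@example.com', 'a.smith@corp.example'] -- the shopping context
# boosts the value previously used for shopping tasks.
```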

Relevance: 80.00%

Abstract:

This paper draws on Matthew's story to illustrate the conflicting discourses of being a boy and being a student. Matthew is 12 years old and in Grade Six, his final year at Banrock Primary (a K-6 Australian State School). School is far from a happy place for Matthew - his tearful accounts of his combative relationships with his peers and his teacher highlight his emotional distress. The paper's analytic focus draws attention to some of the ways Matthew's harmful storylines of hegemonic masculinity are made possible through, in particular, his teacher's gendered philosophies and her strategies of individualism and control. In this regard, Matthew's story provides insight into the potentially counterproductive realities of teacher practice in relation to addressing issues of masculinity within the school environment. Against this backdrop, the paper stresses the importance of teachers drawing on a sound research-based framework of gender knowledges that can illuminate how masculinities are constructed, regulated and, indeed, transformed through the power relations of everyday social practice, including teacher practice.

Relevance: 80.00%

Abstract:

Teacher educators who advocate new learning approaches hope that their graduates will address the needs of digitally and globally sophisticated students. A critical, enquiry-based framework for teaching attempts to unravel many traditional assumptions about learning, assumptions which continue to shape preservice teachers' practices even through the early career years. Evidence of effective take-up of New Learning education approaches by graduates is sparse. This paper explores how three teacher educators attempt to wrestle with ways New Learning frameworks can transform outmoded yet embedded views in beginning teachers. They ask: Can changed approaches be consolidated and mobilised against some of the adverse conditions that predominate in schools? And if this is possible, what support might be required for beginning teachers who are struggling to implement a change process?

Relevance: 80.00%

Abstract:

Modern business trends such as agile manufacturing and virtual corporations require high levels of flexibility and responsiveness to consumer demand, and require the ability to quickly and efficiently select trading partners. Automated computational techniques for supply chain formation have the potential to provide significant advantages in terms of speed and efficiency over the traditional manual approach to partner selection. Automated supply chain formation is the process of determining the participants within a supply chain and the terms of the exchanges made between these participants. In this thesis we present an automated technique for supply chain formation based upon the min-sum loopy belief propagation algorithm (LBP). LBP is a decentralised and distributed message-passing algorithm which allows participants to share their beliefs about the optimal structure of the supply chain based upon their costs, capabilities and requirements. We propose a novel framework for the application of LBP to the existing state-of-the-art case of the decentralised supply chain formation problem, and extend this framework to allow for application to further novel and established problem cases. Specifically, the contributions made by this thesis are:

• A novel framework allowing the application of LBP to the decentralised supply chain formation scenario investigated using the current state-of-the-art approach. Our experimental analysis indicates that LBP is able to match or outperform this approach for the vast majority of problem instances tested.

• A new solution goal for supply chain formation in which economically motivated producers aim to maximise their profits by intelligently altering their profit margins. We propose a rational pricing strategy that allows producers to earn significantly greater profits than a comparable LBP-based profit-making approach.

• An LBP-based framework which allows the algorithm to be used to solve supply chain formation problems in which goods are exchanged in multiple units, a first for a fully decentralised technique. As well as multiple-unit exchanges, we also model in this scenario realistic constraints such as factory capacities and input-to-output ratios. LBP continues to match or outperform an extended version of the existing state-of-the-art approach in this scenario.

• The introduction of a dynamic supply chain formation scenario in which participants are able to alter their properties or to enter or leave the process at any time. Our results suggest that LBP deals easily with individual occurrences of these alterations and that performance degrades gracefully when they occur in larger numbers.
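For intuition about the underlying machinery, the sketch below runs min-sum message passing on a toy three-participant chain; the states, costs, and encoding are invented for illustration and are far simpler than the thesis's supply chain formulation.

```python
# Min-sum message passing on a toy chain factor graph. Each variable is a
# participant's decision (0 = inactive, 1 = active); unary costs model
# production costs and pairwise costs penalise mismatched neighbours, so
# minimising total cost selects a consistent chain.

STATES = (0, 1)
unary = {                                   # assumed activation costs
    "producer":  {0: 0.0, 1: 2.0},
    "middleman": {0: 0.0, 1: 1.0},
    "consumer":  {0: 5.0, 1: 0.0},          # consumer strongly wants the good
}
edges = [("producer", "middleman"), ("middleman", "consumer")]

def pairwise(a, b):                         # mismatch penalty: sellers need buyers
    return 0.0 if a == b else 4.0

def min_sum(iters=10):
    # one message per directed edge, initialised to zero
    msg = {(i, j): {s: 0.0 for s in STATES}
           for i, j in edges + [(j, i) for i, j in edges]}
    for _ in range(iters):
        for (i, j) in list(msg):
            for xj in STATES:               # standard min-sum update
                msg[(i, j)][xj] = min(
                    unary[i][xi] + pairwise(xi, xj) +
                    sum(m[xi] for (a, b), m in msg.items()
                        if b == i and a != j)
                    for xi in STATES)
    # belief = own cost plus all incoming messages; pick the cheapest state
    belief = {v: {s: unary[v][s] +
                  sum(m[s] for (a, b), m in msg.items() if b == v)
                  for s in STATES}
              for v in unary}
    return {v: min(b, key=b.get) for v, b in belief.items()}

print(min_sum())  # all participants active -> a formed chain
```

Because the messages are exchanged only between neighbours, no participant needs global knowledge of the network, which is what makes the technique fully decentralised.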

Relevance: 80.00%

Abstract:

The main purpose of this research is to develop and deploy an analytical framework for measuring the environmental performance of manufacturing supply chains. This work's theoretical bases combine and reconcile three major areas: supply chain management, environmental management and performance measurement. Researchers have suggested many empirical criteria for green supply chain (GSC) performance measurement and proposed both qualitative and quantitative frameworks. However, these are mainly operational in nature and specific to the focal company. This research develops an innovative GSC performance measurement framework by integrating supply chain processes (supplier relationship management, internal supply chain management and customer relationship management) with organisational decision levels (both strategic and operational). Environmental planning, environmental auditing, management commitment, environmental performance, economic performance and operational performance are the key constructs. The proposed framework is then applied to three selected manufacturing organisations in the UK. Their GSC performance is measured and benchmarked using the analytic hierarchy process (AHP), a multiple-attribute decision-making technique. The AHP-based framework offers an effective way to measure and benchmark organisations' GSC performance. This study has both theoretical and practical implications. Theoretically, it contributes holistic constructs for designing a GSC and managing it for sustainability; practically, it helps industry practitioners to measure and improve the environmental performance of their supply chain. © 2013 Taylor and Francis Group, LLC.

Corrigendum (DOI 10.1080/09537287.2012.751186): In the article 'Green supply chain performance measurement using the analytic hierarchy process: a comparative analysis of manufacturing organisations' by Prasanta Kumar Dey and Walid Cheffi, Production Planning & Control, DOI 10.1080/09537287.2012.666859, a third author, Breno Nunes, has been added who was not included in the paper as it originally appeared.
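The AHP step can be sketched compactly: derive criterion weights from a pairwise comparison matrix via its principal eigenvector and check judgement consistency. The comparison values below are invented for illustration, not data from the three UK organisations.

```python
# AHP weighting sketch: Saaty-scale pairwise comparisons for three criteria,
# e.g. environmental, economic and operational performance (values assumed).

import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                    # normalised criterion weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)        # consistency index
cr = ci / 0.58                              # random index RI = 0.58 for n = 3
print(weights, f"CR = {cr:.3f}")            # CR < 0.1 -> acceptably consistent
```

In a benchmarking exercise like the one described, each organisation's scores on the weighted criteria are then aggregated to give comparable overall GSC performance values.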

Relevance: 80.00%

Abstract:

Although the field of nonprofit studies now encompasses a substantial body of literature on the relationship between governmental and nonprofit organizations, the relationship between the business and nonprofit sectors has been less addressed by specialist nonprofit scholars. This Research Note aims to encourage further studies by nonprofit scholars of the business-nonprofit sector relationship. It looks at descriptive evidence to date, proposes a tentative resource-based framework for understanding how nonprofits and business relate to each other in practice and suggests some initial directions for developing a subfield within nonprofit studies. © The Author(s) 2012.

Relevance: 80.00%

Abstract:

OpenMI is a widely used standard allowing the exchange of data between integrated models, which has mostly been applied to dynamic, deterministic models. Within the FP7 UncertWeb project we are developing mechanisms and tools to support the management of uncertainty in environmental models. In this paper we explore the integration of the UncertWeb framework with OpenMI, to assess the issues that arise when propagating uncertainty in OpenMI model compositions, and the degree of integration possible with UncertWeb tools. In particular we develop an uncertainty-enabled model for a simple Lotka-Volterra system with an interface conforming to the OpenMI standard, exploring uncertainty in the initial predator and prey levels, and in the parameters of the model equations. We use the Elicitator tool developed within UncertWeb to identify the initial condition uncertainties, and show how these can be integrated, using UncertML, with simple Monte Carlo propagation mechanisms. The mediators we develop for OpenMI models are generic and produce standard Web services that expose the OpenMI models to a Web-based framework. We discuss what further work is needed to allow a more complete system to be developed, and show how this might be used in practice.
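The Monte Carlo propagation idea can be illustrated with a minimal sketch: sample the uncertain initial predator and prey levels (and one model parameter), run the Lotka-Volterra dynamics per draw, and summarise the output ensemble. The distributions are invented stand-ins for elicited ones, and the OpenMI/UncertWeb plumbing is omitted.

```python
# Monte Carlo propagation of input uncertainty through a Lotka-Volterra model.

import random

def lotka_volterra(prey, pred, a=1.0, b=0.1, c=1.5, d=0.075,
                   dt=0.01, steps=1000):
    for _ in range(steps):                    # forward-Euler integration to t = 10
        dprey = a * prey - b * prey * pred
        dpred = d * prey * pred - c * pred
        prey, pred = prey + dt * dprey, pred + dt * dpred
    return prey, pred

random.seed(1)
finals = [lotka_volterra(prey=random.gauss(10, 1),    # uncertain initial prey
                         pred=random.gauss(5, 0.5),   # uncertain initial predators
                         a=random.gauss(1.0, 0.05))   # uncertain growth rate
          for _ in range(500)]

prey_end = [p for p, _ in finals]
mean = sum(prey_end) / len(prey_end)
var = sum((x - mean) ** 2 for x in prey_end) / (len(prey_end) - 1)
print(f"prey at t=10: mean {mean:.2f}, std {var ** 0.5:.2f}")
```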

Relevance: 80.00%

Abstract:

This paper presents a Web-Centric [3] extension to a previously developed glaucoma expert system that will provide access for doctors and patients from any part of the world. Once implemented, this telehealth solution will publish the services of the Glaucoma Expert System on the World Wide Web, allowing patients and doctors to interact with it from their own homes. The web extension will also allow the expert system itself to be proactive and to send diagnosis alerts to the registered doctor and patient, informing each of any emergency and thereby allowing them to take immediate action. The existing Glaucoma Expert System uses fuzzy logic learning algorithms applied to historical patient data to update and improve its diagnosis rule set. This process, collectively called the learning process, would benefit greatly from a web-based framework that could provide services such as patient data transfer and web-based distribution of updated rules [1].
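As a rough sketch of the kind of web service such an extension could publish, the endpoint below accepts patient measurements and returns a diagnosis label plus the current rule-set version. The rule logic, parameter names, and alerting hook are illustrative assumptions, not the actual expert system.

```python
# Hypothetical diagnosis endpoint (standard library only, no real rule base).

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

RULES_VERSION = "2024-01"   # assumed: refreshed by the learning process

def diagnose(iop, cup_disc_ratio):
    """Toy stand-in for the fuzzy rule set: returns a risk label."""
    score = 0.5 * min(iop / 30.0, 1.0) + 0.5 * min(cup_disc_ratio / 0.8, 1.0)
    return "alert" if score > 0.7 else "monitor" if score > 0.4 else "normal"

class GlaucomaService(BaseHTTPRequestHandler):
    def do_POST(self):
        data = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        result = {"diagnosis": diagnose(data["iop"], data["cup_disc_ratio"]),
                  "rules_version": RULES_VERSION}
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(result).encode())
        # an "alert" result would additionally notify the registered doctor

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), GlaucomaService).serve_forever()
```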

Relevance: 80.00%

Abstract:

This study evaluates the applicability of e-service quality measurements in the context of online hotel bookings. Data were collected from an online survey of undergraduate college students at two universities in the United States. The Transaction Process-based Framework (eTransQual) conceptualized by Bauer et al. (2006) was adapted, and the dimensionality of e-service quality was identified. The study identified process/reliability as the most important factor influencing the overall quality of booking websites.

Relevance: 80.00%

Abstract:

Understanding the overall catalytic activity trend for rational catalyst design is one of the core goals in heterogeneous catalysis. In the past two decades, the development of density functional theory (DFT) and surface kinetics has made it feasible to theoretically evaluate and predict the variation in catalytic activity across catalysts within a descriptor-based framework. Within this framework, the concept of the volcano curve, which reveals the general activity trend, usually constitutes the basic foundation of catalyst screening. However, although it is a widely accepted concept in heterogeneous catalysis, its origin lacks a clear physical picture and definite interpretation. Herein, starting with a brief review of the development of the catalyst screening framework, we use a two-step kinetic model to refine and clarify the origin of the volcano curve through a full analytical treatment that integrates surface kinetics with the results of first-principles calculations. It is mathematically demonstrated that the volcano curve is an essential property in catalysis, resulting from the self-poisoning effect that accompanies the catalytic adsorption process. Specifically, when adsorption is strong, it is the rapid decrease of free surface sites rather than the increase in energy barriers that inhibits the overall reaction rate and produces the volcano curve. Some interesting points and implications for catalyst screening are also discussed on the basis of the kinetic derivation. Moreover, recent applications of the volcano curve to catalyst design in two important photoelectrocatalytic processes (the hydrogen evolution reaction and dye-sensitized solar cells) are briefly discussed.
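The self-poisoning argument can be reproduced numerically with a two-step toy model: adsorption onto a free site followed by conversion of the adsorbate, with BEP-type barriers tied to a single binding-energy descriptor. All functional forms and constants below are illustrative assumptions, not the paper's derivation.

```python
# Two-step kinetic sketch of the volcano curve. Stronger binding (more
# negative dE) speeds adsorption but, via a BEP-type relation, slows
# conversion -- and at steady state it also depletes free sites.

import math

kT = 0.05  # eV, illustrative temperature scale

def cycle_rate(dE, alpha=0.5):
    """Steady-state turnover rate as a function of binding energy dE (eV)."""
    k_ads = math.exp(-max(0.0, alpha * (dE + 1.0)) / kT)  # easier when binding is strong
    k_rxn = math.exp(-max(0.0, alpha * (-dE)) / kT)       # harder when binding is strong
    theta = k_ads / (k_ads + k_rxn)     # steady state: k_ads*(1-theta) = k_rxn*theta
    return k_rxn * theta                # overall rate; equals k_ads * (1 - theta)

for dE in [x / 10 for x in range(-15, 6)]:   # scan the descriptor
    print(f"dE = {dE:+.1f} eV  rate = {cycle_rate(dE):.3e}")
# The rates rise and then fall: at strongly negative dE the coverage theta -> 1
# and free sites vanish (self-poisoning), which caps the overall rate even
# though adsorption itself is barrierless there.
```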