836 results for RESOURCE ALLOCATION
Abstract:
Real-world business processes are resource-intensive. In work environments human resources usually multitask, both human and non-human resources are typically shared between tasks, and multiple resources are sometimes necessary to undertake a single task. However, current Business Process Management Systems focus on task-resource allocation in terms of individual human resources only and lack support for a full spectrum of resource classes (e.g., human or non-human, application or non-application, individual or teamwork, schedulable or unschedulable) that could contribute to tasks within a business process. In this paper we develop a conceptual data model of resources that takes into account the various resource classes and their interactions. The resulting conceptual resource model is validated using a real-life healthcare scenario.
Abstract:
Resource management in the building design process directly influences the development cycle time and the success of construction projects. This paper presents the information constraint net (ICN) to represent the complex information constraint relations among design activities involved in the building design process. An algorithm is developed to transform the information constraints throughout the ICN into a Petri net model. A resource management model is developed using the ICN to simulate and optimize resource allocation in the design process. An example from detailed structural design, simulated on the CPN Tools platform, is provided to validate the proposed model. The results demonstrate that the proposed approach achieves the resource management and optimization needed to shorten the development cycle and allocate resources optimally.
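As a concrete illustration of the resource-aware Petri net idea (not the paper's ICN-to-Petri-net transformation itself), the following minimal sketch models a shared design engineer and an information constraint as token places, so that a downstream design activity can fire only when both are available; all place, transition and token names are invented.

```python
# Minimal sketch: a design activity fires only when its information inputs
# (constraints in the ICN) and its resource token are both available.
# Place/transition names are illustrative, not taken from the paper.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)      # place -> token count
        self.transitions = {}             # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        if not self.enabled(name):
            return False
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n
        return True

# One structural engineer shared by two design activities; activity B also
# needs information produced by activity A (an ICN constraint).
net = PetriNet({"engineer": 1, "info_A_ready": 0})
net.add_transition("do_A", {"engineer": 1}, {"engineer": 1, "info_A_ready": 1})
net.add_transition("do_B", {"engineer": 1, "info_A_ready": 1}, {"engineer": 1, "B_done": 1})

print(net.enabled("do_B"))   # False: information constraint not yet satisfied
net.fire("do_A")
print(net.fire("do_B"))      # True: resource and information both available
```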
Abstract:
Farms and rural areas have many specific valuable resources that can be used to create non-agricultural products and services. Most of the research regarding on-farm diversification has hitherto concentrated on business start-up or farm survival strategies. Resource allocation and financial success have not yet been the primary focus of investigation. This study investigates these topics, namely resource allocation and the financial success of diversified farms, from a farm management perspective. The key question addressed in this dissertation is how the tangible and intangible resources of the diversified farm affect its financial success. The study's theoretical background draws on resource-based theory, themes from the theory of the learning organisation, and other decision-making theories. Two datasets were utilised in this study. First, data were collected by postal survey in 2001 (n = 663). Second, data were collected in a follow-up survey in 2006 (n = 439). Data were analysed using multivariate data analyses and path analyses. The results reveal that diversified farms performed differently. Success and resources were linked. Professional and management skills affected other resources, and hence directly or indirectly influenced success. In the light of the empirical analyses, the tangible and intangible resources owned by a diversified farm affected its financial success. The findings underline the importance of skills and networks for entrepreneurs. Practically all respondents used either agricultural resources for non-farm businesses or non-farm resources for agricultural enterprises. Sharing resources in this way was seen by farmers as a pragmatic opportunity. One of the downsides of diversification might be the phenomenon of over-diversification, which can be defined as the situation in which a farm diversifies beyond its optimal limit. The empirical findings reveal that capital and labour resource constraints had adverse effects on financial success. The evidence indicates that farms that were capital and labour resource constrained in 2001 were still less profitable than their ‘no problems’ counterparts five years later.
Abstract:
Quality of Service (QoS) is a new issue in cloud-based MapReduce, which is a popular computation model for parallel and distributed processing of big data. Guaranteeing QoS is challenging in a dynamic computation environment because a fixed resource allocation may become under-provisioned, which leads to QoS violation, or over-provisioned, which incurs unnecessary resource cost. This calls for runtime resource scaling to adapt to environmental changes and guarantee QoS. Aiming to guarantee the QoS, which is treated as a hard deadline in this work, this paper develops a theory to determine how and when resources are scaled up or down for cloud-based MapReduce. The theory employs a nonlinear transformation to define the problem in a reverse resource space, simplifying the theoretical analysis significantly. Theoretical results are then presented in three theorems giving sufficient conditions for guaranteeing the QoS of cloud-based MapReduce. The superiority and applications of the theory are demonstrated through case studies.
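The following is a rough sketch of the deadline-driven scaling intuition described above, not the paper's theorems or its reverse-resource-space transformation; the task counts, average task time and slot model are illustrative assumptions.

```python
# Hedged sketch of deadline-driven scale-up/scale-down for a MapReduce-like job.
# remaining_tasks, avg_task_time and slot counts are illustrative parameters,
# not the quantities used in the paper's analysis.
import math

def required_slots(remaining_tasks, avg_task_time, time_to_deadline):
    """Minimum number of parallel slots needed to finish all remaining tasks
    before the hard deadline, assuming tasks run in waves of equal duration."""
    if time_to_deadline <= 0:
        return math.inf
    waves = max(1, math.floor(time_to_deadline / avg_task_time))
    return math.ceil(remaining_tasks / waves)

def scaling_decision(current_slots, remaining_tasks, avg_task_time, time_to_deadline):
    need = required_slots(remaining_tasks, avg_task_time, time_to_deadline)
    if need > current_slots:
        return f"scale up to {need} slots"      # under-provisioned: QoS at risk
    if need < current_slots:
        return f"scale down to {need} slots"    # over-provisioned: wasted cost
    return "keep current allocation"

print(scaling_decision(current_slots=10, remaining_tasks=120,
                       avg_task_time=30.0, time_to_deadline=300.0))
```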
Abstract:
We address the problem of allocating a single divisible good to a number of agents. The agents have concave valuation functions parameterized by a scalar type. The agents report only the type. The goal is to find allocatively efficient, strategy-proof, nearly budget-balanced mechanisms within the Groves class. Near budget balance is attained by returning as much of the received payments as rebates to agents. Two performance criteria are of interest within the class of linear rebate functions: the maximum ratio of budget surplus to efficient surplus, and the expected budget surplus. The goal is to minimize them. Assuming that the valuation functions are known, we show that both problems reduce to convex optimization problems, where the convex constraint sets are characterized by a continuum of half-plane constraints parameterized by the vector of reported types. We then propose a randomized relaxation of these problems by sampling constraints. The relaxed problem is a linear programming problem (LP). We then identify the number of samples needed for "near-feasibility" of the relaxed constraint set. Under some conditions on the valuation function, we show that the value of the approximate LP is close to the optimal value. Simulation results show significant improvements of our proposed method over the Vickrey-Clarke-Groves (VCG) mechanism without rebates. In the special case of indivisible goods, the mechanisms in this paper fall back to those proposed by Moulin, by Guo and Conitzer, and by Gujar and Narahari, without any need for randomization. Extensions of the proposed mechanisms to situations in which the valuation functions are not known to the central planner are also discussed. Note to Practitioners: Our results will be useful in all resource allocation problems that involve gathering of information privately held by strategic users, where the utilities are any concave function of the allocations, and where the resource planner is not interested in maximizing revenue, but in efficient sharing of the resource. Such situations arise quite often in fair sharing of internet resources, fair sharing of funds across departments within the same parent organization, auctioning of public goods, etc. We study methods to achieve near budget balance by first collecting payments according to the celebrated VCG mechanism, and then returning as much of the collected money as rebates. Our focus on linear rebate functions allows for easy implementation. The resulting convex optimization problem is solved via relaxation to a randomized linear programming problem, for which several efficient solvers exist. This relaxation is enabled by constraint sampling. Keeping practitioners in mind, we identify the number of samples that assures a desired level of "near-feasibility" with the desired confidence level. Our methodology will occasionally require subsidy from outside the system. We however demonstrate via simulation that, if the mechanism is repeated several times over independent instances, then past surplus can support the subsidy requirements. We also extend our results to situations where the strategic users' utility functions are not known to the allocating entity, a common situation in the context of internet users and other problems.
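The constraint-sampling relaxation can be illustrated generically: a semi-infinite linear program whose constraints are parameterized by the reported type vector is relaxed by sampling finitely many types and solving the resulting LP. The sketch below uses toy a(theta) and b(theta) functions and a toy objective; it is not the actual rebate-design LP derived in the paper.

```python
# Hedged sketch of constraint sampling only: min c^T x subject to
# a(theta)^T x <= b(theta) for all theta, relaxed by sampling finitely many
# theta. a(), b() and the objective below are toy stand-ins, not the paper's
# feasibility/rebate constraints.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

def a(theta):            # constraint normal as a function of the sampled type vector
    return np.array([np.sin(theta).sum(), np.cos(theta).sum(), 1.0])

def b(theta):            # right-hand side as a function of the sampled type vector
    return 1.0 + 0.1 * np.abs(theta).sum()

n_samples = 500          # more samples -> higher confidence of "near-feasibility"
thetas = rng.uniform(0.0, 1.0, size=(n_samples, 4))
A_ub = np.array([a(t) for t in thetas])
b_ub = np.array([b(t) for t in thetas])

c = np.array([1.0, 1.0, 1.0])        # toy linear objective
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(-5, 5)] * 3, method="highs")
print(res.x, res.fun)
```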
Abstract:
Node dynamicity and management without an administrator are key features of mobile ad hoc networks (MANETs). The increasing resource requirements of nodes running different applications, the scarcity of resources, and node mobility are important issues to be considered when allocating resources in MANETs. Moreover, managing limited resources for optimal allocation is a crucial task. In this work we present the design of a resource allocation protocol and its performance evaluation. The proposed protocol uses both static and mobile agents. It distributes and parallelizes message propagation (mobile agents carrying information) in an efficient way to achieve scalability and to speed up message delivery to the nodes in the sectors of the zones of a MANET. The protocol's functionality has been simulated using the Java Agent Development Environment (JADE) framework for agent generation, migration and communication. A mobile agent migrates from a central resource-rich node with a message and navigates autonomously through a zone of the network until it reaches the boundary node. The performance evaluation shows that the proposed protocol takes much less time to allocate the required resources to the requesting nodes, uses fewer network resources and increases network scalability. (C) 2015 Elsevier B.V. All rights reserved.
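The actual protocol is implemented with JADE agents in Java; the sketch below only illustrates, in plain Python, why launching one mobile agent per sector parallelizes message delivery compared with a single agent visiting the whole zone. The zone structure, sector names and per-hop delay are invented.

```python
# Hedged sketch of the parallelised message propagation idea: the central
# resource-rich node launches one mobile agent per sector of a zone, and each
# agent visits the nodes in its sector up to the boundary node.
ZONE = {
    "sector_1": ["n11", "n12", "n13", "boundary_1"],
    "sector_2": ["n21", "n22", "boundary_2"],
    "sector_3": ["n31", "n32", "n33", "n34", "boundary_3"],
}
HOP_DELAY = 1.0  # illustrative per-hop agent migration delay

def agent_walk(sector_nodes, message):
    """One mobile agent delivering the allocation message along its sector."""
    delivered, t = [], 0.0
    for node in sector_nodes:              # stops naturally at the boundary node
        t += HOP_DELAY
        delivered.append((node, message, t))
    return delivered, t

# Sequential: a single agent visits every node of the zone in turn.
seq_time = sum(len(nodes) for nodes in ZONE.values()) * HOP_DELAY
# Parallel: one agent per sector; completion time is the longest sector walk.
par_time = max(agent_walk(nodes, "allocate:cpu=2")[1] for nodes in ZONE.values())

print(f"sequential delivery time: {seq_time}, parallel delivery time: {par_time}")
```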
Abstract:
Emerging configurable infrastructures such as large-scale overlays and grids, distributed testbeds, and sensor networks comprise diverse sets of available computing resources (e.g., CPU and OS capabilities and memory constraints) and network conditions (e.g., link delay, bandwidth, loss rate, and jitter) whose characteristics are both complex and time-varying. At the same time, distributed applications to be deployed on these infrastructures exhibit increasingly complex constraints and requirements on the resources they wish to utilize. Examples include selecting nodes and links to schedule an overlay multicast file transfer across the Grid, or embedding a network experiment with specific resource constraints in a distributed testbed such as PlanetLab. Thus, a common problem facing the efficient deployment of distributed applications on these infrastructures is that of "mapping" application-level requirements onto the network in such a manner that the requirements of the application are realized, assuming that the underlying characteristics of the network are known. We refer to this problem as the network embedding problem. In this paper, we propose a new approach to tackle this combinatorially hard problem. Thanks to a number of heuristics, our approach greatly improves performance and scalability over previously existing techniques. It does so by pruning large portions of the search space without overlooking any valid embedding. We present a construction that allows a compact representation of candidate embeddings, which is maintained by carefully controlling the order in which candidate mappings are inserted and invalid mappings are removed. We present an implementation of our proposed technique, which we call NETEMBED – a service that identifies feasible mappings of a virtual network configuration (the query network) to an existing real infrastructure or testbed (the hosting network). We present results of extensive performance evaluation experiments of NETEMBED using several combinations of real and synthetic network topologies. Our results show that our NETEMBED service is quite effective in identifying one (or all) possible embeddings for quite sizable queries and hosting networks – much larger than what any of the existing techniques or services are able to handle.
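The network embedding problem itself can be illustrated with a small backtracking search that prunes hosting nodes lacking CPU or link capacity; this is only meant to convey the flavour of the mapping problem, not NETEMBED's compact candidate representation or its pruning heuristics, and the topologies and capacities are invented.

```python
# Hedged sketch of the embedding problem: map each query node to a distinct
# hosting node with enough CPU, such that every query edge's bandwidth demand
# fits on the corresponding hosting link. All data are illustrative.
HOST_CPU = {"h1": 4, "h2": 2, "h3": 8, "h4": 4}
HOST_BW = {("h1", "h2"): 100, ("h2", "h3"): 50, ("h1", "h3"): 200, ("h3", "h4"): 100}
QUERY_CPU = {"q1": 2, "q2": 4}
QUERY_BW = {("q1", "q2"): 80}

def link_capacity(a, b):
    return HOST_BW.get((a, b)) or HOST_BW.get((b, a)) or 0

def edge_ok(q, h, assigned):
    """Check every query edge incident to q whose other endpoint is already mapped."""
    for (qa, qb), bw in QUERY_BW.items():
        other = qb if qa == q else qa if qb == q else None
        if other in assigned and link_capacity(h, assigned[other]) < bw:
            return False
    return True

def embed(assigned, remaining):
    if not remaining:
        return dict(assigned)
    q = remaining[0]
    for h, cpu in HOST_CPU.items():
        # Pruning: skip hosts without enough CPU or already in use.
        if cpu < QUERY_CPU[q] or h in assigned.values():
            continue
        if edge_ok(q, h, assigned):
            assigned[q] = h
            result = embed(assigned, remaining[1:])
            if result:
                return result
            del assigned[q]
    return None

print(embed({}, list(QUERY_CPU)))   # e.g. {'q1': 'h1', 'q2': 'h3'}
```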
Inclusive education policy, the general allocation model and dilemmas of practice in primary schools
Abstract:
Background: Inclusive education is central to contemporary discourse internationally, reflecting societies’ wider commitment to social inclusion. Education has witnessed transforming approaches that have created differing distributions of power, resource allocation and accountability. Multiple actors are being forced to consider changes to how key services and supports are organised. This research constitutes a case study situated within this broader social service dilemma of how to distribute finite resources equitably to meet individual need, while advancing inclusion. It focuses on the national directive with regard to inclusive educational practice for primary schools, Department of Education and Science Special Education Circular 02/05, which introduced the General Allocation Model (GAM) within the legislative context of the Education of Persons with Special Educational Needs (EPSEN) Act (Government of Ireland, 2004). This research could help to inform policy with ‘facts about what is happening on the ground’ (Quinn, 2013).
Research Aims: The research set out to unearth the assumptions and definitions embedded within the policy document, to analyse how those who are at the coalface of policy, and who interface with multiple interests in primary schools, understand the GAM and respond to it, and to investigate its effects on students and their education. It examines student outcomes in the primary schools where the GAM was investigated.
Methods and Sample: The post-structural study acknowledges the importance of policy analysis which explicitly links the ‘bigger worlds’ of global and national policy contexts to the ‘smaller worlds’ of policies and practices within schools and classrooms. This study insists upon taking the detail seriously (Ozga, 1990). A mixed methods approach to data collection and analysis is applied. In order to secure the perspectives of key stakeholders, semi-structured interviews were conducted with primary school principals, class teachers and learning support/resource teachers (n=14) in three distinct mainstream, non-DEIS schools. Data from the schools and their environs provided a profile of students. The researcher then used the Pobal Maps Facility (available at www.pobal.ie) to identify the Small Area (SA) in which each student resides, and to assign values to each address based on the Pobal HP Deprivation Index (Haase and Pratschke, 2012). Analysis of the datasets, guided by the conceptual framework of the policy cycle (Ball, 1994), revealed a number of significant themes.
Results: Data illustrate that the main model to support student need is withdrawal from the classroom under policy that espouses inclusion. Quantitative data, in particular, highlighted an association between segregated practice and lower socio-economic status (LSES) backgrounds of students. Up to 83% of the students in special education programmes are from LSES backgrounds. In some schools 94% of students from LSES backgrounds are withdrawn from classrooms daily for special education. While the internal processes of schooling are not solely to blame for class inequalities, this study reveals the power of professionals to order children in school, which has implications for segregated special education practice. Such agency on the part of key actors in the context of practice relates to ‘local constructions of dis/ability’, which is influenced by teacher habitus (Bourdieu, 1984).
The researcher contends that inclusive education has not resulted in positive outcomes for students from LSES backgrounds because it is built on faulty assumptions that focus on a psycho-medical perspective of dis/ability; that is, placement decisions do not consider the intersectionality of dis/ability with class or culture. This study argues that the student need for support is better understood as ‘home/school discontinuity’, not ‘disability’. Moreover, the study unearths the power of some parents to use social and cultural capital to ensure eligibility for enhanced resources. Therefore, a hierarchical system has developed in mainstream schools as a result of funding models to support need in inclusive settings. Furthermore, all schools in the study are ‘ordinary’ schools, yet participants acknowledged that some schools are more ‘advantaged’, which may suggest that ‘ordinary’ schools serve to ‘bury class’ (Reay, 2010) as a key marker in allocating resources. The research suggests that general allocation models of funding to meet the needs of students demand a systematic approach grounded in reallocating funds from where they have less benefit to where they have more. The calculation of the composite Haase Value in respect of the student cohort in receipt of special education support adopted for this study could be usefully applied at a national level to ensure that the greatest level of support is targeted at the greatest need.
Conclusion: In summary, the study reveals that existing structures constrain and enable agents, whose interactions produce intended and unintended consequences. The study suggests that policy should be viewed as a continuous and evolving cycle (Ball, 1994) where actors in each of the social contexts have a shared responsibility in the evolution of education that is equitable, excellent and inclusive.
Abstract:
Computationally efficient sequential learning algorithms are developed for direct-link resource-allocating networks (DRANs). These are achieved by decomposing existing recursive training algorithms on a layer-by-layer and neuron-by-neuron basis. This allows network weights to be updated in an efficient parallel manner and facilitates the implementation of minimal update extensions that yield a significant reduction in computation load per iteration compared to existing sequential learning methods employed in resource-allocating network (RAN) and minimal RAN (MRAN) approaches. The new algorithms, which also incorporate a pruning strategy to control network growth, are evaluated on three different system identification benchmark problems and shown to outperform existing methods both in terms of training error convergence and computational efficiency. (c) 2005 Elsevier B.V. All rights reserved.
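For readers unfamiliar with resource-allocating networks, the sketch below shows the classic RAN growth criterion that DRAN-style methods build on: a new radial-basis unit is allocated only when both the prediction error and the distance to the nearest existing centre exceed thresholds, otherwise the existing weights receive a small update. Thresholds, widths and the LMS step are illustrative and do not reproduce the paper's decomposed, parallel update rules.

```python
# Hedged sketch of the RAN-style growth criterion; parameters are illustrative.
import numpy as np

class TinyRAN:
    def __init__(self, eps_error=0.05, eps_dist=0.3, width=0.2, lr=0.05):
        self.centres, self.weights = [], []
        self.bias = 0.0
        self.eps_error, self.eps_dist, self.width, self.lr = eps_error, eps_dist, width, lr

    def _phi(self, x):
        return np.array([np.exp(-np.linalg.norm(x - c) ** 2 / self.width ** 2)
                         for c in self.centres])

    def predict(self, x):
        return self.bias + (self._phi(x) @ np.array(self.weights) if self.centres else 0.0)

    def observe(self, x, y):
        err = y - self.predict(x)
        nearest = min((np.linalg.norm(x - c) for c in self.centres), default=np.inf)
        if abs(err) > self.eps_error and nearest > self.eps_dist:
            self.centres.append(np.array(x))     # grow: allocate a new hidden unit
            self.weights.append(err)
        else:                                    # otherwise a small LMS-style update
            phi = self._phi(x)
            self.weights = list(np.array(self.weights) + self.lr * err * phi)
            self.bias += self.lr * err

net = TinyRAN()
for x in np.linspace(0, 1, 200):                 # toy 1-D system identification task
    net.observe(np.array([x]), np.sin(2 * np.pi * x))
print(len(net.centres), net.predict(np.array([0.25])))
```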
Abstract:
This article introduces a resource allocation solution capable of handling mixed media applications within the constraints of a 60 GHz wireless network. The challenges of multimedia wireless transmission include high bandwidth requirements, delay intolerance and wireless channel availability. A new Channel Time Allocation Particle Swarm Optimization (CTA-PSO) is proposed to solve the network utility maximization (NUM) resource allocation problem. CTA-PSO optimizes the time allocated to each device in the network in order to maximize the Quality of Service (QoS) experienced by each user. CTA-PSO introduces network-linked swarm size, an increased diversity function and a learning method based on the personal best, Pbest, results of the swarm. These additional developments to the PSO produce improved convergence speed with respect to Adaptive PSO while maintaining the QoS improvement of the NUM. Specifically, CTA-PSO supports applications described by both convex and non-convex utility functions. The multimedia resource allocation solution presented in this article provides a practical solution for real-time wireless networks.
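A bare-bones particle swarm optimization over channel-time allocations is sketched below to show the general mechanism CTA-PSO builds on: particles are candidate splits of the superframe time, feasibility is restored by projection, and personal/global bests guide the velocity update. The log-utility objective, demand vector and PSO coefficients are illustrative; the network-linked swarm size, diversity function and Pbest-learning refinements of CTA-PSO are not reproduced.

```python
# Hedged sketch of PSO for channel-time allocation; all parameters are toy values.
import numpy as np

rng = np.random.default_rng(1)
n_devices, total_time = 4, 100.0
demand = np.array([0.5, 1.0, 2.0, 4.0])          # illustrative per-device rates

def utility(t):                                   # concave QoS proxy (log utility)
    return np.sum(np.log1p(demand * t))

def project(t):                                   # keep allocations feasible (sum to total_time)
    t = np.clip(t, 1e-6, None)
    return t * total_time / t.sum()

n_particles, iters = 30, 200
pos = np.array([project(rng.uniform(1, 10, n_devices)) for _ in range(n_particles)])
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([utility(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

w, c1, c2 = 0.7, 1.5, 1.5                         # inertia and acceleration coefficients
for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.array([project(p) for p in pos + vel])
    vals = np.array([utility(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print(np.round(gbest, 2), round(float(utility(gbest)), 3))
```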
Abstract:
We consider the uplink of massive multicell multiple-input multiple-output systems, where the base stations (BSs), equipped with massive arrays, serve simultaneously several terminals in the same frequency band. We assume that the BS estimates the channel from uplink training, and then uses the maximum ratio combining technique to detect the signals transmitted from all terminals in its own cell. We propose an optimal resource allocation scheme which jointly selects the training duration, training signal power, and data signal power in order to maximize the sum spectral efficiency, for a given total energy budget spent in a coherence interval. Numerical results verify the benefits of the optimal resource allocation scheme. Furthermore, we show that more training signal power should be used at low signal-to-noise ratios (SNRs), and less at high SNRs. Interestingly, for the entire SNR regime, the optimal training duration is equal to the number of terminals.
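A small grid search illustrates the joint selection of training duration and power split under a fixed energy budget. The spectral-efficiency expression used below is a simplified placeholder (a pre-log factor times a log term whose effective SINR improves with both training energy and array gain), not the exact MRC bound optimized in the paper, and the system dimensions are invented.

```python
# Hedged sketch of the joint search over training duration and power split.
import numpy as np

M, K, T = 100, 10, 200          # antennas, terminals, coherence interval (symbols)
E_total = 20.0                  # total energy budget per coherence interval

def sum_se(tau, alpha):
    """tau: training symbols, alpha: fraction of the energy spent on training."""
    if tau < K or tau >= T:
        return -np.inf                        # need at least K orthogonal pilots
    e_train, e_data = alpha * E_total, (1 - alpha) * E_total
    p_train, p_data = e_train / tau, e_data / (T - tau)
    # Placeholder effective SINR: grows with array gain and estimation quality.
    est_quality = tau * p_train / (1 + tau * p_train)
    sinr = M * est_quality * p_data / (1 + K * p_data)
    return (1 - tau / T) * K * np.log2(1 + sinr)

best = max(((tau, alpha, sum_se(tau, alpha))
            for tau in range(K, T)
            for alpha in np.linspace(0.05, 0.95, 91)),
           key=lambda t: t[2])
print(f"tau*={best[0]}, alpha*={best[1]:.2f}, sum SE={best[2]:.2f} bits/s/Hz")
```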
Abstract:
This thesis contributes to the advancement of Fiber-Wireless (FiWi) access technologies through the development of algorithms for resource allocation and energy-efficient routing. FiWi access networks use both optical and wireless/cellular technologies to provide the high bandwidth and ubiquity required by users and by today's highly demanding services. FiWi access networks are divided into two parts. In one part, fiber is brought from the central office to a point near the users; in the other, wireless routers or base stations take over and provide Internet access to users. Many technologies can be used in both the optical and wireless parts, which leads to different integration and optimization problems to be solved. In this thesis, the focus is on FiWi access networks that use a passive optical network in the optical section and a wireless mesh network in the wireless section. In such networks, two important aspects that influence network performance are the allocation of resources and traffic routing throughout the mesh section. In this thesis, both problems are addressed. A fair bandwidth allocation algorithm is developed, which provides fairness in terms of bandwidth and in terms of experienced delays among all users. As for routing, an energy-efficient routing algorithm is proposed that optimizes sleeping and productive periods throughout the wireless and optical sections. To develop the stated algorithms, game theory and network formation theory were used. These are powerful mathematical tools for solving problems involving agents with conflicting interests. Since these tools are usually not common knowledge, a brief survey of game theory and network formation theory is provided to explain the concepts used throughout the thesis. As such, this thesis also serves as a showcase of the use of game theory and network formation theory to develop new algorithms.
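As one concrete notion of the bandwidth fairness the thesis targets, the sketch below implements plain max-min fair sharing by progressive filling; the thesis's own algorithm is game-theoretic and also balances experienced delays, which this sketch does not attempt. The capacities and demands are invented.

```python
# Hedged sketch of max-min fair bandwidth sharing via progressive filling.
def max_min_fair(capacity, demands):
    """No user gets more than it asked for; any unused share is redistributed
    among the still-unsatisfied users in later rounds."""
    alloc = {u: 0.0 for u in demands}
    unsatisfied = set(demands)
    remaining = capacity
    while unsatisfied and remaining > 1e-12:
        share = remaining / len(unsatisfied)
        for u in list(unsatisfied):
            give = min(share, demands[u] - alloc[u])
            alloc[u] += give
            remaining -= give
            if alloc[u] >= demands[u] - 1e-12:
                unsatisfied.discard(u)
    return alloc

print(max_min_fair(100.0, {"onu_1": 10.0, "onu_2": 40.0, "onu_3": 80.0}))
# onu_1 gets its full 10, onu_2 its full 40, and onu_3 the remaining 50
```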
Abstract:
The open access philosophy applied by regulatory agencies may lead to a scenario in which captive consumers alone bear the responsibility for distribution network losses, even when Independent Energy Producers (also known as Distributed Generation) and Independent Energy Consumers are connected to the system. This work proposes the use of a loss allocation method in distribution systems where open access is allowed, in which the cross-subsidies that arise from the influence generators have over system losses are minimized, thus guaranteeing to some extent the efficiency and transparency of the market's economic signals. Results obtained through the Zbus loss allocation method adapted for distribution networks are processed in such a way that the allocation corresponding to the generation buses is divided among the consumer buses, while still considering the consumers' spatial characteristics. © 2007 IEEE.
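The Zbus allocation rule that this work adapts can be sketched directly: with R the real part of the bus impedance matrix and I the vector of net injected bus currents, bus k is assigned L_k = Re{ I_k^* * sum_j R_kj * I_j }, and these shares sum to the total network losses. The 3-bus data below are invented, and the paper's post-processing step (redistributing the generator buses' shares among consumer buses according to spatial characteristics) is not shown.

```python
# Hedged sketch of the core Zbus loss allocation formula on invented 3-bus data.
import numpy as np

Zbus = np.array([[0.02 + 0.06j, 0.01 + 0.03j, 0.01 + 0.02j],
                 [0.01 + 0.03j, 0.03 + 0.09j, 0.02 + 0.04j],
                 [0.01 + 0.02j, 0.02 + 0.04j, 0.04 + 0.10j]])
I = np.array([1.2 - 0.3j, -0.7 + 0.1j, -0.5 + 0.2j])   # net injected bus currents (p.u.)

R = Zbus.real
loss_per_bus = np.real(np.conj(I) * (R @ I))            # L_k for each bus
total_losses = np.real(np.conj(I) @ (Zbus @ I))         # equals the sum of the L_k

print(np.round(loss_per_bus, 5), round(float(total_losses), 5))
assert np.isclose(loss_per_bus.sum(), total_losses)
```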
Abstract:
This work presents exact algorithms for Resource Allocation and Cyclic Scheduling Problems (RA&CSPs). Cyclic Scheduling Problems arise in a number of application areas, such as hoist scheduling, mass production, compiler design (scheduling loops on parallel architectures), software pipelining, and embedded system design. The RA&CS problem concerns time and resource assignment to a set of activities, to be indefinitely repeated, subject to precedence and resource capacity constraints. In this work we present two constraint programming frameworks addressing two different types of cyclic problems. First, we consider the disjunctive RA&CSP, in which the allocation problem involves unary resources. Instances are described through the Synchronous Data-flow (SDF) Model of Computation. The key problem of finding a maximum-throughput allocation and scheduling of Synchronous Data-Flow graphs onto a multi-core architecture is NP-hard and has traditionally been solved by means of heuristic (incomplete) algorithms. We propose an exact (complete) algorithm for the computation of a maximum-throughput mapping of applications specified as SDF graphs onto multi-core architectures. Results show that the approach can handle realistic instances in terms of size and complexity. Next, we tackle the Cyclic Resource-Constrained Scheduling Problem (CRCSP). We propose a Constraint Programming approach based on modular arithmetic: in particular, we introduce a modular precedence constraint and a global cumulative constraint along with their filtering algorithms. Many traditional approaches to cyclic scheduling operate by fixing the period value and then solving a linear problem in a generate-and-test fashion. Conversely, our technique is based on a non-linear model and tackles the problem as a whole: the period value is inferred from the scheduling decisions. The proposed approaches have been tested on a number of non-trivial synthetic instances and on a set of realistic industrial instances, achieving good results on problems of practical size.
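The modular (modulo-period) view of cyclic scheduling can be illustrated with a small feasibility checker: a precedence with iteration distance dist requires start[j] + period * dist >= start[i] + dur[i], and the cumulative resource constraint is enforced on the schedule folded into a single period. The activities, durations and capacity below are invented, and the sketch only checks a candidate schedule; it is not the paper's constraint programming model or its filtering algorithms.

```python
# Hedged sketch of checking a cyclic (modulo-period) schedule on toy data.
def check_cyclic_schedule(start, dur, req, period, capacity, precedences):
    # Modular precedence: j depends on i from `dist` iterations earlier,
    # i.e. start[j] + period * dist >= start[i] + dur[i].
    for i, j, dist in precedences:
        if start[j] + period * dist < start[i] + dur[i]:
            return False
    # Cumulative resource constraint folded modulo the period: every repetition
    # of an activity occupies the same time slots of the period.
    for t in range(period):
        usage = sum(req[a] for a in start
                    if t in {(start[a] + u) % period for u in range(dur[a])})
        if usage > capacity:
            return False
    return True

dur = {"A": 3, "B": 2, "C": 2}
req = {"A": 1, "B": 1, "C": 1}
precedences = [("A", "B", 0), ("B", "C", 0), ("C", "A", 1)]  # C feeds A of the next iteration
print(check_cyclic_schedule({"A": 0, "B": 3, "C": 5}, dur, req, 7, 2, precedences))  # True
print(check_cyclic_schedule({"A": 0, "B": 3, "C": 5}, dur, req, 6, 2, precedences))  # False: period too small
```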
Abstract:
Dynamic spectrum access (DSA) aims at utilizing spectral opportunities in both the time and frequency domains at any given location, which arise due to variations in spectrum usage. Recently, cognitive radios (CRs) have been proposed as a means of implementing DSA. In this work we focus on resource management in overlaid cognitive radio networks (CRNs). We formulate resource allocation strategies for CRNs as mathematical optimization problems. Specifically, we focus on two key problems in resource management: sum rate maximization and maximization of the number of admitted users. Since both of these problems are NP-hard due to the presence of binary assignment variables, we propose novel graph-based algorithms to solve them optimally. Further, we analyze the impact of location awareness on the network performance of CRNs by considering three cases: full location awareness, partial location awareness and no location awareness. Our results clearly show that location awareness has a significant impact on the performance of overlaid CRNs and leads to an increase in spectrum utilization efficiency.
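To give the flavour of a graph-based optimal assignment, the sketch below treats sum-rate maximization with at most one vacant channel per secondary user as a maximum-weight bipartite matching and solves it exactly with the Hungarian method; the rate matrix is invented, and the paper's own algorithms, interference model and admitted-users-maximization variant are not reproduced.

```python
# Hedged sketch: sum-rate maximization as maximum-weight bipartite matching.
import numpy as np
from scipy.optimize import linear_sum_assignment

# rates[u, c]: achievable rate (bits/s/Hz) of secondary user u on vacant channel c
rates = np.array([[3.1, 1.2, 0.4],
                  [2.0, 2.6, 1.1],
                  [0.9, 1.8, 2.7],
                  [1.5, 0.7, 2.2]])          # 4 users, 3 vacant channels

users, channels = linear_sum_assignment(rates, maximize=True)
for u, c in zip(users, channels):
    print(f"user {u} -> channel {c} (rate {rates[u, c]})")
print("sum rate:", rates[users, channels].sum())
```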