976 results for location-allocation problem
Abstract:
The growth of waste is a problem that affects the environment as a whole and cannot be ignored. Good waste management is the key to improving future prospects, and waste collection is a central activity within that management. Finding better ways to collect waste reduces its social, economic and environmental costs. Using Geographic Information Systems, we develop a methodology to identify the most suitable locations for the collection containers of the different types of solid urban waste. Since different types of waste exist, not all of them should be managed in the same way; we therefore distinguish between models that pursue efficiency and models that pursue equity in waste collection, according to the needs of each waste stream.
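As a hedged illustration of the efficiency-oriented siting models the abstract refers to, the sketch below places a fixed number of containers so as to minimize demand-weighted distance (a p-median-style objective); the coordinates, weights, candidate sites and the brute-force search are illustrative assumptions, not the GIS methodology of the study.

```python
# Minimal p-median-style siting sketch: choose p container sites minimizing
# the total demand-weighted distance from demand points to their nearest
# container. All data below are illustrative assumptions.
from itertools import combinations
import math

demand_points = {"A": (0, 0, 120), "B": (2, 1, 80), "C": (5, 4, 60)}  # (x, y, weight)
candidate_sites = {"s1": (1, 0), "s2": (4, 3), "s3": (2, 2)}
p = 2  # number of containers to place

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def total_cost(sites):
    # demand-weighted distance to the nearest selected site
    return sum(w * min(dist((x, y), candidate_sites[s]) for s in sites)
               for x, y, w in demand_points.values())

best = min(combinations(candidate_sites, p), key=total_cost)
print(best, total_cost(best))
```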
Abstract:
We investigate the performance of dual-hop two-way amplify-and-forward (AF) relaying in the presence of in-phase and quadrature-phase imbalance (IQI) at the relay node. In particular, the effective signal-to-interference-plus-noise ratio (SINR) at both sources is derived. These SINRs are used to design an instantaneous power allocation scheme, which maximizes the minimum SINR of the two sources under a total transmit power constraint. The solution to this optimization problem is analytically determined and used to evaluate the outage probability (OP) of the considered two-way AF relaying system. Both analytical and numerical results show that IQI can create fundamental performance limits on two-way relaying, which cannot be avoided by simply improving the channel conditions.
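The max-min power allocation described above can be pictured with a small numerical sketch: with the total power fixed, one SINR rises and the other falls as power shifts between the sources, so bisection finds the balancing point. The SINR expressions below are simple placeholders, not the IQI-impaired formulas derived in the paper.

```python
# Hedged sketch: split a total power budget P between the two sources so that
# the smaller of the two SINRs is maximized. Placeholder SINR models only.
def sinr_1(p1, p2):  # increases in p1 for fixed total power
    return 2.0 * p1 / (0.3 * p2 + 1.0)

def sinr_2(p1, p2):  # decreases in p1 for fixed total power
    return 1.5 * p2 / (0.2 * p1 + 1.0)

def max_min_allocation(P, iters=60):
    lo, hi = 0.0, P
    # the max-min point is where the two SINR curves cross; bisect on p1
    for _ in range(iters):
        p1 = 0.5 * (lo + hi)
        p2 = P - p1
        if sinr_1(p1, p2) < sinr_2(p1, p2):
            lo = p1
        else:
            hi = p1
    p1 = 0.5 * (lo + hi)
    return p1, P - p1

p1, p2 = max_min_allocation(10.0)
print(p1, p2, min(sinr_1(p1, p2), sinr_2(p1, p2)))
```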
Abstract:
We consider how three firms compete in a Salop location model and how cooperation in location choice by two of these firms affects the outcomes. We consider the classical case of linear transportation costs as a two-stage game in which the firms first select a location on a unit circle along which consumers are dispersed evenly, followed by the competitive selection of a price. Standard analysis restricts itself to purely competitive selection of location; instead, we focus on the situation in which two firms collectively decide about location, but price their products competitively after the location choice has been made. We show that such partial coordination of location is beneficial to all firms, since it reduces the number of equilibria significantly and, thereby, the resulting coordination problem. Subsequently, we show that the case of quadratic transportation costs changes the main conclusions only marginally.
Abstract:
Background and problem – As a result of financial crises and the realization of a broader stakeholder network, recent decades have seen an increase in stakeholder demand for non-financial information in corporate reporting. This has led to a situation of information overload where separate financial and sustainability reports have developed in length and complexity independently of each other. Integrated reporting has been presented as a solution to this problematic situation. The question is whether the corporate world believes this to be the solution and whether the development of corporate reporting is heading in this direction. Purpose – This thesis aims to examine and assess to what extent companies listed on the OMX Stockholm 30 (OMXS30), as of 2016-02-28, comply with the Strategy content element of the <IR> Framework and how this disclosure has developed since the framework’s pilot project and official release, using a self-constructed disclosure index based on its specific items. Methodology – The purpose was fulfilled through an analysis of 104 annual reports covering 26 companies during the period 2011-2014. The annual reports were assessed using a self-constructed disclosure index based on the <IR> Framework content element Strategy and Resource Allocation, where one point was given for each disclosed item. Analysis and conclusions – The study found that the OMXS30-listed companies to a large extent comply with the strategic content element of the <IR> Framework and that this compliance has grown steadily throughout the researched time span. There is still room for improvement, however, with a total average framework compliance of 84% for 2014. Although many items are being reported on, there are indications that companies generally miss out on the core values of integrated reporting.
Abstract:
Deployment of low power basestations within cellular networks can potentially increase both capacity and coverage. However, such deployments require efficient resource allocation schemes for managing interference from the low power and macro basestations that are located within each other’s transmission range. In this dissertation, we propose novel and efficient dynamic resource allocation algorithms in the frequency, time and space domains. We show that the proposed algorithms perform better than the current state-of-the-art resource management algorithms. In the first part of the dissertation, we propose an interference management solution in the frequency domain. We introduce a distributed frequency allocation scheme that shares frequencies between macro and low power pico basestations, and guarantees a minimum average throughput to users. The scheme seeks to minimize the total number of frequencies needed to honor the minimum throughput requirements. We evaluate our scheme using detailed simulations and show that it performs on par with the centralized optimum allocation. Moreover, our proposed scheme outperforms a static frequency reuse scheme and the centralized optimal partitioning between the macro and picos. In the second part of the dissertation, we propose a time domain solution to the interference problem. We consider the problem of maximizing the alpha-fairness utility over heterogeneous wireless networks (HetNets) by jointly optimizing user association, wherein each user is associated to any one transmission point (TP) in the network, and the activation fractions of all TPs. The activation fraction of a TP is the fraction of the frame duration for which it is active, and together these fractions influence the interference seen in the network. To address this joint optimization problem, which we show is NP-hard, we propose an alternating optimization based approach wherein the activation fractions and the user association are optimized in an alternating manner. The subproblem of determining the optimal activation fractions is solved using a provably convergent auxiliary function method. On the other hand, the subproblem of determining the user association is solved via a simple combinatorial algorithm. Meaningful performance guarantees are derived in each case. Simulation results over a practical HetNet topology reveal the superior performance of the proposed algorithms and underscore the significant benefits of the joint optimization. In the final part of the dissertation, we propose a space domain solution to the interference problem. We consider the problem of maximizing system utility by optimizing over the set of user and TP pairs in each subframe, where each user can be served by multiple TPs. To address this optimization problem, which is NP-hard, we propose a solution scheme based on a difference of submodular functions optimization approach. We evaluate our scheme using detailed simulations and show that it performs on par with a much more computationally demanding difference of convex functions optimization scheme. Moreover, the proposed scheme performs within a reasonable percentage of the optimal solution. We further demonstrate the advantage of the proposed scheme by studying its performance with variation in different network topology parameters.
Abstract:
Resource allocation decisions are made to serve the current emergency without knowing which future emergency will occur. Different ordered combinations of emergencies result in different performance outcomes. Even though future decisions can be anticipated with scenarios, previous models assume that events over a time interval are independent. This dissertation assumes instead that events are interdependent, because speed reduction and rubbernecking due to an initial incident provoke secondary incidents. The misconception that secondary incidents are uncommon has led to the look-ahead concept being overlooked. This dissertation pioneers the relaxation of the structural assumption of independence during the assignment of emergency vehicles. When an emergency is detected and a request arrives, an appropriate emergency vehicle is immediately dispatched. We provide tools for quantifying impacts based on the fundamentals of incident occurrence through identification, prediction, and interpretation of secondary incidents. The proposed online dispatching model minimizes the cost of moving the next emergency unit while keeping the response as close to optimal as possible. Using the look-ahead concept, the online model flexibly re-computes the solution, basing future decisions on present requests. We introduce various online dispatching strategies with visualizations of the algorithms, and provide insights into their differences in behavior and solution quality. The experimental evidence indicates that the algorithm works well in practice. After a designated request has been served, the available and/or remaining vehicles are relocated to a new base for the next emergency. System costs become excessive if the delay in dispatching decisions is ignored when relocating response units. This dissertation presents an integrated method that begins with a location phase to manage initial incidents and progresses through a dispatching phase to manage the stochastic occurrence of subsequent incidents. Previous studies used the frequency of independent incidents and ignored scenarios in which two incidents occurred within proximal regions and intervals. The proposed analytical model relaxes the structural assumptions of the Poisson process (independent increments) and incorporates the evolution of primary and secondary incident probabilities over time. The mathematical model overcomes several limiting assumptions of previous models, such as no waiting time, the rule of returning to the original depot, and fixed depots. The temporary locations made flexible by the look-ahead are compared with current practice, which locates units at depots based on Poisson theory. A linearization of the formulation is presented and an efficient heuristic algorithm is implemented to deal with a large-scale problem in real time.
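A minimal sketch of the look-ahead dispatching idea follows: the vehicle chosen for the current request is the one minimizing immediate response time plus an assumed penalty for leaving a risky zone uncovered. The travel times, risk values and penalty are hypothetical, and the rule is far simpler than the online model proposed in the dissertation.

```python
# Hedged one-step look-ahead dispatch rule over illustrative data.
vehicles = {"v1": 0, "v2": 1, "v3": 2}            # vehicle -> current zone
travel_time = [[0, 6, 9], [6, 0, 5], [9, 5, 0]]   # minutes between zones
secondary_risk = [0.4, 0.1, 0.2]                  # assumed follow-up incident risk per zone
PENALTY = 20                                      # assumed cost of leaving a risky zone uncovered

def dispatch(incident_zone, available):
    def cost(v):
        zone = vehicles[v]
        response = travel_time[zone][incident_zone]
        lookahead = secondary_risk[zone] * PENALTY  # penalize uncovering a risky zone
        return response + lookahead
    return min(available, key=cost)

print(dispatch(incident_zone=2, available=["v1", "v2", "v3"]))
```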
Abstract:
The optimal capacities and locations of a sequence of landfills are studied, and the interactions between these characteristics are considered. Deciding the capacity of a landfill has spatial implications, since it affects the feasible region for the remaining landfills, and temporal implications, because the capacity determines the lifetime of the landfill and hence the time at which the next landfills should be constructed. Some general mathematical properties of the solution are provided and interpreted from an economic point of view. The resulting problem turns out to be non-convex and, therefore, cannot be solved by conventional optimization techniques. Some global optimization methods are used to solve the problem in a particular case in order to illustrate how the solution depends on the parameter values.
Abstract:
Since their inception 4000 years ago, libraries have been in a process of constant change. Although change was slow for centuries, in recent decades academic libraries have been continuously striving to adapt their services to the ever-changing needs of students and academic staff. In addition, the e-content revolution, technological advances, and ever-shrinking budgets have obliged libraries to allocate their limited resources efficiently between collection and services. Unfortunately, this resource allocation is a complex process due to the diversity of data sources and formats that must be analyzed prior to decision-making, as well as the lack of efficient integration methods. The main purpose of this study is to develop an integrated model that supports libraries in making optimal budgeting and resource allocation decisions among their services and collection by means of a holistic analysis. To this end, a combination of several methodologies and structured approaches is conducted. Firstly, a holistic structure and the toolset required to holistically assess academic libraries are proposed to collect and organize the data from an economic point of view. A four-pronged theoretical framework is used in which the library system and collection are analyzed from the perspectives of users and internal stakeholders. The first quadrant corresponds to the internal perspective of the library system, that is, analyzing library performance and the costs incurred and resources consumed by library services. The second quadrant evaluates the external perspective of the library system; users' perception of service quality is judged in this quadrant. The third quadrant analyses the external perspective of the library collection, that is, evaluating the impact of the current library collection on its users. Finally, the fourth quadrant evaluates the internal perspective of the library collection; the usage patterns of the library collection are analyzed. With a complete framework for data collection in place, these data, coming from multiple sources and therefore in different formats, need to be integrated and stored in an adequate scheme for decision support. Secondly, a data warehousing approach is designed and implemented to integrate, process, and store the holistically collected data. Finally, the strategic data stored in the data warehouse are analyzed and used for different purposes, including the following: 1) Data visualization and reporting is proposed to allow library managers to publish library indicators in a simple and quick manner using online reporting tools. 2) Sophisticated data analysis is recommended through the use of data mining tools; three data mining techniques are examined in this research study: regression, clustering and classification. These techniques have been applied to the case study in the following ways: predicting the future investment in library development; finding clusters of users that share common interests and similar profiles but belong to different faculties; and predicting library factors that affect student academic performance by analyzing possible correlations between library usage and academic performance. 3) As input for optimization models, early experiences of developing an optimal resource allocation model to distribute resources among the different processes of a library system are documented in this study.
Specifically, the problem of allocating funds for the digital collection among the divisions of an academic library is addressed. An optimization model for the problem is defined with the objective of maximizing the usage of the digital collection across all library divisions subject to a single collection budget. By proposing this holistic approach, the research study contributes to knowledge by providing an integrated solution that assists library managers in making economic decisions based on an “as realistic as possible” perspective of the library situation.
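As a hedged sketch of such a budget-constrained allocation model, the linear program below maximizes expected usage of the digital collection across divisions subject to a single budget; the usage-per-euro coefficients and per-division bounds are illustrative assumptions, not the study's actual model or data.

```python
# Hedged budget-allocation sketch: maximize total expected usage subject to
# one collection budget, with assumed per-division funding bounds.
from scipy.optimize import linprog

usage_per_euro = [0.8, 1.3, 0.5, 1.0]     # assumed expected downloads per euro, per division
budget = 100_000.0
bounds = [(5_000, 60_000)] * 4            # assumed minimum and maximum funding per division

res = linprog(
    c=[-u for u in usage_per_euro],       # linprog minimizes, so negate to maximize usage
    A_ub=[[1.0, 1.0, 1.0, 1.0]],          # total allocation must not exceed the budget
    b_ub=[budget],
    bounds=bounds,
    method="highs",
)
print(res.x, -res.fun)
```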
Abstract:
The increasing need for computational power in areas such as weather simulation, genomics or Internet applications has led to the sharing of geographically distributed and heterogeneous resources from commercial data centers and scientific institutions. Research in the areas of utility, grid and cloud computing, together with improvements in network and hardware virtualization, has resulted in methods to locate and use resources to rapidly provision virtual environments in a flexible manner, while lowering costs for consumers and providers. However, there is still a lack of methodologies to enable efficient and seamless sharing of resources among institutions. In this work, we concentrate on the problem of executing parallel scientific applications across distributed resources belonging to separate organizations. Our approach can be divided into three main parts. First, we define and implement an interoperable grid protocol to distribute job workloads among partners with different middleware and execution resources. Second, we research and implement different policies for virtual resource provisioning and job-to-resource allocation, taking advantage of their cooperation to improve execution cost and performance. Third, we explore the consequences of on-demand provisioning and allocation for the problem of site selection for the execution of parallel workloads, and propose new strategies to reduce job slowdown and overall cost.
Abstract:
With hundreds of millions of users reporting locations and embracing mobile technologies, Location-Based Services (LBSs) are raising new challenges. In this dissertation, we address three emerging problems in location services, where geolocation data plays a central role. First, to handle the unprecedented growth of generated geolocation data, existing location services rely on geospatial database systems. However, their inability to leverage combined geographical and textual information in analytical queries (e.g. spatial similarity joins) remains an open problem. To address this, we introduce SpsJoin, a framework for computing spatial set-similarity joins. SpsJoin handles combined similarity queries that involve textual and spatial constraints simultaneously. LBSs use this system to tackle different types of problems, such as deduplication, geolocation enhancement and record linkage. We define the spatial set-similarity join problem in a general case and propose an algorithm for its efficient computation. Our solution utilizes parallel computing with MapReduce to handle scalability issues in large geospatial databases. Second, applications that use geolocation data are seldom concerned with ensuring the privacy of participating users. To motivate participation and address privacy concerns, we propose iSafe, a privacy-preserving algorithm for computing safety snapshots of co-located mobile devices as well as geosocial network users. iSafe combines geolocation data extracted from crime datasets and geosocial networks such as Yelp. In order to enhance iSafe's ability to compute safety recommendations, even when crime information is incomplete or sparse, we need to identify relationships between Yelp venues and crime indices at their locations. To achieve this, we use SpsJoin on two datasets (Yelp venues and geolocated businesses) to find venues that have not been reviewed and to further compute the crime indices of their locations. Our results show a statistically significant dependence between location crime indices and Yelp features. Third, review-centered LBSs (e.g., Yelp) are increasingly becoming targets of malicious campaigns that aim to bias the public image of represented businesses. Although Yelp actively attempts to detect and filter fraudulent reviews, our experiments showed that Yelp is still vulnerable. Fraudulent LBS information also impacts the ability of iSafe to provide correct safety values. We take steps toward addressing this problem by proposing SpiDeR, an algorithm that takes advantage of the richness of information available in Yelp to detect abnormal review patterns. We propose a fake venue detection solution that applies SpsJoin on Yelp and U.S. housing datasets. We validate the proposed solutions using ground truth data extracted by our experiments and reviews filtered by Yelp.
Abstract:
Many important problems in communication networks, transportation networks, and logistics networks are solved by the minimization of cost functions. In general, these can be complex optimization problems involving many variables. However, physicists noted that in a network, a node variable (such as the amount of resources of the node) is connected to a set of link variables (such as the flows connected to the node), and similarly each link variable is connected to a number of (usually two) node variables. This enables one to break the problem into local components, often arriving at distributed algorithms to solve the problems. Compared with centralized algorithms, distributed algorithms have the advantages of lower computational complexity and lower communication overhead. Since they respond faster to local changes in the environment, they are especially useful for networks with evolving conditions. This review will cover message-passing algorithms in applications such as resource allocation, transportation networks, facility location, traffic routing, and the stability of power grids.
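A small routing example shows the message-passing flavour: each node keeps only a local cost-to-destination estimate and updates it from its neighbours' messages (a distributed Bellman-Ford, or min-sum, update). The toy graph is an illustrative assumption.

```python
# Distributed min-sum / Bellman-Ford sketch on an illustrative graph: every
# node repeatedly updates its cost-to-destination from neighbour messages.
graph = {  # node -> {neighbour: link cost}
    "A": {"B": 2, "C": 5},
    "B": {"A": 2, "C": 1, "D": 4},
    "C": {"A": 5, "B": 1, "D": 1},
    "D": {"B": 4, "C": 1},
}
destination = "D"

cost = {n: (0.0 if n == destination else float("inf")) for n in graph}
for _ in range(len(graph) - 1):          # enough rounds for the estimates to converge
    new_cost = {}
    for node in graph:
        if node == destination:
            new_cost[node] = 0.0
            continue
        # min-sum update using only the neighbours' current messages
        new_cost[node] = min(w + cost[nb] for nb, w in graph[node].items())
    cost = new_cost

print(cost)   # cost-to-go from each node to the destination
```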
Abstract:
Non-orthogonal multiple access (NOMA) is emerging as a promising multiple access technology for fifth-generation cellular networks to address the fast-growing mobile data traffic. It applies superposition coding at the transmitters, allowing simultaneous allocation of the same frequency resource to multiple intra-cell users. Successive interference cancellation is used at the receivers to cancel intra-cell interference. User pairing and power allocation (UPPA) is a key design aspect of NOMA. Existing UPPA algorithms are mainly based on exhaustive search methods with high computational complexity, which can severely affect NOMA performance. A fast proportional fairness (PF) scheduling based UPPA algorithm is proposed to address the problem. The novel idea is to form user pairs around the users with the highest PF metrics, using a pre-configured fixed power allocation. System-level simulation results show that the proposed algorithm is significantly faster than the existing exhaustive search algorithm (seven times faster for the scenario with 20 users), with negligible throughput loss.
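A hedged sketch of the pairing idea follows: users are ranked by their PF metric, each high-PF user is paired with the unpaired user whose channel gain differs most (a common NOMA pairing heuristic), and a pre-configured fixed power split is attached to every pair. The metrics, gains and the 0.2/0.8 split are assumptions, not the exact rules of the proposed algorithm.

```python
# Hedged PF-metric-driven user pairing sketch with a fixed power split.
users = {   # user -> (PF metric, channel gain), illustrative values
    "u1": (3.1, 0.9), "u2": (2.4, 0.2), "u3": (1.7, 0.7), "u4": (0.9, 0.1),
}
POWER_SPLIT = (0.2, 0.8)  # assumed fixed fractions for the strong and weak user of a pair

def pair_users(users):
    unpaired = sorted(users, key=lambda u: users[u][0], reverse=True)  # by PF metric
    pairs = []
    while len(unpaired) >= 2:
        anchor = unpaired.pop(0)  # highest-PF user still unpaired
        # pick the partner with the largest channel-gain difference
        partner = max(unpaired, key=lambda u: abs(users[u][1] - users[anchor][1]))
        unpaired.remove(partner)
        pairs.append((anchor, partner, POWER_SPLIT))
    return pairs

print(pair_users(users))
```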
Abstract:
Nowadays, reinforcement learning has proven to be very effective within machine learning in a variety of fields, such as games, speech recognition and many others. We therefore decided to apply reinforcement learning to allocation problems, since they are a research area not yet studied with this technique and because their formulation encompasses a broad set of sub-problems with similar characteristics, so that a solution for one of them extends to each of these sub-problems. In this project we built an application called Service Broker which, through reinforcement learning, learns how to distribute the execution of tasks over asynchronous, distributed workers. The analogy is that of a cloud data center, which owns internal resources (possibly distributed across the server farm), receives tasks from its clients and executes them on these resources. The goal of the application, and hence of the data center, is to allocate these tasks so as to minimize the execution cost. Furthermore, in order to test the reinforcement learning agents developed, an environment (a simulator) was created, making it possible to focus on developing the components needed by the agents rather than also having to handle implementation aspects required in a real data center, such as communication with the various nodes and its latency. The results obtained confirmed the theory studied, achieving better performance than some of the classical methods for task allocation.
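A minimal sketch of the kind of learning agent described above: tabular Q-learning with an epsilon-greedy policy that learns to route tasks to the cheaper of two workers. The cost model and the single-state formulation are illustrative assumptions, far simpler than the Service Broker application itself.

```python
# Hedged tabular Q-learning sketch for task-to-worker allocation.
import random

workers = ["w1", "w2"]
true_cost = {"w1": 1.0, "w2": 3.0}        # assumed average execution cost per task
q = {w: 0.0 for w in workers}             # one state, one Q-value (expected cost) per worker
alpha, epsilon = 0.1, 0.1

for _ in range(2000):
    # epsilon-greedy choice of worker for the next task
    if random.random() < epsilon:
        w = random.choice(workers)
    else:
        w = min(q, key=q.get)             # lower Q = lower expected cost
    cost = true_cost[w] + random.gauss(0, 0.2)
    q[w] += alpha * (cost - q[w])         # running estimate of the worker's cost

print(q, "prefer:", min(q, key=q.get))
```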
Abstract:
Over the past years, the popularity of ray tracing (RT) models has been increasing. Since the nineties, RT has been used for field prediction in environments such as indoor and urban scenarios. Nevertheless, with the advent of new technologies, the channel has become decidedly more dynamic, and performing RT simulations at each discrete time instant has become computationally expensive. In this thesis, a new dynamic ray tracing (DRT) approach is presented in which, starting from a single ray tracing simulation at an initial time t0, analytical formulas are used to track the motion of the interaction points. The benefit of this approach is that Doppler frequencies and channel predictions can be derived at every time instant without resorting to multiple RT runs, thereby shortening the computation time. DRT performance was evaluated in two case studies, and the results show the accuracy and the computational gain of the approach. Another issue addressed in this thesis is the exhaustion of some licensed frequency bands. To deal with this problem, a novel unselfish spectrum leasing scheme in cognitive radio networks (CRNs) is proposed that offers an energy-efficient solution minimizing the environmental impact of the network. In addition, a network management architecture is introduced and resource allocation is formulated as a constrained sum energy efficiency maximization problem. System simulations demonstrate an increase in the energy efficiency of the primary users’ network compared with previously proposed algorithms.
Biased Random-Key Genetic Algorithms for the Winner Determination Problem in Combinatorial Auctions.
Abstract:
In this paper, we address the problem of picking a subset of bids in a general combinatorial auction so as to maximize the overall profit using the first-price model. This winner determination problem assumes that a single bidding round is held to determine both the winners and the prices to be paid. We introduce six variants of biased random-key genetic algorithms for this problem. Three of them use a novel initialization technique that takes solutions of intermediate linear programming relaxations of an exact mixed integer linear programming model as initial chromosomes of the population. An experimental evaluation compares the effectiveness of the proposed algorithms with the standard mixed integer linear programming formulation, a specialized exact algorithm, and the best-performing heuristics proposed for this problem. The proposed algorithms are competitive and offer strong results, mainly for large-scale auctions.
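As a hedged sketch of how a random-key decoder for this problem can look, the snippet below maps a chromosome of keys to a feasible bid selection: bids are considered in key order and accepted only if none of their items has already been sold, and the fitness is the revenue collected. The bid data and this greedy decoder are illustrative, not one of the six variants proposed in the paper.

```python
# Hedged random-key decoder sketch for the winner determination problem.
import random

bids = [  # (price, set of items requested), illustrative data
    (10, {"a", "b"}),
    (7,  {"b", "c"}),
    (6,  {"c"}),
    (5,  {"a"}),
]

def decode(keys):
    # consider bids in the order induced by their random keys
    order = sorted(range(len(bids)), key=lambda i: keys[i])
    sold, revenue = set(), 0
    for i in order:
        price, items = bids[i]
        if items.isdisjoint(sold):   # accept only non-conflicting bids
            sold |= items
            revenue += price
    return revenue

# evaluate a small random population of chromosomes (vectors of random keys)
population = [[random.random() for _ in bids] for _ in range(20)]
best = max(population, key=decode)
print(decode(best))
```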