961 results for Set covering theory
Abstract:
Measuring Job Openings: Evidence from Swedish Plant Level Data. In modern macroeconomic models, "job openings" are a key component. Thus, when taking these models to the data, we need an empirical counterpart to the theoretical concept of job openings. To achieve this, the literature relies on job vacancies measured either in survey or register data. Insofar as measured vacancies capture the concept of job openings well, we should see a tight relationship between vacancies and subsequent hires at the micro level. To investigate this, I analyze a new data set of Swedish hires and job vacancies at the plant level covering the period 2001-2012. I find that vacancies have little power in predicting hires over and above (i) whether the number of vacancies is positive and (ii) plant size. Building on this, I propose an alternative measure of job openings in the economy. This measure (i) better predicts hiring at the plant level and (ii) yields a better-fitting aggregate matching function relative to the traditional vacancy measure.

Firm Level Evidence from Two Vacancy Measures. Using firm-level survey and register data for both Sweden and Denmark, we show systematic mis-measurement in both vacancy measures. While the register-based measure constitutes, in the aggregate, a quarter of the survey-based measure, the latter is not a superset of the former. To obtain the full set of unique vacancies in these two databases, the number of survey vacancies should be multiplied by approximately 1.2. Importantly, this adjustment factor varies over time and across firm characteristics. Our findings have implications for both the search-matching literature and policy analysis based on vacancy measures: observed changes in vacancies can be an outcome of changes in mis-measurement, and are not necessarily changes in the actual number of vacancies.

Swedish Unemployment Dynamics. We study the contribution of different labor market flows to business cycle variations in unemployment in the context of a dual labor market. To this end, we develop a decomposition method that allows for a distinction between permanent and temporary employment. We also allow for slow convergence to steady state, which is characteristic of European labor markets. We apply the method to a new Swedish data set covering the period 1987-2012 and show that the relative contributions of inflows and outflows to/from unemployment are roughly 60/30. The remaining 10% is due to flows not involving unemployment. Even though temporary contracts only cover 9-11% of the working-age population, variations in flows involving temporary contracts account for 44% of the variation in unemployment. We also show that the importance of flows involving temporary contracts is likely to be understated if one does not account for non-steady-state dynamics.

The New Keynesian Transmission Mechanism: A Heterogeneous-Agent Perspective. We argue that a two-agent version of the standard New Keynesian model, where a "worker" receives only labor income and a "capitalist" only profit income, offers insights about how income inequality affects the monetary transmission mechanism. Under rigid prices, monetary policy affects the distribution of consumption, but it has no effect on output, as workers choose not to change their hours worked in response to wage movements. In the corresponding representative-agent model, in contrast, hours do rise after a monetary policy loosening due to a wealth effect on labor supply: profits fall, thus reducing the representative worker's income. If wages are rigid too, however, the monetary transmission mechanism is active and resembles that in the corresponding representative-agent model. Here, workers are not on their labor supply curve and hence respond passively to demand, and profits are procyclical.
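As a point of reference for the first essay: the abstract does not spell out the aggregate matching function against which the alternative job-openings measure is evaluated, so the following is the standard Cobb-Douglas specification, with the proposed measure taking the place of the vacancy stock v_t:

\[
m_t = \mu\, u_t^{\alpha}\, v_t^{1-\alpha}, \qquad 0 < \alpha < 1,
\]

where m_t denotes hires (matches), u_t unemployment, v_t the job-openings measure, and \mu matching efficiency; the empirical fit is then compared across the two measures of v_t.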
Abstract:
This article examines variations in local input linkages in foreign transnational corporations in Malaysia. The extent to which transnational corporations foster such linkages, particularly in a developing host economy, has become an important issue for policy makers and others concerned with the long-term benefits associated with foreign direct investment. This article employs a unique data set covering inward investors in the electrical and electronics industry and analyzes in detail the determinants of variations in local input use. The article develops a model of local input linkages based on a transaction-cost framework, using firm-specific factors such as nationality of ownership, the age of the plant and its technology, and the extent to which firms employ locally recruited managers and engineers. In addition, the impacts of various policy measures on local input levels are discussed, as is the importance of the original motivation for investing in Malaysia. The article demonstrates that policy initiatives that target particular outcomes, such as stimulating exports or technology transfer, will have a greater beneficial impact on the host country economy than more generic subsidies.
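The abstract does not report the estimating equation; a generic specification consistent with the determinants listed (all variable names here are illustrative, not the article's) would be:

\[
\mathit{LocalInput}_i = \beta_0 + \beta_1\,\mathit{Nationality}_i + \beta_2\,\mathit{PlantAge}_i + \beta_3\,\mathit{Technology}_i + \beta_4\,\mathit{LocalManagers}_i + \beta_5\,\mathit{Policy}_i + \varepsilon_i.
\]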
Abstract:
However much a bank (or company, or insurer) would like to concentrate only on business, it cannot avoid financial (credit, market, operational and other) risks, which need to be measured and covered. Full hedging is either very expensive or not even possible, so every economic unit has to hold some risk-free, liquid capital to avoid insolvency. What is needed is coherent risk measurement: the capital allocated has to reflect the risks, but even if the risks are measured well, an allocation problem still arises. Thanks to diversification effects, the total risk of a portfolio is generally smaller than the sum of the risks of its sub-portfolios. Coherent capital allocation entails addressing the question of how much capital to assign to each sub-portfolio, that is, how to distribute the benefits of diversification 'correctly'. This yields the contribution of the assets to the risk. Using game theory and examples based on compound options, the study demonstrates the coherent measurement and allocation of risks, draws attention to the dangers of inconsistencies, and examines how far the risk measurement methods applied in practice (notably value at risk, VaR) meet the requirements set by theory.
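To illustrate the game-theoretic allocation idea, the sketch below allocates the expected shortfall (a coherent risk measure) of a pooled portfolio to its sub-portfolios via Shapley values. The loss data, the confidence level and the three sub-portfolios are invented stand-ins, not taken from the study.

```python
import itertools
import math

import numpy as np

rng = np.random.default_rng(0)

# Simulated loss vectors for three sub-portfolios (illustrative stand-ins).
losses = {
    "A": rng.normal(0.0, 1.0, 100_000),
    "B": rng.normal(0.0, 1.5, 100_000),
    "C": rng.standard_t(4, 100_000),
}

def expected_shortfall(loss, alpha=0.99):
    """Coherent risk measure: mean loss beyond the alpha-quantile (ES/CVaR)."""
    var = np.quantile(loss, alpha)
    return loss[loss >= var].mean()

def coalition_risk(coalition):
    """Risk of the pooled losses of a coalition of sub-portfolios."""
    if not coalition:
        return 0.0
    return expected_shortfall(sum(losses[name] for name in coalition))

players = list(losses)
shapley = {p: 0.0 for p in players}

# Shapley value: average marginal risk contribution over all arrival orders.
for order in itertools.permutations(players):
    so_far = []
    for p in order:
        before = coalition_risk(so_far)
        so_far.append(p)
        shapley[p] += (coalition_risk(so_far) - before) / math.factorial(len(players))

print("Total ES of the pooled portfolio:", round(coalition_risk(players), 4))
for p in players:
    print(f"Capital allocated to {p}: {shapley[p]:.4f}")  # allocations sum to the total ES
```

Because expected shortfall is subadditive, each sub-portfolio's allocation is typically below its stand-alone risk, which is exactly the diversification benefit the allocation rule distributes.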
Abstract:
We present new Holocene century- to millennial-scale proxies for the well-dated piston core MD99-2269 from Húnaflóadjúp on the North Iceland Shelf. The core is located in 365 m water depth and lies close to the fluctuating boundary between Atlantic and Arctic/Polar waters. The proxies are alkenone-based and Mg/Ca-based SST (°C) estimates and stable δ13C and δ18O values on planktonic and benthic foraminifera. The data were converted to 60-yr equi-spaced time series. Significant trends in the data were extracted using Singular Spectrum Analysis, and these accounted for between 50% and 70% of the variance. A comparison between these data and previously published climate proxies from MD99-2269 was carried out on a 14-variable data set covering the interval 400-9200 cal yr BP at 100-yr time steps. This analysis indicated that the first two PC axes accounted for 57% of the variability, with high loadings clustering primarily into "nutrient" and "temperature" proxies. Clustering on the 100-yr time series indicated major changes in environment at ~6350 and ~3450 cal yr BP, which define early, mid- and late Holocene climatic intervals. We argue that a pervasive freshwater cap during the early Holocene resulted in warm SSTs, a stratified water column, and a depleted nutrient supply. The loss of the freshwater layer in the mid-Holocene resulted in high carbonate production, and the late Holocene/neoglacial interval was marked by significantly more variable sea surface conditions.
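For readers unfamiliar with the multivariate step, the sketch below shows how the variance explained by the first two principal component axes is obtained from a standardized proxy matrix via the singular value decomposition. The matrix here is random stand-in data with the stated dimensions (89 time steps of 100 yr by 14 proxies), not the MD99-2269 measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in proxy matrix: 89 time steps (400-9200 cal yr BP at 100 yr spacing)
# by 14 proxy variables; a real analysis would load the measured series here.
ages = np.arange(400, 9300, 100)
X = rng.normal(size=(ages.size, 14))

# Standardize each proxy, then perform PCA via the singular value decomposition.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)

explained = s**2 / np.sum(s**2)   # variance fraction carried by each PC axis
scores = U * s                    # PC time series for the 89 time steps
loadings = Vt                     # how strongly each proxy loads on each axis

print("Variance explained by PC1 + PC2:", round(float(explained[:2].sum()), 3))
```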
Abstract:
Acknowledgements: The authors acknowledge the support of the National Basic Research Program of China (973 Project, No. 2015CB057405), the National Natural Science Foundation of China (No. 11372082) and the State Scholarship Fund of the CSC. DW thanks the University of Aberdeen for its hospitality.
Abstract:
This paper compares two linear programming (LP) models for shift scheduling in services where homogeneously skilled employees are available at limited times. Although both models are based on set covering approaches, one explicitly matches employees to shifts, while the other imposes this matching implicitly. Each model is used in three forms (one with complete meal break placement flexibility, another with very limited meal break placement flexibility, and a third without meal breaks) to provide initial schedules to a completion/improvement heuristic. The term completion/improvement heuristic is used to describe a construction/improvement heuristic operating on a starting schedule. On 80 test problems varying widely in scheduling flexibility, employee staffing requirements, and employee availability characteristics, all six LP-based procedures generated lower cost schedules than a comparison from-scratch construction/improvement heuristic. This heuristic, which perpetually maintains an explicit matching of employees to shifts, consists of three phases that add, drop, and modify shifts. In terms of schedule cost, schedule generation time, and model size, the procedures based on the implicit model performed better, as a group, than those based on the explicit model. The LP model with complete break placement flexibility and implicit matching of employees to shifts generated schedules costing 6.7% less than those developed by the from-scratch heuristic.
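As a minimal illustration of the set-covering-type LP underlying both models, the sketch below solves the relaxation of a tiny shift scheduling instance with SciPy: choose how many employees to assign to each candidate shift so that every period's staffing requirement is covered at minimum cost. The periods, shifts, costs and requirements are invented, and the paper's additional features (meal break placement, limited availability, implicit versus explicit matching) are not modelled.

```python
import numpy as np
from scipy.optimize import linprog

# Staffing requirement in each planning period (e.g. consecutive hours).
requirements = np.array([2, 3, 4, 3, 2])

# Candidate shifts: rows = shifts, columns = periods covered (1 = covered).
shifts = np.array([
    [1, 1, 1, 0, 0],   # early shift
    [0, 1, 1, 1, 0],   # mid shift
    [0, 0, 1, 1, 1],   # late shift
    [1, 1, 1, 1, 1],   # full-day shift
])
cost = np.array([3.0, 3.0, 3.0, 5.0])  # cost per employee assigned to each shift

# Covering constraints shifts.T @ x >= requirements; linprog expects "<=",
# so both sides are negated.  x_j = number of employees on shift j.
res = linprog(c=cost,
              A_ub=-shifts.T, b_ub=-requirements,
              bounds=[(0, None)] * len(cost),
              method="highs")

print("LP relaxation cost:", res.fun)
print("Employees per shift (fractional):", res.x)
```

An integer-feasible schedule would then be obtained by rounding or by a completion/improvement heuristic of the kind the paper evaluates.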
Abstract:
The quality of a heuristic solution to an NP-hard combinatorial problem is hard to assess. A few studies have advocated and tested statistical bounds as a method for assessment. These studies indicate that statistical bounds are superior to the more widely known and used deterministic bounds. However, the previous studies have been limited to a few metaheuristics and combinatorial problems, and hence the general performance of statistical bounds in combinatorial optimization remains an open question. This work complements the existing literature on statistical bounds by testing them on the metaheuristic Greedy Randomized Adaptive Search Procedures (GRASP) and four combinatorial problems. Our findings confirm previous results that statistical bounds are reliable for the p-median problem, and we note that they also seem reliable for the set covering problem. For the quadratic assignment problem, statistical bounds have previously been found reliable when obtained from the genetic algorithm, whereas in this work they are found to be less reliable. Finally, we provide statistical bounds for four 2-path network design problem instances for which the optimum is currently unknown.
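One common way to construct such a statistical bound (in the spirit of Golden and Alt's extreme-value argument; not necessarily the exact estimator used in this work) is to fit a three-parameter Weibull distribution to the best objective values from independent restarts and use the estimated location parameter as a point estimate of the unknown optimum. The run minima below are simulated stand-ins rather than real GRASP output.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(2)

# Stand-in for the best objective values from R independent GRASP restarts on
# a minimization instance (e.g. set covering); the "true" optimum is 1000 here.
R = 50
run_minima = 1000 + 40 * rng.weibull(2.0, R)

# Fit a three-parameter Weibull to the sample of restart minima; its location
# parameter estimates the left endpoint, i.e. the best value the heuristic
# could approach, and serves as the statistical bound.
shape, loc, scale = weibull_min.fit(run_minima)

print("Best value found by the restarts:", round(run_minima.min(), 2))
print("Statistical bound (Weibull location):", round(loc, 2))
```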
Abstract:
The continuous advancement in computing, together with the decline in its cost, has resulted in technology becoming ubiquitous (Arbaugh, 2008; Gros, 2007). Technology is growing and is part of our lives in almost every respect, including the way we learn. Technology helps to collapse time and space in learning. For example, technology allows learners to engage with their instructors synchronously, in real time, and also asynchronously, by enabling sessions to be recorded. Space and distance are no longer an issue provided there is adequate bandwidth, which determines the most appropriate format, such as text, audio or video. Technology has revolutionised, and continues to revolutionise, the way learners learn, the way courses are designed, and the way 'lessons' are delivered. The learning process can be made vastly more efficient as learners have knowledge at their fingertips, and unfamiliar concepts can easily be searched and an explanation found in seconds. Technology has also enabled learning to be more flexible, as learners can learn anywhere, at any time, and using different formats, e.g. text or audio. From the perspective of instructors and L&D providers, technology offers these same advantages, plus easy scalability. Administratively, preparatory work can be undertaken more quickly even whilst student numbers grow. Learners from far and new locations can be easily accommodated. In addition, many technologies can be easily scaled to accommodate new functionality and/or other new technologies. 'Designing and Developing Digital and Blended Learning Solutions' (5DBS) has been developed to recognise the growing importance of technology in L&D. This unit contains four learning outcomes, each with two assessment criteria (as in all other units), except Learning Outcome 3, which has three assessment criteria. The four learning outcomes in this unit are:
• Learning Outcome 1: Understand current digital technologies and their contribution to learning and development solutions;
• Learning Outcome 2: Be able to design blended learning solutions that make appropriate use of new technologies alongside more traditional approaches;
• Learning Outcome 3: Know about the processes involved in designing and developing digital learning content efficiently and what makes for engaging and effective digital learning content;
• Learning Outcome 4: Understand the issues involved in the successful implementation of digital and blended learning solutions.
Each learning outcome is an individual chapter, and each assessment criterion is allocated its own sections within the respective chapters. This first chapter addresses the first learning outcome, which has two assessment criteria: summarise the range of currently available learning technologies; and critically assess a learning requirement to determine the contribution that could be made through the use of learning technologies. The introduction to chapter one is in Section 1.0. Chapter 2 discusses the design of blended learning solutions, considering how digital learning technologies may support face-to-face and online delivery. Three sets of learning theory (behaviourism, cognitivism and constructivism) are introduced, and the implications of each for instructional design in blended learning are discussed. Chapter 3 centres on how relevant digital learning content may be created, and includes a review of the key roles, tools and processes involved in developing digital learning content. Finally, Chapter 4 concerns the delivery and implementation of digital and blended learning solutions. This chapter surveys the key formats and models used to inform the configuration of virtual learning environment (VLE) software platforms. In addition, various software technologies that may be important in creating a VLE ecosystem that helps to enhance the learning experience are outlined. We introduce the notion of the personal learning environment (PLE), which has emerged from the democratisation of learning. We also review the roles, tools, standards and processes that L&D practitioners need to consider in the delivery and implementation of a digital and blended learning solution.
Abstract:
A Bayesian optimisation algorithm for a nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. When a human scheduler works, he normally builds a schedule systematically, following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not yet complete, and thus has the ability to finish a schedule by using flexible, rather than fixed, rules. In this paper, we design a more human-like scheduling algorithm by using a Bayesian optimisation algorithm to implement explicit learning from past solutions. A nurse scheduling problem from a UK hospital is used for testing. Unlike our previous work, which used Genetic Algorithms to implement implicit learning [1], the learning in the proposed algorithm is explicit, i.e. we identify and mix building blocks directly. The Bayesian optimisation algorithm implements such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance of each variable is generated using the corresponding conditional probabilities, until all variables have been generated, i.e., in our case, new rule strings have been obtained. Sets of rule strings are generated in this way, some of which replace previous strings based on fitness. If the stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. For clarity, consider the following toy example of scheduling five nurses with two rules (1: random allocation, 2: allocate nurse to low-cost shifts). At the beginning of the search, the probabilities of choosing rule 1 or 2 for each nurse are equal, i.e. 50%. After a few iterations, due to selection pressure and reinforcement learning, we observe two solution pathways: because purely low-cost or purely random allocation produces low-quality solutions, either rule 1 is used for the first 2-3 nurses and rule 2 for the remainder, or vice versa. In essence, the Bayesian network learns 'use rule 2 after using rule 1 two or three times', or vice versa. It should be noted that for our and most other scheduling problems, the structure of the network model is known and all variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, so learning can amount to 'counting' in the case of multinomial distributions. For our problem, we use four rules: Random, Cheapest Cost, Best Cover, and Balance of Cost and Cover. In more detail, the steps of our Bayesian optimisation algorithm for nurse scheduling are:
1. Set t = 0, and generate an initial population P(0) at random;
2. Use roulette-wheel selection to choose a set of promising rule strings S(t) from P(t);
3. Compute the conditional probabilities of each node according to this set of promising solutions;
4. Assign each nurse using roulette-wheel selection based on the rules' conditional probabilities; a set of new rule strings O(t) is generated in this way;
5. Create a new population P(t+1) by replacing some rule strings in P(t) with O(t), and set t = t+1;
6. If the termination conditions are not met (we use 2000 generations), go to step 2.
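The sketch below mirrors steps 1-6 in simplified form: it evolves rule strings (one of the four rules per nurse) by counting rule frequencies in roulette-selected promising strings and sampling new strings from those probabilities. It uses independent per-nurse probabilities rather than the paper's full Bayesian network, and the fitness function and problem sizes are invented placeholders rather than the real scheduling cost.

```python
import numpy as np

rng = np.random.default_rng(3)

N_NURSES, N_RULES = 5, 4          # rules: 0 Random, 1 Cheapest Cost, 2 Best Cover, 3 Balance
POP, PROMISING, GENERATIONS = 100, 30, 200

def schedule_cost(rule_string):
    """Placeholder fitness: a real version would build the schedule produced
    by each nurse's rule and price its cover and cost violations."""
    return sum((r != 1) if i >= 2 else (r != 2) for i, r in enumerate(rule_string))

def roulette(weights, size):
    p = weights / weights.sum()
    return rng.choice(len(weights), size=size, p=p)

# Step 1: random initial population of rule strings.
population = rng.integers(0, N_RULES, size=(POP, N_NURSES))

for t in range(GENERATIONS):
    costs = np.array([schedule_cost(s) for s in population])
    # Step 2: roulette-wheel selection of promising strings (lower cost, higher weight).
    promising = population[roulette(costs.max() - costs + 1e-9, PROMISING)]
    # Step 3: "counting" - per-nurse probabilities of each rule in the promising set.
    probs = np.array([[np.mean(promising[:, i] == r) for r in range(N_RULES)]
                      for i in range(N_NURSES)]) + 1e-6
    probs /= probs.sum(axis=1, keepdims=True)
    # Step 4: sample new rule strings O(t) nurse by nurse from those probabilities.
    offspring = np.array([[rng.choice(N_RULES, p=probs[i]) for i in range(N_NURSES)]
                          for _ in range(POP)])
    # Step 5: form P(t+1); full replacement here, partial replacement in the paper.
    population = offspring
    # Step 6: loop until the generation limit is reached.

best = min(population, key=schedule_cost)
print("Best rule string found:", best, "cost:", schedule_cost(best))
```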
Computational results from 52 real data instances demonstrate the success of this approach. They also suggest that the learning mechanism in the proposed approach might be suitable for other scheduling problems. Another direction for further research is to see whether there is a good constructing sequence for individual data instances, given a fixed nurse scheduling order. If so, the good patterns could be recognized and then extracted as new domain knowledge. By using this extracted knowledge, we could assign specific rules to the corresponding nurses beforehand and only schedule the remaining nurses with all available rules, making it possible to reduce the solution space. Acknowledgements: The work was funded by the UK Government's major funding agency, the Engineering and Physical Sciences Research Council (EPSRC), under grant GR/R92899/01. References: [1] Aickelin U, "An Indirect Genetic Algorithm for Set Covering Problems", Journal of the Operational Research Society, 53(10): 1118-1126, 2002.
Abstract:
We present a solution to the problem of defining a counterpart in Algebraic Set Theory of the construction of internal sheaves in Topos Theory. Our approach is general in that we consider sheaves as determined by Lawvere-Tierney coverages, rather than by Grothendieck coverages, and assume only a weakening of the axioms for small maps originally introduced by Joyal and Moerdijk, thus subsuming the existing topos-theoretic results.
Abstract:
Descriptive set theory is mainly concerned with studying subsets of the space of all countable binary sequences. In this paper we study the generalization where countable is replaced by uncountable. We explore properties of generalized Baire and Cantor spaces, equivalence relations and their Borel reducibility. The study shows that descriptive set theory looks very different in this generalized setting compared to the classical, countable case. We also draw the connection between the stability-theoretic complexity of first-order theories and the descriptive-set-theoretic complexity of their isomorphism relations. Our results suggest that Borel reducibility on uncountable structures is a model-theoretically natural way to compare the complexity of isomorphism relations.
Abstract:
Some considerations on the application of fuzzy set theory to quantum chemistry are briefly discussed. It is shown here that many chemical concepts associated with the theory are well suited to being connected with the structure of fuzzy sets. It is also explained how some theoretical descriptions of quantum observables are strengthened by treating them with the tools associated with fuzzy sets. The density function is taken as an example of the use of possibility distributions alongside quantum probability distributions.