869 results for Multi-Criteria Decision Aid (MCDA)
Abstract:
Intelligent agents offer a new and exciting way of understanding the world of work. Agent-Based Simulation (ABS), one way of using intelligent agents, carries great potential for progressing our understanding of management practices and how they link to retail performance. We have developed simulation models based on research by a multi-disciplinary team of economists, work psychologists and computer scientists. We will discuss our experiences of implementing these concepts while working with a well-known retail department store. There is no doubt that management practices are linked to the performance of an organisation (Reynolds et al., 2005; Wall & Wood, 2005). Best practices have been developed, but when it comes to the actual application of these guidelines, considerable ambiguity remains regarding their effectiveness within particular contexts (Siebers et al., forthcoming a). Most Operational Research (OR) methods can only be used as analysis tools once management practices have been implemented. They are often not very useful for answering speculative ‘what-if’ questions, particularly when one is interested in the development of the system over time rather than just the state of the system at a certain point in time. Simulation can be used to analyse the operation of dynamic and stochastic systems. ABS is particularly useful when complex interactions between system entities exist, such as autonomous decision-making or negotiation. In an ABS model the researcher explicitly describes the decision process of simulated actors at the micro level. Structures emerge at the macro level as a result of the actions of the agents and their interactions with other agents and the environment. We will show how ABS experiments can deal with testing and optimising management practices such as training, empowerment or teamwork. Hence, questions such as “will staff setting their own break times improve performance?” can be investigated.
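As a purely illustrative sketch of this kind of what-if experiment (all names, rates and the break-time rule below are assumptions for illustration, not the authors' simulator), a toy model in Python might compare fixed breaks with staff-chosen, staggered breaks and measure the resulting service level:

    import random

    def simulate_day(empowered, n_staff=5, hours=8, arrivals_per_hour=20, seed=0):
        """Return the fraction of customers served in one toy trading day."""
        rng = random.Random(seed)
        # Empowered staff stagger self-chosen break hours; otherwise everyone
        # takes the fixed mid-shift break in the same hour.
        breaks = ([rng.randrange(hours) for _ in range(n_staff)]
                  if empowered else [hours // 2] * n_staff)
        served = demand_total = 0
        for hour in range(hours):
            on_duty = sum(1 for b in breaks if b != hour)
            capacity = on_duty * 5                      # ~5 customers per staff-hour (assumed)
            demand = rng.randint(arrivals_per_hour - 5, arrivals_per_hour + 5)
            served += min(capacity, demand)
            demand_total += demand
        return served / demand_total

    if __name__ == "__main__":
        for policy in (False, True):
            mean = sum(simulate_day(policy, seed=s) for s in range(100)) / 100
            print(f"staff-chosen breaks={policy}: mean service level {mean:.2%}")

Running many seeded replications and comparing the two policies mirrors, at toy scale, the kind of what-if experiment the full ABS model is designed to support.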
Abstract:
Forested areas within cities host a large number of species, responsible for many ecosystem services in urban areas. The biodiversity in these areas is influenced by human disturbances such as atmospheric pollution and the urban heat island effect. To ameliorate the effects of these factors, an increase in urban green areas is often considered sufficient. However, this approach assumes that all types of green cover have the same importance for species. Our aim was to show that not all forested green areas are equally important for species, and that a multi-taxa and functional diversity approach makes it possible to value green infrastructure in urban environments. After evaluating the diversity of lichens, butterflies and other arthropods, birds and mammals in 31 Mediterranean urban forests in south-west Europe (Almada, Portugal), bird and lichen functional groups responsive to urbanization were found. A community shift (tolerant species replacing sensitive ones) along the urbanization gradient was found, and this must be considered when using these groups as indicators of the effect of urbanization. Bird and lichen functional groups were then analyzed together with the characteristics of the forests and their surroundings. Our results showed that, contrary to previous assumptions, vegetation density and, more importantly, the amount of urban area around the forest (the matrix) are more important for biodiversity than forest quantity alone. This indicated that not all types of forested green areas have the same importance for biodiversity. An index of forest functional diversity was then calculated for all sampled forests of the area. This could help decision-makers to improve the management of urban green infrastructure with the goal of increasing functionality and ultimately ecosystem services in urban areas.
Abstract:
Background: The evidence base on end-of-life care in acute stroke is limited, particularly with regard to recognising dying and related decision-making. There is also limited evidence to support the use of end-of-life care pathways (standardised care plans) for patients who are dying after stroke. Aim: This study aimed to explore the clinical decision-making involved in placing patients on an end-of-life care pathway, evaluate predictors of care pathway use, and investigate the role of families in decision-making. The study also aimed to examine experiences of end-of-life care pathway use for stroke patients, their relatives and the multi-disciplinary health care team. Methods: A mixed methods design was adopted. Data were collected in four Scottish acute stroke units. Case-notes were identified prospectively from 100 consecutive stroke deaths and reviewed. Multivariate analysis was performed on case-note data. Semi-structured interviews were conducted with 17 relatives of stroke decedents and 23 healthcare professionals, using a modified grounded theory approach to collect and analyse data. The VOICES survey tool was also administered to the bereaved relatives and data were analysed using descriptive statistics and thematic analysis of free-text responses. Results: Relatives often played an important role in influencing aspects of end-of-life care, including decisions to use an end-of-life care pathway. Some relatives experienced enduring distress with their perceived responsibility for care decisions. Relatives felt unprepared for and were distressed by prolonged dying processes, which were often associated with severe dysphagia. Pro-active information-giving by staff was reported as supportive by relatives. Healthcare professionals generally avoided discussing place of care with families. Decisions to use an end-of-life care pathway were not predicted by patients’ demographic characteristics; decisions were generally made in consultation with families and the extended health care team, and were made within regular working hours. Conclusion: Distressing stroke-related issues were more prominent in participants’ accounts than concerns with the end-of-life care pathway used. Relatives sometimes perceived themselves as responsible for important clinical decisions. Witnessing prolonged dying processes was difficult for healthcare professionals and families, particularly in relation to the management of persistent major swallowing difficulties.
Abstract:
Background: This paper describes the results of a feasibility study for a randomised controlled trial (RCT). Methods: Twenty-nine members of the UK Dermatology Clinical Trials Network (UK DCTN) expressed an interest in recruiting for this study. Of these, 17 obtained full ethics and Research & Development (R&D) approval, and 15 successfully recruited patients into the study. A total of 70 participants with a diagnosis of cellulitis of the leg were enrolled over a 5-month period. These participants were largely recruited from medical admissions wards, although some were identified from dermatology, orthopaedic, geriatric and general surgery wards. Data were collected on patient demographics, clinical features and willingness to take part in a future RCT. Results: Despite cellulitis being a relatively common condition, patients were difficult to locate through our network of UK DCTN clinicians. This was largely because patients were rarely seen by dermatologists, and admissions were not co-ordinated centrally. In addition, the impact of the proposed exclusion criteria was high; only 26 (37%) of those enrolled in the study fulfilled all of the inclusion criteria for the subsequent RCT and were willing to be randomised to treatment. Of the 70 participants identified during the study as having cellulitis of the leg (as confirmed by a dermatologist), only 59 (84%) had all three of the defining features of: i) erythema, ii) oedema, and iii) warmth with acute pain/tenderness upon examination. Twenty-two (32%) patients had experienced a previous episode of cellulitis within the last 3 years. The median time to recurrence (estimated as the time since the most recent previous attack) was 205 days (95% CI 102 to 308). Service users were generally supportive of the trial, although several expressed concerns about taking antibiotics for lengthy periods, and felt that multiple morbidity/old age would limit entry into a 3-year study. Conclusion: This pilot study has been crucial in highlighting some key issues for the conduct of a future RCT. As a result of these findings, changes have been made to i) the planned recruitment strategy, ii) the proposed inclusion criteria and iii) the definition of cellulitis for use in the future trial.
Abstract:
Secure Multi-party Computation (MPC) enables a set of parties to collaboratively compute, using cryptographic protocols, a function over their private data in such a way that the participants do not see each other's data; they see only the final output. Typical MPC examples include statistical computations over joint private data, private set intersection, and auctions. While these applications are examples of monolithic MPC, richer MPC applications move between "normal" (i.e., per-party local) and "secure" (i.e., joint, multi-party secure) modes repeatedly, resulting overall in mixed-mode computations. For example, we might use MPC to implement the role of the dealer in a game of mental poker -- the game would be divided into rounds of local decision-making (e.g. bidding) and joint interaction (e.g. dealing). Mixed-mode computations are also used to improve performance over monolithic secure computations. Starting with the Fairplay project, several MPC frameworks have been proposed in the last decade to help programmers write MPC applications in a high-level language while the toolchain manages the low-level details. However, these frameworks are either not expressive enough to allow writing mixed-mode applications or lack formal specification and reasoning capabilities, thereby diminishing the parties' trust in such tools and in the programs written using them. Furthermore, none of the frameworks provides a verified toolchain to run the MPC programs, leaving the potential for security holes that can compromise the privacy of parties' data. This dissertation presents language-based techniques to make MPC more practical and trustworthy. First, it presents the design and implementation of a new MPC Domain Specific Language, called Wysteria, for writing rich mixed-mode MPC applications. Wysteria provides several benefits over previous languages, including a conceptual single thread of control, generic support for more than two parties, high-level abstractions for secret shares, and a fully formalized type system and operational semantics. Using Wysteria, we have implemented several MPC applications, including, for the first time, a card-dealing application. The dissertation next presents Wys*, an embedding of Wysteria in F*, a full-featured verification-oriented programming language. Wys* improves on Wysteria along three lines: (a) it enables programmers to formally verify the correctness and security properties of their programs -- as far as we know, Wys* is the first language to provide verification capabilities for MPC programs; (b) it provides a partially verified toolchain to run MPC programs; and (c) it enables MPC programs to use, with no extra effort, standard language constructs from the host language F*, thereby making it more usable and scalable. Finally, the dissertation develops static analyses that help optimize monolithic MPC programs into mixed-mode MPC programs while providing privacy guarantees similar to those of the monolithic versions.
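To make the "secure mode" idea concrete, here is a toy additive secret-sharing sketch in plain Python -- an assumption-laden illustration of the general MPC principle, not Wysteria or Wys* code and not a real protocol (it simulates all parties in one process and omits networking and defences against misbehaving parties):

    import random

    PRIME = 2**61 - 1   # all arithmetic is over a public prime modulus

    def share(secret, n_parties):
        """Split `secret` into n additive shares; any n-1 of them look uniformly random."""
        shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
        shares.append((secret - sum(shares)) % PRIME)
        return shares

    def secure_sum(private_inputs):
        """Each party learns only the joint sum, never another party's input."""
        n = len(private_inputs)
        received = [[] for _ in range(n)]                    # shares held by each party
        for x in private_inputs:
            for j, s in enumerate(share(x, n)):
                received[j].append(s)
        partial_sums = [sum(r) % PRIME for r in received]    # each party publishes one value
        return sum(partial_sums) % PRIME

    if __name__ == "__main__":
        salaries = [52_000, 61_500, 48_200]                  # three parties' private inputs
        print(secure_sum(salaries), "==", sum(salaries))

A mixed-mode application would interleave such joint "secure" steps with ordinary per-party local computation, which is exactly the programming pattern Wysteria and Wys* are designed to express and verify.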
Abstract:
Intelligent agents offer a new and exciting way of understanding the world of work. We apply agent-based simulation to investigate a set of problems in a retail context. Specifically, we are working to understand the relationship between human resource management practices and retail productivity. Our multi-disciplinary research team draws upon expertise from work psychologists and computer scientists. Our research so far has led us to conduct case study work with a top ten UK retailer. Based on our case study experience and data, we are developing a simulator that can be used to investigate the impact of management practices (e.g. training, empowerment, teamwork) on customer satisfaction and retail productivity.
Abstract:
Part 18: Optimization in Collaborative Networks
Abstract:
Since policy-makers usually pursue several conflicting objectives, policy-making can be understood as a multicriteria decision problem. Following the methodological proposal of André and Cardenete (2005; Multicriteria Policy Making: Defining Efficient Policies in a General Equilibrium Model, Centro de Estudios Andaluces Working Paper No. E2005/04, Seville), multi-objective programming is used in connection with a computable general equilibrium model to represent optimal policy-making and to obtain so-called efficient policies in an application to a regional economy (Andalusia, Spain). This approach is applied to the design of subsidy policies under two different scenarios. In the first scenario, it is assumed that the government is concerned with just two objectives: ensuring the profitability of a key strategic sector and increasing overall output. In the second scenario, the scope of the exercise is enlarged by solving a problem with seven policy objectives, including both general and sectoral objectives. It is concluded that the observed policy could have been Pareto-improved in several directions.
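In schematic form (a generic statement of the approach, not the paper's exact model), the efficient-policy problem couples a vector of policy objectives with the equilibrium conditions of the CGE model:

    \max_{x \in X} \; \bigl( f_1(x, y), \ldots, f_k(x, y) \bigr)
    \quad \text{subject to} \quad g(x, y) = 0,

where x denotes the policy instruments (here, sectoral subsidies), y the endogenous equilibrium variables, g(x, y) = 0 the general-equilibrium conditions, and f_1, ..., f_k the policy objectives; a policy is called efficient (Pareto-optimal) if no feasible alternative improves one objective without worsening another.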
Abstract:
Macroeconomic policy makers are typically concerned with several indicators of economic performance. We therefore propose to tackle the design of macroeconomic policy using Multicriteria Decision Making (MCDM) techniques. More specifically, we employ Multiobjective Programming (MP) to seek so-called efficient policies. The MP approach is combined with a computable general equilibrium (CGE) model. We chose a CGE model because such models have the dual advantage of being consistent with standard economic theory while allowing one to measure the effect(s) of a specific policy with real data. Applying the proposed methodology to Spain (via the 1995 Social Accounting Matrix), we first quantified the trade-offs between two specific policy objectives, growth and inflation, when designing fiscal policy. We then constructed a frontier of efficient policies involving real growth and inflation. In doing so, we found that Spanish policy in 1995 displayed some degree of inefficiency with respect to these two policy objectives. We then offer two sets of policy recommendations that, ostensibly, could have helped Spain at the time. The first deals with efficiency independent of the importance given to growth and inflation by policy makers (we label this set general policy recommendations). The second depends on which policy objective is seen as more important by policy makers, increasing growth or controlling inflation (we label this set objective-specific recommendations).
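A minimal numerical sketch of how such a frontier can be traced (a toy, hand-made two-objective model with a single fiscal instrument t -- the functional forms and numbers are assumptions for illustration, not output of the CGE model):

    def growth(t):
        # assumed concave response of real growth to the instrument t in [0, 1]
        return 4.0 * t - 3.0 * t ** 2

    def inflation(t):
        # assumed convex inflation cost of the same instrument
        return 1.0 + 5.0 * t ** 2

    def efficient_frontier(n_weights=11, grid=101):
        """Trace (growth, inflation) pairs of efficient policies via weighted sums."""
        points = set()
        for k in range(n_weights):
            w = k / (n_weights - 1)              # weight on growth vs. inflation control
            best_t = max((i / (grid - 1) for i in range(grid)),
                         key=lambda t: w * growth(t) - (1.0 - w) * inflation(t))
            points.add((round(growth(best_t), 3), round(inflation(best_t), 3)))
        return sorted(points)

    if __name__ == "__main__":
        for g, p in efficient_frontier():
            print(f"growth = {g:6.3f}   inflation = {p:6.3f}")

Sweeping the weight w plays the role of varying the relative importance policy makers attach to growth versus inflation, which is how the general and objective-specific recommendations above differ.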
Abstract:
Energy Conservation Measure (ECM) project selection is made difficult by real-world constraints, limited resources to implement savings retrofits, various suppliers in the market and project financing alternatives. Many of these energy-efficient retrofit projects should be viewed as a series of investments with annual returns for these traditionally risk-averse agencies. Given a list of available ECMs, federal, state and local agencies must determine how to implement projects at lowest cost. The most common methods of implementation planning are suboptimal relative to cost. Federal, state and local agencies can obtain greater returns on their energy conservation investment than with traditional methods, regardless of the implementing organization. This dissertation outlines several approaches that improve on the traditional energy conservation models. Any public buildings in regions with similar energy conservation goals, in the United States or internationally, can also benefit greatly from this research. Additionally, many private owners of buildings are under mandates to conserve energy; e.g., Local Law 85 of the New York City Energy Conservation Code requires any building, public or private, to meet the most current energy code for any alteration or renovation. Thus, both public and private stakeholders can benefit from this research. The research in this dissertation advances and presents models that decision-makers can use to optimize the selection of ECM projects with respect to the total cost of implementation. A practical application of a two-level mathematical program with equilibrium constraints (MPEC) improves the current best practice for agencies concerned with making the most cost-effective selection while leveraging energy services companies or utilities. The two-level model maximizes savings to the agency and profit to the energy services companies (Chapter 2). An additional model leverages a single congressional appropriation to implement ECM projects (Chapter 3). Returns from implemented ECM projects are used to fund additional ECM projects. In these cases, fluctuations in energy costs and uncertainty in the estimated savings strongly influence ECM project selection and the amount of the appropriation requested. A proposed risk-aversion method imposes a minimum on the number of projects completed in each stage. A comparative method using Conditional Value at Risk is analyzed, and time consistency is addressed in this chapter. This work demonstrates how a risk-based, stochastic, multi-stage model with binary decision variables at each stage provides a much more accurate estimate for planning than the agency's traditional approach and deterministic models. Finally, in Chapter 4, a rolling-horizon model allows for subadditivity and superadditivity of the energy savings to simulate interactive effects between ECM projects. The approach uses McCormick (1976) inequalities to re-express constraints that involve products of binary variables as an exact linearization (related to the convex hull of those constraints). This model additionally shows the benefits of learning between stages while remaining consistent with the single-congressional-appropriation framework.
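The linearization referred to above is the standard exact reformulation for a product of binary variables (stated schematically here, not as the dissertation's full model): if x_i, x_j ∈ {0,1} indicate that two interacting ECM projects are selected, and z_{ij} is meant to equal x_i x_j so that a term δ_{ij} z_{ij} can capture sub- or super-additive savings, then the McCormick inequalities

    z_{ij} \le x_i, \qquad z_{ij} \le x_j, \qquad z_{ij} \ge x_i + x_j - 1, \qquad z_{ij} \ge 0

enforce z_{ij} = x_i x_j at every binary point while keeping the selection model linear.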
Abstract:
Executing a cloud or aerosol physical properties retrieval algorithm on controlled synthetic data is an important step in retrieval algorithm development. Synthetic data can help answer questions about the sensitivity and performance of the algorithm or aid in determining how an existing retrieval algorithm may perform with a planned sensor. Synthetic data can also help in solving issues that may have surfaced in the retrieval results. Synthetic data become very important when other validation methods, such as field campaigns, are of limited scope. These tend to be of relatively short duration and are often costly. Ground stations have limited spatial coverage, while synthetic data can cover large spatial and temporal scales and a wide variety of conditions at a low cost. In this work I develop an advanced cloud and aerosol retrieval simulator for the MODIS instrument, known as the Multi-sensor Cloud and Aerosol Retrieval Simulator (MCARS). In close collaboration with the modeling community, I have seamlessly combined the GEOS-5 global climate model and the DISORT radiative transfer code, widely used by the remote sensing community, with observations from the MODIS instrument to create the simulator. With the MCARS simulator it was then possible to resolve the long-standing issue that MODIS aerosol optical depth retrievals had a low bias for smoke aerosols: the MODIS aerosol retrieval did not account for the effects of humidity on smoke aerosols. The MCARS simulator also revealed an issue that had not been recognized previously, namely that the value of fine mode fraction could create a linear dependence between retrieved aerosol optical depth and land surface reflectance. MCARS provided the ability to examine aerosol retrievals against “ground truth” for hundreds of thousands of simultaneous samples over an area covered by only three AERONET ground stations. Findings from MCARS are already being used to improve the performance of operational MODIS aerosol properties retrieval algorithms. The modeling community will use the MCARS data to create new parameterizations for aerosol properties as a function of properties of the atmospheric column and gain the ability to correct any assimilated retrieval data that may display similar dependencies in comparisons with ground measurements.
Abstract:
The use of multi-material structures in industry, especially the automotive industry, is increasing. To overcome the difficulties in joining these structures, adhesives have several benefits over traditional joining methods. Therefore, accurate simulation of the entire fracture process, including the adhesive layer, is crucial. In this paper, material parameters of a previously developed meso-mechanical finite element (FE) model of a thin adhesive layer are optimized using the Strength Pareto Evolutionary Algorithm (SPEA2). The objective functions are defined as the error between experimental data and simulation data. The experimental data come from previously performed experiments in which an adhesive layer was loaded in monotonically increasing peel and shear. The two objective functions depend on 9 model parameters (decision variables) in total and are evaluated by running two FE simulations, one loading the adhesive layer in peel and the other in shear. The original study converted the two objective functions into one function, which resulted in a single optimal solution. In this study, however, a Pareto front is obtained by employing the SPEA2 algorithm. Thus, more insight into the material model, objective functions, optimal solutions and decision space is acquired using the Pareto front. We compare the results and show good agreement with the experimental data.
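As a rough sketch of the non-domination idea underlying the Pareto front (not SPEA2 itself and not the actual FE model; the stand-in evaluate function and all numbers below are assumptions), candidate parameter sets can be filtered on the two error objectives as follows:

    import random
    from typing import Callable, List, Sequence, Tuple

    Objectives = Tuple[float, float]   # (peel error, shear error)

    def dominates(a: Objectives, b: Objectives) -> bool:
        """a dominates b if it is no worse in both errors and strictly better in one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(candidates: Sequence[Sequence[float]],
                     evaluate: Callable[[Sequence[float]], Objectives]) -> List[Sequence[float]]:
        """Keep only candidates whose objective pair is not dominated by any other."""
        scored = [(c, evaluate(c)) for c in candidates]
        return [c for c, f in scored
                if not any(dominates(g, f) for _, g in scored if g != f)]

    if __name__ == "__main__":
        # Stand-in for the two FE simulations: pretend the "true" parameters are 1.0
        # and that the peel and shear errors weight the 9 parameters differently.
        def evaluate(params: Sequence[float]) -> Objectives:
            peel_error = sum((p - 1.0) ** 2 for p in params[:5])
            shear_error = sum((p - 1.0) ** 2 for p in params[4:])
            return peel_error, shear_error

        rng = random.Random(1)
        candidates = [[rng.uniform(0.5, 1.5) for _ in range(9)] for _ in range(200)]
        print(len(pareto_front(candidates, evaluate)), "non-dominated parameter sets")

SPEA2 adds fitness assignment, archiving and density-based selection on top of this non-domination test, which is what allows it to evolve a well-spread front rather than a single scalarized optimum.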
Abstract:
The East Asian Monsoon (EAM) is an active component of the global climate system and has a profound social and economic impact in East Asia and its surrounding countries. Its impact on regional hydrological processes may influence society through industrial water supplies, food productivity and energy use. In order to predict future rates of climate change, reliable and accurate reconstructions of regional temperature and rainfall are required from all over the world to test climate models and better predict future climate variability. Hokkaido is a region with limited palaeoclimate data that is sensitive to climate change. Instrumental data show that the climate in Hokkaido is influenced by the East Asian Monsoon (EAM); however, instrumental data are limited to the past ~150 years. Therefore down-core climate reconstructions, extending beyond the instrumental record, are required to provide a better understanding of the long-term behaviour of the climate drivers (e.g. the EAM, the Westerlies, and teleconnections) in this region. The present study develops multi-proxy reconstructions to determine past climatic and hydrologic variability in Japan over the past 1000 years and to aid in understanding the effects of the EAM and the Westerlies independently and interactively. A 250-cm long sediment core from Lake Toyoni, Hokkaido, was retrieved to investigate terrestrial and aquatic input, lake temperature and hydrological changes over the past 1000 years within Lake Toyoni and its catchment, using X-Ray Fluorescence (XRF) data, alkenone palaeothermometry, and the molecular and hydrogen isotopic composition of higher plant waxes (δD(HPW)). Here, we conducted the first survey for alkenone biomarkers in eight lakes in Hokkaido, Japan, and detected the occurrence of alkenones within the sediments of Lake Toyoni. We present the first lacustrine alkenone record from Japan, including genetic analysis of the alkenone producer. C37 alkenone concentrations in surface sediments are 18 µg C37 g−1 of dry sediment and the dominant alkenone is C37:4. 18S rDNA analysis revealed the presence of a single alkenone producer in Lake Toyoni, and thus a single calibration is used for reconstructing lake temperature based on alkenone unsaturation patterns. Temperature reconstructions over the past 1000 years suggest that lake water temperatures varied between 8 and 19°C, which is in line with water temperature changes observed in the modern Lake Toyoni. The alkenone-based temperature reconstruction provides evidence for the variability of the EAM over the past 1000 years. The δD(HPW) record suggests that its large fluctuations (∼40‰) represent changes in temperature and source precipitation in this region, which are ultimately controlled by the EAM system; δD(HPW) is therefore a proxy for the EAM. To complement the biomarker reconstructions, the XRF data strengthen the lake temperature and hydrological reconstructions by providing information on past productivity, which is controlled by the East Asian Summer Monsoon (EASM), and on wind input into Lake Toyoni, which is controlled by the East Asian Winter Monsoon (EAWM) and the Westerlies. By combining the data generated from the XRF, alkenone palaeothermometry and δD(HPW) reconstructions, we provide valuable information on the EAM and the Westerlies, including the timing of intensification and weakening, the teleconnections influencing them, and the relationship between them.
During the Medieval Warm Period (MWP), we find that the EASM dominated and the EAWM was suppressed, whereas during the Little Ice Age (LIA) the influence of the EAWM dominated, with periods of increased EASM and Westerlies intensification. The El Niño Southern Oscillation (ENSO) significantly influenced the EAM: a strong EASM occurred during El Niño conditions and a strong EAWM during La Niña. The North Atlantic Oscillation (NAO), on the other hand, was a key driver of Westerlies intensification, with strengthening of the Westerlies during positive NAO phases and weakening during negative phases. A key finding from this study is that our data support an anti-phase relationship between the EASM and the EAWM (i.e., intensification of the EASM coinciding with weakening of the EAWM, and vice versa) and that the EAWM and the Westerlies vary independently from each other, rather than covarying as previously suggested in other studies.
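For reference, the alkenone unsaturation indices commonly used to calibrate such temperature reconstructions are standard in the literature (stated here for context only; the thesis's own calibration, based on a single producer and a C37:4-dominated signal, may differ):

    U^{K}_{37} = \frac{[C_{37:2}] - [C_{37:4}]}{[C_{37:2}] + [C_{37:3}] + [C_{37:4}]},
    \qquad
    U^{K'}_{37} = \frac{[C_{37:2}]}{[C_{37:2}] + [C_{37:3}]}

Each index is related to growth temperature through a producer-specific linear calibration, which is why the 18S rDNA identification of a single alkenone producer in Lake Toyoni matters for the reconstruction.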
Abstract:
The elimination of barriers between countries is a consequence of globalisation and of the free trade agreements (TLCs) signed in recent years. This implies significant growth in foreign trade, which is reflected in the increased complexity of companies' supply chains. As a result, companies in Colombia need to look for alternatives that deliver high levels of productivity and competitiveness, since the business environment has become increasingly complex and saturated with competition, not only national but also international. To maintain a favourable competitive position, companies must focus on the activities that add value to their business, and one of the alternatives being adopted today is the outsourcing of logistics functions to companies that specialise in managing these services. These companies are Logistics Service Providers (LSPs), which act as agents external to the organisation by managing, controlling and delivering logistics activities on behalf of a contracting party. The activities performed may include all or part of a company's logistics activities, but at a minimum the management and execution of transport and warehousing must be included (Berglund, 2000). The purpose of this document is to analyse the role of Third-Party Logistics providers (3PL) as drivers of organisational performance in Colombian companies, in order to inform micro, small and medium-sized enterprises (MIPYMES) about the benefits obtained by working with LSPs as a means of improving the country's competitive position.