911 results for multi attribute utility theory
Abstract:
Sustainability in software systems is still a new practice that most software developers and companies are trying to incorporate into their software development lifecycle, and it has been widely discussed in academia. Sustainability is a complex concept viewed from economic, environmental and social dimensions, and the several definitions proposed sometimes make the concept fuzzy and difficult to apply and assess in software systems. This has hindered the adoption of sustainability in the software industry. Little research explores sustainability as a quality property of software products and services to answer questions such as: How can sustainability be quantified as a quality construct in the same way as other quality attributes such as security, usability and reliability? How can it be applied to software systems? What are the measures and measurement scales of sustainability? The goal of this research is to investigate the definitions, perceptions and measurement of sustainability from the quality perspective. Grounded in the general theory of software measurement, the aim is to develop a method that decomposes sustainability into factors, criteria and metrics. The result is a method to quantify and assess the sustainability of software systems while incorporating management and user concerns. Conclusion: The method will strengthen companies' ability to adopt sustainability easily while facilitating its integration into the software development process and tools. It will also help companies measure the sustainability of their software products along the economic, environmental, social, individual and technological dimensions.
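The abstract describes decomposing sustainability into factors, criteria and metrics and aggregating them into a quality score. A minimal sketch of that kind of hierarchical, weighted aggregation is shown below; the factor, criterion and metric names, weights and normalisation are hypothetical placeholders, not the method proposed in the research.

```python
# Illustrative sketch only: the names, weights and [0, 1] normalisation below are
# hypothetical; the research's actual factors, criteria and metrics are not given here.
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    value: float      # measured value, already normalised to [0, 1]
    weight: float     # relative importance within its criterion

@dataclass
class Criterion:
    name: str
    metrics: list
    weight: float     # relative importance within its factor

def weighted_score(items, key):
    total_w = sum(i.weight for i in items)
    return sum(i.weight * key(i) for i in items) / total_w

def criterion_score(c: Criterion) -> float:
    return weighted_score(c.metrics, lambda m: m.value)

def sustainability_score(factors: dict) -> float:
    """factors maps a dimension name (e.g. 'environmental') to (weight, [criteria])."""
    total_w = sum(w for w, _ in factors.values())
    return sum(w * weighted_score(cs, criterion_score) for w, cs in factors.values()) / total_w

# Hypothetical example: one environmental factor with a single energy-related criterion.
energy = Criterion("energy efficiency", [Metric("CPU energy per request", 0.7, 1.0)], 1.0)
print(sustainability_score({"environmental": (1.0, [energy])}))
```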
Abstract:
When designing systems that are complex, dynamic and stochastic in nature, simulation is generally recognised as one of the best design support technologies and a valuable aid in strategic and tactical decision making. A simulation model consists of a set of rules that define how a system changes over time, given its current state. Unlike analytical models, a simulation model is not solved but run, and the changes of system states can be observed at any point in time. This provides an insight into system dynamics rather than just predicting the output of a system based on specific inputs. Simulation is not a decision-making tool but a decision support tool, allowing better informed decisions to be made. Due to the complexity of the real world, a simulation model can only be an approximation of the target system. The essence of the art of simulation modelling is abstraction and simplification: only those characteristics that are important for the study and analysis of the target system should be included in the simulation model. The purpose of simulation is either to better understand the operation of a target system or to make predictions about a target system's performance. It can be viewed as an artificial white room which allows one to gain insight and to test new theories and practices without disrupting the daily routine of the focal organisation. What you can expect to gain from a simulation study is well summarised by FIRMA (2000): if the theory that has been framed about the target system holds, and if this theory has been adequately translated into a computer model, the model allows you to answer questions such as:
· Which kind of behaviour can be expected under arbitrarily given parameter combinations and initial conditions?
· Which kind of behaviour will a given target system display in the future?
· Which state will the target system reach in the future?
The required accuracy of the simulation model very much depends on the type of question one is trying to answer. To answer the first question, the simulation model needs to be an explanatory model, which requires less data accuracy. In comparison, the simulation model required to answer the latter two questions has to be predictive in nature and therefore needs highly accurate input data to achieve credible outputs. Such predictions involve showing trends rather than giving precise and absolute predictions of the target system's performance. The numerical results of a simulation experiment on their own are most often not very useful and need to be rigorously analysed with statistical methods. These results then need to be considered in the context of the real system and interpreted in a qualitative way to make meaningful recommendations or compile best-practice guidelines. One needs a good working knowledge of the behaviour of the real system to fully exploit the understanding gained from simulation experiments. The goal of this chapter is to introduce the newcomer to what we think is a valuable addition to the toolset of analysts and decision makers. We give a summary of information gathered from the literature and of the experience we have gained first hand during the last five years while developing a better understanding of this exciting technology. We hope that this will help you to avoid some pitfalls that we have unwittingly encountered.
Section 2 is an introduction to the different types of simulation used in Operational Research and Management Science, with a clear focus on agent-based simulation. In Section 3 we outline the theoretical background of multi-agent systems and their elements to prepare you for Section 4, where we discuss how to develop a multi-agent simulation model. Section 5 outlines a simple example of a multi-agent system. Section 6 provides a collection of resources for further study, and finally, in Section 7, we conclude the chapter with a short summary.
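The abstract above characterises a simulation model as a set of rules applied to the current system state over time. A minimal agent-based sketch of that idea, built around an invented wealth-exchange rule (it is not taken from the chapter), is:

```python
import random

# Minimal illustrative sketch of an agent-based simulation loop: a set of behavioural
# rules applied to the current state at each time step. The agents and rule here are
# hypothetical; the chapter itself does not prescribe this code.

class Agent:
    def __init__(self, wealth=10):
        self.wealth = wealth

    def step(self, others):
        # Simple behavioural rule: give one unit of wealth to a random other agent.
        if self.wealth > 0:
            random.choice(others).wealth += 1
            self.wealth -= 1

def run(num_agents=50, steps=100, seed=1):
    random.seed(seed)
    agents = [Agent() for _ in range(num_agents)]
    for _ in range(steps):
        for a in agents:
            a.step([b for b in agents if b is not a])
    return agents

if __name__ == "__main__":
    final = run()
    print("max wealth after 100 steps:", max(a.wealth for a in final))
```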
Abstract:
In the past few years, there has been concern among economists and policy makers that increased openness to international trade affects some regions in a country more than others. Recent research has found that local labor markets more exposed to import competition through their initial employment composition experience worse outcomes in several dimensions, such as employment, wages, and poverty. Although there is evidence that regions within a country vary in the intensity with which they trade with each other and with other countries, trade linkages have been ignored in empirical analyses of the regional effects of trade, which focus on differences in employment composition. In this dissertation, I investigate how local labor markets' trade linkages shape the response of wages to international trade shocks. In the second chapter, I lay out a standard multi-sector general equilibrium model of trade in which domestic regions trade with each other and with the rest of the world. Using this benchmark, I decompose a region's wage change resulting from a national import cost shock into a direct effect on prices, holding other endogenous variables constant, and a series of general equilibrium effects. I argue that the direct effect provides a natural measure of exposure to import competition within the model, since it summarizes the effect of the shock on a region's wage as a function of initial conditions given by its trade linkages. I call my proposed measure linkage exposure, while I refer to the measures used in previous studies as employment exposure. My theoretical analysis also shows that the assumptions previous studies make about trade linkages are not consistent with the standard trade model. In the third chapter, I calibrate the model to the Brazilian economy in 1991--at the beginning of a period of trade liberalization--to perform a series of experiments. In each of them, I reduce the Brazilian import cost by 1 percent in a single sector and calculate how much of the cross-regional variation in counterfactual wage changes is explained by the exposure measures. Over this set of experiments, employment exposure explains, for the median sector, 2 percent of the variation in counterfactual wage changes, while linkage exposure explains 44 percent. In addition, I propose an estimation strategy that incorporates trade linkages into the analysis of the effects of trade on observed wages. In the model, changes in wages are completely determined by changes in market access, an endogenous variable that summarizes the real demand faced by a region. I show that a linkage measure of exposure is a valid instrument for changes in market access within Brazil. Using observed wage changes in Brazil between 1991 and 2000, my estimates imply that a region at the 25th percentile of the change in domestic market access induced by trade liberalization experiences a 0.6 log points larger wage decline (or smaller wage increase) than a region at the 75th percentile. The estimates from a regression of wage changes on exposure imply that a region at the 25th percentile of exposure experiences a 3 log points larger wage decline (or smaller wage increase) than a region at the 75th percentile. I conclude that estimates based on exposure overstate the negative impact of trade liberalization on wages in Brazil. In the fourth chapter, I extend the standard model to allow for two types of workers according to their education level: skilled and unskilled.
I show that there is substantial variation in the skill premium across Brazilian regions. I use the exogenous variation provided by tariff changes to estimate the impact of market access on the skill premium. I find that the decrease in domestic market access resulting from trade liberalization led to a higher skill premium. I propose a mechanism to explain this result, namely that the manufacturing sector is relatively more intensive in unskilled labor, and I show empirical evidence that supports this hypothesis.
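As a purely schematic illustration of the estimation strategy described above (the notation and functional form are assumptions, not the dissertation's actual specification), the idea of instrumenting market-access changes with a linkage-based exposure measure can be written as a two-stage regression:

```latex
% Schematic illustration only; symbols and functional form are assumptions,
% not the dissertation's actual specification.
\begin{align}
  \Delta \ln w_r &= \alpha + \beta \, \Delta \ln MA_r + \varepsilon_r
      && \text{(second stage: regional wage changes on market access)} \\
  \Delta \ln MA_r &= \gamma + \delta \, LE_r + u_r
      && \text{(first stage: } LE_r \text{ denotes the linkage-exposure instrument)}
\end{align}
```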
Abstract:
Macroeconomic policy makers are typically concerned with several indicators of economic performance. We therefore propose to tackle the design of macroeconomic policy using Multicriteria Decision Making (MCDM) techniques. More specifically, we employ Multiobjective Programming (MP) to seek so-called efficient policies. The MP approach is combined with a computable general equilibrium (CGE) model, chosen because such models have the dual advantage of being consistent with standard economic theory while allowing one to measure the effects of a specific policy with real data. Applying the proposed methodology to Spain (via the 1995 Social Accounting Matrix), we first quantified the trade-offs between two specific policy objectives, growth and inflation, when designing fiscal policy. We then constructed a frontier of efficient policies involving real growth and inflation. In doing so, we found that policy in 1995 Spain displayed some degree of inefficiency with respect to these two policy objectives. We then offer two sets of policy recommendations that, ostensibly, could have helped Spain at the time. The first deals with efficiency independently of the importance given to growth and inflation by policy makers (we label this set general policy recommendations). The second depends on which policy objective policy makers consider more important, increasing growth or controlling inflation (we label this set objective-specific recommendations).
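A generic way to trace the kind of efficient-policy frontier described above is weighted-sum scalarization of the two objectives; the notation below is illustrative and not the paper's formulation, in which the feasible set is given by the CGE model:

```latex
% Generic sketch of weighted-sum scalarization for a bi-objective policy problem;
% g(x) and \pi(x) stand for growth and inflation as functions of policy instruments x,
% and X for the feasible set implied by the CGE model. These symbols are assumptions,
% not the paper's notation.
\begin{equation}
  \max_{x \in X} \; \lambda \, g(x) - (1 - \lambda)\, \pi(x),
  \qquad \lambda \in [0, 1],
\end{equation}
where sweeping $\lambda$ from 0 to 1 traces (supported points of) the efficient frontier
between the two objectives.
```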
Abstract:
Personal electronic devices, such as cell phones and tablets, continue to decrease in size while the number of features and add-ons keeps increasing. One particular feature of great interest is an integrated projector system. Laser pico-projectors have been considered, but the technology has not been developed enough to warrant integration. With new advancements in diode technology and MEMS devices, laser-based projection is currently being advanced for pico-projectors. A primary problem encountered when using a pico-projector is coherent interference known as speckle. Laser speckle can lead to eye irritation and headaches after prolonged viewing. Diffractive optical elements known as diffusers have been examined as a means to lower speckle contrast. Diffusers are often rotated to achieve temporal averaging of the spatial phase pattern provided by the diffuser surface. While diffusers are unable to completely eliminate speckle, they can be utilized to decrease the resultant contrast and provide a more visually acceptable image. This dissertation measures the reduction in speckle contrast achievable through the use of diffractive diffusers. A theoretical Fourier optics model is used to model the diffuser's stationary and in-motion performance in terms of the resultant contrast level. Contrast measurements of two diffractive diffusers are calculated theoretically and compared with experimental results. In addition, a novel binary diffuser design based on Hadamard matrices is presented. Using two static in-line Hadamard diffusers eliminates the need for rotation or vibration of the diffuser for temporal averaging. Two Hadamard diffusers were fabricated and contrast values were subsequently measured, showing good agreement with theoretical and simulated values. Monochromatic speckle contrast values of 0.40 were achieved using the Hadamard diffusers. Finally, color laser projection devices require the use of red, green, and blue laser sources; therefore, using a monochromatic diffractive diffuser may not be optimal for color speckle contrast reduction. A simulation of the Hadamard diffusers is conducted to determine the optimum spacing between the two diffusers for polychromatic speckle reduction. Experimentally measured results are presented using the optimal spacing of Hadamard diffusers for RGB color speckle reduction, showing a 60% reduction in contrast.
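The speckle contrast used throughout the abstract is the ratio of the intensity standard deviation to the mean intensity. A generic numerical sketch, simulating fully developed speckle from a random phase screen via a Fourier transform (a textbook construction, not the dissertation's Hadamard-diffuser model), is:

```python
import numpy as np

# Generic sketch: simulate a fully developed speckle pattern from a random phase
# screen and compute speckle contrast C = sigma_I / <I>. This is a textbook
# construction, not the dissertation's Hadamard-diffuser model.

def speckle_contrast(intensity: np.ndarray) -> float:
    return intensity.std() / intensity.mean()

def simulate_speckle(n=512, seed=0):
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, size=(n, n))     # random phase screen (ideal diffuser)
    field = np.exp(1j * phase)
    # Far-field (Fraunhofer) propagation modelled by a Fourier transform.
    far_field = np.fft.fftshift(np.fft.fft2(field))
    return np.abs(far_field) ** 2

if __name__ == "__main__":
    I = simulate_speckle()
    print(f"speckle contrast: {speckle_contrast(I):.2f}")   # close to 1 for fully developed speckle
```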
Abstract:
Purpose: To assess the compliance of Daily Disposable Contact Lens (DDCL) wearers with replacing lenses at the manufacturer-recommended replacement frequency, and to evaluate the ability of two different Health Behavioural Theories (HBT), the Health Belief Model (HBM) and the Theory of Planned Behaviour (TPB), to predict compliance. Method: A multi-centre survey was conducted using a questionnaire completed anonymously by contact lens wearers during the purchase of DDCLs. Results: Three hundred and fifty-four questionnaires were returned. The sample comprised 58.5% females and 41.5% males (mean age 34 ± 12 years). Twenty-three percent of respondents were non-compliant with the manufacturer-recommended replacement frequency (re-using DDCLs at least once). The main reason for re-using DDCLs was "to save money" (35%). Prediction of compliance behaviour (past behaviour or future intentions) on the basis of the two HBTs was investigated through logistic regression analysis: both TPB factors (subjective norms and perceived behavioural control) were significant (p < 0.01); the HBM was less predictive, with only severity (past behaviour and future intentions) and perceived benefit (only for past behaviour) as significant factors (p < 0.05). Conclusions: Non-compliance with DDCL replacement is widespread, affecting 1 out of 4 Italian wearers. Results from the TPB model show that the involvement of persons socially close to the wearers (subjective norms) and improvement of the procedures for behavioural control of daily replacement (behavioural control) are of paramount importance in improving compliance. With reference to the HBM, it is important to warn DDCL wearers of the severity of a contact-lens-related eye infection and to underline the possibility of its prevention.
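The prediction of compliance described above relies on logistic regression of a binary compliance outcome on TPB constructs. A hypothetical sketch of that type of model (the data, coding and variable names below are invented, not the survey's) is:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical sketch of predicting (non-)compliance from TPB constructs.
# The data, coding and variable names are invented for illustration; they are
# not the survey's actual items or scores.

rng = np.random.default_rng(0)
n = 354  # sample size reported in the abstract
subjective_norms = rng.normal(size=n)
perceived_control = rng.normal(size=n)
# Simulated outcome: higher norms/control -> more likely to comply (1 = compliant).
logit = 0.8 * subjective_norms + 0.6 * perceived_control
compliant = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([subjective_norms, perceived_control])
model = LogisticRegression().fit(X, compliant)
print("coefficients (subjective norms, perceived control):", model.coef_[0])
```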
Abstract:
The frequency, time and places of charging have a large impact on the Quality of Experience (QoE) of EV drivers, so it is critical to design an effective EV charging scheduling system that improves drivers' QoE. To improve EV charging QoE and the utilization of Charging Stations (CSs), we develop an innovative travel-plan-aware charging scheduling scheme for moving EVs to be charged at CSs. In the design of the proposed scheme, the travel routes of EVs and the utility of CSs are taken into consideration. The assignment of EVs to CSs is modeled as a two-sided many-to-one matching game with the objective of maximizing the system utility, which reflects the satisfaction degrees of EVs and the profits of CSs. A Stable Matching Algorithm (SMA) is proposed to seek a stable matching between charging EVs and CSs. Furthermore, an improved Learning based On-LiNe scheduling Algorithm (LONA) is proposed to be executed by each CS in a distributed manner. The performance gain in average system utility achieved by the SMA is up to 38.2% compared to the Random Charging Scheduling (RCS) algorithm and 4.67% compared to the Only utility of Electric Vehicle Concerned (OEVC) scheme. The effectiveness of the proposed SMA and LONA is also demonstrated by simulations in terms of the satisfaction ratio of charging EVs and the convergence speed of the iteration.
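The assignment described above is a two-sided many-to-one matching between EVs and charging stations. A generic capacity-constrained deferred-acceptance sketch, with invented preference lists and capacities (a standard construction, not the paper's SMA), is:

```python
# Generic many-to-one deferred acceptance (EV-proposing), a standard construction
# for two-sided matching with capacities. Preferences and capacities below are
# invented for illustration; this is not the paper's SMA.

def stable_match(ev_prefs, cs_prefs, capacity):
    """ev_prefs: EV -> ordered list of CSs; cs_prefs: CS -> ordered list of EVs."""
    rank = {cs: {ev: i for i, ev in enumerate(prefs)} for cs, prefs in cs_prefs.items()}
    assigned = {cs: [] for cs in cs_prefs}          # CS -> list of currently accepted EVs
    next_choice = {ev: 0 for ev in ev_prefs}        # index of next CS to propose to
    free = list(ev_prefs)
    while free:
        ev = free.pop()
        if next_choice[ev] >= len(ev_prefs[ev]):
            continue                                 # EV has exhausted its preference list
        cs = ev_prefs[ev][next_choice[ev]]
        next_choice[ev] += 1
        assigned[cs].append(ev)
        assigned[cs].sort(key=lambda e: rank[cs][e])
        if len(assigned[cs]) > capacity[cs]:         # reject the least-preferred EV
            free.append(assigned[cs].pop())
    return assigned

ev_prefs = {"ev1": ["csA", "csB"], "ev2": ["csA", "csB"], "ev3": ["csB", "csA"]}
cs_prefs = {"csA": ["ev2", "ev1", "ev3"], "csB": ["ev1", "ev3", "ev2"]}
print(stable_match(ev_prefs, cs_prefs, capacity={"csA": 1, "csB": 2}))
```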
Abstract:
Background: The present study tested the utility of the theory of planned behaviour (TPB), augmented with anticipated regret, as a model for predicting binge-drinking intentions and episodes among female and male undergraduates and among undergraduates in different years of study. Method: Undergraduate students (N = 180; 54 males, 126 females; 60 per year of study) completed baseline measures of demographic variables, binge-drinking episodes (BDE), TPB constructs and anticipated regret. BDE were assessed one week later. Results: The TPB accounted for 60% of the variance in female undergraduates' intentions and 54% of the variance in male undergraduates' intentions. It accounted for 57% of the variance in intentions in first-year undergraduates, 63% in second-year undergraduates and 68% in final-year undergraduates. Follow-up BDE was predicted by intentions and baseline BDE for female undergraduates as well as for second- and final-year undergraduates. Baseline BDE predicted follow-up BDE for male undergraduates and first-year undergraduates. Conclusion: The results show that while the TPB constructs predict undergraduates' binge-drinking intentions, intentions only predict BDE in female undergraduates and in second- and final-year undergraduates. Implications of these findings for interventions to reduce binge drinking are outlined.
Abstract:
Data sources are often dispersed geographically in real-life applications. Finding a knowledge model may require joining all the data sources and running a machine learning algorithm on the joint set. We present an alternative based on a Multi Agent System (MAS): each agent mines one data source in order to extract a local theory (knowledge model) and then merges it with the previous MAS theory using a knowledge fusion technique. In this way, we obtain a global theory that summarizes the distributed knowledge without spending resources and time on joining the data sources. New experiments have been executed, including statistical significance analysis. The results show that, as a result of knowledge fusion, the accuracy of the initial theories is significantly improved, as well as the accuracy of the monolithic solution.
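As a toy illustration of the mine-locally-then-fuse idea described above (the abstract does not specify the knowledge representation or fusion technique; the nearest-centroid "theories" and weighted-average fusion below are invented):

```python
import numpy as np

# Toy illustration of "mine locally, then fuse": each agent learns a simple
# nearest-centroid model (its local "theory") from its own data source, and the
# local theories are merged by sample-size-weighted averaging of class centroids.
# The representation and fusion rule are invented; they are not the paper's technique.

def mine_local(X, y):
    """Local theory: per-class mean feature vector, plus the local sample size."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}, len(y)

def fuse(theories):
    """Merge local theories into a global theory by weighted centroid averaging."""
    sums, weights = {}, {}
    for theory, n in theories:
        for c, centroid in theory.items():
            sums[c] = sums.get(c, 0) + n * centroid
            weights[c] = weights.get(c, 0) + n
    return {c: sums[c] / weights[c] for c in sums}

def predict(theory, X):
    classes = list(theory)
    dists = np.stack([np.linalg.norm(X - theory[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

rng = np.random.default_rng(0)

def make_source():
    """One geographically separate data source with two Gaussian classes."""
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)
    return X, y

theories = [mine_local(*make_source()) for _ in range(3)]   # three distributed agents
global_theory = fuse(theories)
X_test, y_test = make_source()
print("accuracy of fused global theory:", (predict(global_theory, X_test) == y_test).mean())
```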
Abstract:
The objective of this study was to identify the relationship between suicidal ideation and hopelessness in 160 cancer patients. Suicidal ideation was measured using two items from a semi-structured interview, the Suicidal Ideation Scale (ISS), and item 9 of the Beck Depression Inventory (BDI-IA). Hopelessness was measured with the Beck Hopelessness Scale (BHS). The results indicated a significant relationship (p = .000) between suicidal ideation and hopelessness; a prevalence of suicidal ideation in cancer patients of between 4.4% and 13.8% and of suicide risk of between 5.6% and 30.6%; and some degree of hopelessness in 31.9% of the participants. Accordingly, it is confirmed that there is a relationship between hopelessness and suicidal ideation in adult oncology patients, and, additionally, that these variables are present in the patients and warrant attention in interdisciplinary intervention.
Abstract:
The elimination of barriers between countries is a consequence of globalization and of the free trade agreements signed in recent years. This implies significant growth in foreign trade, which is reflected in an increase in the complexity of companies' supply chains. For this reason, companies in Colombia need to look for alternatives to achieve high levels of productivity and competitiveness, since the environment has become increasingly complex and saturated with competition that is not only national but also international. To maintain a favourable competitive position, companies must focus on the activities that add value to their business, and one of the alternatives being adopted today is the outsourcing of logistics functions to companies specialized in managing these services. These companies are Logistics Service Providers (LSPs), which act as agents external to the organization by managing, controlling and providing logistics activities on behalf of a contracting party. The activities performed may include all or part of the logistics activities, but at a minimum the management and execution of transport and warehousing must be included (Berglund, 2000). The purpose of this document is to analyse the role of third-party logistics providers (3PL) as promoters of organizational performance in Colombian companies, in order to inform MIPYMES (micro, small and medium-sized enterprises) about the benefits obtained from working with LSPs as a means of improving the country's competitive position.
Abstract:
Amid the trend of rising health expenditure in developed economies, changing healthcare delivery models is an important point of action for service regulators seeking to contain this trend. Such a change is mostly induced by either financial incentives or regulatory tools issued by the regulators and targeting service providers and patients. This creates a tripartite interaction between service regulators, professionals, and patients that manifests a multi-principal agent relationship, in which professionals are agents to two principals: regulators and patients. This thesis is concerned with this multi-principal agent relationship in healthcare and investigates the determinants of (non-)compliance with regulatory tools in light of the tripartite relationship. In addition, the thesis provides insights into the different institutional, economic, and regulatory settings that govern the multi-principal agent relationship in healthcare in different countries. Furthermore, the thesis develops and empirically tests a conceptual framework of the possible determinants of physicians' (non-)compliance with regulatory tools issued by the regulator. The main findings of the thesis are as follows. First, in a multi-principal agent setting, the use of financial incentives to align the objectives of professionals and the regulator is important but not the only solution; this finding is based on the heterogeneity of the financial incentives provided to professionals in different health markets, which does not yield a one-size-fits-all model of financial incentives to influence clinical decisions. Second, soft-law tools such as clinical practice guidelines (CPGs) are important tools for mitigating the problems of the multi-principal agent setting in health markets, as they reduce information asymmetries while preserving the autonomy of professionals. Third, CPGs are complex and heterogeneous, and so are the determinants of (non-)compliance with them. Fourth, CPGs work, but only under certain conditions: factors such as intra-professional competition between service providers or practitioners might lead to non-compliance with CPGs if CPGs are likely to reduce the professional's utility. Finally, different degrees of soft-law mandate have different effects on providers' compliance: generally, the stronger the mandate, the stronger the compliance; however, even with a strong mandate, drivers such as intra-professional competition and co-management of patients by different professionals affected (non-)compliance.
Abstract:
In rural and isolated areas without cellular coverage, Satellite Communication (SatCom) is the best candidate to complement terrestrial coverage. However, the main challenge for future generations of wireless networks will be to meet the growing demand for new services while dealing with the scarcity of frequency spectrum. As a result, it is critical to investigate more efficient methods of utilizing the limited bandwidth, and resource sharing is likely the only choice. The research community's focus has recently shifted towards the interference management and exploitation paradigm to meet increasing data traffic demands. In the Downlink (DL) and Feedspace (FS), LEO satellites with an on-board antenna array can serve numerous User Terminals (UTs) on the ground (VSATs or handhelds) in FFR schemes by using cutting-edge digital beamforming techniques. In this setup, the adoption of an effective user scheduling approach is critical, given the unusually high density of user terminals on the ground compared to the number of antennas available on board the satellite. In this context, one possibility is to exploit clustering algorithms for scheduling in LEO MU-MIMO systems, in which several users within the same group are served simultaneously by the satellite via Space Division Multiplexing (SDM), and the different user groups are then served in different time slots via Time Division Multiplexing (TDM). This thesis addresses this problem by formulating user scheduling as an optimization problem and discusses several algorithms to solve it. In particular, focusing on the FS and the user service link (i.e., the DL) of a single MB-LEO satellite operating below 6 GHz, the user scheduling problem in Frequency Division Duplex (FDD) mode is addressed. The proposed state-of-the-art scheduling approaches are based on graph theory. The proposed solution offers high performance in terms of per-user capacity, sum-rate capacity, SINR, and spectral efficiency.
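A generic sketch of graph-based user grouping for this kind of scheduling problem is shown below; the channel-correlation conflict criterion and greedy colouring are illustrative assumptions, not the algorithms proposed in the thesis:

```python
import numpy as np

# Generic sketch of graph-based user grouping: build a "conflict" graph connecting
# users whose channel vectors are too correlated to be served in the same time slot,
# then greedily colour the graph so each colour class becomes one SDM group served
# in its own TDM slot. The threshold and greedy colouring are illustrative
# assumptions, not the thesis' algorithms.

def conflict_graph(H, threshold=0.5):
    """H: (num_users, num_feeds) complex channel matrix."""
    Hn = H / np.linalg.norm(H, axis=1, keepdims=True)
    corr = np.abs(Hn @ Hn.conj().T)
    n = H.shape[0]
    return {u: {v for v in range(n) if v != u and corr[u, v] > threshold} for u in range(n)}

def greedy_groups(graph):
    colour = {}                       # user -> group (colour) index
    for u in sorted(graph, key=lambda u: -len(graph[u])):
        taken = {colour[v] for v in graph[u] if v in colour}
        colour[u] = next(g for g in range(len(graph)) if g not in taken)
    slots = {}
    for u, g in colour.items():
        slots.setdefault(g, []).append(u)
    return slots                      # group index -> users served together

rng = np.random.default_rng(0)
H = rng.normal(size=(8, 4)) + 1j * rng.normal(size=(8, 4))
print(greedy_groups(conflict_graph(H)))
```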
Abstract:
In this master's thesis, the formation of Primordial Black Holes (PBHs) in the context of multi-field inflation is studied. In these models, the interaction between isocurvature and curvature perturbations can lead to a significant enhancement of the latter and to the subsequent production of PBHs. Depending on their mass, these PBHs can account for a significant fraction (or, in some cases, the entirety) of the universe's Dark Matter content. After studying the theoretical framework of generic N-field inflationary models, the focus is restricted to the two-field case, for which a few concrete realisations are analysed. A numerical code (written in Wolfram Mathematica) is developed to make quantitative predictions for the main inflationary observables, notably the scalar power spectra. In parallel, the production of PBHs due to the dynamics of two-field inflation is examined: their mass, as well as the fraction of Dark Matter they represent, is calculated for the models considered previously.
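For orientation, the PBH formation fraction at horizon re-entry is commonly estimated from the enhanced power spectrum with a Press–Schechter-type criterion; the expression below is the standard textbook form, and the thesis' exact conventions (threshold value, window function, transfer to density contrast) may differ:

```latex
% Standard Press--Schechter-type estimate, shown for orientation only.
\begin{equation}
  \beta(M) \simeq \int_{\delta_c}^{\infty}
    \frac{1}{\sqrt{2\pi}\,\sigma(M)}
    \exp\!\left(-\frac{\delta^2}{2\sigma^2(M)}\right) d\delta
  = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{\delta_c}{\sqrt{2}\,\sigma(M)}\right),
\end{equation}
where $\sigma^2(M)$ is the variance of the density contrast smoothed on the scale
associated with mass $M$ (sourced by the enhanced curvature power spectrum) and
$\delta_c \sim 0.4\text{--}0.6$ is the collapse threshold during radiation domination.
```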
Abstract:
In this work, the fundamental ideas behind studying the properties of QFTs with the functional Renormalization Group are presented and illustrated with some examples. First, the Wetterich equation for the effective average action and its flow in the local potential approximation (LPA) for a single scalar field is derived. This case is used to illustrate some techniques for solving the RG fixed-point equation and studying the properties of the critical theories in D dimensions: in particular, the shooting method for the ODE satisfied by the fixed-point potential, as well as the approach based on a polynomial truncation with a finite number of couplings, which is convenient for studying the critical exponents. We then study novel cases related to multi-field scalar theories, deriving the flow equations in the LPA truncation, both without assuming any global symmetry and specialising to cases with a given symmetry, using truncations based on polynomials of the symmetry invariants. This is used to study possible non-perturbative solutions for critical theories that extend known perturbative results obtained in the epsilon expansion below the upper critical dimension.
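For reference, the Wetterich equation mentioned above and its reduction in the local potential approximation for a single scalar field read (standard form; regulator conventions may differ from those used in the thesis):

```latex
% Standard form of the Wetterich equation and its single-scalar LPA reduction;
% regulator conventions may differ from those used in the thesis.
\begin{equation}
  \partial_t \Gamma_k[\phi]
  = \frac{1}{2}\,\mathrm{Tr}\!\left[\left(\Gamma_k^{(2)}[\phi] + R_k\right)^{-1}
      \partial_t R_k\right],
  \qquad t = \ln(k/\Lambda),
\end{equation}
which in the local potential approximation for a single scalar field in $D$ dimensions becomes
\begin{equation}
  \partial_t U_k(\phi)
  = \frac{1}{2} \int \frac{d^D q}{(2\pi)^D}\,
    \frac{\partial_t R_k(q^2)}{q^2 + R_k(q^2) + U_k''(\phi)} .
\end{equation}
```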