853 results for "Probabilistic decision process model"
Abstract:
The use of voice and data communication over mobile devices has grown significantly in recent years. This expansion brings inherent difficulties, such as the constant need to expand network capacity and to improve energy efficiency. In this context, the concept of green networks, which focuses on saving energy and reducing CO2 emissions, has been consolidating. This work proposes to validate a policy model based on a Markov decision process, aiming to optimize energy consumption, QoS and QoE when allocating users across macrocell and femtocell networks. To this end, the model was embedded in the NS-2 simulator, combining the analytical Markov solution with the flexibility of discrete-event simulation. The simulation results show that the policy achieved significant energy savings, improving energy efficiency by up to 4%, and also improved quality of service relative to the macrocell and femtocell networks, proving effective in directly influencing QoS and QoE metrics.
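The allocation policy above is derived from a Markov decision process. As a purely illustrative sketch (the states, rewards, and transition probabilities below are invented placeholders, not values from the work), value iteration over a toy macrocell/femtocell allocation MDP might look like:

```python
# Toy MDP: allocate an arriving user to a macrocell or femtocell.
# States, rewards, and transition probabilities are illustrative
# placeholders, not values from the paper.

def value_iteration(states, actions, P, R, gamma=0.9, eps=1e-6):
    """Standard value iteration; returns the optimal value function and policy."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                sum(P[s][a][s2] * (R[s][a] + gamma * V[s2]) for s2 in states)
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            break
    policy = {
        s: max(actions,
               key=lambda a: sum(P[s][a][s2] * (R[s][a] + gamma * V[s2])
                                 for s2 in states))
        for s in states
    }
    return V, policy

states = ["macro_busy", "macro_free"]
actions = ["macrocell", "femtocell"]
# Reward trades off QoS against energy cost (illustrative numbers):
# offloading to the femtocell pays off when the macrocell is congested.
R = {
    "macro_busy": {"macrocell": -1.0, "femtocell": 2.0},
    "macro_free": {"macrocell": 1.5, "femtocell": 1.0},
}
P = {
    "macro_busy": {"macrocell": {"macro_busy": 0.9, "macro_free": 0.1},
                   "femtocell": {"macro_busy": 0.4, "macro_free": 0.6}},
    "macro_free": {"macrocell": {"macro_busy": 0.5, "macro_free": 0.5},
                   "femtocell": {"macro_busy": 0.1, "macro_free": 0.9}},
}
V, policy = value_iteration(states, actions, P, R)
# With these numbers the policy offloads users to the femtocell only
# when the macrocell is congested.
```

In the paper itself, such a policy is computed analytically and then exercised inside NS-2; the sketch only shows the dynamic-programming core.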
Abstract:
The implementation of an Export Processing Zone (ZPE, from the Portuguese acronym) brings several benefits to the local, state and federal economy, but often only socioeconomic factors are considered, leaving aside several other factors that should be analyzed, such as the environment. In this context of industrialization and the struggle for sustainable development, this work proposes to incorporate the environmental variable into the decision process for establishing industrial areas, in particular the ZPE in the city of Fernandópolis, São Paulo state, Brazil, by examining several physical and environmental factors such as slope intervals, geological features, pedological factors and land use. Using multicriteria analysis, a model was elaborated in which these factors received a value proportional to their importance, supported by a GIS (Geographic Information System) tool and remote sensing products, such as images from the CBERS satellite and SRTM radar, showing the areas suited for industrial activities under the environmental conditions considered. This model may assist in making better decisions about the ZPE implementation area and in reducing the negative environmental impacts that would result from poorly planned locations.
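The multicriteria step described above amounts to a weighted linear combination of normalized factor scores per map cell. A minimal sketch, with invented factor weights and suitability scores (the study's actual weights are not given in the abstract):

```python
# Minimal weighted-overlay sketch for multicriteria suitability analysis.
# Factor names, scores (1 = unsuitable .. 5 = highly suitable) and weights
# are illustrative, not the values used in the study.

def suitability(cell_scores, weights):
    """Weighted linear combination of normalized factor scores for one cell."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[f] * cell_scores[f] for f in weights)

weights = {"slope": 0.35, "geology": 0.25, "soil": 0.20, "land_use": 0.20}

cell_a = {"slope": 5, "geology": 4, "soil": 4, "land_use": 3}  # flat, stable terrain
cell_b = {"slope": 1, "geology": 2, "soil": 3, "land_use": 2}  # steep, fragile terrain

score_a = suitability(cell_a, weights)  # 0.35*5 + 0.25*4 + 0.20*4 + 0.20*3 = 4.15
score_b = suitability(cell_b, weights)
```

In a GIS this combination is applied cell-by-cell to raster layers; the ranked result map highlights candidate areas for the industrial zone.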
Abstract:
Psychological factors are gaining more space in sports, and professionals in sport psychology are increasingly present in the sporting context. Seeking a better understanding of how leadership manifests in football among coaches of different categories, this study aimed to verify whether there is a preferred leadership style among football coaches and whether there are differences between the ideal and actual leadership styles reported by them. The methodology followed the descriptive research approach of Cervo and Bervian (2004), applying the Revised Leadership Scale for Sport (ELRE) in its ideal-profile and actual-profile versions, with the participation of twenty field football coaches (n = 20) working with male teams in the city of São Bernardo do Campo - SP. To process the data, Cronbach's alpha was calculated to verify the reliability of the scale, along with the means of the results, using SPSS version 17.0 for Windows. Of the participants, 30% hold a degree in Physical Education and have worked in football for an average of 8 years in different roles; the autocratic decision-making model prevailed, with an alpha of 0.87 for the ideal profile and 0.86 for the actual profile, so the scale is stable and reliable. We conclude that the autocratic decision model does not differ very significantly from the democratic decision model. Regarding interaction with the group, the situational model stood out, showing that coaches take situational factors into account.
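The reliability figures above are Cronbach's alpha values. For readers unfamiliar with the statistic, a minimal pure-Python computation on made-up item scores (not the coaches' actual responses):

```python
# Cronbach's alpha from raw item scores (pure-Python sketch; the data below
# is invented for illustration, not the study's actual responses).

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

def cronbach_alpha(items):
    """items: list of per-item score lists, all of equal length (one score per respondent).
    alpha = k/(k-1) * (1 - sum(item variances) / variance of respondent totals)."""
    k = len(items)
    item_vars = sum(variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return k / (k - 1) * (1 - item_vars / variance(totals))

# Three scale items answered by five hypothetical respondents (1-5 Likert).
items = [
    [4, 5, 3, 4, 5],
    [4, 4, 3, 5, 5],
    [5, 5, 2, 4, 4],
]
alpha = cronbach_alpha(items)  # ~0.81 for this toy data
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency, which is why the study's 0.87 and 0.86 support the claim that the scale is stable and reliable.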
Abstract:
Eucalyptus plantations occupy almost 20 million ha worldwide and exceed 3.7 million ha in Brazil alone. Improved genetics and silviculture have led to as much as a three-fold increase in productivity in Eucalyptus plantations in Brazil, and the large land area occupied by these highly productive ecosystems raises concern over their effect on local water supplies. As part of the Brazil Potential Productivity Project, we measured water use of Eucalyptus grandis x urophylla clones in rainfed and irrigated stands in two plantations differing in productivity. The Aracruz (lower productivity) site is located in the state of Espírito Santo and the Veracel (higher productivity) site in Bahia state. At each plantation, we measured stand water use using custom-built sap flow sensors and a calibration curve developed for the clones and probes utilized in the study. We also quantified changes in growth, leaf area and water use efficiency (the amount of wood produced per unit of water transpired). Measurements were conducted for one year during 2005 at Aracruz and from August through December 2005 at Veracel. Transpiration at both sites was high compared to other studies, but annual estimates at Aracruz for the rainfed treatment compared well (within 10%) with a process model calibrated for the Aracruz site. Annual water use at Aracruz was 1394 mm in rainfed treatments versus 1779 mm in irrigated treatments, accounting for approximately 67% and 58% of annual precipitation and irrigation inputs, respectively. Increased water use in the irrigated stands at Aracruz was associated with higher sapwood area, leaf area index and transpiration per unit leaf area, but there was no difference between treatments in the response of canopy conductance to air saturation deficit. Water use efficiency at the Aracruz site was likewise not influenced by irrigation.
During the period of overlapping measurements, the response to irrigation treatments at the more productive Veracel site was similar to Aracruz. Stand water use at the Veracel site totaled 975 mm and 1102 mm in rainfed and irrigated treatments during the 5-month measurement period respectively. Irrigated stands at Veracel also had higher leaf area with no difference in the response of canopy conductance with air saturation deficit between treatments. Water use efficiency was also unaffected by irrigation at Veracel. Results from this and other studies suggest that improved resource availability does not negatively impact water use efficiency but increased productivity of these plantations is associated with higher water use and should be given consideration during plantation management decision making processes aimed at increasing productivity. Published by Elsevier B.V.
Abstract:
A decision-analytic model is presented and analysed to assess the effectiveness and cost-effectiveness of routine vaccination against varicella and herpes zoster, or shingles. These diseases have as their common aetiological agent the varicella-zoster virus (VZV). Zoster is more likely to occur in older people with declining cell-mediated immunity. The general concern is that universal varicella vaccination might lead to more cases of zoster: with more vaccinated children, exposure of the general population to varicella infectives becomes smaller, and thus a larger proportion of older people will have weaker immunity to VZV, leading to more cases of zoster reactivation. Our compartment model shows that only two possible equilibria exist, one without varicella and the other where varicella and zoster both thrive. Threshold quantities to distinguish these cases are derived. Cost estimates for a possible herd vaccination program are discussed, indicating a possible tradeoff.
Abstract:
In electronic commerce, systems development is based on two fundamental types of models, business models and process models. A business model is concerned with value exchanges among business partners, while a process model focuses on operational and procedural aspects of business communication. Thus, a business model defines the what in an e-commerce system, while a process model defines the how. Business process design can be facilitated and improved by a method for systematically moving from a business model to a process model. Such a method would provide support for traceability, evaluation of design alternatives, and a seamless transition from analysis to realization. This work proposes a unified framework that can be used as a basis to analyze, interpret and understand the different concepts associated with different stages in e-commerce system development. In this thesis, we illustrate how UN/CEFACT’s recommended metamodels for business and process design can be analyzed, extended and then integrated into final solutions based on the proposed unified framework. Also, as an application of the framework, we demonstrate how process-modeling tasks can be facilitated in e-commerce system design. The proposed methodology, called BP3, stands for Business Process Patterns Perspective. The BP3 methodology uses a question-answer interface to capture different business requirements from the designers. It is based on pre-defined process patterns, and the final solution is generated by applying the captured business requirements by means of a set of production rules to complete the inter-process communication among these patterns.
Abstract:
Asset Management (AM) is a set of procedures operable at the strategic-tactical-operational level for the management of a physical asset's performance, associated risks and costs within its whole life-cycle. AM combines the engineering, managerial and informatics points of view. In addition to internal drivers, AM is driven by the demands of customers (social pull) and regulators (environmental mandates and economic considerations). AM can follow either a top-down or a bottom-up approach. Considering rehabilitation planning at the bottom-up level, the main issue is to rehabilitate the right pipe at the right time with the right technique. Finding the right pipe may be possible and practicable, but determining the timeliness of the rehabilitation and the choice of technique is far harder. It is a truism that rehabilitating an asset too early is unwise, just as doing it too late may entail extra expenses en route, in addition to the cost of the rehabilitation itself. One is confronted with a typical ‘Hamlet-esque’ dilemma – ‘to repair or not to repair’; or, put another way, ‘to replace or not to replace’. The decision in this case is governed by three factors, not necessarily interrelated – quality of customer service, costs and budget over the life cycle of the asset in question. The goal of replacement planning is to find the juncture in the asset’s life cycle where the cost of replacement is balanced by the rising maintenance costs and the declining level of service. System maintenance aims at improving performance and maintaining the asset in good working condition for as long as possible. Effective planning targets maintenance activities to meet these goals and minimize costly exigencies. The main objective of this dissertation is to develop a process model for asset replacement planning.
The aim of the model is to determine the optimal pipe replacement year by comparing, over time, the annual operating and maintenance costs of the existing asset with the annuity of the investment in a new equivalent pipe at the best market price. It is proposed that risk cost provides an appropriate framework for deciding the balance between investment in replacement and operational expenditure on maintenance. The model describes a practical approach to estimating when an asset should be replaced. A comprehensive list of criteria to be considered is outlined, the main criterion being a vis-à-vis comparison between maintenance and replacement expenditures. The costs to maintain the assets should be described by a cost function related to the asset type, the risks to the safety of people and property owing to the declining condition of the asset, and the predicted frequency of failures. The cost functions reflect the condition of the existing asset at the time the decision to maintain or replace is taken: age, level of deterioration, risk of failure. The process model is applied to the wastewater network of Oslo, the capital city of Norway, and uses available real-world information to forecast life-cycle costs of maintenance and rehabilitation strategies and to support infrastructure management decisions. The case study provides insight into the various definitions of ‘asset lifetime’ – service life, economic life and physical life. The results recommend that one common lifetime value should not be applied to all the pipelines in the stock for long-term investment planning; rather, it would be wiser to define different values for different cohorts of pipelines, to reduce the uncertainties associated with simplifying generalisations. It is envisaged that the more criteria the municipality is able to include when estimating maintenance costs for the existing assets, the more precise the estimate of the expected service life will be.
The ability to include social costs makes it possible to compute the asset life based not only on its physical characterisation but also on the sensitivity of network areas to the social impact of failures. This type of economic analysis is very sensitive to model parameters that are difficult to determine accurately. The main value of this approach is the effort to demonstrate that it is possible to include, in decision-making, factors such as the cost of the risk associated with a decline in level of performance, the level of this deterioration and the asset’s depreciation rate, without looking at age as the sole criterion for making replacement decisions.
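The maintain-versus-replace comparison described in this abstract can be sketched numerically: replace in the first year when the rising maintenance cost of the existing pipe exceeds the annuity (equivalent annual cost) of investing in a new pipe. All figures below are illustrative, not values from the Oslo case study:

```python
# Sketch of the replace-or-maintain comparison: replace when the (rising)
# annual maintenance cost of the existing pipe exceeds the annuity of
# investing in a new equivalent pipe. All numbers are illustrative.

def annuity(investment, rate, lifetime_years):
    """Equivalent annual cost of an investment (capital recovery factor)."""
    r = rate
    return investment * r * (1 + r) ** lifetime_years / ((1 + r) ** lifetime_years - 1)

def first_replacement_year(maintenance_costs, investment, rate, lifetime_years):
    """First year in which maintenance exceeds the new-pipe annuity, else None."""
    a = annuity(investment, rate, lifetime_years)
    for year, cost in enumerate(maintenance_costs, start=1):
        if cost > a:
            return year
    return None

# Maintenance cost grows as the pipe deteriorates (illustrative series, per year).
maintenance = [2000 * 1.15 ** t for t in range(30)]
year = first_replacement_year(maintenance, investment=100_000,
                              rate=0.04, lifetime_years=80)
```

The thesis enriches this basic comparison with risk cost (probability and consequence of failure) and, where data allow, social costs; the sketch shows only the core annuity-versus-maintenance trade-off.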
Abstract:
This research has been triggered by an emergent trend in customer behavior: customers have rapidly expanded their channel experiences and preferences beyond traditional channels (such as stores), and they expect the companies with which they do business to be present on all these channels. This evidence has produced increasing interest in multichannel customer behavior and has motivated several researchers to study the dynamics of customers’ channel choices in multichannel environments. We study how the consumer decision process for channel choice and response to marketing communications evolves for a cohort of new customers. We assume a newly acquired customer’s decisions are described by a “trial” model, but the customer’s choice process evolves to a “post-trial” model as the customer learns his or her preferences and becomes familiar with the firm’s marketing efforts. The trial and post-trial decision processes are each described by different multinomial logit choice models, and the evolution from the trial to the post-trial model is determined by a customer-level geometric distribution that captures the time it takes the customer to make the transition. We utilize data for a major retailer who sells in three channels – retail store, the Internet, and catalog. The model is estimated using Bayesian methods that allow for cross-customer heterogeneity. This allows us to obtain distinct parameter estimates for the trial and post-trial stages and to estimate the speed of this transition at the individual level. The results show, for example, that the customer decision process indeed evolves over time. Customers differ in the duration of the trial period, and marketing has a different impact on channel choice in the trial and post-trial stages. Furthermore, we show that some people switch channel decision processes while others do not, and we find that several factors affect the probability of switching decision processes.
Insights from this study can help managers tailor their marketing communication strategy as customers gain channel choice experience. Managers may also gain insights into the timing of direct marketing communications. They can predict the duration of the trial phase at the individual level, detecting customers with a quick, long or even absent trial phase. They can even predict whether or not a customer will change decision processes over time, and they can influence the switching process using specific marketing tools.
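The trial/post-trial mechanism described in this abstract can be sketched as a simulation: multinomial-logit channel choice, with a geometrically distributed trial length. The utilities and switching probability below are invented for illustration, not estimates from the study:

```python
import math
import random

# Sketch of a two-stage channel-choice process: a customer starts with
# "trial" multinomial-logit preferences, then switches to "post-trial"
# preferences after a geometrically distributed number of purchases.
# All utilities and the switch probability are illustrative.

CHANNELS = ["store", "internet", "catalog"]

def logit_probs(utilities):
    """Multinomial-logit choice probabilities from channel utilities."""
    exps = [math.exp(u) for u in utilities]
    z = sum(exps)
    return [e / z for e in exps]

def simulate_choices(n_purchases, trial_u, post_u, p_switch, rng):
    """After each purchase, switch to the post-trial model with probability
    p_switch, so trial length follows a Geometric(p_switch) distribution."""
    probs = logit_probs(trial_u)
    choices = []
    in_trial = True
    for _ in range(n_purchases):
        choices.append(rng.choices(CHANNELS, weights=probs)[0])
        if in_trial and rng.random() < p_switch:
            in_trial = False
            probs = logit_probs(post_u)
    return choices

rng = random.Random(42)
history = simulate_choices(20, trial_u=[1.0, 0.2, 0.1],
                           post_u=[0.2, 1.5, 0.1], p_switch=0.3, rng=rng)
```

The paper estimates the logit parameters and individual-level switching probabilities with Bayesian methods; the sketch only shows the generative process being estimated.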
Abstract:
This thesis presents a creative and practical approach to dealing with the problem of selection bias. Selection bias may be the most vexing problem in program evaluation, or in any line of research that attempts to assert causality. Some of the greatest minds in economics and statistics have scrutinized the problem of selection bias, with the resulting approaches – Rubin’s Potential Outcome Approach (Rosenbaum and Rubin, 1983; Rubin, 1991, 2001, 2004) and Heckman’s Selection Model (Heckman, 1979) – being widely accepted and used as the best fixes. These solutions to the bias that arises, in particular, from self-selection are imperfect, and many researchers, when feasible, reserve their strongest causal inference for data from experimental rather than observational studies. The innovative aspect of this thesis is to propose a data transformation that allows measuring and testing, in an automatic and multivariate way, the presence of selection bias. The approach involves the construction of a multi-dimensional conditional space of the X matrix in which the bias associated with the treatment assignment has been eliminated. Specifically, we propose the use of a partial dependence analysis of the X-space as a tool for investigating the dependence relationship between a set of observable pre-treatment categorical covariates X and a treatment indicator variable T, in order to obtain a measure of bias according to their dependence structure. The measure of selection bias is then expressed in terms of the inertia due to the dependence between X and T that has been eliminated. Given the measure of selection bias, we propose a multivariate test of imbalance to check whether the detected bias is significant, using the asymptotic distribution of the inertia due to T (Estadella et al., 2005) and preserving the multivariate nature of the data.
Further, we propose the use of a clustering procedure as a tool to find groups of comparable units on which to estimate local causal effects, and the use of the multivariate test of imbalance as a stopping rule in choosing the best cluster solution. The method is nonparametric: it does not call for modeling the data based on some underlying theory or assumption about the selection process, but instead uses the existing variability within the data, letting the data speak. The idea of proposing this multivariate approach to measuring selection bias and testing balance comes from the observation that, in applied research, all aspects of multivariate balance not represented in the univariate variable-by-variable summaries are ignored. The first part contains an introduction to evaluation methods as part of public and private decision processes and a review of the evaluation methods literature. Attention is focused on Rubin’s Potential Outcome Approach, matching methods, and, briefly, Heckman’s Selection Model. The second part focuses on some resulting limitations of conventional methods, with particular attention to the problem of how to test balance correctly. The third part contains the proposed original contribution, a simulation study that checks the performance of the method for a given dependence setting, and an application to a real data set. Finally, we discuss, conclude and outline future perspectives.
Abstract:
The recent default of important Italian agri-business companies provides a challenging issue to be investigated through an appropriate scientific approach. The events involving CIRIO, FERRUZZI and PARMALAT raise an important research question: what are the determinants of performance for companies in the Italian agri-food sector? My aim is not to investigate all the factors that are relevant in explaining performance. Performance depends on a wide set of political, social and economic variables that are strongly interconnected and often very difficult to express with formal or mathematical tools. Rather, in my thesis I mainly focus on those aspects that are strictly related to the governance and ownership structure of agri-food companies, a strand of research that has been quite neglected by previous scholars. The conceptual framework from which I start, to justify the existence of a relationship between a company's ownership structure, governance and performance, is the model set up by Airoldi and Zattoni (2005). In particular, the authors investigate the complex relationships arising within the company and between the company and the environment that can produce different strategies and performances. They do not try to find the “best” ownership structure; rather, they outline which variables are connected and how they could vary endogenously within the whole economic system. Although Airoldi and Zattoni’s model highlights the existence of a relationship between ownership and structure that is crucial for the setup of this thesis, the authors do not apply quantitative analyses to verify the magnitude, sign and causal direction of the impact. To fill this gap, we start from the literature investigating the determinants of performance. Even in this strand of research, studies analysing the relationship between different forms of ownership and performance are still lacking.
In this thesis, after a brief description of the Italian agri-food sector and an introduction with a short explanation of the definitions of performance and ownership structure, I implement a model in which the performance level (interpreted here as Return on Investments and Return on Sales) is related to variables previously identified by the literature as important, such as financial variables (cash and leverage indices), firm location (North, Centre or South Italy), power concentration (lower than 25%, between 25% and 50%, and between 50% and 100% of ownership control) and the specific agri-food sector (agriculture, food and beverage). Moreover, we add a categorical variable representing different forms of ownership structure (public limited company, limited liability company, cooperative) that is the core of our study. All these variables are first examined in a preliminary descriptive analysis. As in many previous contributions, we apply a panel least squares analysis to 199 Italian firms over the period 1998-2007, with data taken from the Bureau van Dijk dataset. We apply two different models in which the dependent variables are, respectively, the Return on Investments (ROI) and Return on Sales (ROS) indicators. Not surprisingly, we find that companies located in North Italy, the richest area of the country, perform better than those located in the Centre and South of Italy. In contrast with the Modigliani-Miller theorem, financial variables can be significant, and the specific sector within the agri-food market can play a relevant role. As for power concentration, we find that firms with either strong ownership control (higher than 50%) or fragmented concentration (lower than 25%) perform better. This result suggests that “hybrid” forms of concentration may hamper the decision process.
As for our key variables representing ownership structure, we find that public limited companies and limited liability companies perform better than cooperatives. This is easily explained by the fact that the law establishes that cooperatives are less profit-oriented. Setting cooperatives aside, public limited companies perform better than limited liability companies and show a more stable path over time. Results are quite consistent when we consider both ROI and ROS as dependent variables. These results should not lead us to claim that the public limited company is the “best” among all possible governance structures. First, every governance solution should be considered according to the specific situation. Second, more robustness analyses are needed to confirm our results. At this stage, we deem that these findings, the model setup and our approach represent original contributions that could stimulate fruitful future studies investigating the intriguing effect of ownership structure on performance levels.
Abstract:
In this thesis we address a collection of Network Design problems which are strongly motivated by applications from Telecommunications, Logistics and Bioinformatics. In most cases we justify the need of taking into account uncertainty in some of the problem parameters, and different Robust optimization models are used to hedge against it. Mixed integer linear programming formulations along with sophisticated algorithmic frameworks are designed, implemented and rigorously assessed for the majority of the studied problems. The obtained results yield the following observations: (i) relevant real problems can be effectively represented as (discrete) optimization problems within the framework of network design; (ii) uncertainty can be appropriately incorporated into the decision process if a suitable robust optimization model is considered; (iii) optimal, or nearly optimal, solutions can be obtained for large instances if a tailored algorithm, that exploits the structure of the problem, is designed; (iv) a systematic and rigorous experimental analysis allows one to understand both the characteristics of the obtained (robust) solutions and the behavior of the proposed algorithm.
Abstract:
In many complex and dynamic domains, the ability to generate and then select the appropriate course of action is based on the decision maker's "reading" of the situation--in other words, their ability to assess the situation and predict how it will evolve over the next few seconds. Current theories regarding option generation during the situation assessment and response phases of decision making offer contrasting views on the cognitive mechanisms that support superior performance. The Recognition-Primed Decision-making model (RPD; Klein, 1989) and Take-The-First heuristic (TTF; Johnson & Raab, 2003) suggest that superior decisions are made by generating few options, and then selecting the first option as the final one. Long-Term Working Memory theory (LTWM; Ericsson & Kintsch, 1995), on the other hand, posits that skilled decision makers construct rich, detailed situation models, and that as a result, skilled performers should have the ability to generate more of the available task-relevant options. The main goal of this dissertation was to use these theories about option generation as a way to further the understanding of how police officers anticipate a perpetrator's actions, and make decisions about how to respond, during dynamic law enforcement situations. An additional goal was to gather information that can be used, in the future, to design training based on the anticipation skills, decision strategies, and processes of experienced officers. Two studies were conducted to achieve these goals. Study 1 identified video-based law enforcement scenarios that could be used to discriminate between experienced and less-experienced police officers, in terms of their ability to anticipate the outcome. The discriminating scenarios were used as the stimuli in Study 2; 23 experienced and 26 less-experienced police officers observed temporally-occluded versions of the scenarios, and then completed assessment and response option-generation tasks. 
The results provided mixed support for the nature of option generation in these situations. Consistent with RPD and TTF, participants typically selected the first-generated option as their final one, and did so during both the assessment and response phases of decision making. Consistent with LTWM theory, participants--regardless of experience level--generated more task-relevant assessment options than task-irrelevant options. However, an expected interaction between experience level and option-relevance was not observed. Collectively, the two studies provide a deeper understanding of how police officers make decisions in dynamic situations. The methods developed and employed in the studies can be used to investigate anticipation and decision making in other critical domains (e.g., nursing, military). The results are discussed in relation to how they can inform future studies of option-generation performance, and how they could be applied to develop training for law enforcement officers.
Abstract:
The aim of our study was to develop a modeling framework suitable to quantify the incidence, absolute number and economic impact of osteoporosis-attributable hip, vertebral and distal forearm fractures, with a particular focus on change over time, and with application to the situation in Switzerland from 2000 to 2020. A Markov process model was developed and analyzed by Monte Carlo simulation. A demographic scenario provided by the Swiss Federal Statistical Office and various Swiss and international data sources were used as model inputs. Demographic and epidemiologic input parameters were reproduced correctly, confirming the internal validity of the model. The proportion of the Swiss population aged 50 years or over will rise from 33.3% in 2000 to 41.3% in 2020. At the total population level, osteoporosis-attributable incidence will rise from 1.16 to 1.54 per 1,000 person-years in the case of hip fracture, from 3.28 to 4.18 per 1,000 person-years in the case of radiographic vertebral fracture, and from 0.59 to 0.70 per 1,000 person-years in the case of distal forearm fracture. Osteoporosis-attributable hip fracture numbers will rise from 8,375 to 11,353, vertebral fracture numbers will rise from 23,584 to 30,883, and distal forearm fracture numbers will rise from 4,209 to 5,186. Population-level osteoporosis-related direct medical inpatient costs per year will rise from 713.4 million Swiss francs (CHF) to CHF946.2 million. These figures correspond to 1.6% and 2.2% of Swiss health care expenditures in 2000. The modeling framework described can be applied to a wide variety of settings. It can be used to assess the impact of new prevention, diagnostic and treatment strategies. In Switzerland incidences of osteoporotic hip, vertebral and distal forearm fracture will rise by 33%, 27%, and 19%, respectively, between 2000 and 2020, if current prevention and treatment patterns are maintained. Corresponding absolute fracture numbers will rise by 36%, 31%, and 23%. 
Related direct medical inpatient costs are predicted to increase by 33%; however, this estimate is subject to uncertainty due to limited availability of input data.
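The projected percentage increases quoted in this abstract follow directly from the reported 2000 and 2020 figures, as a quick arithmetic check confirms:

```python
# Recomputing the projected percentage increases from the figures
# reported in the abstract (2000 vs. 2020).

def pct_increase(v2000, v2020):
    return round(100 * (v2020 - v2000) / v2000)

incidence = {  # per 1,000 person-years
    "hip": (1.16, 1.54),
    "vertebral": (3.28, 4.18),
    "forearm": (0.59, 0.70),
}
counts = {  # absolute fracture numbers
    "hip": (8375, 11353),
    "vertebral": (23584, 30883),
    "forearm": (4209, 5186),
}
incidence_rise = {k: pct_increase(*v) for k, v in incidence.items()}  # 33/27/19%
count_rise = {k: pct_increase(*v) for k, v in counts.items()}         # 36/31/23%
cost_rise = pct_increase(713.4, 946.2)  # direct inpatient costs, CHF million: 33%
```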
Abstract:
QUESTION UNDER STUDY The aim of this study was to evaluate the cost-effectiveness of ticagrelor and generic clopidogrel as add-on therapy to acetylsalicylic acid (ASA) in patients with acute coronary syndrome (ACS), from a Swiss perspective. METHODS Based on the PLATelet inhibition and patient Outcomes (PLATO) trial, one-year mean healthcare costs per patient treated with ticagrelor or generic clopidogrel were analysed from a payer perspective in 2011. A two-part decision-analytic model estimated treatment costs, quality-adjusted life years (QALYs), life years and the cost-effectiveness of ticagrelor and generic clopidogrel in patients with ACS up to a lifetime at a discount of 2.5% per annum. Sensitivity analyses were performed. RESULTS Over a patient's lifetime, treatment with ticagrelor generates an additional 0.1694 QALYs and 0.1999 life years at a cost of CHF 260 compared with generic clopidogrel. This results in an Incremental Cost Effectiveness Ratio (ICER) of CHF 1,536 per QALY and CHF 1,301 per life year gained. Ticagrelor dominated generic clopidogrel over the five-year and one-year periods with treatment generating cost savings of CHF 224 and 372 while gaining 0.0461 and 0.0051 QALYs and moreover 0.0517 and 0.0062 life years, respectively. Univariate sensitivity analyses confirmed the dominant position of ticagrelor in the first five years and probabilistic sensitivity analyses showed a high probability of cost-effectiveness over a lifetime. CONCLUSION During the first five years after ACS, treatment with ticagrelor dominates generic clopidogrel in Switzerland. Over a patient's lifetime, ticagrelor is highly cost-effective compared with generic clopidogrel, proven by ICERs significantly below commonly accepted willingness-to-pay thresholds.
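The lifetime ICER reported in this abstract is simply the incremental cost divided by the incremental effect. Recomputing from the rounded increments quoted above (the small difference from the published CHF 1,536 reflects rounding of the reported inputs):

```python
# Incremental Cost-Effectiveness Ratio from the increments quoted in the
# abstract: CHF 260 extra cost, 0.1694 extra QALYs, 0.1999 extra life years.

def icer(delta_cost, delta_effect):
    """ICER = incremental cost / incremental effect."""
    return delta_cost / delta_effect

cost_per_qaly = icer(260, 0.1694)       # ~CHF 1,535 per QALY gained
cost_per_life_year = icer(260, 0.1999)  # ~CHF 1,301 per life year gained
```

Both values sit far below commonly cited willingness-to-pay thresholds, which is why the abstract concludes ticagrelor is highly cost-effective over a lifetime; over the one- and five-year horizons it dominates (cheaper and more effective), so no ICER is computed there.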
Explaining Emergence and Consequences of Specific Formal Controls in IS Outsourcing – A Process-View
Abstract:
IS outsourcing projects often fail to achieve project goals. To inhibit this failure, managers need to design formal controls that are tailored to the specific contextual demands. However, the dynamic and uncertain nature of IS outsourcing projects makes the design of such specific formal controls at the outset of a project challenging. Hence, the process of translating high-level project goals into specific formal controls becomes crucial for success or failure of IS outsourcing projects. Based on a comparative case study of four IS outsourcing projects, our study enhances current understanding of such translation processes and their consequences by developing a process model that explains the success or failure to achieve high-level project goals as an outcome of two unique translation patterns. This novel process-based explanation for how and why IS outsourcing projects succeed or fail has important implications for control theory and IS project escalation literature.