13 results for Probabilistic decision process model

in AMS Tesi di Dottorato - Alm@DL - Universit


Relevance:

100.00%

Publisher:

Abstract:

In this research project, I integrate two streams of research on strategic decision making in international firms: upper echelons, or top management team (TMT), internationalization research and international strategic decision making process research. Both streams have evolved independently in the international business literature, but there is clear potential in combining them. The first empirical paper, “TMT internationalization and international strategic decision making process: a decision level analysis of rationality, speed, and performance”, explores the influence of TMT internationalization on strategic decision rationality and speed and, subsequently, their effect on international strategic decision effectiveness (performance). The results show that TMT internationalization is positively related to decision effectiveness and that this relationship is mediated by decision rationality, while the hypotheses regarding the association between TMT internationalization and decision speed, and the mediating effect of speed, were not supported. The second paper, “TMT internationalization and international strategic decision rationality: the mediating role of international information”, is a simple but logical extension of the first. The first paper showed that TMT internationalization has a significant positive effect on international strategic decision rationality; the second shows explicitly that this effect comes from two sources: international experience (personal international knowledge and information) and international information collected through managerial international contacts. For this research project, I collected data from international software firms in Pakistan.
My research contributes to the literature on upper echelons theory and strategic decision making in the context of international business by explicitly examining the link between TMT internationalization and the characteristics of the strategic decision making process (i.e. rationality and speed) in international firms, and the possible mediating effect of these characteristics on performance.
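The mediation structure described above (TMT internationalization affecting effectiveness through decision rationality) can be sketched with a standard two-regression indirect-effect computation. This is a minimal illustration on simulated data, not the thesis data: all variable names and coefficients are invented assumptions.

```python
import numpy as np

# Synthetic mediation sketch: X (TMT internationalization) -> M (decision
# rationality) -> Y (decision effectiveness). Coefficients are made up.
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                        # TMT internationalization
m = 0.6 * x + rng.normal(size=n)              # rationality, partly driven by X
y = 0.5 * m + 0.1 * x + rng.normal(size=n)    # effectiveness, mostly via M

def ols_coefs(target, *predictors):
    """OLS coefficients (intercept first) via least squares."""
    X = np.column_stack([np.ones(len(target))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return beta

a = ols_coefs(m, x)[1]         # X -> M path
b = ols_coefs(y, x, m)[2]      # M -> Y path, controlling for X
c_total = ols_coefs(y, x)[1]   # total effect of X on Y
indirect = a * b               # mediated (indirect) effect
print(f"total={c_total:.2f}, indirect={indirect:.2f}, direct={c_total - indirect:.2f}")
```

With these generating values, most of the total effect is recovered as the indirect (mediated) component, which is the pattern the first paper reports for rationality.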

Relevance:

100.00%

Publisher:

Abstract:

Asset Management (AM) is a set of procedures, operable at the strategic, tactical and operational levels, for managing a physical asset's performance, associated risks and costs over its whole life cycle. AM combines engineering, managerial and informatics points of view. In addition to internal drivers, AM is driven by the demands of customers (social pull) and regulators (environmental mandates and economic considerations). AM can follow either a top-down or a bottom-up approach. Considering rehabilitation planning at the bottom-up level, the main issue is to rehabilitate the right pipe at the right time with the right technique. Finding the right pipe may be possible and practicable, but determining the timeliness of the rehabilitation and choosing the technique is far less straightforward. It is a truism that rehabilitating an asset too early is unwise, just as doing it too late may entail extra expenses en route, in addition to the cost of the rehabilitation itself. One is confronted with a typical Hamlet-like dilemma, 'to repair or not to repair', or, put another way, 'to replace or not to replace'. The decision is governed by three factors, not necessarily interrelated: quality of customer service, costs, and budget over the life cycle of the asset in question. The goal of replacement planning is to find the juncture in the asset's life cycle where the cost of replacement is balanced by the rising maintenance costs and the declining level of service. System maintenance aims at improving performance and keeping the asset in good working condition for as long as possible. Effective planning targets maintenance activities to meet these goals and to minimize costly exigencies. The main objective of this dissertation is to develop a process model for asset replacement planning.
The aim of the model is to determine the optimal pipe replacement year by comparing, over time, the annual operating and maintenance costs of the existing asset with the annuity of the investment in a new equivalent pipe at the best market price. It is proposed that risk cost provides an appropriate framework for balancing the investment in replacing an asset against the operational expenditure of maintaining it. The model describes a practical approach to estimating when an asset should be replaced. A comprehensive list of criteria to be considered is outlined, the main one being a vis-à-vis comparison between maintenance and replacement expenditures. The cost of maintaining an asset should be described by a cost function related to the asset type, the risks to the safety of people and property owing to the asset's declining condition, and the predicted frequency of failures. The cost functions reflect the condition of the existing asset at the time the decision to maintain or replace is taken: age, level of deterioration, risk of failure. The process model is applied to the wastewater network of Oslo, the capital of Norway, and uses available real-world information to forecast the life-cycle costs of maintenance and rehabilitation strategies and to support infrastructure management decisions. The case study provides insight into the various definitions of 'asset lifetime': service life, economic life and physical life. The results recommend that one common lifetime value should not be applied to all the pipelines in the stock for long-term investment planning; rather, it is wiser to define different values for different cohorts of pipelines, reducing the uncertainties associated with simplifying generalisations. It is envisaged that the more criteria the municipality is able to include when estimating maintenance costs for existing assets, the more precise the estimate of expected service life will be.
The ability to include social costs makes it possible to compute the asset life based not only on its physical characterisation but also on the sensitivity of network areas to the social impact of failures. This type of economic analysis is very sensitive to model parameters that are difficult to determine accurately. The main value of the approach is in demonstrating that decision making can incorporate factors such as the cost of the risk associated with a decline in the level of performance, the extent of this deterioration, and the asset's depreciation rate, without treating age as the sole criterion for replacement decisions.
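The comparison rule at the heart of the model (replace when the old asset's annual costs overtake the annuity of a new pipe) can be sketched in a few lines. The numbers below are purely illustrative assumptions, not values from the Oslo case study.

```python
# Replace in the first year when the old pipe's annual O&M-plus-risk cost
# exceeds the annuity of investing in a new equivalent pipe.

def annuity(capital, rate, lifetime):
    """Constant annual payment equivalent to an up-front investment."""
    return capital * rate / (1 - (1 + rate) ** -lifetime)

def replacement_year(capital, rate, lifetime, base_om, growth, risk_cost):
    """First year the old asset's yearly cost exceeds the new pipe's annuity."""
    threshold = annuity(capital, rate, lifetime)
    year = 0
    while True:
        year += 1
        old_cost = base_om * (1 + growth) ** year + risk_cost(year)
        if old_cost > threshold:
            return year

# Assumed inputs: a 100 kEUR new pipe, 4% discount rate, 60-year design life;
# O&M starting at 2 kEUR/yr growing 6%/yr, plus a risk cost rising with age.
year = replacement_year(100.0, 0.04, 60,
                        base_om=2.0, growth=0.06,
                        risk_cost=lambda t: 0.02 * t)
print("optimal replacement year:", year)
```

The rising maintenance-plus-risk curve crossing the flat annuity line is exactly the "juncture" the abstract describes.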

Relevance:

100.00%

Publisher:

Abstract:

This research has been triggered by an emergent trend in customer behavior: customers have rapidly expanded their channel experiences and preferences beyond traditional channels (such as stores) and expect the companies they do business with to be present on all of them. This evidence has produced increasing interest in multichannel customer behavior and has motivated several researchers to study the dynamics of customers' channel choices in multichannel environments. We study how the consumer decision process for channel choice and response to marketing communications evolves for a cohort of new customers. We assume a newly acquired customer's decisions are described by a “trial” model, but that the choice process evolves to a “post-trial” model as the customer learns his or her preferences and becomes familiar with the firm's marketing efforts. The trial and post-trial decision processes are each described by a different multinomial logit choice model, and the evolution from the trial to the post-trial model is governed by a customer-level geometric distribution that captures the time the transition takes. We use data from a major retailer who sells through three channels: retail store, the Internet, and catalog. The model is estimated using Bayesian methods that allow for cross-customer heterogeneity. This gives us distinct parameter estimates for the trial and post-trial stages and an estimate of the speed of the transition at the individual level. The results show, for example, that the customer decision process does indeed evolve over time. Customers differ in the duration of the trial period, and marketing has a different impact on channel choice in the trial and post-trial stages. Furthermore, we show that some people switch channel decision processes while others do not, and we find that several factors affect the probability of switching.
Insights from this study can help managers tailor their marketing communication strategy as customers gain channel choice experience, and can inform the timing of direct marketing communications. Managers can predict the duration of the trial phase at the individual level, detecting customers with a quick, long or even absent trial phase. They can even predict whether a customer will change his or her decision process over time, and they can influence the switching process using specific marketing tools.
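The two-regime structure described above can be sketched as a trial logit and a post-trial logit, blended by the geometric probability that the customer has not yet switched regimes by purchase occasion t. All utilities and the switching rate below are made-up values, not thesis estimates.

```python
import math

CHANNELS = ["store", "internet", "catalog"]

def logit_probs(utilities):
    """Multinomial logit choice probabilities from channel utilities."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

def choice_probs(t, trial_u, post_u, lam):
    """Blend trial and post-trial logits at purchase occasion t (t = 1, 2, ...).

    P(still in trial at occasion t) = (1 - lam) ** (t - 1): the customer
    switches regimes after each occasion with probability lam (geometric)."""
    w = (1 - lam) ** (t - 1)
    p_trial = logit_probs(trial_u)
    p_post = logit_probs(post_u)
    return [w * a + (1 - w) * b for a, b in zip(p_trial, p_post)]

# Early on the store dominates; post-trial this customer favors the Internet.
trial_u, post_u = [1.0, 0.0, -0.5], [0.0, 1.2, -0.8]
for t in (1, 5, 20):
    probs = choice_probs(t, trial_u, post_u, lam=0.2)
    print(t, {c: round(p, 2) for c, p in zip(CHANNELS, probs)})
```

A small lam here plays the role of a long individual trial phase; lam near 1 corresponds to the "absent trial phase" customers mentioned above.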

Relevance:

100.00%

Publisher:

Abstract:

This thesis presents a creative and practical approach to the problem of selection bias, perhaps the most vexing problem in program evaluation, or in any line of research that attempts to assert causality. Some of the greatest minds in economics and statistics have scrutinized it, and the resulting approaches, Rubin's potential outcome approach (Rosenbaum and Rubin, 1983; Rubin, 1991, 2001, 2004) and Heckman's selection model (Heckman, 1979), are widely accepted and used as the best fixes. These solutions to the bias that arises, in particular, from self-selection are imperfect, and many researchers, when feasible, reserve their strongest causal inference for data from experimental rather than observational studies. The innovative aspect of this thesis is to propose a data transformation that allows measuring and testing the presence of selection bias in an automatic and multivariate way. The approach involves constructing a multi-dimensional conditional space of the X matrix in which the bias associated with the treatment assignment has been eliminated. Specifically, we propose a partial dependence analysis of the X-space as a tool for investigating the dependence relationship between a set of observable pre-treatment categorical covariates X and a treatment indicator variable T, in order to obtain a measure of bias based on their dependence structure. The measure of selection bias is then expressed in terms of the inertia due to the dependence between X and T that has been eliminated. Given this measure, we propose a multivariate test of imbalance to check whether the detected bias is significant, using the asymptotic distribution of the inertia due to T (Estadella et al., 2005) and preserving the multivariate nature of the data.
Further, we propose a clustering procedure as a tool to find groups of comparable units on which to estimate local causal effects, with the multivariate test of imbalance serving as a stopping rule in choosing the best cluster solution. The method is non-parametric: it does not call for modeling the data on the basis of some underlying theory or assumption about the selection process, but instead exploits the existing variability within the data and lets the data speak. The idea of proposing this multivariate approach to measuring selection bias and testing balance comes from the observation that, in applied research, all aspects of multivariate balance not represented in the univariate variable-by-variable summaries are ignored. The first part of the thesis contains an introduction to evaluation methods as part of public and private decision processes and a review of the evaluation literature, focusing on Rubin's potential outcome approach, matching methods, and, briefly, Heckman's selection model. The second part focuses on the limitations of conventional methods, with particular attention to the problem of how to test balance correctly. The third part contains the original contribution, a simulation study that assesses the performance of the method for a given dependence setting, and an application to a real data set. Finally, we discuss, conclude and outline future perspectives.
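The inertia idea used above as a dependence measure can be illustrated on a toy contingency table: the total inertia of the X-by-T cross-tabulation equals chi-square divided by n, and it is zero exactly when the covariate is balanced across treatment arms. The data below are invented, not from the thesis.

```python
from collections import Counter

def total_inertia(x_labels, t_labels):
    """Chi-square statistic divided by n for two categorical variables."""
    n = len(x_labels)
    joint = Counter(zip(x_labels, t_labels))
    px = Counter(x_labels)
    pt = Counter(t_labels)
    inertia = 0.0
    for xv in px:
        for tv in pt:
            observed = joint.get((xv, tv), 0) / n
            expected = (px[xv] / n) * (pt[tv] / n)
            inertia += (observed - expected) ** 2 / expected
    return inertia

# Balanced design: each covariate category equally represented in both arms.
balanced = total_inertia(["a", "b"] * 50, [0] * 50 + [1] * 50)
# Self-selected design: category "b" is over-represented among the treated.
biased = total_inertia(["a"] * 60 + ["b"] * 40, [0] * 50 + [1] * 50)
print(balanced, biased)
```

In the multivariate setting of the thesis the same quantity is computed over the joint structure of all pre-treatment covariates rather than one variable at a time; this single-covariate version only conveys the intuition.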

Relevance:

100.00%

Publisher:

Abstract:

The recent default of important Italian agri-business companies provides a challenging issue to be investigated with an appropriate scientific approach. The events involving CIRIO, FERRUZZI and PARMALAT raise an important research question: what are the determinants of performance for companies in the Italian agri-food sector? My aim is not to investigate all the factors relevant to explaining performance; performance depends on a wide set of political, social and economic variables that are strongly interconnected and often very difficult to express with formal or mathematical tools. Rather, in my thesis I focus on those aspects strictly related to the governance and ownership structure of agri-food companies, a strand of research that previous scholars have largely neglected. The conceptual framework from which I start, to justify the existence of a relationship between a company's ownership structure, governance and performance, is the model of Airoldi and Zattoni (2005). The authors investigate the complex relationships arising within the company, and between the company and its environment, that can lead to different strategies and performances. They do not try to find the "best" ownership structure; rather, they outline which variables are connected and how they could vary endogenously within the whole economic system. Although the Airoldi and Zattoni model highlights a relationship between ownership and structure that is crucial for the setup of this thesis, the authors do not apply quantitative analyses to verify the magnitude, sign and causal direction of the impact. To fill this gap, I start from the literature investigating the determinants of performance; even in this strand of research, studies analysing the relationship between different forms of ownership and performance are still lacking.
In this thesis, after a brief description of the Italian agri-food sector and an introduction defining performance and ownership structure, I implement a model in which the performance level (interpreted here as Return on Investments and Return on Sales) is related to variables the literature has previously identified as important: financial variables (cash and leverage indices), firm location (North, Centre or South Italy), power concentration (below 25%, between 25% and 50%, and between 50% and 100% of ownership control) and the specific agri-food sub-sector (agriculture, food and beverage). Moreover, we add a categorical variable representing different forms of ownership structure (public limited company, limited liability company, cooperative), which is the core of our study. All these variables are examined in a preliminary descriptive analysis. As in many previous contributions, we apply a panel least squares analysis to 199 Italian firms over the period 1998-2007, with data taken from the Bureau van Dijk dataset. We estimate two models in which the dependent variables are, respectively, the Return on Investments (ROI) and Return on Sales (ROS) indicators. Not surprisingly, we find that companies located in North Italy, the richest area of the country, perform better than those located in the Centre and South. In contrast with the Modigliani-Miller theorem, financial variables can be significant, and the specific sub-sector within the agri-food market can play a relevant role. As for power concentration, we find that firms with strong ownership control (above 50%) or fragmented concentration (below 25%) perform better. This result suggests that "hybrid" forms of concentration may hamper the decision process.
As for our key variables representing ownership structure, we find that public limited companies and limited liability companies perform better than cooperatives. This is easily explained by the fact that, by law, cooperatives are less profit-oriented. Setting cooperatives aside, public limited companies perform better than limited liability companies and show a more stable path over time. Results are quite consistent whether ROI or ROS is used as the dependent variable. These results should not lead us to claim that the public limited company is the "best" among all possible governance structures: first, every governance solution should be considered in light of the specific situation; second, further robustness analyses are needed to confirm our results. At this stage, we believe these findings, the model and our approach represent original contributions that could stimulate fruitful future studies on the intriguing question of how ownership structure affects performance levels.
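The regression design described above (performance on financial variables plus ownership-form dummies) can be sketched as a pooled least squares fit. The data below are simulated stand-ins, not the Bureau van Dijk panel, and the "true" premia are invented assumptions.

```python
import numpy as np

# Schematic pooled regression: ROI on a leverage index plus ownership-form
# dummies, with cooperatives as the baseline category.
rng = np.random.default_rng(42)
n = 300
leverage = rng.uniform(0.0, 1.0, n)
form = rng.integers(0, 3, n)             # 0=cooperative, 1=LLC, 2=PLC
d_llc = (form == 1).astype(float)
d_plc = (form == 2).astype(float)
# Assumed "true" effects: LLCs +2 ROI points over cooperatives, PLCs +4,
# and a -3 point leverage drag.
roi = 5.0 - 3.0 * leverage + 2.0 * d_llc + 4.0 * d_plc + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), leverage, d_llc, d_plc])
beta, *_ = np.linalg.lstsq(X, roi, rcond=None)
labels = ["const", "leverage", "llc_vs_coop", "plc_vs_coop"]
print({k: round(v, 2) for k, v in zip(labels, beta)})
```

The dummy coefficients are read directly as performance gaps relative to the cooperative baseline, which mirrors how the ownership-form result above would be reported.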

Relevance:

100.00%

Publisher:

Abstract:

In this thesis we address a collection of Network Design problems strongly motivated by applications in Telecommunications, Logistics and Bioinformatics. In most cases we justify the need to take uncertainty in some of the problem parameters into account, and different Robust Optimization models are used to hedge against it. Mixed integer linear programming formulations, along with sophisticated algorithmic frameworks, are designed, implemented and rigorously assessed for the majority of the studied problems. The obtained results yield the following observations: (i) relevant real problems can be effectively represented as (discrete) optimization problems within the framework of network design; (ii) uncertainty can be appropriately incorporated into the decision process if a suitable robust optimization model is considered; (iii) optimal, or nearly optimal, solutions can be obtained for large instances if a tailored algorithm that exploits the structure of the problem is designed; (iv) a systematic and rigorous experimental analysis makes it possible to understand both the characteristics of the obtained (robust) solutions and the behavior of the proposed algorithms.
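Observation (ii) can be illustrated with a deliberately tiny network: a robust decision picks the path whose worst-case cost over an uncertainty set is smallest, even when another path looks cheaper under nominal data. The network, costs and deviations below are invented for illustration, and the brute-force enumeration stands in for the MILP formulations used in the thesis.

```python
# Two candidate paths from A to D; each edge has a nominal cost and a
# possible upward deviation (box uncertainty: the worst case realizes
# every deviation simultaneously).
paths = {
    "A-B-D": ["AB", "BD"],
    "A-C-D": ["AC", "CD"],
}
nominal = {"AB": 3, "BD": 3, "AC": 2, "CD": 2}
deviation = {"AB": 0, "BD": 0, "AC": 4, "CD": 4}

def nominal_cost(edges):
    return sum(nominal[e] for e in edges)

def worst_case(edges):
    return sum(nominal[e] + deviation[e] for e in edges)

nominal_choice = min(paths, key=lambda p: nominal_cost(paths[p]))
robust_choice = min(paths, key=lambda p: worst_case(paths[p]))
print("nominal best:", nominal_choice, "| robust best:", robust_choice)
```

The nominally cheaper path (via C) is exactly the one that is fragile under uncertainty, so the robust criterion switches to the stable path via B, the hedging behavior observation (ii) refers to.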

Relevance:

100.00%

Publisher:

Abstract:

The control of a proton exchange membrane fuel cell (PEM FC) system for domestic heat and power supply requires extensive control measures to handle the complicated process. Highly dynamic and nonlinear behavior drastically increases the difficulty of finding optimal design and control strategies. The objective is to design, implement and commission a controller for the entire fuel cell system. The fuel cell process and the control system are engineered simultaneously, so there is no access to the process hardware during control system development. The method of choice was therefore a model-based design approach, following the rapid control prototyping (RCP) methodology. The fuel cell system is simulated using a fuel cell library that allows thermodynamic calculations. In the course of development, the process model is continuously adapted to the real system. The controller application is designed and developed in parallel, and thereby tested and verified against the process model. Furthermore, after commissioning of the real system, the process model can be better identified and parameterized using measurement data to perform optimization procedures. The process model and the controller application are implemented in Simulink using MathWorks' Real-Time Workshop (RTW) and the xPC development suite for MiL (model-in-the-loop) and HiL (hardware-in-the-loop) testing. It is possible to completely develop, verify and validate the controller application without depending on the real fuel cell system, which is not available for testing during the development process. The fuel cell system can be taken into operation immediately after connecting the controller to the process.
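The model-in-the-loop idea described above can be reduced to its essence: a plant model stands in for the process so the controller can be designed and verified before the real hardware exists. Below, a first-order thermal model is an invented stand-in for the fuel cell process, and a discrete PI controller is run against it; all gains, time constants and setpoints are illustrative assumptions, not values from the thesis.

```python
def simulate(kp=2.0, ki=0.5, setpoint=60.0, dt=0.1, steps=2000):
    """Run a discrete PI controller against a first-order plant model."""
    temp = 20.0        # plant state: stack temperature in degC, starts ambient
    integral = 0.0
    for _ in range(steps):
        error = setpoint - temp
        integral = max(-50.0, min(50.0, integral + error * dt))  # anti-windup
        u = max(0.0, min(100.0, kp * error + ki * integral))     # actuator limits
        # First-order plant model: heater input raises the temperature,
        # losses pull it back toward the 20 degC ambient.
        temp += dt * (0.05 * u - 0.02 * (temp - 20.0))
    return temp

final = simulate()
print(f"stack temperature after the run: {final:.1f} degC")
```

Once such a loop is verified against the model (MiL), the same controller code is re-targeted to real-time hardware against the physical process (HiL), which is the workflow the abstract describes in Simulink/xPC terms.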

Relevance:

100.00%

Publisher:

Abstract:

The primary aim of this dissertation is to identify subgroups of patients with chronic kidney disease (CKD) who have a differential risk of progression; the secondary aim is to compare two equations for estimating the glomerular filtration rate (GFR). To this purpose, the PIRP (Prevention of Progressive Kidney Disease) registry was linked with the dialysis and mortality registries. The outcome of interest is the mean annual variation of GFR, estimated using the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation. A decision tree model based on the non-parametric CHAID procedure (Chi-squared Automatic Interaction Detector) was used to subtype CKD patients. The independent variables of the model include gender, age, diabetes, hypertension, cardiac diseases, body mass index, baseline serum creatinine, haemoglobin, proteinuria, LDL cholesterol, triglycerides, serum phosphates, glycemia, parathyroid hormone and uricemia. The decision tree classified patients into 10 terminal nodes using 6 variables (gender, age, proteinuria, diabetes, serum phosphates and ischemic cardiac disease) that predict a differential progression of kidney disease. Specifically, age <=53 years, male gender, proteinuria, diabetes and serum phosphates >3.70 mg/dl predict a faster decrease of GFR, while ischemic cardiac disease predicts a slower decrease. The comparison between GFR estimates obtained with the MDRD4 and CKD-EPI equations shows high percentage agreement (>90%), with modest discrepancies at high and low values of age and serum creatinine. The study results underscore the need for a tight follow-up schedule in patients aged <=53, and in patients aged 54 to 67 with diabetes, to try to slow down the progression of the disease.
The results also indicate that patients aged >67, in whom the estimated decrease in glomerular filtration rate corresponds to the physiological decrease observed in the absence of kidney disease, can be managed effectively, except for the subgroup with proteinuria, in whom the GFR decline is more pronounced.
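The CKD-EPI equation used above for the outcome variable can be sketched directly. This is the published 2009 creatinine-based form (serum creatinine in mg/dL, age in years; the optional race coefficient is omitted here); treat it as an illustration, not clinical software.

```python
def ckd_epi_2009(scr, age, female):
    """Estimated GFR (mL/min/1.73 m^2), 2009 CKD-EPI creatinine equation."""
    kappa = 0.7 if female else 0.9       # sex-specific creatinine threshold
    alpha = -0.329 if female else -0.411 # exponent below the threshold
    gfr = (141
           * min(scr / kappa, 1.0) ** alpha
           * max(scr / kappa, 1.0) ** -1.209
           * 0.993 ** age)
    if female:
        gfr *= 1.018
    return gfr

# A 60-year-old man with serum creatinine 1.2 mg/dL.
print(round(ckd_epi_2009(1.2, 60, female=False), 1))
```

Tracking this estimate over successive visits gives the mean annual GFR variation that the decision tree uses as its outcome.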

Relevance:

100.00%

Publisher:

Abstract:

With increasing demand for rural resources and land, new challenges are emerging that affect and restructure the European countryside. While creating opportunities for rural living, this has also opened a discussion on the risk of rural gentrification. The concept of rural gentrification covers the influx of new residents leading to an economic upgrade of an area, making it unaffordable for local inhabitants to stay. Rural gentrification occurs in areas perceived as attractive; paradoxically, in-migrants re-shape the very landscape that attracted them. Rural gentrification may displace not only people but also landscape values. This research therefore aims to understand the twofold role of landscape in rural gentrification theory: as a possible driver attracting residents and as a product shaped by those residents. To understand potential gentrifiers' decision processes, this research compiles the drivers behind in-migration, and essential indicators of rural gentrification are collected from previous studies. Yet the available indicators contain no measures for understanding the related landscape changes. To fill this gap, after analysing established landscape assessment methodologies and evaluating their relevance for assessing gentrification, a new Landscape Assessment approach is proposed. This method introduces a novel way to capture landscape change caused by gentrification through historical depth. The measures for studying gentrification were applied on Gotland, Sweden. The study showed a stagnating population while the number of properties increased and housing prices rose; these factors indicate not positive growth but a risk of gentrification. The research then applied the proposed Landscape Assessment method to areas exposed to gentrification. The results suggest that landscape change takes place on a local scale and could, over time, endanger key characteristics.
The methodology contributes to a discussion on grasping nuances within the rural context. It has also proven useful for indicating cumulative changes, which is necessary for managing landscape values.

Relevance:

100.00%

Publisher:

Abstract:

Over the last century, mathematical optimization has become a prominent tool for decision making. Its systematic application in practical fields such as economics, logistics or defense has led to the development of algorithmic methods of ever increasing efficiency. Indeed, for a variety of real-world problems, finding an optimal decision among a set of (implicitly or explicitly) predefined alternatives has become conceivable in reasonable time. In recent decades, however, the research community has paid more and more attention to the role of uncertainty in the optimization process. In particular, one may question the very notions of optimality, and even feasibility, when studying decision problems with unknown or imprecise input parameters. This concern is all the more critical in a world becoming ever more complex (by which we mean interconnected), where each individual variation inside a system inevitably causes other variations in the system itself. In this dissertation, we study a class of optimization problems which suffer from imprecise input data and feature a two-stage decision process: decisions are made in a sequential order, called stages, and unknown parameters are revealed throughout the stages. Applications of such problems abound in practice, e.g., facility location with uncertain demands, transportation with uncertain costs, or scheduling under uncertain processing times. The uncertainty is handled from a robust optimization (RO) viewpoint (also known as the "worst-case perspective"), and we present original contributions to the RO literature on both the theoretical and the practical side.
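The two-stage structure described above can be made concrete with a toy example: a here-and-now capacity choice, an uncertain demand revealed afterwards, and a recourse purchase at a premium. We use the min-max-regret robust criterion here, one common variant of the worst-case perspective (plain worst-case cost would trivially build for peak demand in this toy). All prices and demands are invented.

```python
BUILD_COST = 3.0      # per unit of capacity, paid in stage one
RECOURSE_COST = 8.0   # per unit bought after demand is revealed
DEMANDS = [4, 7, 10]  # uncertainty set for the demand

def total_cost(capacity, demand):
    shortfall = max(0, demand - capacity)   # stage-two recourse quantity
    return BUILD_COST * capacity + RECOURSE_COST * shortfall

def max_regret(capacity):
    """Worst-case gap to the cost we would pay knowing demand in advance."""
    return max(total_cost(capacity, d) - BUILD_COST * d for d in DEMANDS)

best = min(range(11), key=max_regret)
print("robust capacity:", best, "max regret:", max_regret(best))
```

The robust choice lands strictly between the low and high demand scenarios: it hedges, overbuilding for the low scenario and accepting some recourse in the high one, which is the qualitative behavior two-stage RO formalizes at scale.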

Relevance:

40.00%

Publisher:

Abstract:

The presented study carries out an analysis of rural landscape changes. In particular, it focuses on understanding the driving forces acting on the rural built environment, using a statistical spatial model implemented through GIS techniques. It is well known that the study of landscape changes is essential for conscious decision making in land planning. A bibliographic review reveals a general lack of studies dealing with the modelling of the rural built environment, so a theoretical modelling approach for this purpose is needed. Advances in technology and modernity in building construction and agriculture have gradually changed the rural built environment. In addition, urbanization has led to the construction of new volumes beside abandoned or derelict rural buildings. Consequently, two main transformation dynamics affecting the rural built environment can be observed: the conversion of rural buildings and the increase in the number of buildings. The specific aim of this study is to propose a methodology for developing a spatial model that allows the identification of the driving forces behind building allocation; indeed, one of the most concerning dynamics today is the irrational sprawl of buildings across the landscape. The proposed methodology comprises several conceptual steps covering the different aspects of developing a spatial model: the selection of a response variable that best describes the phenomenon under study, the identification of possible driving forces, the sampling methodology for data collection, the choice of the most suitable algorithm in relation to the statistical theory and methods used, and the calibration and evaluation of the model.
A different combination of factors in various parts of the territory generated more or less favourable conditions for building allocation, and the existence of buildings represents the evidence of such an optimum. Conversely, the absence of buildings expresses a combination of agents unsuitable for building allocation. The presence or absence of buildings can therefore be adopted as an indicator of these driving conditions, since it expresses the action of the driving forces in the land suitability sorting process. The existence of a correlation between site selection and hypothesised driving forces, evaluated by means of modelling techniques, provides evidence of which driving forces are involved in the allocation dynamic and an insight into their level of influence on the process. GIS software, by means of spatial analysis tools, allows the concepts of presence and absence to be associated with point features, generating a point process. In the case of presences, points represent the locations of real existing buildings; absences represent locations where buildings do not exist, generated by a stochastic mechanism. Possible driving forces are selected, and the existence of a causal relationship with building allocation is assessed through a spatial model. The adoption of empirical statistical models provides a mechanism for analysing the explanatory variables and for identifying the key driving variables behind the site selection process for new building allocation. The model developed by following this methodology is applied to a case study to test the validity of the methodology. The study area is the New District of Imola, characterized by a prevailing agricultural production vocation and by intense transformation dynamics.
The development of the model involved the identification of predictive variables (related to the geomorphological, socio-economic, structural and infrastructural systems of the landscape) capable of representing the driving forces responsible for landscape changes. The model is calibrated on spatial data for the periurban and rural parts of the study area over the 1975-2005 period, by means of a generalised linear model. The resulting output of the model fit is a continuous grid surface whose cells take values between 0 and 1, the probability of building occurrence across the rural and periurban area. The response variable thus assesses the changes in the rural built environment that occurred in this time interval and is correlated with the selected explanatory variables through a generalised linear model with logistic regression. Comparing the probability map obtained from the model with the actual rural building distribution in 2005 allows the interpretative capability of the model to be evaluated. The proposed model can also be applied to the interpretation of trends in other study areas, and for different time intervals, depending on data availability. The use of suitable data in terms of time, information content and spatial resolution, and the costs related to data acquisition, pre-processing and survey, are among the most critical aspects of model implementation. Future in-depth studies could use the proposed model to predict short- to medium-range future scenarios for the distribution of the rural built environment in the study area. Predicting future scenarios requires assuming that the driving forces do not change and that their levels of influence within the model remain close to those assessed for the calibration interval.
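The calibration step described above, a logistic-link GLM fitted to building presence/absence, can be sketched on simulated data. The covariates (terrain slope, distance to road) and all coefficients are invented stand-ins for the real geomorphological and infrastructural predictors.

```python
import numpy as np

# Simulate presence/absence of buildings driven by two site covariates.
rng = np.random.default_rng(1)
n = 2000
slope = rng.uniform(0, 30, n)             # terrain slope, degrees
road_dist = rng.uniform(0, 5, n)          # distance to nearest road, km
X = np.column_stack([np.ones(n), slope, road_dist])
true_beta = np.array([2.0, -0.15, -0.8])  # flat, accessible sites attract buildings
p_true = 1 / (1 + np.exp(-X @ true_beta))
y = (rng.uniform(size=n) < p_true).astype(float)

# Fit the logistic GLM by Newton's method (iteratively reweighted least squares).
beta = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    grad = X.T @ (y - p)
    hess = X.T @ (X * W[:, None])
    beta += np.linalg.solve(hess, grad)

print(np.round(beta, 2))  # close to the generating coefficients
```

Evaluating the fitted probability on a regular grid of covariate values is what produces the continuous 0-1 probability surface the abstract describes.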

Relevância:

40.00% 40.00%

Publicador:

Resumo:

Spatial prediction of hourly rainfall via radar calibration is addressed. The change of support problem (COSP), arising when the spatial supports of different data sources do not coincide, is faced in a non-Gaussian setting; in fact, hourly rainfall in the Emilia-Romagna region, in Italy, is characterized by an abundance of zero values and by right-skewness of the distribution of positive amounts. Direct rain gauge measurements at sparsely distributed locations and hourly cumulated radar grids are provided by ARPA-SIMC Emilia-Romagna. We propose a three-stage Bayesian hierarchical model for radar calibration, exploiting rain gauges as the reference measure. Rain probability and amounts are modeled via linear relationships with radar on the log scale; spatially correlated Gaussian effects capture the residual information. We employ a probit link for rainfall probability and a Gamma distribution for positive rainfall amounts; the two steps are joined via a two-part semicontinuous model. Three model specifications differently addressing the COSP are presented; in particular, a stochastic weighting of all radar pixels, driven by a latent Gaussian process defined on the grid, is employed. Estimation is performed via MCMC procedures implemented in C, linked to the R software. Communication and evaluation of probabilistic, point and interval predictions are investigated. A non-randomized PIT histogram is proposed for correctly assessing calibration and coverage of two-part semicontinuous models. Predictions obtained with the different model specifications are evaluated via graphical tools (Reliability Plot, Sharpness Histogram, PIT Histogram, Brier Score Plot and Quantile Decomposition Plot), proper scoring rules (Brier Score, Continuous Rank Probability Score) and consistent scoring functions (Root Mean Square Error and Mean Absolute Error, addressing the predictive mean and median, respectively). 
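The two-part semicontinuous structure can be illustrated with a small simulation. This is a sketch under assumed values: the coefficients `a0`, `a1`, `b0`, `b1` and the Gamma shape below are invented for illustration, not fitted to the ARPA-SIMC data, and the spatial Gaussian effects are omitted. A probit model decides whether the hour is wet; a Gamma draw with log-linear mean supplies the positive amount.

```python
import math
import random

def norm_cdf(z):
    """Standard normal CDF (the inverse of the probit link)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def simulate_hourly_rain(log_radar, a0, a1, b0, b1, shape, rng):
    """One draw from a two-part semicontinuous rainfall model:
    probit part for rain occurrence, Gamma part for the positive amount,
    both linear in the log-scale radar value."""
    p_rain = norm_cdf(a0 + a1 * log_radar)   # probability of a wet hour
    if rng.random() >= p_rain:
        return 0.0                           # dry hour: exact zero
    mean = math.exp(b0 + b1 * log_radar)     # Gamma mean, log-linear in radar
    # Parameterize the Gamma by shape and mean: scale = mean / shape
    return rng.gammavariate(shape, mean / shape)

rng = random.Random(1)
# Hypothetical coefficients, for illustration only
draws = [simulate_hourly_rain(1.0, a0=-0.2, a1=0.8, b0=0.1, b1=0.9,
                              shape=2.0, rng=rng) for _ in range(1000)]
```

The resulting sample mixes exact zeros with right-skewed positive amounts, reproducing the two features of hourly rainfall that motivate the model.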
Calibration is reached, and the inclusion of neighbouring information slightly improves predictions. All specifications outperform a benchmark model with uncorrelated effects, confirming the relevance of spatial correlation for modeling rainfall probability and accumulation.
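Of the proper scoring rules used for the comparison, the Brier score for the rain/no-rain part is the simplest to compute: the mean squared difference between the forecast rain probability and the binary outcome, with lower values better. A minimal sketch with made-up forecast probabilities:

```python
def brier_score(probs, outcomes):
    """Brier score: mean squared difference between forecast probabilities
    and binary outcomes (1 = rain observed, 0 = dry). Lower is better."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Hypothetical forecast probabilities vs observed rain indicators
probs = [0.9, 0.8, 0.1, 0.2, 0.7]
outcomes = [1, 1, 0, 0, 1]
score = brier_score(probs, outcomes)  # 0.19 / 5 = 0.038
```

A perfectly sharp and correct forecaster (probabilities of exactly 0 or 1, always right) would score 0; comparing such scores across model specifications is what allows the benchmark with uncorrelated effects to be ruled out.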

Relevância:

40.00% 40.00%

Publicador:

Resumo:

How to evaluate the cost-effectiveness of repair/retrofit interventions vs. demolition/replacement, and what level of shaking intensity the chosen repair/retrofit technique can sustain, are open questions affecting the pre-earthquake prevention, post-earthquake emergency, and reconstruction phases alike. The (mis)conception that the cost of retrofit interventions increases linearly with the achieved seismic performance (%NBS) often discourages stakeholders from considering repair/retrofit options in a post-earthquake damage situation. Similarly, in the pre-earthquake phase, only the minimum (by-law) level of %NBS might be targeted, leading in some cases to no action at all. Furthermore, the performance measure compelling owners to take action, the %NBS, is generally evaluated deterministically. Since it does not directly reflect epistemic and aleatory uncertainties, the assessment can result in misleading confidence in the expected performance. The present study aims to contribute to the delicate decision-making process of repair/retrofit vs. demolition/replacement by developing a framework that assists stakeholders in evaluating the long-term losses and benefits of an increment in their initial investment (targeted retrofit level), and by highlighting the uncertainties hidden behind a deterministic approach. For a pre-1970 case-study building, different retrofit solutions are considered, targeting different levels of %NBS, and the actual probability of reaching Collapse under a suite of ground motions is evaluated, providing a correlation between %NBS and risk. Both a simplified and a probabilistic loss modelling are then undertaken to study the relationship between %NBS and the expected direct and indirect losses.
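The probability of reaching Collapse over a suite of ground motions is commonly estimated with a lognormal fragility model; the sketch below assumes that standard form, and every number in it (median capacity, dispersion, intensity values) is illustrative rather than a property of the case-study building or of the thesis' actual procedure.

```python
import math
import random

def collapse_probability(im, median, beta):
    """Lognormal fragility curve: probability of collapse at intensity
    measure im, given the median collapse capacity and dispersion beta."""
    return 0.5 * (1.0 + math.erf(math.log(im / median) /
                                 (beta * math.sqrt(2.0))))

def empirical_collapse_fraction(im_suite, median, beta, rng):
    """Monte Carlo over a ground-motion suite: sample a lognormal collapse
    capacity for each record and count how many demands exceed it."""
    collapses = 0
    for im in im_suite:
        capacity = math.exp(math.log(median) + beta * rng.gauss(0.0, 1.0))
        if im >= capacity:
            collapses += 1
    return collapses / len(im_suite)

rng = random.Random(7)
# Hypothetical suite of spectral accelerations (g) and fragility parameters
im_suite = [rng.uniform(0.2, 1.5) for _ in range(500)]
frac = empirical_collapse_fraction(im_suite, median=1.0, beta=0.4, rng=rng)
```

Repeating the estimate for each retrofit option, with a higher median capacity representing a higher targeted %NBS, is one way to obtain the %NBS-risk correlation the abstract refers to.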