963 results for Rischio finanziario (financial risk), Value-at-Risk, Expected Shortfall


Relevance: 30.00%

Abstract:

How is climate change affecting our coastal environment? How can coastal communities adapt to sea level rise and increased storm risk? These questions have garnered tremendous interest from scientists and policy makers alike, as the dynamic coastal environment is particularly vulnerable to the impacts of climate change. Over half the world's population lives and works in a coastal zone less than 120 miles wide and is therefore continuously affected by changes in the coastal environment [6]. Housing markets are directly influenced by the physical processes that govern coastal systems. Beach towns like Oak Island in North Carolina (NC) face severe erosion, and the tax-assessed value of one coastal property there fell by 93% in 2007 [9]. With almost ninety percent of the sandy beaches in the US facing moderate to severe erosion [8], coastal communities often intervene to stabilize the shoreline and hold back the sea in order to protect coastal property and infrastructure. Beach nourishment, the process of rebuilding a beach by periodically replacing an eroding section with sand dredged from another location, is a common erosion-control policy in many parts of the US Atlantic and Pacific coasts [3]. Beach nourishment projects in the United States are primarily federally funded and implemented by the Army Corps of Engineers (ACE) after a benefit-cost analysis. Benefits from beach nourishment include reduced storm damage and the recreational value of a wider beach. Costs include the expected cost of construction, the present value of periodic maintenance, and any external costs, such as the environmental cost associated with a nourishment project (NOAA). Federal appropriations for nourishment totaled $787 million from 1995 to 2002 [10]. Human interventions to stabilize shorelines and physical coastal dynamics are strongly coupled. The value of the beach, in the form of storm protection and recreational amenities, is at least partly capitalized into property values. These beach values ultimately influence the benefit-cost analysis in support of shoreline stabilization policy, which, in turn, affects the shoreline dynamics. This paper explores the policy implications of this circularity. With a better understanding of the physical-economic feedbacks, policy makers can more effectively design climate change adaptation strategies. (PDF contains 4 pages)
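
The benefit-cost calculus described here can be made concrete with a few lines of discounting arithmetic. The sketch below computes the present value of a stream of nourishment costs and beach benefits; every figure, interval, and function name is an illustrative placeholder, not data from the paper.

```python
# Hypothetical benefit-cost sketch for a beach nourishment project.
# All figures are illustrative placeholders, not data from the paper.

def present_value_periodic(cost_per_cycle, interval_years, horizon_years, rate):
    """PV of a cost incurred every `interval_years` over a finite horizon."""
    return sum(cost_per_cycle / (1 + rate) ** t
               for t in range(interval_years, horizon_years + 1, interval_years))

rate = 0.03                      # discount rate
initial_construction = 12e6      # $ up-front dredging and placement
renourishment = 6e6              # $ per maintenance cycle
interval, horizon = 5, 50        # renourish every 5 years over 50 years
annual_benefit = 1.5e6           # $ storm protection + recreation per year

pv_costs = initial_construction + present_value_periodic(renourishment, interval, horizon, rate)
pv_benefits = sum(annual_benefit / (1 + rate) ** t for t in range(1, horizon + 1))

print(f"PV costs:    ${pv_costs/1e6:.1f}M")
print(f"PV benefits: ${pv_benefits/1e6:.1f}M")
print(f"B/C ratio:   {pv_benefits/pv_costs:.2f}")
```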

Relevance: 30.00%

Abstract:

Many engineering applications face the problem of bounding the expected value of a quantity of interest (performance, risk, cost, etc.) that depends on stochastic uncertainties whose probability distribution is not known exactly. Optimal uncertainty quantification (OUQ) is a framework that aims to obtain the best bound in these situations by explicitly incorporating the available information about the distribution. Unfortunately, this often leads to non-convex optimization problems that are numerically expensive to solve.

This thesis focuses on efficient numerical algorithms for OUQ problems. It begins by investigating several classes of OUQ problems that can be reformulated as convex optimization problems, presenting conditions on the objective function and information constraints under which a convex formulation exists. Since the size of the optimization problem can become quite large, strategies for scaling up are also discussed. Finally, the capability of analyzing a practical system through such convex formulations is demonstrated by a numerical example of energy storage placement in power grids.

When an equivalent convex formulation is unavailable, it is sometimes possible to find a convex problem that provides a meaningful bound for the original problem, known as a convex relaxation. As an example, the thesis investigates the setting used in Hoeffding's inequality. The naive formulation requires solving a collection of non-convex polynomial optimization problems whose number grows doubly exponentially. After structure such as symmetry is exploited, it is shown that both the number and the size of the polynomial optimization problems can be reduced significantly. Each polynomial optimization problem is then bounded by its convex relaxation using sums of squares. These bounds are found to be tight in all the numerical examples tested in the thesis and are significantly better than Hoeffding's bounds.
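
To illustrate the flavor of an OUQ computation, the toy sketch below bounds P(X >= a) over all distributions on [0, 1] with a known mean by discretizing the support and solving a linear program; the OUQ reduction theorems justify searching over finitely supported measures, and the result can be checked against Markov's inequality, which is tight for this information set. This is an illustrative toy, not code from the thesis.

```python
# Toy OUQ problem: sup P(X >= a) over all distributions on [0, 1]
# with E[X] = m. Discretize the support and solve the resulting LP.
# The known tight answer here is Markov's bound min(1, m/a).
import numpy as np
from scipy.optimize import linprog

m, a, n = 0.3, 0.8, 2001
x = np.linspace(0.0, 1.0, n)          # candidate support points
c = -(x >= a).astype(float)           # maximize P(X >= a) -> minimize -P
A_eq = np.vstack([np.ones(n), x])     # constraints: sum p = 1, sum p*x = m
b_eq = np.array([1.0, m])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n, method="highs")
print(f"LP bound on P(X >= {a}): {-res.fun:.4f}")   # ~ 0.375
print(f"Markov bound m/a:        {m/a:.4f}")
```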

Relevance: 30.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in different contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests; this imposes computational and economic constraints on classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
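
For intuition, a noise-free caricature of the EC2 idea: each hypothesis predicts a response to every test, hypotheses implying different decisions are joined by weighted edges, and each stage greedily runs the test guaranteed to cut the most edge weight. The sketch below is a simplified reconstruction under these assumptions, not the BROAD implementation.

```python
# Simplified, noise-free sketch of EC2-style adaptive test selection.
# predictions[h][t]: hypothesis h's predicted response to test t.
# decision[h]: the decision (equivalence class) hypothesis h implies.
# prior[h]: prior probability of hypothesis h.
import itertools

def ec2_greedy(predictions, decision, prior, true_h, max_tests=10):
    tests = list(next(iter(predictions.values())))
    alive = set(predictions)                      # hypotheses still consistent

    def edge_weight_cut(t):
        # weight of edges between surviving hypotheses in different
        # decision classes that test t is guaranteed to cut
        return sum(prior[h1] * prior[h2]
                   for h1, h2 in itertools.combinations(alive, 2)
                   if decision[h1] != decision[h2]
                   and predictions[h1][t] != predictions[h2][t])

    for _ in range(max_tests):
        t = max(tests, key=edge_weight_cut)
        if edge_weight_cut(t) == 0:               # every remaining edge is cut:
            break                                 # all survivors agree on a decision
        response = predictions[true_h][t]         # simulate the subject's answer
        alive = {h for h in alive if predictions[h][t] == response}
    return {decision[h] for h in alive}           # surviving decision class(es)
```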

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types shows that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out because it is infeasible in practice and because we find no signatures of it in our data.
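
For reference, representative functional forms for the four model classes compared are shown below; these are standard textbook versions, and the thesis's exact parameterizations may differ.

```latex
% Representative functional forms for a lottery L with outcomes x_i,
% probabilities p_i (standard textbook versions):
\begin{align*}
\text{Expected value:} \quad & U(L) = \sum_i p_i \, x_i \\
\text{CRRA expected utility:} \quad & U(L) = \sum_i p_i \, \frac{x_i^{1-\rho}}{1-\rho} \\
\text{Prospect theory:} \quad & U(L) = \sum_i w(p_i)\, v(x_i), \quad
  v(x) = \begin{cases} x^{\alpha} & x \ge 0 \\ -\lambda (-x)^{\beta} & x < 0 \end{cases} \\
\text{Moments model (e.g.):} \quad & U(L) = \mu(L) + b\,\sigma(L) + c\,\mathrm{skew}(L)
\end{align*}
```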

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
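
The standard discount functions for these model classes are shown below for reference (parameter names vary across papers; the thesis writes quasi-hyperbolic discounting with parameters (α, β)).

```latex
% Standard discount functions for the model classes compared:
\begin{align*}
\text{Exponential:} \quad & d(t) = \delta^{t} \\
\text{Hyperbolic:} \quad & d(t) = \frac{1}{1 + k t} \\
\text{Quasi-hyperbolic:} \quad & d(0) = 1, \qquad d(t) = \beta\,\delta^{t} \ \ (t > 0) \\
\text{Generalized hyperbolic:} \quad & d(t) = (1 + \alpha t)^{-\beta/\alpha}
\end{align*}
```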

These models treat the passage of time as linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and time-inconsistent choice.
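
A related, standard construction illustrates how averaging exponential discounting over uncertainty produces hyperbolic forms (cf. Sozou 1998); it is shown here only to convey the mechanism, since the thesis's own result works through positively dependent subjective time rather than a random rate. If the discount rate r has a Gamma distribution with shape k and scale λ, then

```latex
d(t) \;=\; \mathbb{E}_{r}\!\left[e^{-r t}\right]
      \;=\; \int_{0}^{\infty} e^{-r t}\,
            \frac{r^{k-1} e^{-r/\lambda}}{\Gamma(k)\,\lambda^{k}}\, dr
      \;=\; (1 + \lambda t)^{-k},
```

which is exactly the generalized-hyperbolic discount function above.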

We also test the predictions of behavioural theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from the standard rational model. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than its price elasticity alone can explain. More importantly, when the item is no longer discounted, demand for its close substitutes will increase disproportionately. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.
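
A minimal sketch of the kind of reference-dependent choice model this describes follows; the functional form and all parameter values (including the textbook loss-aversion coefficient λ ≈ 2.25) are illustrative assumptions, not estimates from the paper.

```python
# Minimal reference-dependent logit sketch: utility penalizes paying
# above a reference price more than it rewards paying below it.
# All parameters are illustrative, not estimates from the paper.
import numpy as np

def choice_probs(prices, ref_prices, beta_p=2.0, lam=2.25):
    """Multinomial logit with loss-averse price utility (lam > 1)."""
    gaps = prices - ref_prices
    gain_loss = np.where(gaps > 0, -lam * gaps, -gaps)  # losses loom larger
    u = -beta_p * prices + gain_loss
    expu = np.exp(u - u.max())
    return expu / expu.sum()

prices = np.array([1.00, 1.10])   # an item and its close substitute
ref = np.array([0.80, 1.10])      # the item was recently discounted to 0.80
print(choice_probs(prices, ref))  # demand shifts toward the substitute
```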

In future work, BROAD can be applied widely to test different behavioural models, e.g., in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, could be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and they encourage combined lab-field experiments.

Relevance: 30.00%

Abstract:

Structural design is a decision-making process in which a wide spectrum of requirements, expectations, and concerns needs to be properly addressed. Engineering design criteria are considered together with societal and client preferences, and most of these design objectives are affected by the uncertainties surrounding a design. Therefore, realistic design frameworks must be able to handle multiple performance objectives and incorporate uncertainties from numerous sources into the process.

In this study, a multi-criteria based design framework for structural design under seismic risk is explored. The emphasis is on reliability-based performance objectives and their interaction with economic objectives. The framework has analysis, evaluation, and revision stages. In the probabilistic response analysis, seismic loading uncertainties as well as modeling uncertainties are incorporated. For evaluation, two approaches are suggested: one based on preference aggregation and the other based on socio-economics. Both implementations of the general framework are illustrated with simple but informative design examples to explore the basic features of the framework.

The first approach uses concepts similar to those found in multi-criteria decision theory, and directly combines reliability-based objectives with others. This approach is implemented in a single-stage design procedure. In the socio-economics based approach, a two-stage design procedure is recommended in which societal preferences are treated through reliability-based engineering performance measures, but emphasis is also given to economic objectives because these are especially important to the structural designer's client. A rational net asset value formulation including losses from uncertain future earthquakes is used to assess the economic performance of a design. A recently developed assembly-based vulnerability analysis is incorporated into the loss estimation.
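
As a sketch of the kind of net asset value formulation referenced (notation ours, under the simplifying assumptions of Poisson earthquake arrivals at rate ν, i.i.d. losses with mean E[L], net operating income c, and discount rate r over horizon T):

```latex
\mathrm{NAV} \;=\; V_{0} \;+\; \int_{0}^{T} e^{-r t}\, c \, dt
\;-\; \nu\,\mathbb{E}[L] \int_{0}^{T} e^{-r t}\, dt
\;=\; V_{0} \;+\; \bigl(c - \nu\,\mathbb{E}[L]\bigr)\,\frac{1 - e^{-r T}}{r}.
```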

The presented performance-based design framework allows investigation of various design issues and their impact on a structural design. It is flexible and readily allows the incorporation of new methods and concepts in seismic hazard specification, structural analysis, and loss estimation.

Relevance: 30.00%

Abstract:

This work presents the basic elements of the analysis of decision under uncertainty: Expected Utility Theory and its criticisms, and risk aversion and its measurement. The concepts of certainty equivalent, risk premium, absolute risk aversion, relative risk aversion, and the "more risk averse than" relation are discussed. The work concludes with several applications of decision making under uncertainty to different economic problems: investment in risky assets and portfolio selection, risk sharing, investment to reduce risk, insurance, taxes and income underreporting, deposit insurance, and the value of information.
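
In symbols, the core definitions discussed are the standard ones:

```latex
% Certainty equivalent and risk premium for a wealth lottery \tilde{w}:
u(\mathrm{CE}) = \mathbb{E}[u(\tilde{w})], \qquad
\pi = \mathbb{E}[\tilde{w}] - \mathrm{CE}.
% Arrow--Pratt absolute and relative risk aversion:
A(w) = -\frac{u''(w)}{u'(w)}, \qquad R(w) = w\,A(w).
% u is "more risk averse than" v iff A_u(w) \ge A_v(w) for all w
% (equivalently, u = g \circ v for some increasing concave g).
```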

Relevance: 30.00%

Abstract:

Health is a very important aspect of anyone's life, so when a contingency occurs that diminishes the health of an individual or group of people, the different alternatives for combating the disease must be assessed rigorously and in detail, because patients' quality of life will vary depending on the alternative chosen. Health-related quality of life (HRQoL) is understood as the value assigned to the duration of life, modified by the social opportunity, perception, functional status, and impairment caused by a disease, accident, treatment, or policy (Sacristán et al., 1995). To determine the numerical value assigned to HRQoL for an intervention, we must draw on economic theory as applied to the health evaluation of new interventions. Among the methods of health economic evaluation, the cost-utility method uses as its measure of utility quality-adjusted life years (QALYs), which take into account, on the one hand, quality of life under a medical intervention and, on the other, the estimated years to be lived after the intervention. To determine quality of life, techniques such as the Standard Gamble, the Time Trade-Off, and the Rating Scale are used. These techniques yield a numerical value between 0 and 1, where 0 is the worst state and 1 is perfect health. When interviewing a patient about utility in terms of health, the question posed may involve risk or uncertainty. In that case, the Standard Gamble is applied to determine the numerical value of the patient's utility, or quality of life, under a given treatment. To obtain this value, the patient is presented with two scenarios: first, a health state with some probability of dying and some probability of surviving, and second, a state of certainty. Utility is determined by varying the probability of dying until reaching the probability at which the individual is indifferent between the risky state and the certain state. Similarly, there is the Time Trade-Off, which is easier to apply than the Standard Gamble: it plots the value of health against the time to be spent in that state under a given treatment, and the value corresponding to quality of life is found by varying the time until the individual is indifferent between the two alternatives. Finally, if what is expected from the patient is a ranked list of preferred health states under a treatment, we use the Rating Scale, a 10-centimetre horizontal line scored from 0 to 100. The interviewee places the list of health states in order of preference on the scale, which is then normalized to the interval between 0 and 1. Quality-adjusted life years are obtained by multiplying the quality-of-life value by the patient's estimated remaining life years. However, none of these methodologies considers the age factor, making it necessary to include this variable. Moreover, patients may respond subjectively, in which case the opinion of an expert is required to determine the patient's level of disability. This introduces the concept of disability-adjusted life years (DALYs), in which the QALY utility parameter is the complement of the DALY disability parameter, Q^i = 1 - D^i, although the latter incorporates age-weighting parameters not contemplated in QALYs. Furthermore, under the assumption Q = 1 - D, we can determine the individual's quality of life before treatment. Once the QALYs gained have been obtained, we proceed to their monetary valuation. To do so, we start from the assumption that the health intervention allows the individual to resume the work they had been doing, so we value the probable wages over a period equal to the QALYs gained, bearing in mind the limitation this approach entails. Finally, we analyze the benefits derived from the treatment (probable wage bill) using the GRF-95 (female population) and GRM-95 (male population) tables.
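
In symbols, the quantities described above take the standard form:

```latex
% Standard gamble: the utility of health state h is the indifference
% probability p^{*} between h for certain and a gamble over perfect
% health (utility 1, probability p) and death (utility 0):
u(h) = p^{*}.
% QALYs gained with quality weight Q over T years of life, and the
% stated complementarity with the DALY disability weight D:
\mathrm{QALY} = Q \times T, \qquad Q^{i} = 1 - D^{i}.
```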

Relevance: 30.00%

Abstract:

This study seeks to verify the alignment between the innovation indicators used by Scorecard methods for measuring intangible assets and by national and international government agencies, and those used by venture capital funds in the city of Rio de Janeiro when investing in incubated academic spin-off companies. The methodology consisted of a literature review on methods for measuring and valuing intangible assets, on innovation indicators proposed by national and international government agencies, and on the venture capital funds operating in the city of Rio de Janeiro, together with questionnaires administered to the venture capital firms in this city. Several Scorecard methods and their indicators were surveyed in the literature, along with the innovation indicators of national and international government agencies. In addition, the study identified the investment focus, the selection process, the method used to evaluate investment opportunities, and the indicators that matter to the venture capital firms of the city of Rio de Janeiro. It was observed that intangible assets, including those related to innovation, are not evaluated individually. The information obtained from the companies that will receive investment from these venture capital firms is used to understand the origin of the projected cash flows and the main risk factors, and these data, applied in the discounted cash flow method, allow the value of the company to be estimated. Given the extensive experience of venture capital fund managers with innovative micro and small enterprises, the study of this segment's practices is expected to bring important insights to discussions of intangible assets and innovation.
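
A minimal sketch of the discounted cash flow valuation the funds report using (the function name, cash flow projections, and rates are illustrative placeholders):

```python
# Minimal discounted cash flow (DCF) valuation sketch.
# Cash flow projections and rates below are illustrative placeholders.

def dcf_value(cash_flows, discount_rate, terminal_growth):
    """Enterprise value = PV of projected cash flows + PV of a
    Gordon-growth terminal value based on the last projected flow."""
    pv = sum(cf / (1 + discount_rate) ** t
             for t, cf in enumerate(cash_flows, start=1))
    terminal = cash_flows[-1] * (1 + terminal_growth) / (discount_rate - terminal_growth)
    pv_terminal = terminal / (1 + discount_rate) ** len(cash_flows)
    return pv + pv_terminal

# Five-year projection (in R$ thousands) for a hypothetical spin-off;
# a high discount rate reflects venture-stage risk factors.
print(f"{dcf_value([200, 350, 500, 650, 800], 0.25, 0.03):.0f}")
```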

Relevance: 30.00%

Abstract:

Cervical cancer causes 7% of cancer deaths among Brazilian women, with an estimated incidence rate of 15.33 per 100,000, human papillomavirus (HPV) infection being a necessary cause. Screening by conventional cervical smear cytology is the approach chosen in Brazil and other countries to prevent and control this cancer through early detection of pre-neoplastic lesions. It has shortcomings, however, in coverage, with unequal access, and in quality, owing to problems in the operational steps from specimen collection at the health unit to the interpretation of results. Screening is opportunistic; an organized alternative could use population registries such as those available through the Family Health Strategy (ESF). The effectiveness of screening can be increased by incorporating new techniques, notably molecular biology methods that test women for HPV, either in addition to cytology screening or replacing it entirely. Knowledge of the factors related to infection has been expanding in an effort to understand why some women at similar risk of transmission become infected while others do not. The lack of a surveillance system for circulating HPV types hampers assessment of the current scenario, including with respect to the recent introduction of HPV vaccination into the basic vaccination schedule. The aim of this study was to estimate the prevalence of HPV infection in the group of women studied and to relate these findings to infection-related factors that may help identify women who are more vulnerable to infection and to the presence of pre-neoplastic lesions. After being invited for screening, 2,062 women from the outskirts of Juiz de Fora took part in the study, conducted in two units with the ESF; they answered a standardized questionnaire, underwent conventional cervical cytology, and were tested for HPV with the cobas 4800 test (Roche). Uptake among the invited women was similar to that observed in studies with other designs, which can help in interpreting it. Prevalence estimates of HPV infection were calculated according to selected characteristics. The overall prevalence of HPV infection was 12.61%. Multivariate analysis by Poisson regression with robust variance showed statistically significant associations with being single, consuming alcohol, and having had three or more sexual partners over the lifetime, with adjusted prevalence ratios of 1.40, 1.44, and 1.35, respectively. The prevalence of precursor lesions of cervical cancer was 7.27%. Smear quality, assessed by the representativeness of the sampled epithelia, showed that the prevalence of pre-neoplastic lesions varies with smear quality, whereas HPV test results do not vary with cellular representativeness, a finding consistent with higher sensitivity and possibly a better positive predictive value. HPV testing proved useful, especially among women with normal cytology results.
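
For readers unfamiliar with the method, adjusted prevalence ratios from Poisson regression with robust variance can be obtained along these lines (synthetic data simulated to roughly mimic the reported ratios; variable names are hypothetical):

```python
# Sketch: prevalence ratios via Poisson regression with robust variance
# (the usual approach for binary outcomes). Synthetic data for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2062
single = rng.integers(0, 2, n)          # hypothetical binary covariates
alcohol = rng.integers(0, 2, n)
partners3 = rng.integers(0, 2, n)
# simulate a binary HPV outcome with multiplicative prevalence ratios
p = 0.08 * 1.40**single * 1.44**alcohol * 1.35**partners3
hpv = rng.binomial(1, np.clip(p, 0, 1))

X = sm.add_constant(np.column_stack([single, alcohol, partners3]))
fit = sm.GLM(hpv, X, family=sm.families.Poisson()).fit(cov_type="HC1")
print(np.exp(fit.params))               # adjusted prevalence ratios
```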

Relevance: 30.00%

Abstract:

The WorldFish Center is implementing the FtF Aquaculture Project in 20 southern districts of Bangladesh. The project is implemented under USAID's Feed the Future initiative in collaboration with the Government of Bangladesh. The project contributes to achieving the Feed the Future goals through four objectives: (i) dissemination of improved-quality fish and shrimp seed, (ii) improving the nutrition and income status of farm households, (iii) increasing investment, employment and fish production through commercial aquaculture, and (iv) policy and regulatory reform and institutional capacity building to support sustainable aquaculture growth. The project commissioned this study to gather insights into the value chains for shrimp, prawn and tilapia in the project region and into the feasibility of promoting brackish-water sea bass culture there. The findings and recommendations are expected to provide the foundation for the project to design interventions for achieving its goals.

Relevance: 30.00%

Abstract:

We present a systematic, practical approach to developing risk prediction systems, suitable for use with large databases of medical information. An important part of this approach is a novel feature selection algorithm which uses the area under the receiver operating characteristic (ROC) curve to measure the expected discriminative power of different sets of predictor variables. We describe this algorithm and use it to select variables to predict risk of a specific adverse pregnancy outcome: failure to progress in labour. Neural network, logistic regression and hierarchical Bayesian risk prediction models are constructed, all of which achieve close to the limit of performance attainable on this prediction task. We show that better prediction performance requires more discriminative clinical information rather than improved modelling techniques. It is also shown that better diagnostic criteria in clinical records would greatly assist the development of systems to predict risk in pregnancy.
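
The paper's feature selection algorithm is not reproduced here, but a generic greedy variant of the idea, scoring candidate variable sets by cross-validated area under the ROC curve, might look like the sketch below (assuming a tabular dataset X, y; the function name is ours):

```python
# Generic greedy forward selection scored by cross-validated ROC AUC.
# A sketch of the idea, not the paper's specific algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def forward_select_by_auc(X, y, max_features=5, cv=5):
    selected, remaining = [], list(range(X.shape[1]))
    best_auc = 0.5                                   # chance-level AUC
    while remaining and len(selected) < max_features:
        def auc_with(j):
            cols = selected + [j]
            model = LogisticRegression(max_iter=1000)
            return cross_val_score(model, X[:, cols], y,
                                   scoring="roc_auc", cv=cv).mean()
        scores = {j: auc_with(j) for j in remaining}
        j_best = max(scores, key=scores.get)
        if scores[j_best] <= best_auc:               # no improvement: stop
            break
        best_auc = scores[j_best]
        selected.append(j_best)
        remaining.remove(j_best)
    return selected, best_auc
```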

Relevance: 30.00%

Abstract:

Recent advances in theoretical neuroscience suggest that motor control can be considered a continuous decision-making process in which uncertainty plays a key role. Decision-makers can be risk-sensitive with respect to this uncertainty, in that they may consider not only the average payoff of an outcome but also the variability of the payoffs. Although such risk-sensitivity is a well-established phenomenon in psychology and economics, it has been much less studied in motor control. In fact, leading theories of motor control, such as optimal feedback control, assume that motor behaviors can be explained as the optimization of a given expected payoff or cost. Here we review evidence that humans exhibit risk-sensitivity in their motor behaviors, thereby demonstrating sensitivity to the variability of "motor costs." Furthermore, we discuss how risk-sensitivity can be incorporated into optimal feedback control models of motor control. We conclude that risk-sensitivity is an important concept in understanding individual motor behavior under uncertainty.
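
In the formulations this line of work builds on, a risk-sensitive controller replaces the expected movement cost with a functional that also weights its variability; two common and locally equivalent choices are the mean-variance and exponential-of-cost objectives (θ > 0 risk-averse, θ < 0 risk-seeking):

```latex
J_{\mathrm{MV}}(\pi) = \mathbb{E}[C] + \theta\,\mathrm{Var}[C],
\qquad
J_{\exp}(\pi) = \frac{2}{\theta}\,\log \mathbb{E}\!\left[e^{\theta C/2}\right]
\;\approx\; \mathbb{E}[C] + \frac{\theta}{4}\,\mathrm{Var}[C].
```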

Relevance: 30.00%

Abstract:

Many aspects of human motor behavior can be understood using optimality principles such as optimal feedback control. However, these proposed optimal control models are risk-neutral; that is, they are indifferent to the variability of the movement cost. Here, we propose the use of a risk-sensitive optimal controller that incorporates movement cost variance either as an added cost (risk-averse controller) or as an added value (risk-seeking controller) to model human motor behavior in the face of uncertainty. We use a sensorimotor task to test the hypothesis that subjects are risk-sensitive. Subjects controlled a virtual ball undergoing Brownian motion towards a target. Subjects were required to minimize an explicit cost, in points, that was a combination of the final positional error of the ball and the integrated control cost. By testing subjects on different levels of Brownian motion noise and relative weighting of the position and control cost, we could distinguish between risk-sensitive and risk-neutral control. We show that subjects change their movement strategy pessimistically in the face of increased uncertainty in accord with the predictions of a risk-averse optimal controller. Our results suggest that risk-sensitivity is a fundamental attribute that needs to be incorporated into optimal feedback control models.

Relevance: 30.00%

Abstract:

The electricity sectors of many developing countries underwent substantial reforms during the 1980s and 1990s, driven by global agendas of privatization and liberalization. However, rural electrification offered little by way of market incentives for profit-seeking private companies and was often neglected. As a consequence, delivery models for rural electrification need to change. This paper will review the experiences of various rural electrification delivery models that have been established in developing countries, including concessionary models, dealership approaches and the strengthening of small and medium-sized energy businesses. It will use examples from the USA, Bangladesh and Nepal, together with a detailed case study of a Nepali rural electric cooperative, to explore the role that local cooperatives can play in extending electricity access. It is shown that although there is no magic bullet solution to deliver rural electrification, if offered appropriate financial and institutional support, socially orientated cooperative businesses can be a willing, efficient and effective means of extending and managing rural electricity services. It is expected that this paper will be of particular value to policy-makers, donors, project planners and implementers currently working in the field of rural electrification. © 2010 Elsevier Ltd.

Relevance: 30.00%

Abstract:

Introducing a "Cheaper, Faster, Better" product in today's highly competitive market is a challenging target. For organizations to improve their performance in this area, they need to adopt methods such as process modelling, risk mitigation and lean principles. Recently, several industries and researchers have focused efforts on transferring the value-orientation concept to other phases of the Product Life Cycle (PLC), such as Product Development (PD), after its evident success in manufacturing. In PD, value maximization, the main objective of lean theory, has been of particular interest as an improvement concept that can enhance process flow logistics and support decision-making. This paper presents an ongoing study of the current understanding of value thinking in PD (VPD), with a focus on value dimensions and implementation benefits. The purpose of this study is to consider the current state of knowledge regarding value thinking in PD and to propose a definition of value and a framework for analyzing value delivery. The framework, named the Value Cycle Map (VCM), is intended to facilitate understanding of value and its delivery mechanism in the context of the PLC. We suggest the VCM could be used as a foundation for future research in value modelling and measurement in PD.