76 results for value-selling
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
Self-reported home values are widely used as a measure of housing wealth by researchers employing a variety of data sets and studying a number of individual- and household-level decisions. The accuracy of this measure is an open empirical question, and assessing it requires some market-based benchmark for the values reported. In this research, we study the predictive power of self-reported housing wealth for estimating sales prices, using the Health and Retirement Study. We find that homeowners, on average, overestimate the value of their properties by between 5% and 10%. More importantly, we are the first to document a strong correlation between accuracy and the economic conditions at the time of purchase (measured by the prevailing interest rate, the growth of household income, and the growth of median housing prices). While most individuals overestimate the value of their properties, those who bought during more difficult economic times tend to be more accurate, and in some cases even underestimate the value of their house. These results establish a surprisingly strong and long-lived, possibly permanent, effect of the initial conditions surrounding the purchase of a property on how its owner values it. This cyclicality in the overestimation of house prices helps explain the difficulties currently faced by many homeowners who expected large home-value appreciation to rescue them if rising interest rates jeopardized their ability to meet their financial commitments.
Abstract:
Does shareholder value orientation lead to shareholder value creation? This article proposes methods to quantify both shareholder value orientation and shareholder value creation. Applying these models makes it possible to quantify both dimensions and to examine statistically to what extent shareholder value orientation explains shareholder value creation. The scoring model developed in this paper quantifies the orientation of managers towards the objective of maximizing shareholder wealth. The method evaluates information disclosed by the companies and scores value orientation on a scale from 0 to 10 points. Analytically, the variable value orientation is operationalized as the general attitude of managers toward the objective of value creation, investment policy and behavior, flexibility, and eight further value drivers. The value creation model works with market data such as stock prices and dividend payments. Both methods were applied to a sample of 38 blue-chip companies: 32 firms belonged to the share index IBEX 35 on July 1st, 1999; one company, representing the "new economy", was listed in the Spanish New Market as of July 1st, 2001; and 5 European multinational groups formed part of the EuroStoxx 50 index, also on July 1st, 2001. The research period comprised the financial years 1998, 1999, and 2000. A regression analysis showed that between 15.9% and 23.4% of shareholder value creation can be explained by shareholder value orientation.
Abstract:
We propose a simple mechanism that implements the Ordinal Shapley Value (Pérez-Castrillo and Wettstein [2005]) for economies with three or fewer agents.
Abstract:
We propose a new solution concept to address the problem of sharing a surplus among the agents generating it. The problem is formulated in the preferences-endowments space. The solution is defined recursively, incorporating notions of consistency and fairness and relying on properties satisfied by the Shapley value for Transferable Utility (TU) games. We show a solution exists, and call it the Ordinal Shapley value (OSV). We characterize the OSV using the notion of coalitional dividends, and furthermore show it is monotone and anonymous. Finally, similarly to the weighted Shapley value for TU games, we construct a weighted OSV as well.
Abstract:
We propose a new solution concept to address the problem of sharing a surplus among the agents generating it. The sharing problem is formulated in the preferences-endowments space. The solution is defined in a recursive manner incorporating notions of consistency and fairness and relying on properties satisfied by the Shapley value for Transferable Utility (TU) games. We show a solution exists, and refer to it as an Ordinal Shapley value (OSV). The OSV associates with each problem an allocation as well as a matrix of concessions "measuring" the gains each agent foregoes in favor of the other agents. We analyze the structure of the concessions, and show they are unique and symmetric. Next we characterize the OSV using the notion of coalitional dividends, and furthermore show it is monotone in an agent's initial endowments and satisfies anonymity. Finally, similarly to the weighted Shapley value for TU games, we construct a weighted OSV as well.
Abstract:
This paper presents value added estimates for the Italian regions in benchmark years from 1891 to 1951, linked to official figures available from 1971 onwards in order to offer a long-term picture. Sources and methodology are documented and discussed, whilst regional activity rates and productivity are also presented and compared. Some questions are then briefly reconsidered: the origins and extent of the north-south divide, the role of migration and regional policy in shaping the pattern of regional inequality, the importance of social capital, and the positioning of Italy in the international debate on regional convergence, where it stands out for the long-run persistence of its disparities.
Abstract:
The paper develops a stability theory for the optimal value and the optimal set mapping of optimization problems posed in a Banach space. The problems considered in this paper have an arbitrary number of inequality constraints involving lower semicontinuous (not necessarily convex) functions and one closed abstract constraint set. The considered perturbations lead to problems of the same type as the nominal one (with the same space of variables and the same number of constraints), where the abstract constraint set can also be perturbed. The spaces of functions involved in the problems (objective and constraints) are equipped with the metric of uniform convergence on bounded sets, while the space of closed sets is, coherently, equipped with the Attouch-Wets topology. The paper examines, in a unified way, the lower and upper semicontinuity of the optimal value function, and the closedness, lower and upper semicontinuity (in the sense of Berge) of the optimal set mapping. This paper can be seen as a second part of the stability theory presented in [17], where we studied the stability of the feasible set mapping (completed here with the analysis of the Lipschitz-like property).
Abstract:
The purpose of this paper is to provide a comparative analysis of pork value chains in Catalonia, Spain and Manitoba, Canada. Intensive hog production models were implemented in Catalonia in the 1960s as a result of agricultural crises and were fostered by feedstuff factories. The expansion of the hog sector in Manitoba is more recent (the 1990s) and was brought about in large part by the opening of the Maple Leaf Meats processing plant in Brandon, Manitoba, which is capable of processing 90,000 hogs per week. Both hog production models, the 'older' one in Catalonia and the 'newer' one in Manitoba, have until recently been examples of success: inventories and production have been increasing substantially, and both regions have proven to have great export potential. Recently, however, tensions have been developing within the hog production models of both regions, particularly as they relate to environmental concerns. The paper compares the value chains with respect to their origins (e.g. supplying a growing demand for pork, ensuring farm profitability) and present states (e.g. environmental concerns, profitability).
Keywords: pork value chain, hog farms, agri-food studies. JEL: Q10, Q13, O57
Abstract:
OER-based learning has the potential to overcome many shortcomings and problems of traditional education. It is not hampered by IP restrictions; it can rely on collaborative, cumulative, iterative refinement of resources; and the digital form provides unprecedented flexibility with respect to configuration and delivery. The OER community is a progressive group of educators and learners with decades of learning research to draw from, who know that we must prepare learners for an evolving and diverse reality. Despite this, OER tends to replicate the unsuccessful characteristics of traditional education. To remedy this we may need to remember the importance of imperfection, mistakes, problems, disagreement, and the incomplete for engaged learning, and relinquish our notions of perfection, acknowledging that learners learn differently and that we need diverse learners. We must stretch our perceptions of quality and provide mechanisms for engaging the incredible pool of educators globally to fulfill the promise of inclusive education.
Abstract:
With this final master thesis we will contribute to the Asterisk open source project. Asterisk is an open source project that started with the main objective of developing an IP telephony platform implemented entirely in software (and therefore not hardware dependent) and released under an open license, the GPL. The project was started in 1999 by the software engineer Mark Spencer at Digium. The main motivation behind the project was that the telecommunications sector lacked open solutions: most of the available solutions were based on proprietary standards, which are closed and incompatible with one another. Behind the Asterisk project there is a company, Digium, which has led the project since it originated in its laboratories. Digium has some of its employees fully dedicated to contributing to Asterisk, and it also provides the whole infrastructure required by the open source project. Because of the open source nature of Asterisk, Digium's business is not based on licensing products; instead, it is based on offering services around Asterisk and on designing and selling hardware components to be used with Asterisk. The Asterisk project has grown considerably since its birth, offering in its latest versions advanced call-management functionality and compatibility with hardware that was previously exclusive to proprietary solutions. As a result, Asterisk is becoming a serious alternative to these proprietary solutions: it has reached a level of maturity that makes it very stable and, being open source, it can be fully customized to a given requirement, which would be impossible with proprietary solutions. Given the scale the project is reaching, every day more companies that develop value-added software for telephony platforms are seriously evaluating the option of making their software fully compatible with Asterisk platforms.
All these factors make Asterisk a consolidated project that is nevertheless in constant evolution, striving to offer all the functionality provided by proprietary solutions. This final master thesis is divided into two fully complementary blocks. In the first block we analyze Asterisk as an open source project and as a telephony platform (PBX). As a result of this analysis we produce a document, written in English because that is the Asterisk project's official language, which future contributors can use as a starting point for joining Asterisk. In the second block we make a development contribution to the Asterisk project. The contribution could take several forms, such as solving bugs, developing new functionality, or starting an Asterisk satellite project; its type will depend on the needs of the project at that moment.
Abstract:
This paper presents an application of the Multi-Scale Integrated Analysis of Societal and Ecosystem Metabolism (MuSIASEM) approach to the estimation of quantities of Gross Value Added (GVA) referring to economic entities defined at different scales of study. The method first estimates benchmark values of the pace of GVA generation per hour of labour across economic sectors. These values are estimated as intensive variables (e.g. €/hour) by dividing the various sectorial GVA of the country (expressed in € per year) by the hours of paid work in that same sector per year. This assessment is obtained using data referring to national statistics (top-down information referring to the national level). Then, the approach uses bottom-up information (the number of hours of paid work in the various economic sectors of an economic entity, e.g. a city or a province, operating within the country) to estimate the amount of GVA produced by that entity. This estimate is obtained by multiplying the number of hours of work in each sector in the economic entity by the benchmark value of GVA generation per hour of work of that particular sector (national average). This method is applied and tested on two different socio-economic systems: (i) Catalonia (considered level n) and Barcelona (considered level n-1); and (ii) the region of Lima (considered level n) and Lima Metropolitan Area (considered level n-1). In both cases, the GVA per year of the local economic entity (Barcelona and Lima Metropolitan Area) is estimated and the resulting value is compared with GVA data provided by statistical offices. The empirical analysis seems to validate the approach, even though the case of Lima Metropolitan Area indicates a need for additional care when dealing with the estimate of GVA in primary sectors (agriculture and mining).
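The two-step procedure described in this abstract (top-down sectorial benchmarks, then bottom-up scaling by local hours of work) can be sketched numerically. All sector names and figures below are illustrative placeholders, not data from the paper:

```python
# Minimal sketch of the MuSIASEM-style GVA estimation described above.
# Figures are invented for illustration only.

# Step 1 (top-down): national GVA per sector (EUR/year) divided by
# national hours of paid work per sector (hours/year) gives a benchmark
# pace of GVA generation (EUR/hour) for each sector.
national_gva = {"agriculture": 12e9, "industry": 90e9, "services": 240e9}
national_hours = {"agriculture": 0.8e9, "industry": 3.0e9, "services": 9.0e9}

benchmark = {s: national_gva[s] / national_hours[s] for s in national_gva}

# Step 2 (bottom-up): hours of paid work per sector in the local entity
# (e.g. a city) are multiplied by the national benchmarks and summed.
local_hours = {"agriculture": 0.01e9, "industry": 0.4e9, "services": 1.5e9}

local_gva = sum(local_hours[s] * benchmark[s] for s in local_hours)
print(f"Estimated local GVA: {local_gva / 1e9:.2f} bn EUR/year")
```

The estimate can then be compared against the GVA figure published by the statistical office for the same entity, as the paper does for Barcelona and Lima Metropolitan Area.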
Abstract:
A method to estimate an extreme quantile that requires no distributional assumptions is presented. The approach is based on transformed kernel estimation of the cumulative distribution function (cdf). The proposed method consists of a double-transformation kernel estimation. We derive optimal bandwidth selection methods that have a direct expression for the smoothing parameter. The bandwidth adapts to the given quantile level. The procedure is useful for large data sets and improves quantile estimation, compared to other methods, for heavy-tailed distributions. Implementation is straightforward and R programs are available.
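To illustrate the general idea of transformed kernel quantile estimation (a simplified single-transformation sketch, not the paper's double-transformation estimator or its optimal bandwidth formula), one can kernel-smooth the cdf of transformed data and invert it at the desired level. The log transform, Silverman-type bandwidth, and root-finding inversion below are assumptions made for the sketch:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def transformed_kernel_quantile(data, p, h=None):
    """Estimate the p-quantile of positive, heavy-tailed data by
    Gaussian-kernel smoothing of the cdf of log-transformed
    observations, then inverting and back-transforming.
    (Illustrative stand-in for the double-transformation method.)"""
    y = np.log(data)                                # lighten the tail
    n = len(y)
    if h is None:
        h = 1.06 * y.std(ddof=1) * n ** (-1 / 5)    # Silverman-type rule

    def cdf(t):                                     # kernel estimate of F(t)
        return norm.cdf((t - y) / h).mean()

    # Invert the smoothed cdf by root finding on a bracket that
    # safely contains the p-quantile, then undo the log transform.
    lo, hi = y.min() - 10 * h, y.max() + 10 * h
    t_p = brentq(lambda t: cdf(t) - p, lo, hi)
    return np.exp(t_p)

rng = np.random.default_rng(0)
sample = rng.lognormal(size=5000)                   # heavy-tailed example
print(transformed_kernel_quantile(sample, 0.95))
```

For standard lognormal data the estimated 0.95 quantile should land near the theoretical value exp(1.645), roughly 5.2; the paper's double transformation and quantile-specific bandwidth are refinements beyond this sketch.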