57 results for warranty forecasting
Abstract:
Most motor bodily injury (BI) claims are settled by negotiation, with fewer than 5% of cases going to court. A well-defined negotiation strategy is thus very useful for insurance companies. In this paper we assume that the monetary compensation awarded in court is the maximum amount to be offered by the insurer in the negotiation process. Using a real database, a log-linear model is implemented to estimate the maximal offer. Non-spherical disturbances are detected. Correlation occurs when various claims are settled in the same judicial verdict. Groupwise heteroscedasticity is due to the influence of the forensic valuation on the final compensation amount. An alternative approach based on generalized inference theory is applied to estimate confidence intervals on variance components, since classical interval estimates may be unreliable for datasets with unbalanced structures.
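As a rough illustration of the kind of model this abstract describes, the sketch below fits a log-linear regression with standard errors clustered at the verdict level to reflect the correlation among claims settled in the same judicial verdict. The file name and column names (compensation, forensic_score, victim_age, severity, verdict_id) are hypothetical, not taken from the paper.

```python
# Minimal sketch: log-linear model for court-awarded compensation with
# standard errors clustered by judicial verdict (hypothetical column names).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

claims = pd.read_csv("bi_claims.csv")  # hypothetical dataset

# Log-linear specification: log(compensation) regressed on claim covariates.
model = smf.ols("np.log(compensation) ~ forensic_score + victim_age + severity",
                data=claims)

# Claims decided in the same verdict are correlated, so cluster by verdict.
fit = model.fit(cov_type="cluster", cov_kwds={"groups": claims["verdict_id"]})
print(fit.summary())
```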
Abstract:
The contributions of this paper are twofold: On the one hand, the paper analyses the factors determining the growth in car ownership in Spain over the last two decades, and, on the other, the paper provides empirical evidence for a controversial methodological issue. From a methodological point of view, the paper compares the two alternative decision mechanisms used for modelling car ownership: ordered-response versus unordered-response mechanisms. A discrete choice model is estimated at three points in time: 1980, 1990 and 2000. The study concludes that, on the basis of forecasting performance, the multinomial logit model and the ordered probit model are almost indistinguishable. As for the empirical results, it can be emphasised that income elasticity is not constant and declines as car ownership increases. Moreover, households living in rural areas are less sensitive than those living in urban areas. Car ownership is also sensitive to the quality of public transport for those living in the largest cities. The results also confirmed the existence of a generation effect, which will vanish around the year 2020, a weak life-cycle effect, and a positive effect of employment on the number of cars per household. Finally, the change in the estimated coefficients over time reflects an increase in mobility needs and, consequently, an increase in car ownership.
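A minimal sketch of the methodological comparison, assuming a household dataset with an integer car-count outcome and hypothetical covariates (income, rural, workers); statsmodels provides both the unordered and the ordered specification used here.

```python
# Sketch: unordered (multinomial logit) vs ordered (probit) car-ownership models.
import pandas as pd
import statsmodels.api as sm
from statsmodels.miscmodels.ordinal_model import OrderedModel

hh = pd.read_csv("households.csv")        # hypothetical survey data
X = sm.add_constant(hh[["income", "rural", "workers"]])
y = hh["n_cars"]                          # e.g. 0, 1, 2+ cars per household

mnl = sm.MNLogit(y, X).fit()              # unordered-response mechanism
oprobit = OrderedModel(y, hh[["income", "rural", "workers"]],
                       distr="probit").fit(method="bfgs")  # ordered-response

# Compare in-sample fit; forecasting performance would use a holdout sample.
print(mnl.summary())
print(oprobit.summary())
```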
Abstract:
In this paper we examine the out-of-sample performance of high-yield credit spreads in forecasting real-time and revised data on employment and industrial production in the US. We evaluate models using both a point forecast and a probability forecast exercise. Our main findings suggest the use of a few factors obtained by pooling information from a number of sector-specific high-yield credit spreads. This can be justified by observing that, especially for employment, there is a gain from using a principal components model fitted to high-yield credit spreads compared to the predictions produced by benchmarks such as an AR model, or ARDL models that use either the term spread or the aggregate high-yield spread as exogenous regressor. Moreover, forecasts based on real-time data are generally comparable to forecasts based on revised data. JEL Classification: C22; C53; E32. Keywords: Credit spreads; Principal components; Forecasting; Real-time data.
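A sketch of the factor approach the abstract favours, under assumed file names and alignment: extract a few principal components from a panel of sector-specific spreads, then use them as predictors in a direct forecasting regression.

```python
# Sketch: pool sector-specific high-yield spreads into a few principal
# components and use them to forecast employment growth (hypothetical data).
import pandas as pd
from sklearn.decomposition import PCA
import statsmodels.api as sm

spreads = pd.read_csv("hy_spreads.csv", index_col=0)   # T x N sector spreads
emp = pd.read_csv("employment.csv", index_col=0)["growth"]

factors = PCA(n_components=2).fit_transform(spreads)   # a few common factors

# Direct h-step-ahead forecast: regress future growth on current factors.
h = 1
X = sm.add_constant(factors[:-h])
fit = sm.OLS(emp.values[h:], X).fit()
print(fit.params, fit.rsquared)
```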
Abstract:
Pensions, together with savings and investments during active life, are key elements of retirement planning. Motivation for personal choices about the standard of living, bequest and the replacement ratio of pension with respect to last salary income must be considered. This research contributes to financial planning by helping to quantify long-term care economic needs. We estimate life expectancy from retirement age onwards. The economic cost of care per unit of service is linked to the expected time of needed care and the intensity of required services. The expected individual cost of long-term care from the onset of dependence is estimated separately for men and women. Assumptions on the mortality of dependent people compared to the general population are introduced. Parameters defining eligibility for various forms of coverage by the universal public social care of the welfare system are addressed. The impact of the intensity of social services on individual predictions is assessed, and partial coverage by standard private insurance products is also explored. Data were collected by the Spanish Institute of Statistics in two surveys conducted on the general Spanish population in 1999 and in 2008. Official mortality records and life table trends were used to create realistic scenarios for longevity. We find empirical evidence that the public long-term care system in Spain effectively mitigates the risk of incurring huge lifetime costs. We also find that the most vulnerable categories are citizens with moderate disabilities who do not qualify for public social care support. In the Spanish case, the trends between 1999 and 2008 need to be further explored.
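A stylized sketch of the kind of actuarial computation involved: the expected discounted lifetime cost of care from the onset of dependence, combining survival probabilities, care intensity and a unit cost. All numbers are illustrative assumptions, not estimates from the paper.

```python
# Stylized sketch: expected discounted lifetime cost of long-term care from
# the onset of dependence. Every number below is an illustrative assumption.
import numpy as np

survival = np.array([1.0, 0.92, 0.83, 0.72, 0.58, 0.42, 0.25, 0.10])  # P(alive t years after onset)
annual_cost = 18_000.0                             # cost per year of care (assumed, EUR)
intensity = np.linspace(1.0, 1.6, len(survival))   # care needs grow over time
discount = 0.98 ** np.arange(len(survival))        # simple discount factors

expected_cost = np.sum(survival * intensity * annual_cost * discount)
print(f"Expected individual LTC cost: {expected_cost:,.0f} EUR")
```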
Abstract:
This paper proposes a contemporaneous-threshold multivariate smooth transition autoregressive (C-MSTAR) model in which the regime weights depend on the ex ante probabilities that latent regime-specific variables exceed certain threshold values. A key feature of the model is that the transition function depends on all the parameters of the model as well as on the data. Since the mixing weights are also a function of the regime-specific innovation covariance matrix, the model can account for contemporaneous regime-specific co-movements of the variables. The stability and distributional properties of the proposed model are discussed, as well as issues of estimation, testing and forecasting. The practical usefulness of the C-MSTAR model is illustrated by examining the relationship between US stock prices and interest rates.
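A heavily simplified, univariate sketch of the mixing mechanism only: the weight on one regime is the ex ante probability that a latent regime-specific variable exceeds a threshold. The actual C-MSTAR model is multivariate and ties the weights to the regime-specific innovation covariance matrices; nothing here should be read as the paper's specification.

```python
# Highly simplified univariate sketch of the contemporaneous-threshold idea:
# the regime-1 weight is the ex ante probability that a latent regime-1
# variable exceeds a threshold c. The full C-MSTAR model is multivariate and
# links these weights to the regime innovation covariances.
import numpy as np
from scipy.stats import norm

def cmstar_like_forecast(y_prev, phi1, phi2, c, sigma1):
    mu1 = phi1 * y_prev                    # regime-1 conditional mean
    mu2 = phi2 * y_prev                    # regime-2 conditional mean
    w = norm.cdf((mu1 - c) / sigma1)       # ex ante P(latent regime-1 var > c)
    return w * mu1 + (1.0 - w) * mu2       # smooth mixture of the two regimes

print(cmstar_like_forecast(y_prev=1.2, phi1=0.9, phi2=0.2, c=0.5, sigma1=1.0))
```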
Abstract:
This project proposes teaching materials for a new approach to the first-year Mathematics courses in Business Sciences and Technical Engineering, more in line with the European convergence process, based on projects we call "Mathematical Modelling Workshops" (TMM, from the Catalan name) in which: (1) Students start from real situations and problems for which they must build the most suitable mathematical models themselves and, by manipulating these models appropriately, can obtain the information needed to answer them. (2) The construction, experimentation and evaluation of the models is carried out with the support of the Wiris symbolic calculator and the Excel spreadsheet as "standard" tools for the mathematical work of students and teachers. (3) The syllabuses of the first-year mathematics courses are adapted so that they can be organized around a small number of workshops built on situations tailored to each degree programme. The Mathematics for Business Sciences course is structured around two independent workshops: "Transition matrices" for linear algebra and "Sales forecasting" for functional modelling in one variable. The Mathematics for Engineering course is structured around a single workshop, "Population models", which covers most of the course content: sequences and functional models in one variable, linear algebra and differential equations. A set of interactive exercises based on the WIRIS symbolic calculator (Wiris-player) supports the technical work essential to both courses. Testing these workshops over two consecutive academic years (2006/07 and 2007/08) at two Catalan universities (URL and UAB) has revealed both the undeniable advantages of the new teaching scheme for student learning and the institutional constraints that currently hinder its management and dissemination.
Abstract:
This theoretical project aims to provide an introduction to the concepts of technology watch and competitive intelligence, their relationship with knowledge management, what they mean, and the current state of these disciplines in our immediate environment. Likewise, building on this theoretical groundwork, it also outlines what a methodology for applying the concept of competitive intelligence within an organization might look like, the stages to follow, and what must be coordinated or kept in mind at each of them.
Abstract:
Through this study, we measure how collective MPI operations behave in virtual and physical clusters, and their impact on application performance. As a test case, we use the Weather Research and Forecasting (WRF) simulations.
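A minimal sketch of how such a measurement might look with mpi4py, timing one collective (Allreduce) so the same script can be run on the virtual and the physical cluster and compared; the cluster setup and message size are assumptions.

```python
# Sketch: timing a collective MPI operation with mpi4py; run identically on
# the virtual and the physical cluster and compare the elapsed times.
# Launch with e.g.: mpirun -np 16 python bench_allreduce.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
data = np.ones(1_000_000, dtype=np.float64)  # assumed message size
out = np.empty_like(data)

comm.Barrier()                       # synchronize ranks before timing
t0 = MPI.Wtime()
comm.Allreduce(data, out, op=MPI.SUM)
elapsed = MPI.Wtime() - t0

if comm.rank == 0:
    print(f"Allreduce over {comm.size} ranks: {elapsed * 1e3:.2f} ms")
```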
Abstract:
Quantitative or algorithmic trading is the automation of investment decisions obeying a fixed or dynamic set of rules to determine trading orders. It has increasingly made its way up to 70% of the trading volume of one of the biggest financial markets, the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it, due to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a subject where publications are scarce and infrequent. We review the basic and fundamental mathematical concepts needed for modelling financial markets, such as stochastic processes, stochastic integration, and basic models for price and spread dynamics necessary for building quantitative strategies. We also contrast these models with real market data sampled at one-minute frequency from the Dow Jones Industrial Average (DJIA). Quantitative strategies try to exploit two types of behavior: trend following or mean reversion. The former is grouped in the so-called technical models and the latter in so-called pairs trading. Technical models have been discarded by financial theoreticians, but we show that they can be properly cast into a well-defined scientific predictor if the signal they generate passes the test of being a Markov time. That is, we can tell whether the signal has occurred or not by examining the information up to the current time; or, more technically, if the event is F_t-measurable. On the other hand, the concept of pairs trading, or market-neutral strategy, is fairly simple. However, it can be cast in a variety of mathematical models, ranging from a method based on a simple Euclidean distance, through a co-integration framework, to stochastic differential equations such as the well-known Ornstein-Uhlenbeck mean-reverting SDE and its variations. A model for forecasting any economic or financial magnitude could be properly defined with scientific rigor but still lack any economic value and be considered useless from a practical point of view. This is why this project could not be complete without a backtest of the mentioned strategies. Conducting a useful and realistic backtest is by no means a trivial exercise, since the "laws" that govern financial markets are constantly evolving in time. This is why we emphasize the calibration of the strategies' parameters to adapt to the given market conditions. We find that the parameters of technical models are more volatile than their counterparts from market-neutral strategies, and that calibration must be done at high sampling frequency to constantly track the current market situation. As a whole, the goal of this project is to provide an overview of a quantitative approach to investment, reviewing basic strategies and illustrating them by means of a backtest with real financial market data. The sources of the data used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numerical computations and graphics used and shown in this project were implemented in MATLAB from scratch as part of this thesis. No other mathematical or statistical software was used.
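As a small illustration of the calibration step for a mean-reversion strategy, the sketch below recovers Ornstein-Uhlenbeck parameters from an AR(1) regression of the spread on its lag. The data are simulated here rather than taken from the DJIA series used in the thesis, and the parameter values are arbitrary.

```python
# Sketch: calibrating an Ornstein-Uhlenbeck spread model for pairs trading by
# regressing x_{t+1} on x_t (the AR(1) form of the discretized process).
import numpy as np

rng = np.random.default_rng(0)
kappa, theta, sigma, dt, n = 2.0, 0.0, 0.3, 1 / 252, 5000

# Simulate the OU spread: dx = kappa*(theta - x) dt + sigma dW (Euler scheme).
x = np.zeros(n)
for t in range(n - 1):
    x[t + 1] = x[t] + kappa * (theta - x[t]) * dt \
               + sigma * np.sqrt(dt) * rng.standard_normal()

# AR(1) regression x_{t+1} = a + b x_t + eps recovers the OU parameters.
b, a = np.polyfit(x[:-1], x[1:], 1)   # slope first, then intercept
kappa_hat = -np.log(b) / dt           # mean-reversion speed
theta_hat = a / (1 - b)               # long-run mean
print(f"kappa ~ {kappa_hat:.2f}, theta ~ {theta_hat:.3f}")
```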
Abstract:
Throughout the history of Electrical Engineering education, vector and phasor diagrams have been used as a fundamental learning tool. At present, computational power has replaced them with long data lists, the result of solving equation systems by means of numerical methods. Diagrams have thus been pushed into the academic background and, although explained in theory, they are not used in a practical way within specific examples. This may work against students' understanding of the complex behavior of electrical power systems. This article proposes a modification of the classical Perrine-Baum diagram construction that allows both a more practical representation and a better understanding of the behavior of a high-voltage electric line under different levels of load. This modification also allows forecasting the obsolescence of this behavior and of the line's loading capacity. In addition, we evaluate the impact of this tool on the learning process, showing comparative undergraduate results over three academic years.
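A hedged sketch of the phasor data such a diagram is built from, using the standard ABCD two-port line model (not the authors' modified construction): compute the sending-end voltage phasor as the load varies. The line constants, voltage level and power factor below are illustrative assumptions.

```python
# Sketch: sending-end voltage phasor of a high-voltage line under varying
# load, via the standard ABCD two-port model. Line constants are illustrative;
# a Perrine-Baum-style diagram traces how this phasor moves with load.
import numpy as np

A = complex(0.98, 0.01)          # illustrative ABCD constants (dimensionless)
B = complex(8.0, 40.0)           # ohms
Vr = 220e3 / np.sqrt(3)          # receiving-end phase voltage (V)
pf = 0.9                         # assumed lagging power factor

for P_mw in (0, 50, 100, 150):   # three-phase load levels, MW
    Ir = (P_mw * 1e6 / 3) / (Vr * pf) * np.exp(-1j * np.arccos(pf))
    Vs = A * Vr + B * Ir         # sending-end phase voltage phasor
    print(f"{P_mw:>3} MW: |Vs| = {abs(Vs)/1e3:6.1f} kV, "
          f"angle = {np.degrees(np.angle(Vs)):5.2f} deg")
```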
Abstract:
I describe the customer valuations game, a simple intuitive game that can serve as a foundation for teaching revenue management. The game requires little or no preparation, props or software, takes around two hours (and hence can be finished in one session), and illustrates the formation of classical (airline and hotel) revenue management mechanisms such as advance purchase discounts, booking limits and fixed multiple prices. I normally use the game as a base from which to introduce RM and to develop RM forecasting and optimization concepts. The game is particularly suited for non-technical audiences.
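As one example of the optimization concepts the game can lead into, the sketch below computes a two-fare booking limit with Littlewood's rule, a standard RM result (not something described in this abstract); the fares and demand distribution are made up.

```python
# Sketch of a classical RM optimization concept: Littlewood's rule for a
# two-fare booking limit. Fares and the high-fare demand distribution are
# illustrative assumptions.
from scipy.stats import norm

p_high, p_low = 400.0, 150.0   # two fare classes
mu, sd = 60.0, 20.0            # high-fare demand ~ Normal(mu, sd), assumed

# Protect y* seats for the high fare, where P(D_high > y*) = p_low / p_high.
protection = norm.ppf(1 - p_low / p_high, loc=mu, scale=sd)

capacity = 120
booking_limit = capacity - protection   # seats sellable at the low fare
print(f"Protect {protection:.0f} seats; low-fare booking limit {booking_limit:.0f}")
```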
Abstract:
We evaluate conditional predictive densities for U.S. output growth and inflation using a number of commonly used forecasting models that rely on a large number of macroeconomic predictors. More specifically, we evaluate how well conditional predictive densities based on the commonly used normality assumption fit actual realizations out-of-sample. Our focus on predictive densities acknowledges the possibility that, although some predictors can improve or deteriorate point forecasts, they might have the opposite effect on higher moments. We find that normality is rejected for most models in some dimension according to at least one of the tests we use. Interestingly, however, combinations of predictive densities appear to be correctly approximated by a normal density: the simple, equal average when predicting output growth and the Bayesian model average when predicting inflation.
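A minimal sketch of one standard way to check a normal predictive density out-of-sample, via probability integral transforms (PITs); the specific tests used in the paper are not identified here, and the forecasts and realizations below are placeholders.

```python
# Sketch: checking normal predictive densities with probability integral
# transforms (PITs). If the densities are correct, the PITs are i.i.d. U(0,1).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mean_fcst = rng.normal(2.0, 0.5, 200)            # hypothetical point forecasts
sd_fcst = np.full(200, 1.0)                      # hypothetical predictive sd
realized = mean_fcst + rng.standard_normal(200)  # hypothetical outcomes

pit = stats.norm.cdf(realized, loc=mean_fcst, scale=sd_fcst)

# Kolmogorov-Smirnov test of PIT uniformity (one simple diagnostic among many).
ks_stat, p_value = stats.kstest(pit, "uniform")
print(f"KS statistic {ks_stat:.3f}, p-value {p_value:.3f}")
```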
Abstract:
Any electoral system has an electoral formula that converts vote proportions into parliamentary seats. Pre-electoral polls usually focus on estimating vote proportions and then applying the electoral formula to give a forecast of the parliament's composition. We here describe the problems arising from this approach: there is always a bias in the forecast. We study the origin of the bias and some methods to evaluate and to reduce it. We propose some rules to compute the sample size required for a given forecast accuracy. We show by Monte Carlo simulation the performance of the proposed methods using data from recent Spanish elections. We also propose graphical methods to visualize how electoral formulae and parliamentary forecasts work (or fail).
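For concreteness, here is the D'Hondt highest-averages formula used in Spanish elections; its nonlinearity is precisely why small errors in estimated vote proportions can translate into biased seat forecasts. The party names and vote counts are made up.

```python
# Sketch: the D'Hondt highest-averages formula converting vote counts into
# seats. Each seat goes to the party with the largest quotient v / (s + 1).
def dhondt(votes: dict[str, int], seats: int) -> dict[str, int]:
    alloc = {party: 0 for party in votes}
    for _ in range(seats):
        winner = max(votes, key=lambda p: votes[p] / (alloc[p] + 1))
        alloc[winner] += 1
    return alloc

# Hypothetical district with 7 seats at stake.
print(dhondt({"A": 340_000, "B": 280_000, "C": 160_000, "D": 60_000}, seats=7))
```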
Abstract:
It is well accepted that people resist evidence that contradicts their beliefs. Moreover, despite their training, many scientists reject results that are inconsistent with their theories. This phenomenon is discussed in relation to the field of judgment and decision making by describing four case studies. These concern findings that clinical judgment is less predictive than actuarial models; that simple methods have proven superior to more theoretically correct methods in time series forecasting; that equal weighting of variables is often more accurate than using differential weights; and that decisions can sometimes be improved by discarding relevant information. All findings relate to the apparently difficult-to-accept idea that simple models can predict complex phenomena better than complex ones. It is true that there is a scientific marketplace for ideas. However, like its economic counterpart, it is subject to inefficiencies (e.g., thinness, asymmetric information, and speculative bubbles). Unfortunately, the market is only correct in the long run. The road to enlightenment is bumpy.
Abstract:
A new parameter is introduced: the lightning potential index (LPI), which is a measure of the potential for charge generation and separation that leads to lightning flashes in convective thunderstorms. The LPI is calculated within the charge separation region of clouds, between 0°C and −20°C, where the non-inductive mechanism involving collisions of ice and graupel particles in the presence of supercooled water is most effective. As shown in several case studies using the Weather Research and Forecasting (WRF) model with explicit microphysics, the LPI is highly correlated with observed lightning. It is suggested that the LPI may be a useful parameter for predicting lightning, as well as a tool for improving weather forecasting of convective storms and heavy rainfall.