960 results for Maximum-entropy selection criterion
Abstract:
The aim of this thesis is to examine whether pricing anomalies exist in the Finnish stock markets by comparing the performance of quantile portfolios formed on the basis of either individual valuation ratios, composite value measures, or combined value and momentum indicators. All the research papers included in the thesis show evidence of value anomalies in the Finnish stock markets. In the first paper, the sample of stocks over the 1991-2006 period is divided into quintile portfolios based on four individual valuation ratios (i.e., E/P, EBITDA/EV, B/P, and S/P) and three hybrids of them (i.e., composite value measures). The results show the superiority of composite value measures as a selection criterion for value stocks, particularly when EBITDA/EV is employed as the earnings multiple. The main focus of the second paper is on the impact of the holding period length on the performance of value strategies. As an extension to the first paper, two more individual ratios (i.e., CF/P and D/P) are included in the comparative analysis. The sample of stocks over the 1993-2008 period is divided into tercile portfolios based on six individual valuation ratios and three hybrids of them. The use of either the dividend yield criterion or one of the three composite value measures examined results in the best value portfolio performance according to all the performance metrics used. Parallel to the findings of many international studies, our results from the performance comparisons indicate that, for the sample data employed, yearly reformation of portfolios is not necessarily optimal for maximally capturing the value premium. Instead, the value investor may extend the holding period up to five years without any decrease in long-term portfolio performance. The same also holds for the results of the third paper, which examines the applicability of the data envelopment analysis (DEA) method in discriminating undervalued stocks from overvalued ones. The fourth paper examines the added value of combining price momentum with various value strategies. Taking account of price momentum improves the performance of value portfolios in most cases. The performance improvement is greatest for value portfolios formed on the basis of the 3-composite value measure consisting of the D/P, B/P and EBITDA/EV ratios. The risk-adjusted performance can be enhanced further by following a 130/30 long-short strategy in which the long position in value winner stocks is leveraged by 30 percentage points while simultaneously selling short glamour loser stocks by the same amount. The average return of the long-short position proved to be more than double the stock market average, coupled with a decrease in volatility. The fifth paper offers a new approach to combining value and momentum indicators into a single portfolio-formation criterion using different variants of DEA models. The results throughout the 1994-2010 sample period show that the top-tercile portfolios outperform both the market portfolio and the corresponding bottom-tercile portfolios. In addition, the middle-tercile portfolios also outperform the comparable bottom-tercile portfolios when DEA models are used as a basis for the stock classification criteria. To my knowledge, such strong performance differences have not been reported in earlier peer-reviewed studies that have employed a comparable quantile approach to dividing stocks into portfolios.
Consistent with the previous literature, the division of the full sample period into bullish and bearish periods reveals that the top-quantile DEA portfolios lose far less of their value during bearish conditions than do the corresponding bottom portfolios. The sixth paper extends the sample period employed in the fourth paper by one year (i.e., 1993-2009), covering also the first years of the recent financial crisis. It contributes to the fourth paper by examining the impact of stock market conditions on the main results. Consistent with the fifth paper, value portfolios lose much less of their value during bearish conditions than do stocks on average. The inclusion of a momentum criterion adds some value for an investor during bullish conditions, but this added value turns negative during bearish conditions. During bear market periods some of the value loser portfolios perform even better than their value winner counterparts. Furthermore, the results show that the recent financial crisis has reduced the added value of using combinations of momentum and value indicators as portfolio formation criteria. However, since stock markets have historically been bullish more often than bearish, the combination of the value and momentum criteria has paid off for the investor despite the fact that its added value during bearish periods is, on average, negative.
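As an illustration of the portfolio-formation mechanics described in this abstract, the sketch below forms quantile portfolios from a composite value measure and builds equal-weighted 130/30 long-short weights. It is a minimal stand-in, not the thesis code: the column names, the toy data and the two-quantile split are assumptions.

```python
# Illustrative sketch (not the thesis code): quantile portfolios from a
# composite value measure, plus equal-weighted 130/30 long-short weights.
import pandas as pd

def composite_value_rank(df, ratios=("EBITDA_EV", "B_P", "D_P")):
    """Average the cross-sectional percentile ranks of several valuation ratios."""
    ranks = df[list(ratios)].rank(pct=True)          # higher ratio = cheaper stock
    return ranks.mean(axis=1)

def quantile_portfolios(df, criterion, n=5):
    """Split stocks into n portfolios (1 = glamour, n = value) by a criterion."""
    df = df.copy()
    df["score"] = criterion
    df["portfolio"] = pd.qcut(df["score"], n, labels=list(range(1, n + 1)))
    return df

def long_short_130_30(value_winners, glamour_losers):
    """Equal-weighted 130/30 weights: long value winners, short glamour losers."""
    w_long = pd.Series(1.30 / len(value_winners), index=value_winners)
    w_short = pd.Series(-0.30 / len(glamour_losers), index=glamour_losers)
    return pd.concat([w_long, w_short])

# Example with toy data (ratio values are made up):
data = pd.DataFrame(
    {"EBITDA_EV": [0.15, 0.05, 0.22, 0.09], "B_P": [1.2, 0.4, 1.8, 0.7],
     "D_P": [0.04, 0.01, 0.06, 0.02]},
    index=["A", "B", "C", "D"])
data = quantile_portfolios(data, composite_value_rank(data), n=2)
weights = long_short_130_30(value_winners=data.index[data["portfolio"] == 2],
                            glamour_losers=data.index[data["portfolio"] == 1])
print(data[["score", "portfolio"]])
print(weights)
```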
Abstract:
This work presents the application of a computer package for generating projection data for neutron computerized tomography and, in a second part, discusses an application of neutron tomography, using projection data obtained by the Monte Carlo technique, for the detection and localization of light materials, such as those containing hydrogen, concealed by heavy materials such as iron and lead. For the tomographic reconstructions of the simulated samples, only six equally spaced projection angles distributed between 0º and 180º were used, with reconstruction performed by an algorithm (ARIEM) based on the maximum entropy principle. With neutron tomography it was possible to detect and locate polyethylene and water hidden behind 1 cm-thick lead and iron. It is thus demonstrated that thermal neutron tomography is a viable test method that can provide important information about the interior of test components and is therefore extremely useful in routine industrial applications.
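The following sketch illustrates the kind of maximum-entropy-flavoured iterative reconstruction the abstract refers to, using a generic multiplicative ART (MART) update on a toy 2x2 image with four projection rays. It is not the ARIEM algorithm itself; the system matrix, ray geometry and iteration count are assumptions.

```python
# Hedged illustration: a multiplicative ART (MART) update, a classic iterative
# scheme whose fixed point is a maximum-entropy-consistent solution.
import numpy as np

def mart(A, b, n_iter=50, relaxation=1.0):
    """Reconstruct x >= 0 from projections b ≈ A @ x with multiplicative updates."""
    x = np.ones(A.shape[1])                      # flat (maximum-entropy) start
    for _ in range(n_iter):
        for i in range(A.shape[0]):              # loop over projection rays
            est = A[i] @ x
            if est > 0 and b[i] > 0:
                x *= (b[i] / est) ** (relaxation * A[i] / A[i].max())
    return x

# Toy 2x2 "image" measured along rows and columns (four rays):
A = np.array([[1, 1, 0, 0],   # row sums
              [0, 0, 1, 1],
              [1, 0, 1, 0],   # column sums
              [0, 1, 0, 1]], dtype=float)
true_x = np.array([0.1, 0.9, 0.5, 0.5])
print(mart(A, A @ true_x).round(3))
```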
Abstract:
Linear prediction is a well-established numerical method in signal processing. In the field of optical spectroscopy it is used mainly for extrapolating known parts of an optical signal, either to obtain a longer signal or to deduce missing samples. The former is needed particularly when narrowing spectral lines for the purpose of spectral information extraction. In the present paper, coherent anti-Stokes Raman scattering (CARS) spectra were investigated. The spectra were significantly distorted by the presence of a nonlinear nonresonant background, and the line shapes were far from Gaussian/Lorentzian profiles. To overcome these disadvantages, the maximum entropy method (MEM) was used for phase spectrum retrieval. The resulting broad MEM spectra were then subjected to linear prediction analysis in order to narrow them.
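A minimal sketch of the linear prediction step mentioned above: AR coefficients are estimated from the autocorrelation via the Yule-Walker equations and used to extrapolate a signal. The synthetic two-line "spectrum", the model order and the number of predicted samples are assumptions, not the paper's settings.

```python
# Hedged sketch of linear prediction (AR extrapolation) via Yule-Walker.
import numpy as np

def yule_walker(x, order):
    """Estimate AR coefficients a[1..order] from the sample autocorrelation."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

def lp_extrapolate(x, order, n_new):
    """Append n_new linearly predicted samples to the signal x."""
    m = x.mean()
    a = yule_walker(x - m, order)
    out = list(x - m)
    for _ in range(n_new):
        out.append(float(np.dot(a, out[-1:-order - 1:-1])))   # predict from last p samples
    return np.array(out) + m

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 200)
signal = np.cos(3 * t) + 0.5 * np.cos(7 * t) + 0.01 * rng.normal(size=t.size)
extended = lp_extrapolate(signal, order=8, n_new=100)
print(extended.shape)                                          # (300,)
```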
Abstract:
This bachelor's thesis focuses on supplier selection criteria, both in general and in a selected case company. The discussion of supplier selection criteria is grounded in an overview of the supplier selection process and of supplier classification, and on that basis the criteria are examined at a general level and in supplier relationships of different levels.
Abstract:
Maintenance of thermal homeostasis in rats fed a high-fat diet (HFD) is associated with changes in their thermal balance. The thermodynamic relationship between heat dissipation and energy storage is altered by the ingestion of a high-energy diet. Observation of thermal registers of core temperature behavior, in humans and rodents, permits the identification of characteristics of the time series, such as autoreference and stationarity, that fit a stochastic analysis well. To identify this change, we used, for the first time, a stochastic autoregressive model, the concepts of which match those of the physiological systems involved, applied to male HFD rats compared with age-matched male controls on a standard food intake (n=7 per group). By analyzing the recorded temperature time series, we were able to identify when thermal homeostasis was affected by the new diet. The autoregressive time series model (AR model) was used to predict the occurrence of thermal homeostasis, and this model proved to be very effective in distinguishing such a physiological disorder. Thus, we infer from the results of our study that the maximum entropy distribution, as a means for the stochastic characterization of temperature time series registers, may be established as an important and early tool to aid in the diagnosis and prevention of metabolic diseases, owing to its ability to detect small variations in the thermal profile.
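The sketch below shows, under assumptions, how an autoregressive model can be fitted to a core-temperature register and its residuals used to flag departures from the fitted thermal baseline; the synthetic series, lag order and threshold are illustrative only and do not reproduce the study's analysis.

```python
# A minimal sketch (not the study's code) of AR modelling of a temperature register.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)
baseline = 37.0 + 0.3 * np.sin(np.linspace(0, 20 * np.pi, 2000))   # circadian-like rhythm
temp = baseline + 0.05 * rng.normal(size=baseline.size)            # stand-in core-temperature register

fit = AutoReg(temp, lags=24).fit()            # autoregressive fit with an assumed lag order
threshold = 3 * fit.resid.std()               # assumed 3-sigma residual threshold
outliers = np.where(np.abs(fit.resid) > threshold)[0]
print("samples departing from the fitted thermal baseline:", outliers.size)
```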
Abstract:
This thesis investigates the performance of value and momentum strategies in the Swedish stock market during the 2000-2015 sample period. In addition, the performance of some value and value-momentum combinations is examined. The data consist of all the publicly traded companies in the Swedish stock market between 2000 and 2015. The P/E, P/B, P/S, EV/EBITDA and EV/S ratios and 3-, 6- and 12-month momentum criteria are used in the portfolio formation. In addition to single selection criteria, the combination of P/E and P/B (i.e., the Graham number), the average ranking of the five value criteria, and an EV/EBIT – 3-month momentum combination are used as portfolio-formation criteria. The stocks are divided into quintile portfolios based on each selection criterion. The portfolios are reformed once a year using April price information and the previous year's financial information. The performance of the portfolios is examined on the basis of average annual return, the Sharpe ratio and the Jensen alpha. The results show that the value-momentum combination is the best-performing portfolio both during the whole sample period and during the sub-period that started after the 2007 financial crisis.
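For reference, the sketch below computes the performance metrics named above, the Sharpe ratio and Jensen's alpha, for a toy monthly return series; the risk-free rate, market proxy and return data are assumptions.

```python
# Hedged sketch of the performance metrics: Sharpe ratio and Jensen's alpha.
import numpy as np

def sharpe_ratio(returns, rf=0.0, periods_per_year=12):
    excess = returns - rf / periods_per_year
    return np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)

def jensen_alpha(returns, market, rf=0.0, periods_per_year=12):
    """CAPM regression intercept: r_p - r_f = alpha + beta (r_m - r_f) + e."""
    ex_p = returns - rf / periods_per_year
    ex_m = market - rf / periods_per_year
    beta, alpha = np.polyfit(ex_m, ex_p, 1)            # slope, intercept
    return alpha * periods_per_year, beta              # annualised alpha and beta

rng = np.random.default_rng(1)
market = 0.005 + 0.04 * rng.normal(size=120)           # 10 years of toy monthly returns
portfolio = 0.002 + 1.1 * market + 0.02 * rng.normal(size=120)
print(sharpe_ratio(portfolio, rf=0.01))
print(jensen_alpha(portfolio, market, rf=0.01))
```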
Abstract:
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.
Abstract:
In Fillmore's frame semantics, words take their meaning in relation to the event or situational context in which they occur. FrameNet, a lexical resource for English, defines around 1,000 conceptual frames covering most possible contexts. Within a conceptual frame, a predicate calls for arguments to fill the various semantic roles associated with the frame (for example: Victim, Manner, Recipient, Speaker). We seek to annotate these semantic roles automatically, given the semantic frame and the predicate. To do so, we train a machine learning algorithm on arguments whose role is known in order to generalize to arguments whose role is unknown. In particular, we rely on lexical properties of semantic proximity between the most representative words of the arguments, notably by using vector representations of the words of the lexicon.
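A minimal sketch of the supervised role-labelling setup described above: argument head words are represented by word vectors (random stand-ins here) and a classifier predicts the FrameNet role. The role inventory, vector dimension and classifier choice are assumptions, not the thesis configuration.

```python
# Hedged sketch: predicting semantic roles from word-vector features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
dim = 50
roles = ["Victim", "Manner", "Recipient", "Speaker"]

# Toy training set: 200 argument head-word vectors with known roles.
X_train = rng.normal(size=(200, dim))
y_train = rng.choice(roles, size=200)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# At annotation time, an unseen argument's head-word vector gets a predicted role.
new_argument_vec = rng.normal(size=(1, dim))
print(clf.predict(new_argument_vec)[0])
```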
Abstract:
There are several factors that affect piglet survival, and these have a bearing on sow productivity. Ten variables that influence pre-weaning vitality were analysed using records from the Pig Industry Board, Zimbabwe. These included individual piglet birth weight, piglet origin (nursed in the original litter or fostered), sex, relative birth weight expressed in standard deviation units, sow parity, total number of piglets born, year and month of farrowing, within-litter variability, and the presence of stillborn or mummified littermates. The main factors that influenced piglet mortality were fostering, parity and within-litter variability, especially the weight of the individual piglet relative to the litter average (P<0.05). The presence of a mummified or stillborn littermate, which could be a proxy for an unfavourable uterine environment or trauma during the birth process, did not influence pre-weaning mortality. Variability within a litter, and the deviation of an individual piglet's weight from the litter mean, influenced survival to weaning. It is, therefore, advisable for breeders to include uniformity within the litter as a selection criterion. The recording of these variables by farmers appears to be a useful management practice for identifying piglets at risk and establishing palliative measures. Furthermore, farmers should know which litters, and which piglets within a litter, are at risk and require more attention.
Abstract:
Induction of labor has been shown to increase cesarean section rates, especially in nulliparous women with a clinically unfavorable cervix. Since clinical assessment of the cervix is a subjective, although widely used, method, the objective of the present study was to determine the usefulness of ultrasound measurement of cervical length, compared with the Bishop score, in predicting successful labor induction in nulliparous patients in the obstetrics department of the Hospital Universitario Clínica San Rafael, Bogotá. Materials and methods: An observational study was conducted on a prospective cohort of 80 pregnant women who underwent ultrasonographic and clinical assessment of the cervix before labor induction was started. Results: Bivariate analysis showed that patients with a cervical length >20 mm were 1.57 times as likely to deliver by cesarean section (RR 1.57, 95% CI 1.03-2.39, p<0.05). Similarly, patients with a Bishop score of 0 to 3 were 2.33 times as likely to deliver by cesarean section (RR 2.33, 95% CI 1.28-4.23, p<0.05). Binary logistic regression showed that maternal age and cervical length were the only statistically significant independent parameters for predicting successful induction. Conclusions: Ultrasound measurement of cervical length is more useful than clinical assessment of the cervix in predicting successful labor induction in nulliparous women.
Abstract:
Severe wind storms are one of the major natural hazards in the extratropics and inflict substantial economic damages and even casualties. Insured storm-related losses depend on (i) the frequency, nature and dynamics of storms, (ii) the vulnerability of the values at risk, (iii) the geographical distribution of these values, and (iv) the particular conditions of the risk transfer. It is thus of great importance to assess the impact of climate change on future storm losses. To this end, the current study employs, to our knowledge for the first time, a coupled approach, using output from high-resolution regional climate model scenarios for the European sector to drive an operational insurance loss model. An ensemble of coupled climate-damage scenarios is used to provide an estimate of the inherent uncertainties. Output of two state-of-the-art global climate models (HadAM3, ECHAM5) is used for present (1961–1990) and future climates (2071–2100, SRES A2 scenario). These serve as boundary data for two nested regional climate models with sophisticated gust parametrizations (CLM, CHRM). For validation and calibration purposes, an additional simulation is undertaken with the CHRM driven by the ERA40 reanalysis. The operational insurance model (Swiss Re) uses a European-wide damage function, an average vulnerability curve for all risk types, and contains the actual value distribution of a complete European market portfolio. The coupling between climate and damage models is based on daily maxima of 10 m gust winds, and the strategy adopted consists of three main steps: (i) development and application of a pragmatic selection criterion to retrieve significant storm events, (ii) generation of a probabilistic event set using a Monte-Carlo approach in the hazard module of the insurance model, and (iii) calibration of the simulated annual expected losses with a historic loss data base. The climate models considered agree regarding an increase in the intensity of extreme storms in a band across central Europe (stretching from southern UK and northern France to Denmark, northern Germany and into eastern Europe). This effect increases with event strength, and rare storms show the largest climate change sensitivity, but are also beset with the largest uncertainties. Wind gusts decrease over northern Scandinavia and southern Europe. The highest intra-ensemble variability is simulated for Ireland, the UK, the Mediterranean, and parts of eastern Europe. The resulting changes in European-wide losses over the 110-year period are positive for all layers and all model runs considered and amount to 44% (annual expected loss), 23% (10-year loss), 50% (30-year loss), and 104% (100-year loss). There is a disproportionate increase in losses for rare high-impact events. The changes result from increases in both the severity and the frequency of wind gusts. Considerable geographical variability of the expected losses exists, with Denmark and Germany experiencing the largest loss increases (116% and 114%, respectively). All countries considered except for Ireland (−22%) experience some loss increases. Some ramifications of these results for the socio-economic sector are discussed, and future avenues for research are highlighted. The technique introduced in this study and its application to realistic market portfolios offer exciting prospects for future research on the impact of climate change that is relevant for policy makers, scientists and economists.
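The sketch below illustrates step (i) of the coupling strategy, a pragmatic selection criterion applied to daily maxima of 10 m gust winds; the percentile threshold, the minimum affected area and the toy gust field are assumptions rather than the study's operational settings.

```python
# Hedged sketch of a pragmatic storm-event selection criterion on daily gust maxima.
import numpy as np

rng = np.random.default_rng(0)
days, ny, nx = 365, 20, 30
gust = rng.gamma(shape=2.0, scale=6.0, size=(days, ny, nx))   # toy daily gust maxima (m/s)

local_p98 = np.percentile(gust, 98, axis=0)                   # per-grid-cell threshold
exceed_frac = (gust > local_p98).mean(axis=(1, 2))            # area fraction exceeding it

storm_days = np.where(exceed_frac > 0.10)[0]                  # "significant" event days
print(f"{storm_days.size} candidate storm days selected")
```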
Abstract:
An efficient model identification algorithm for a large class of linear-in-the-parameters models is introduced that simultaneously optimises the model approximation ability, sparsity and robustness. The model parameters derived in each forward regression step are initially estimated via orthogonal least squares (OLS) and are then tuned with a new gradient-descent learning algorithm, based on basis pursuit, that minimises the l(1) norm of the parameter estimate vector. The model subset selection cost function includes a D-optimality design criterion that maximises the determinant of the design matrix of the subset, in order to ensure model robustness and to enable the model selection procedure to terminate automatically at a sparse model. The proposed approach is based on the forward OLS algorithm using the modified Gram-Schmidt procedure. Both the parameter tuning procedure, based on basis pursuit, and the model selection criterion, based on D-optimality, which is effective in ensuring model robustness, are integrated with the forward regression. As a consequence, the inherent computational efficiency associated with the conventional forward OLS approach is maintained in the proposed algorithm. Examples demonstrate the effectiveness of the new approach.
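The sketch below shows the forward-OLS skeleton that the proposed algorithm builds on: candidate regressors are orthogonalised with modified Gram-Schmidt against the already selected terms and ranked by their error-reduction ratio. The basis-pursuit parameter tuning and the D-optimality stopping rule of the paper are not reproduced; the toy regressor matrix is an assumption.

```python
# Illustrative forward OLS term selection with modified Gram-Schmidt orthogonalisation.
import numpy as np

def forward_ols(P, y, n_terms):
    """Greedily select n_terms columns of the regressor matrix P by error-reduction ratio."""
    P = P.astype(float)
    selected, W = [], []
    yy = y @ y
    for _ in range(n_terms):
        best, best_err, best_w = None, -np.inf, None
        for j in range(P.shape[1]):
            if j in selected:
                continue
            w = P[:, j].copy()
            for wk in W:                                 # orthogonalise against chosen terms
                w -= (wk @ w) / (wk @ wk) * wk
            if w @ w < 1e-12:
                continue
            err = (w @ y) ** 2 / ((w @ w) * yy)          # error-reduction ratio
            if err > best_err:
                best, best_err, best_w = j, err, w
        selected.append(best)
        W.append(best_w)
    return selected

rng = np.random.default_rng(0)
P = rng.normal(size=(200, 10))
y = 2.0 * P[:, 3] - 1.5 * P[:, 7] + 0.1 * rng.normal(size=200)
print(forward_ols(P, y, n_terms=2))                      # expect columns [3, 7]
```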
Abstract:
Many kernel classifier construction algorithms adopt classification accuracy as the performance metric in model evaluation. Moreover, equal weighting is often applied to each data sample in parameter estimation. These modeling practices often become problematic if the data sets are imbalanced. We present a kernel classifier construction algorithm using orthogonal forward selection (OFS) in order to optimize the model generalization for imbalanced two-class data sets. This kernel classifier identification algorithm is based on a new regularized orthogonal weighted least squares (ROWLS) estimator and a model selection criterion of maximal leave-one-out area under the curve (LOO-AUC) of the receiver operating characteristic (ROC). It is shown that, owing to the orthogonalization procedure, the LOO-AUC can be calculated via an analytic formula based on the new regularized orthogonal weighted least squares parameter estimator, without actually splitting the estimation data set. The proposed algorithm can achieve minimal computational expense via a set of forward recursive updating formulas when searching for model terms with maximal incremental LOO-AUC value. Numerical examples are used to demonstrate the efficacy of the algorithm.
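As a rough illustration of forward selection driven by AUC on an imbalanced data set, the sketch below adds RBF kernel regressors greedily and keeps the candidate with the highest in-sample AUC. It deliberately substitutes plain least squares and in-sample AUC for the paper's ROWLS estimator and analytic LOO-AUC formula; the data and kernel width are assumptions.

```python
# Simplified sketch: greedy forward selection of RBF kernel centres by AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

def rbf(X, centre, width=1.0):
    return np.exp(-np.sum((X - centre) ** 2, axis=1) / (2 * width ** 2))

def forward_kernel_selection(X, y, n_terms=3):
    centres, cols = [], [np.ones(len(X))]                 # bias column
    for _ in range(n_terms):
        best_auc, best_i = -np.inf, None
        for i in range(len(X)):
            candidate = np.column_stack(cols + [rbf(X, X[i])])
            w, *_ = np.linalg.lstsq(candidate, y, rcond=None)   # least-squares weights
            auc = roc_auc_score(y, candidate @ w)               # in-sample AUC of the scores
            if auc > best_auc:
                best_auc, best_i = auc, i
        centres.append(best_i)
        cols.append(rbf(X, X[best_i]))
    return centres

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (95, 2)), rng.normal(3, 1, (5, 2))])   # imbalanced classes
y = np.array([0] * 95 + [1] * 5)
print(forward_kernel_selection(X, y))
```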
Abstract:
A connection is established between a fuzzy neural network model and the mixture of experts network (MEN) modelling approach. Based on this linkage, two new neuro-fuzzy MEN construction algorithms are proposed to overcome the curse of dimensionality that is inherent in the majority of associative memory networks and/or other rule-based systems. The first construction algorithm employs a function selection manager module in an MEN system. The second construction algorithm is based on a new parallel learning algorithm in which each model rule is trained independently, and for which the parameter convergence property of the new learning method is established. As with the first approach, an expert selection criterion is utilised in this algorithm. These two construction methods are equally effective in overcoming the curse of dimensionality by reducing the dimensionality of the regression vector, but the latter has the additional computational advantage of parallel processing. The proposed algorithms are analysed for effectiveness, followed by numerical examples illustrating their efficacy for some difficult data-based modelling problems.
Abstract:
This paper presents the theoretical development of a nonlinear adaptive filter based on the concept of filtering by approximated densities (FAD). The most common procedures for nonlinear estimation apply the extended Kalman filter. In contrast to such conventional techniques, the proposed recursive algorithm does not require any linearisation. The prediction uses a maximum entropy principle subject to constraints; the densities created are therefore of exponential type and depend on a finite number of parameters. The filtering step yields recursive equations involving these parameters, and the update applies Bayes' theorem. Through simulation on a generic exponential model, the proposed nonlinear filter is implemented and its results prove to be superior to those of the extended Kalman filter and a class of nonlinear filters based on partitioning algorithms.
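The sketch below illustrates the maximum-entropy ingredient of the filter: among densities with prescribed first and second moments, the entropy-maximising one is of exponential type, and its multipliers can be found numerically. It is a generic illustration under assumed moment constraints, not the FAD filter itself.

```python
# Hedged sketch: maximum-entropy density with fixed first and second moments,
# p(x) proportional to exp(l1*x + l2*x**2), multipliers found numerically.
import numpy as np
from scipy.optimize import fsolve

x = np.linspace(-10, 10, 2001)
target = np.array([1.0, 3.0])                              # required E[x], E[x^2]

def moment_error(lam):
    logp = np.clip(lam[0] * x + lam[1] * x ** 2, -200, 200)  # guard against overflow
    p = np.exp(logp)
    p /= np.trapz(p, x)                                      # normalise on the grid
    return [np.trapz(x * p, x) - target[0],
            np.trapz(x ** 2 * p, x) - target[1]]

lam = fsolve(moment_error, x0=[0.0, -0.5])
# For mean 1 and variance 2 the maximum-entropy density is Gaussian, so lam ≈ [0.5, -0.25].
print(lam)
```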