950 results for Model accuracy


Relevance: 30.00%

Abstract:

Volatility has a central role in various theoretical and practical applications in financial markets, including portfolio theory, derivatives pricing, and financial risk management. Both theoretical and practical applications require good estimates and forecasts of asset return volatility. The goal of this study is to examine the forecast performance of one of the more recent volatility measures, model-free implied volatility. Model-free implied volatility is extracted from prices in the option markets, and it aims to provide an unbiased estimate of the market's expectation of the future level of volatility. Since it is extracted from option prices, model-free implied volatility should contain all the relevant information that market participants have. Moreover, model-free implied volatility requires less restrictive assumptions than the commonly used Black-Scholes implied volatility, which means that it should be a less biased estimate of the market's expectations and therefore also a better forecast of future volatility. The forecast performance of model-free implied volatility is evaluated by comparing it with the forecast performance of Black-Scholes implied volatility and a GARCH(1,1) forecast. Weekly forecasts were calculated over a six-year period for the forecast variable, the German stock market index DAX; the data consisted of price observations for DAX index options. Forecast performance was measured using econometric methods that aimed to capture the bias, accuracy, and information content of the forecasts. The results of the study suggest that the forecast performance of model-free implied volatility is superior to that of the GARCH(1,1) forecast. However, the results also suggest that it is not as good as the forecast performance of Black-Scholes implied volatility, which is contrary to the theory-based hypotheses. The results of this study are consistent with the majority of prior research on the subject.
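To illustrate the forecast-comparison machinery described above, the following minimal sketch (not the thesis code; all parameter values and the simulated return series are assumptions) computes one-step GARCH(1,1) variance forecasts and runs a Mincer-Zarnowitz-style regression, a standard way to test forecast unbiasedness:

```python
import numpy as np

def garch11_forecast(returns, omega=1e-6, alpha=0.09, beta=0.90):
    """One-step-ahead conditional variance forecasts for a return series."""
    sigma2 = np.empty(len(returns) + 1)
    sigma2[0] = returns.var()                    # initialize at sample variance
    for t, r in enumerate(returns):
        sigma2[t + 1] = omega + alpha * r**2 + beta * sigma2[t]
    return sigma2[1:]                            # forecast for t+1 made at t

def mincer_zarnowitz(realized, forecast):
    """Regress realized volatility proxy on the forecast; unbiasedness
    implies intercept ~ 0 and slope ~ 1."""
    X = np.column_stack([np.ones_like(forecast), forecast])
    coef, *_ = np.linalg.lstsq(X, realized, rcond=None)
    return coef                                  # (intercept, slope)

rng = np.random.default_rng(0)
returns = rng.normal(0, 0.01, 312)               # ~6 years of weekly returns
fc = garch11_forecast(returns)
# Align: forecast made at t against squared return realized at t+1.
print(mincer_zarnowitz(returns[1:]**2, fc[:-1]))
```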

Relevance: 30.00%

Abstract:

The purpose of this study is to examine the impact of the choice of cut-off points, sampling procedures, and the business cycle on the accuracy of bankruptcy prediction models. Misclassification can result in erroneous predictions leading to prohibitive costs to firms, investors, and the economy. To test the impact of the choice of cut-off points and sampling procedures, three bankruptcy prediction models are assessed: Bayesian, Hazard, and Mixed Logit. A salient feature of the study is that the analysis includes both parametric and nonparametric bankruptcy prediction models. A sample of firms from the Lynn M. LoPucki Bankruptcy Research Database in the U.S. was used to evaluate the relative performance of the three models. The choice of cut-off point and sampling procedure was found to affect the rankings of the various models. In general, the results indicate that the empirical cut-off point estimated from the training sample resulted in the lowest misclassification costs for all three models. Although the Hazard and Mixed Logit models resulted in lower misclassification costs in the randomly selected samples, the Mixed Logit model did not perform as well across varying business cycles. In general, the Hazard model has the highest predictive power. However, the higher predictive power of the Bayesian model when the ratio of the cost of Type I errors to the cost of Type II errors is high is relatively consistent across all sampling methods. Such an advantage may make the Bayesian model more attractive in the current economic environment. This study extends recent research comparing the performance of bankruptcy prediction models by identifying the conditions under which a model performs better. It also allays the concerns of a range of user groups, including auditors, shareholders, employees, suppliers, rating agencies, and creditors, with respect to assessing failure risk.
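As a rough illustration of the cut-off question, the sketch below (with hypothetical scores and an assumed 20:1 cost ratio, not the study's data) estimates an empirical cut-off from a training sample by minimizing total misclassification cost, weighting Type I errors (missed bankruptcies) more heavily than Type II errors:

```python
import numpy as np

def empirical_cutoff(scores, bankrupt, cost_type1=20.0, cost_type2=1.0):
    """Threshold on predicted bankruptcy probability that minimizes total
    misclassification cost on the training sample."""
    best_cut, best_cost = 0.5, np.inf
    for cut in np.unique(scores):
        pred = scores >= cut                      # predicted bankrupt
        type1 = np.sum(bankrupt & ~pred)          # bankrupt firms missed
        type2 = np.sum(~bankrupt & pred)          # healthy firms flagged
        cost = cost_type1 * type1 + cost_type2 * type2
        if cost < best_cost:
            best_cut, best_cost = cut, cost
    return best_cut

rng = np.random.default_rng(1)
bankrupt = rng.random(500) < 0.1                  # ~10% failure rate (assumed)
scores = np.clip(bankrupt * 0.4 + rng.random(500) * 0.6, 0, 1)
print(empirical_cutoff(scores, bankrupt))
```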

Relevance: 30.00%

Abstract:

The purpose of the present experiment was to determine whether learning is optimized when learners are given the opportunity to observe either segments of the basketball jump shot or the whole movement. Participants performed 50 jump shots from the free-throw line during acquisition and returned one day later for a 10-shot retention test and a memory recall test of jump-shot technique. Shot accuracy was assessed on a 5-point scale and technique on a 7-point scale; mental representation was assessed by the number of movement components participants recalled correctly. Retention results showed superior shot technique and recall success for participants given control over the frequency and type of modelled information compared to participants not given control. Furthermore, participants in the self-control condition used part-model information more frequently than whole-model information, highlighting the effectiveness of giving the learner control over viewing multiple segments of a skill compared to only watching the whole model.

Relevance: 30.00%

Abstract:

Introduction: The purpose of the study was to examine the effect of impression materials on the accuracy and reliability of digital study models. Methods: Twenty-five pairs of plaster models were randomly selected from the records of the orthodontic clinic of the Université de Montréal. An alginate impression (Kromopan 100), an alginate-substitute impression (Alginot), and a PVS impression (Aquasil) were taken of each arch for all patients. The impressions were sent to Orthobyte for pouring of the plaster models and digitization of the digital models. The Bolton 6 and Bolton 12 analyses, their constituent measurements, overbite, overjet, and arch length were used for the comparisons. Results: The correlation between repeated measurements was good to excellent for both the plaster models and the digital models. Repeated measurements on the plaster models tended to be more reliable. Statistically significant differences were found for the Bolton 12 analysis, mandibular arch length, and mandibular crowding, for all impression materials. The observed trend was that measurements on the plaster models were smaller for the Bolton 12 analysis but larger for arch length and mandibular crowding. Despite the statistically significant differences found, these differences had no clinical significance. Conclusions: The accuracy and reliability of the software for the complete analysis of digital models are clinically acceptable when compared with the results of traditional analysis on plaster models.
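For readers unfamiliar with the statistics involved, a minimal sketch of this kind of agreement analysis, using simulated placeholder data rather than the study's measurements, might look like the following:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
plaster = rng.normal(77.0, 2.0, 25)            # e.g., Bolton sums (mm), 25 pairs
digital = plaster + rng.normal(0.15, 0.4, 25)  # small simulated systematic offset

# Paired t-test: is the plaster-digital difference statistically significant?
t, p = stats.ttest_rel(plaster, digital)

# Test-retest reliability: correlation between repeated readings of the models.
retest = plaster + rng.normal(0, 0.2, 25)      # simulated second reading
r = np.corrcoef(plaster, retest)[0, 1]

print(f"paired t = {t:.2f}, p = {p:.3f}, retest r = {r:.2f}")
# A difference can be statistically significant yet, as in the study,
# too small (fractions of a millimetre) to matter clinically.
```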

Relevance: 30.00%

Abstract:

This master's thesis presents a new unsupervised approach for detecting and segmenting urban regions in hyperspectral images. The proposed method requires three steps. First, to reduce the computational cost of our algorithm, a color image of the spectral content is estimated. To this end, a nonlinear dimensionality reduction step, based on two complementary but conflicting criteria of good visualization, namely accuracy and contrast, is carried out to produce a color rendering of each hyperspectral image. Then, to discriminate urban from non-urban regions, the second step consists of extracting a few discriminant (and complementary) features from this color hyperspectral image. To this end, we extracted a series of discriminant parameters describing the characteristics of an urban area, mainly composed of man-made objects with simple, regular geometric shapes. We used textural features based on gray levels, gradient magnitude, or parameters derived from the co-occurrence matrix, combined with structural features based on the local orientation of the image gradient and local detection of straight-line segments. To further reduce the computational complexity of our approach and avoid the "curse of dimensionality" that arises when clustering high-dimensional data, we decided, in the final step, to classify each textural or structural feature individually with a simple K-means procedure and then combine these coarse, cheaply obtained segmentations with an efficient segmentation-map fusion model. The experiments reported here show that this strategy is visually effective and compares favorably with other methods for detecting and segmenting urban areas from hyperspectral images.
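The final step can be pictured with a minimal sketch: each feature map is clustered independently with K-means (k = 2, urban vs. non-urban), and the coarse segmentations are then fused. A simple per-pixel majority vote stands in here for the thesis's fusion model, and the feature maps are random placeholders:

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_per_feature(feature_maps, k=2, seed=0):
    """Cluster each H x W feature map independently; return label maps."""
    labels = []
    for fmap in feature_maps:
        km = KMeans(n_clusters=k, n_init=10, random_state=seed)
        lab = km.fit_predict(fmap.reshape(-1, 1)).reshape(fmap.shape)
        # Align labels so cluster 1 is the brighter (feature-rich) class.
        if fmap[lab == 0].mean() > fmap[lab == 1].mean():
            lab = 1 - lab
        labels.append(lab)
    return labels

def fuse_majority(label_maps):
    """Per-pixel majority vote across the coarse segmentations."""
    stack = np.stack(label_maps)
    return (stack.mean(axis=0) >= 0.5).astype(int)

rng = np.random.default_rng(3)
maps = [rng.random((64, 64)) for _ in range(4)]   # stand-ins for feature maps
print(fuse_majority(segment_per_feature(maps)).shape)
```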

Relevance: 30.00%

Abstract:

One of the major concerns of scoliosis patients undergoing surgical treatment is the aesthetic aspect of the surgical outcome. It would be useful to predict the postoperative appearance of the patient's trunk during surgery planning in order to take the patient's expectations into account. In this paper, we propose to use least squares support vector regression to predict the postoperative 3D trunk shape after spine surgery for adolescent idiopathic scoliosis. Five dimensionality reduction techniques used in conjunction with the support vector machine are compared. The methods are evaluated in terms of their accuracy, based on leave-one-out cross-validation performed on a database of 141 cases. The results indicate that 3D shape predictions using a dimensionality reduction obtained by simultaneous decomposition of the predictor and response variables have the best accuracy.
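A minimal sketch of the winning configuration described above might look as follows, under stated assumptions: simultaneous decomposition of predictors and responses is realized with PLS, KernelRidge stands in for least squares SVR (the two are closely related formulations), and the data are random placeholders rather than the paper's trunk database:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(4)
X = rng.normal(size=(141, 30))                 # stand-in preoperative descriptors
Y = X @ rng.normal(size=(30, 12)) + rng.normal(scale=0.1, size=(141, 12))

errors = []
for train, test in LeaveOneOut().split(X):
    # Joint decomposition of X and Y, fitted on the training fold only.
    pls = PLSRegression(n_components=5).fit(X[train], Y[train])
    Z_train, Z_test = pls.transform(X[train]), pls.transform(X[test])
    # Kernel regressor on the reduced representation.
    model = KernelRidge(alpha=1.0, kernel="rbf").fit(Z_train, Y[train])
    errors.append(np.mean((model.predict(Z_test) - Y[test]) ** 2))

print(f"LOO mean squared error: {np.mean(errors):.4f}")
```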

Relevance: 30.00%

Abstract:

There are many ways to generate geometric models for numerical simulation, and most of them start with a segmentation step to extract the boundaries of the regions of interest. This paper presents an algorithm to generate a patient-specific three-dimensional geometric model, based on a tetrahedral mesh, without an initial extraction of contours from the volumetric data. Using the information directly available in the data, such as gray levels, we build a metric to drive a mesh adaptation process. The metric is used to specify the size and orientation of the tetrahedral elements everywhere in the mesh. Our method, which produces anisotropic meshes, gives good results with synthetic and real MRI data. The resulting model quality has been evaluated qualitatively and quantitatively by comparing it with an analytical solution and with a segmentation made by an expert. Results show that, in 90% of the cases, our method gives meshes as good as or better than those of a similar isotropic method, based on the accuracy of the volume reconstruction for a given mesh size. Moreover, a comparison of the Hausdorff distances between the adapted meshes of both methods and ground-truth volumes shows that our method reduces reconstruction errors faster.
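The error measure mentioned above can be illustrated with a short sketch: the symmetric Hausdorff distance between two surfaces sampled as point clouds. The point sets here are synthetic placeholders, not the paper's meshes:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two (N, 3) point clouds."""
    d_ab = directed_hausdorff(points_a, points_b)[0]
    d_ba = directed_hausdorff(points_b, points_a)[0]
    return max(d_ab, d_ba)

rng = np.random.default_rng(5)
sphere = rng.normal(size=(2000, 3))
sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)        # unit sphere
noisy = sphere * (1 + rng.normal(scale=0.01, size=(2000, 1)))  # perturbed copy
print(f"Hausdorff distance: {hausdorff(sphere, noisy):.4f}")
```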

Relevance: 30.00%

Abstract:

Nature is full of phenomena we call "chaotic", the weather being a prime example. What we mean by this is that we cannot predict it to any significant accuracy, either because the system is inherently complex or because some of the governing factors are not deterministic. However, in recent years it has become clear that random behaviour can occur even in very simple systems with very few degrees of freedom, without any need for complexity or indeterminacy. The discovery that chaos can be generated even by systems with completely deterministic rules, often models of natural phenomena, has stimulated a lot of recent research interest. This chaos is not without underlying order, but the order is of a subtle kind that has taken a great deal of ingenuity to unravel. In the present thesis, the author introduces a new nonlinear model, a "modulated" logistic map, and analyses it from the viewpoint of deterministic chaos.
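A minimal sketch of probing deterministic chaos in a logistic-type map: the thesis's specific "modulated" map is not reproduced here, so as an assumed illustration the control parameter is modulated sinusoidally, and chaos is diagnosed with a numerically estimated Lyapunov exponent:

```python
import numpy as np

def lyapunov(r0=3.9, eps=0.05, n=100_000, x0=0.4):
    """Average log-derivative along the orbit of
    x_{n+1} = r_n x_n (1 - x_n), with r_n = r0 + eps * sin(0.01 n)."""
    x, total = x0, 0.0
    for i in range(n):
        r = r0 + eps * np.sin(0.01 * i)              # assumed modulation form
        total += np.log(abs(r * (1 - 2 * x)) + 1e-300)  # log |df/dx|, guarded
        x = r * x * (1 - x)
    return total / n

print(f"Lyapunov exponent: {lyapunov():.3f}  (> 0 indicates chaos)")
```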

Relevance: 30.00%

Abstract:

This paper investigates certain methods of training adopted in a statistical machine translation (SMT) system from English to Malayalam. In English-Malayalam SMT, the word-to-word translation is determined by training on the parallel corpus. Our primary goal is to improve the alignment model by reducing the number of possible alignments of all sentence pairs present in the bilingual corpus. Incorporating morphological information into the parallel corpus with the help of a part-of-speech tagger has brought about better training results with improved accuracy.
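The alignment-training idea can be sketched with a few EM iterations of IBM Model 1 lexical translation probabilities on a toy parallel corpus. The two transliterated sentence pairs below are assumed placeholders, and the paper's POS/morphological restriction of alignments is not reproduced:

```python
from collections import defaultdict

corpus = [
    (["the", "house"], ["veedu"]),     # toy English-Malayalam pairs (assumed)
    (["the", "river"], ["puzha"]),
]

t = defaultdict(lambda: 1.0)           # uniform initialization of t(f | e)

for _ in range(10):                    # EM iterations
    count = defaultdict(float)
    total = defaultdict(float)
    for eng, mal in corpus:
        for f in mal:
            z = sum(t[(f, e)] for e in eng)   # normalization over alignments
            for e in eng:
                c = t[(f, e)] / z             # expected alignment count
                count[(f, e)] += c
                total[e] += c
    for (f, e), c in count.items():           # M-step: renormalize
        t[(f, e)] = c / total[e]

# Mass concentrates on the correct pairing as EM iterates.
print(t[("veedu", "house")], t[("veedu", "the")])
```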

Relevance: 30.00%

Abstract:

To study the behaviour of beam-to-column composite connections, more sophisticated finite element models are required, since the component model has some severe limitations. In this research, a generic finite element model for composite beam-to-column joints with welded connections is developed using state-of-the-art local modelling. Applying a mechanically consistent scaling method, the model can provide the constitutive relationship for a plane rectangular macro element with beam-type boundaries. This macro element, which preserves local behaviour and allows the transfer of five independent states between local and global models, can then be implemented in high-accuracy frame analysis with the possibility of limit-state checks. So that the macro element for the scaling method can be used in a practical manner, a generic geometry program, proposed in this study as a new idea, is also developed for this finite element model. With generic programming, a set of global geometric variables can be input to generate a specific instance of the connection without much effort. The proposed finite element model generated by this generic programming is validated against test results from the University of Kaiserslautern. Finally, two illustrative examples of this macro element approach are presented. The first example demonstrates how to obtain the constitutive relationships of the macro element; under certain assumptions for a typical composite frame, the constitutive relationships can be represented by bilinear laws for the macro bending and shear states, which are then coupled by a two-dimensional surface law with yield and failure surfaces. The second example presents, as a practical application of this numerical model, a scaling concept that combines sophisticated local models with a frame analysis using the macro element approach.
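The bilinear laws mentioned in the first example can be pictured with a small sketch: a moment-rotation relationship with an elastic branch and a hardening branch. The stiffness and yield values below are illustrative assumptions, not quantities identified in the study:

```python
def bilinear_moment(rotation, k_elastic=60e3, k_hardening=6e3, m_yield=180.0):
    """Moment (kNm) for a given rotation (rad) under a bilinear law."""
    rot_yield = m_yield / k_elastic            # rotation at the yield point
    if abs(rotation) <= rot_yield:
        return k_elastic * rotation            # elastic branch
    sign = 1.0 if rotation > 0 else -1.0
    return sign * (m_yield + k_hardening * (abs(rotation) - rot_yield))

for rot in (0.001, 0.003, 0.01):
    print(f"rotation {rot:.3f} rad -> moment {bilinear_moment(rot):.1f} kNm")
```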

Relevance: 30.00%

Abstract:

This work extends previously developed research on the use of local model predictive control in differential-drive mobile robots. Experimental results are presented as a way to improve the methodology by considering aspects such as trajectory accuracy and time performance. In this sense, the cost function and the prediction horizon are important aspects to be considered. The aim of the present work is to test the control method by measuring trajectory-tracking accuracy and time performance. Moreover, strategies for integration with the perception system and path planning are briefly introduced; monocular image data can be used to plan safe trajectories using goal-attraction potential fields.
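A minimal sketch of the local MPC idea, under simplifying assumptions: differential-drive (unicycle) kinematics rolled out over a short prediction horizon, with a cost trading off trajectory-tracking error against control effort. Candidate controls are searched on a coarse grid rather than with the optimizer used in the paper:

```python
import itertools
import numpy as np

def rollout(state, controls, dt=0.1):
    """Integrate unicycle kinematics (x, y, theta) for a control sequence."""
    x, y, th = state
    traj = []
    for v, w in controls:
        x += v * np.cos(th) * dt
        y += v * np.sin(th) * dt
        th += w * dt
        traj.append((x, y))
    return np.array(traj)

def mpc_step(state, reference, horizon=3, q=1.0, r=0.05):
    """Pick the first control of the best sequence on a coarse control grid."""
    vs, ws = [0.0, 0.2, 0.4], [-0.5, 0.0, 0.5]
    best, best_cost = None, np.inf
    for seq in itertools.product(itertools.product(vs, ws), repeat=horizon):
        traj = rollout(state, seq)
        cost = q * np.sum((traj - reference[:horizon]) ** 2)   # tracking
        cost += r * sum(v**2 + w**2 for v, w in seq)           # effort
        if cost < best_cost:
            best, best_cost = seq[0], cost
    return best

ref = np.array([[0.04 * k, 0.0] for k in range(1, 10)])        # straight line
print(mpc_step((0.0, 0.0, 0.0), ref))
```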

Relevance: 30.00%

Abstract:

In this paper, a novel rank estimation technique for trajectory-based motion segmentation within the Local Subspace Affinity (LSA) framework is presented. This technique, called Enhanced Model Selection (EMS), is based on the relationship between the estimated rank of the trajectory matrix and the affinity matrix built by LSA. The results on synthetic and real data show that, without any a priori knowledge, EMS automatically provides an accurate and robust rank estimation, improving the accuracy of the final motion segmentation.
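To make the rank-estimation problem concrete, here is a minimal sketch using a plain singular-value threshold on a synthetic trajectory matrix; this stands in for, and is much simpler than, the EMS criterion, which ties the estimate to the LSA affinity matrix:

```python
import numpy as np

def estimate_rank(W, rel_tol=1e-3):
    """Number of singular values above rel_tol times the largest one."""
    s = np.linalg.svd(W, compute_uv=False)
    return int(np.sum(s > rel_tol * s[0]))

rng = np.random.default_rng(6)
# 2F x P trajectory matrix: rank-4 motion (one rigid object) plus noise.
W = rng.normal(size=(40, 4)) @ rng.normal(size=(4, 60))
W += rng.normal(scale=1e-3, size=W.shape)
print(estimate_rank(W))   # expected: 4
```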

Relevance: 30.00%

Abstract:

A simple numerical model that calculates the kinetics of crystallization involving randomly distributed nucleation and isotropic growth is presented. The model can be applied to different thermal histories, and no restrictions are imposed on the time and temperature dependences of the nucleation and growth rates. We also develop an algorithm that evaluates the corresponding emerging grain-size distribution. The algorithm is easy to implement and particularly flexible, making it possible to simulate several experimental conditions. Its simplicity and minimal computing requirements allow high accuracy for two- and three-dimensional growth simulations. The algorithm is applied to explore grain morphology development during isothermal treatments for several nucleation regimes. In particular, thermal nucleation, pre-existing nuclei, and the combination of both nucleation mechanisms are analyzed. For the first two cases, the universal grain-size distribution is obtained. The high accuracy of the model is established by comparison with analytical predictions. Finally, the validity of the Kolmogorov-Johnson-Mehl-Avrami model is verified for all the cases studied.
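A minimal sketch of this model class, under assumed illustrative parameters: randomly placed pre-existing nuclei growing isotropically at constant rate on a 2D periodic grid, with the transformed fraction checked against the analytical KJMA prediction X(t) = 1 - exp(-N pi (G t)^2) for areal nucleus density N:

```python
import numpy as np

rng = np.random.default_rng(7)
L, n_nuclei, G = 256, 40, 1.0                 # grid size, nuclei, growth rate
nuclei = rng.random((n_nuclei, 2)) * L

yy, xx = np.mgrid[0:L, 0:L]
# Minimum periodic distance from each cell to any nucleus.
d = np.full((L, L), np.inf)
for nx, ny in nuclei:
    dx = np.minimum(np.abs(xx - nx), L - np.abs(xx - nx))
    dy = np.minimum(np.abs(yy - ny), L - np.abs(yy - ny))
    d = np.minimum(d, np.hypot(dx, dy))

for t in (5, 10, 20, 30):
    simulated = np.mean(d <= G * t)           # fraction of transformed cells
    kjma = 1 - np.exp(-(n_nuclei / L**2) * np.pi * (G * t) ** 2)
    print(f"t={t:2d}  simulated {simulated:.3f}  KJMA {kjma:.3f}")
```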

Relevance: 30.00%

Abstract:

In recent decades, the increase in the levels of solar ultraviolet radiation (UVR) reaching the Earth (mainly due to the depletion of stratospheric ozone), together with the detected rise in diseases related to UVR exposure, has led to a large volume of research on solar radiation in this band and its effects on humans. The ultraviolet index (UVI), which has been adopted internationally, was defined with the purpose of informing the general public about the risks of exposing bare skin to UVR and of conveying preventive messages. The UVI was initially defined as the daily maximum value. However, its current use has broadened, and it is meaningful to refer to an instantaneous value or to the daily evolution of the measured, modelled, or forecast UVI. The specific UVI value is affected by the Sun-Earth geometry, clouds, ozone, aerosols, altitude, and surface albedo. High-quality UVI measurements are essential as a reference and for studying long-term trends; accurate modelling techniques are also needed in order to understand the factors affecting UVR, to forecast the UVI, and as quality control for the measurements. The most accurate UVI measurements are expected to be obtained with spectroradiometers. However, since the cost of these devices is high, UVI data from erythemal radiometers are more common (in fact, most UVI networks are equipped with this type of sensor). The best modelling results are obtained with multiple-scattering radiative transfer models when the input information is well known. However, input information such as the optical properties of aerosols is often unknown, which can lead to significant modelling uncertainties. Simpler models are often used for applications such as UVI forecasting or UVI mapping, since they are faster and require fewer input parameters. Within this framework, the general objective of this study is to analyse the agreement that can be reached between UVI measurement and modelling under cloudless-sky conditions. Accordingly, this study presents model-measurement comparisons for different modelling techniques, different input options, and UVI measurements from both erythemal radiometers and spectroradiometers. As a general conclusion, model-measurement comparison proves very useful for detecting limitations and estimating uncertainties in both the models and the measurements. Regarding modelling, the main limitation found is the lack of knowledge of the aerosol information used as model input. Significant differences were also found between ozone measured from satellites and from the Earth's surface, which can lead to significant differences in the modelled UVI. PTUV, a new and simple parameterization for the fast calculation of the UVI under cloudless conditions, has been developed on the basis of radiative transfer calculations. The parameterization performs well both with respect to the base model and in comparison with several UVI measurements. PTUV has proven useful for particular applications such as studying the annual evolution of the UVI at a specific site (Girona) and producing high-resolution maps of typical UVI values for a specific territory (Catalonia).
Regarding the measurements, knowing the spectral response of erythemal radiometers turns out to be very important in order to avoid large uncertainties in the measured UVI. These instruments, when well characterized, compare well with high-quality spectroradiometers in measuring the UVI. The most important issues concerning the measurements are calibration and long-term stability. A temperature effect was also observed in PTFE, a material used in the diffusers of some instruments, which could potentially have important implications in the experimental field. Finally, regarding model-measurement comparisons, the best agreement was found when UVI measurements from high-quality spectroradiometers are considered and radiative transfer models are used with the best available data for the ozone and aerosol optical parameters and their changes over time. In this way, the agreement can be as close as 0.1 in UVI, and is typically within 3%. This agreement deteriorates greatly if aerosol information is ignored, and it depends strongly on the aerosol single-scattering albedo value. Other model inputs, such as surface albedo and the ozone and temperature profiles, introduce smaller uncertainties in the modelling results.
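As background for how a UVI value follows from a measured or modelled spectrum, here is a minimal sketch: the spectral irradiance is weighted with the CIE erythemal action spectrum, integrated over wavelength, and scaled by 40 m^2/W. The sample spectrum below is a crude placeholder, not PTUV or model output:

```python
import numpy as np

def cie_erythemal(wl):
    """CIE erythemal action spectrum (wavelength in nm)."""
    s = np.ones_like(wl, dtype=float)                        # 1 below 298 nm
    s = np.where((wl > 298) & (wl <= 328), 10 ** (0.094 * (298 - wl)), s)
    s = np.where((wl > 328) & (wl <= 400), 10 ** (0.015 * (140 - wl)), s)
    return s

wl = np.arange(290.0, 400.5, 0.5)                            # nm
# Placeholder spectrum (W m^-2 nm^-1): steep UV-B rise, flatter UV-A tail.
irradiance = 5e-3 * np.exp(0.06 * (wl - 290)) / (1 + np.exp(0.2 * (wl - 330)))

weighted = irradiance * cie_erythemal(wl)
integral = np.sum(0.5 * (weighted[1:] + weighted[:-1]) * np.diff(wl))  # trapezoid
print(f"UVI = {40.0 * integral:.1f}")
```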

Relevance: 30.00%

Abstract:

The purpose of this work was to establish a taxonomy of handmade model construction as a platform for an operative design method in architecture. To that end, the broad model production in the work of ARX was studied and catalogued systematically. A wide range of families and sub-families of models was found, with different purposes according to each phase of development, from search steps toward a possible new configuration to detailed, refined decisions. The most relevant characteristics this working method revealed were its grounds for personal reflection and open discussion of the design method, its flexibility in spatial modelling, its accuracy in representing real construction situations, and its constant, stimulating openness to new suggestions. This research supported a meta-reflection on the method, creating an awareness of processes that aim to become an autonomous language, knowledge that may be useful to those who intend to implement a haptic modus operandi in architectural design work.