988 results for "statistical framework"
Abstract:
First: A continuous-time version of Kyle's model of asset pricing with asymmetric information (Kyle 1985), known as Back's model (Back 1992), is studied for a larger class of price processes and of noise traders' processes. The price process, as in Kyle's model, is allowed to depend on the path of the market order, and the noise traders' process is an inhomogeneous Lévy process. Solutions are found via the Hamilton-Jacobi-Bellman equations. When the insider is risk-neutral, the price pressure is constant and there is no equilibrium in the presence of jumps; when the insider is risk-averse, there is no equilibrium in the presence of either jumps or drifts. The case where the release time of the information is unknown is also analysed, and a general relation is established between the problem of finding an equilibrium and that of enlargement of filtrations. The case of a random announcement time is also considered: there the market is not fully efficient, and an equilibrium exists if the sensitivity of prices with respect to global demand decreases in time according to the distribution of the random time. Second: The asymptotic behavior of the power variation of processes of the form ∫_0^t u(s-) dS(s) is considered, where S is an alpha-stable process with stability index 0 < alpha < 2 and the integral is an Itô integral. Stable convergence of the corresponding fluctuations is established. These results provide statistical tools to infer the process u from discrete observations. Third: A bond market is studied in which the short rate r(t) evolves as an integral of g(t-s)sigma(s) with respect to a Wiener measure W(ds), where g and sigma are deterministic. Processes of this type are particular cases of ambit processes and are in general not semimartingales.
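The power-variation statistic from the second part can be illustrated numerically. A minimal sketch, assuming SciPy's `levy_stable` sampler; the stability index and sample size are illustrative choices, not values from the thesis:

```python
# Illustrative sketch: realized power variation of a simulated alpha-stable
# process, i.e. V_p = sum_i |X_{t_{i+1}} - X_{t_i}|^p over a grid on [0, 1].
# All parameter values below are made up for demonstration.
import numpy as np
from scipy.stats import levy_stable

alpha = 1.5          # stability index, 0 < alpha < 2
n = 5_000            # number of observations on [0, 1]
dt = 1.0 / n

rng = np.random.default_rng(0)
# i.i.d. increments of a symmetric alpha-stable Levy process: each increment
# over a step dt is stable with scale dt**(1/alpha)
increments = levy_stable.rvs(alpha, beta=0.0, scale=dt ** (1 / alpha),
                             size=n, random_state=rng)

def power_variation(incs, p):
    """Realized p-th power variation: sum of |increment|**p."""
    return np.sum(np.abs(incs) ** p)

# For p < alpha the suitably normalized power variation obeys a law of large
# numbers; for p > alpha it is dominated by the largest jumps.
for p in (0.5, 1.0, 1.9):
    print(p, power_variation(increments, p))
```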
Abstract:
This final-year project is aimed at the analysis, design and implementation of a set of software components, usually called a framework, intended to serve as a base both for simple, direct developments and for later, larger-scale redesigns. The objective is threefold: to study and evaluate the existing alternatives for implementing the persistence layer of thin-client JEE applications; to design and implement an entity-relational mapping framework; and, finally, to build a sample application that demonstrates the use of the implemented framework.
Abstract:
BACKGROUND The effect of the macronutrient composition of the usual diet on long-term weight maintenance remains controversial. METHODS 373,803 subjects aged 25-70 years were recruited in 10 European countries (1992-2000) in the PANACEA project of the EPIC cohort. Diet was assessed at baseline using country-specific validated questionnaires; weight and height were measured at baseline and self-reported at follow-up in most centers. The association between weight change after 5 years of follow-up and the iso-energetic replacement of 5% of energy from one macronutrient by 5% of energy from another macronutrient was assessed using multivariate linear mixed models. The risk of becoming overweight or obese after 5 years was investigated using multivariate Poisson regressions stratified according to initial Body Mass Index. RESULTS A higher proportion of energy from fat at the expense of carbohydrates was not significantly associated with weight change after 5 years. However, a higher proportion of energy from protein at the expense of fat was positively associated with weight gain. A higher proportion of energy from protein at the expense of carbohydrates was also positively associated with weight gain, especially when the carbohydrates were rich in fibre. The association between percentage of energy from protein and weight change was slightly stronger in overweight participants, former smokers, participants ≥60 years old, participants underreporting their energy intake, and participants with a prudent dietary pattern. Compared to diets with no more than 14% of energy from protein, diets with more than 22% of energy from protein were associated with a 23-24% higher risk of becoming overweight or obese in normal-weight and overweight subjects at baseline.
CONCLUSION Our results show that participants consuming an amount of protein above the protein intake recommended by the American Diabetes Association may experience a higher risk of becoming overweight or obese during adult life.
Abstract:
The objectives of this final-year project were, on the one hand, to study the different elements involved in building the visual part of Web applications developed on the J2EE platform, the presentation-layer design patterns, and what are known as presentation frameworks. On the other hand, based on that study, a custom framework was created that enables the creation of interfaces for Web pages built for managing enterprise applications in an optimal, standard and simplified way.
Abstract:
BACKGROUND Identifying individuals at high risk of excess weight gain may help target prevention efforts at those at risk of various metabolic diseases associated with weight gain. Our aim was to develop a risk score to identify these individuals and validate it in an external population. METHODS We used lifestyle and nutritional data from 53,758 individuals followed for a median of 5.4 years from six centers of the European Prospective Investigation into Cancer and Nutrition (EPIC) to develop a risk score to predict substantial weight gain (SWG) over the next 5 years (derivation sample). Assuming linear weight gain, SWG was defined as gaining ≥ 10% of baseline weight during follow-up. Proportional hazards models were used to identify significant predictors of SWG separately by EPIC center. Regression coefficients of predictors were pooled using random-effects meta-analysis. Pooled coefficients were used to assign weights to each predictor. The risk score was calculated as a linear combination of the predictors. External validity of the score was evaluated in nine other centers of the EPIC study (validation sample). RESULTS Our final model included age, sex, baseline weight, level of education, baseline smoking, sports activity, alcohol use, and intake of six food groups. The model's discriminatory ability, measured by the area under a receiver operating characteristic curve, was 0.64 (95% CI = 0.63-0.65) in the derivation sample and 0.57 (95% CI = 0.56-0.58) in the validation sample, with variation between centers. Positive and negative predictive values for the optimal cut-off value of ≥ 200 points were 9% and 96%, respectively. CONCLUSION The present risk score confidently excluded a large proportion of individuals from being at any appreciable risk of developing SWG within the next 5 years. Future studies, however, may attempt to further refine the positive prediction of the score.
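The scoring construction described above, a linear combination of predictors weighted by pooled coefficients, with discrimination measured by the area under the ROC curve, can be sketched as follows. All predictors, coefficients, and data here are made-up placeholders, not the EPIC score:

```python
# Hypothetical sketch of a linear risk score and its AUC. The coefficients
# and synthetic data are illustrative only; they are not the published model.
import numpy as np

def risk_score(X, coefs):
    """Linear risk score: weighted sum of predictor values."""
    return X @ coefs

def auc(scores, outcomes):
    """Area under the ROC curve via the Mann-Whitney rank identity."""
    scores = np.asarray(scores, dtype=float)
    outcomes = np.asarray(outcomes, dtype=bool)
    pos, neg = scores[outcomes], scores[~outcomes]
    # probability that a random positive outranks a random negative (ties = 0.5)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))               # 4 hypothetical predictors
coefs = np.array([0.8, 0.3, -0.2, 0.5])     # pooled coefficients (made up)
latent = X @ coefs + rng.normal(size=500)
y = latent > np.quantile(latent, 0.8)       # ~20% experience "SWG"
print(round(auc(risk_score(X, coefs), y), 2))
```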
Abstract:
The objective of this project is the construction of a presentation framework for developing Web applications based on the J2EE platform. The project includes a study of the characteristics of the most important frameworks available on the market, paying special attention to their architecture.
Abstract:
Implementation and study of a persistence framework as a solution to a specific problem.
Abstract:
Uncertainty quantification of petroleum reservoir models is one of the present challenges, which is usually approached with a wide range of geostatistical tools linked with statistical optimisation and/or inference algorithms. Recent advances in machine learning offer a novel approach to modelling the spatial distribution of petrophysical properties in complex reservoirs, an alternative to geostatistics. The approach is based on semi-supervised learning, which handles both "labelled" observed data and "unlabelled" data, which have no measured value but describe prior knowledge and other relevant data in the form of manifolds in the input space where the modelled property is continuous. The proposed semi-supervised Support Vector Regression (SVR) model has demonstrated its capability to represent realistic geological features and describe the stochastic variability and non-uniqueness of spatial properties. At the same time, it is able to capture and preserve key spatial dependencies such as the connectivity of high-permeability geo-bodies, which is often difficult in contemporary petroleum reservoir studies. Semi-supervised SVR, as a data-driven algorithm, is designed to integrate various kinds of conditioning information and learn dependencies from it. The semi-supervised SVR model is able to balance signal/noise levels and control the prior belief in the available data. In this work, the stochastic semi-supervised SVR geomodel is integrated into a Bayesian framework to quantify the uncertainty of reservoir production with multiple models fitted to past dynamic observations (production history). Multiple history-matched models are obtained using stochastic sampling and/or MCMC-based inference algorithms, which evaluate the posterior probability distribution. Uncertainty of the model is described by the posterior probability of the model parameters that represent key geological properties: spatial correlation size, continuity strength, and smoothness/variability of the spatial property distribution.
The developed approach is illustrated with a fluvial reservoir case. The resulting probabilistic production forecasts are described by uncertainty envelopes. The paper compares the performance of the models with different combinations of unknown parameters and discusses sensitivity issues.
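As a rough illustration of the SVR building block the abstract relies on (this is not the authors' semi-supervised algorithm, and the data are purely synthetic):

```python
# Minimal sketch, NOT the semi-supervised method above: a plain Support
# Vector Regression fit to sparse observations of a 1-D spatial property,
# illustrating the SVR component. scikit-learn is assumed; data are synthetic.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
wells = rng.uniform(0, 10, size=15)[:, None]               # sparse sites
perm = np.sin(wells.ravel()) + 0.1 * rng.normal(size=15)   # noisy property

# epsilon controls noise tolerance (the "signal/noise balance" in the text);
# gamma sets the effective correlation length of the RBF kernel
model = SVR(kernel="rbf", C=10.0, epsilon=0.05, gamma=0.5).fit(wells, perm)

grid = np.linspace(0, 10, 200)[:, None]
prediction = model.predict(grid)          # interpolated spatial field
print(prediction.shape)
```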
Abstract:
A ubiquitous assessment of swimming velocity (the main performance metric) is essential for the coach to provide tailored feedback to the trainee. We present a probabilistic framework for the data-driven estimation of swimming velocity at every cycle using a low-cost wearable inertial measurement unit (IMU). The statistical validation of the method on 15 swimmers shows that an average relative error of 0.1 ± 9.6% and a high correlation with the tethered reference system (r = 0.91) are achievable. In addition, a simple tool to analyze the influence of sacrum kinematics on performance is provided.
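The validation metrics quoted above (mean ± SD relative error and correlation against the tethered reference) can be computed as in this sketch; the velocity arrays below are hypothetical stand-ins, not the study's data:

```python
# Sketch of the validation metrics: per-cycle relative error against a
# reference velocity, summarized as mean +/- SD, plus Pearson correlation.
# The simulated velocities are hypothetical.
import numpy as np

def relative_error_pct(estimated, reference):
    """Per-sample relative error in percent."""
    return 100.0 * (estimated - reference) / reference

rng = np.random.default_rng(5)
v_ref = rng.uniform(1.0, 2.0, size=200)                   # tethered reference, m/s
v_imu = v_ref * (1 + rng.normal(0.001, 0.096, size=200))  # IMU estimate

err = relative_error_pct(v_imu, v_ref)
r = np.corrcoef(v_imu, v_ref)[0, 1]
print(f"{err.mean():.1f} +/- {err.std():.1f} %, r = {r:.2f}")
```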
Abstract:
Implementation and design of a framework to accelerate productivity in the development of J2EE applications.
Abstract:
Familial searching consists of searching for a full profile left at a crime scene in a National DNA Database (NDNAD). In this paper we are interested in the circumstance where no full match is returned, but a partial match is found between a database member's profile and the crime stain. Because close relatives share more of their DNA than unrelated persons, this partial match may indicate that the crime stain was left by a close relative of the person with whom the partial match was found. This approach has successfully solved important crimes in the UK and the USA. In a previous paper, a model taking into account substructure and siblings was used to simulate a NDNAD. In this paper, we have used this model to test the usefulness of familial searching and to offer guidelines for the pre-assessment of cases based on the likelihood ratio. Siblings of "persons" present in the simulated Swiss NDNAD were created. These profiles (N=10,000) were used as traces and compared to the whole database (N=100,000). The statistical results obtained show that the technique has great potential, confirming the findings of previous studies. However, effectiveness of the technique is only one part of the story. Familial searching has legal and ethical aspects that should not be ignored. In Switzerland, for example, there are no specific guidelines on the legality or otherwise of familial searching. This article both presents statistical results and addresses the criminological and civil-liberties aspects to be taken into account in weighing the risks and benefits of familial searching.
Abstract:
Quantitative or algorithmic trading is the automation of investment decisions obeying fixed or dynamic sets of rules that determine trading orders. It has grown to account for up to 70% of the trading volume on some of the biggest financial markets, such as the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it, owing to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a field where publications are scarce and infrequent. We review the basic mathematical concepts needed for modeling financial markets, such as stochastic processes, stochastic integration, and basic models for price and spread dynamics necessary for building quantitative strategies. We also contrast these models with real market data, sampled at one-minute frequency, from the Dow Jones Industrial Average (DJIA). Quantitative strategies try to exploit two types of behavior: trend following or mean reversion. The former is grouped in the so-called technical models and the latter in so-called pairs trading. Technical models have been discarded by financial theoreticians, but we show that they can be properly cast as well-defined scientific predictors if the signals they generate pass the test of being a Markov time; that is, we can tell whether the signal has occurred or not by examining the information up to the current time, or, more technically, whether the event is F_t-measurable. On the other hand, the concept of pairs trading, or market-neutral strategy, is fairly simple. However, it can be cast in a variety of mathematical models, ranging from a method based on a simple Euclidean distance, to a co-integration framework, to stochastic differential equations such as the well-known Ornstein-Uhlenbeck mean-reverting SDE and its variations.
A model for forecasting any economic or financial magnitude could be properly defined with scientific rigor yet lack any economic value and be considered useless from a practical point of view. This is why this project could not be complete without a backtesting of the mentioned strategies. Conducting a useful and realistic backtesting is by no means a trivial exercise, since the "laws" that govern financial markets are constantly evolving in time. This is why we place emphasis on the calibration of the strategies' parameters to adapt to the given market conditions. We find that the parameters of technical models are more volatile than their counterparts from market-neutral strategies, and calibration must be done with high-frequency sampling to constantly track the current market situation. As a whole, the goal of this project is to provide an overview of a quantitative approach to investment, reviewing basic strategies and illustrating them by means of a backtesting with real financial market data. The sources of the data used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numeric computations and graphics used and shown in this project were implemented from scratch in MATLAB as part of this thesis; no other mathematical or statistical software was used.
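The Ornstein-Uhlenbeck spread model and the mean-reversion signal mentioned above can be sketched as follows. The thesis's code was in MATLAB; this illustrative sketch is in Python, with arbitrary parameters that are not the project's calibrated values:

```python
# Sketch of the OU spread model dX_t = kappa*(mu - X_t) dt + sigma dW_t,
# simulated with Euler-Maruyama, plus a naive mean-reversion trading signal.
# Parameters are arbitrary; this is not the project's calibrated strategy.
import numpy as np

def simulate_ou(kappa, mu, sigma, x0, dt, n, rng):
    """Euler-Maruyama discretization of the OU SDE."""
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = x[i - 1] + kappa * (mu - x[i - 1]) * dt \
               + sigma * np.sqrt(dt) * rng.normal()
    return x

def signal(spread, entry=1.0):
    """Naive pairs-trading rule: short the spread when it is more than
    `entry` standard deviations above its mean, long when below, else flat."""
    z = (spread - spread.mean()) / spread.std()
    return np.where(z > entry, -1, np.where(z < -entry, 1, 0))

rng = np.random.default_rng(3)
spread = simulate_ou(kappa=5.0, mu=0.0, sigma=0.3, x0=0.5,
                     dt=1 / 252, n=2520, rng=rng)
positions = signal(spread)   # -1 short, 0 flat, +1 long at each step
print(sorted(set(np.unique(positions))))
```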
Abstract:
Due to their performance-enhancing properties, the use of anabolic steroids (e.g. testosterone, nandrolone, etc.) is banned in elite sports. Therefore, doping control laboratories accredited by the World Anti-Doping Agency (WADA) screen, among other things, for these prohibited substances in urine. It is particularly challenging to detect misuse of naturally occurring anabolic steroids such as testosterone (T), which is a popular ergogenic agent in sports and society. To screen for misuse of these compounds, drug testing laboratories monitor the urinary concentrations of endogenous steroid metabolites and their ratios, which constitute the steroid profile, and compare them with reference ranges to detect unnaturally high values. However, the interpretation of the steroid profile is difficult due to large inter-individual variance, various confounding factors, and the different endogenous steroids on the market that influence the steroid profile in various ways. A support vector machine (SVM) algorithm was developed to statistically evaluate urinary steroid profiles composed of an extended range of steroid profile metabolites. This model makes the interpretation of the analytical data in the quest for deviating steroid profiles feasible and shows its versatility towards different kinds of misused endogenous steroids. The SVM model outperforms the current biomarkers with respect to detection sensitivity and accuracy, particularly when it is coupled to individual data as stored in the Athlete Biological Passport.
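The classification step can be illustrated with a hedged sketch of an SVM on synthetic stand-ins for steroid-profile features; this is not the model described in the abstract, and the features and class separation are invented:

```python
# Hedged sketch, not the laboratory's actual model: an SVM classifier
# separating "normal" from "atypical" profile vectors, illustrating the kind
# of statistical evaluation the abstract describes. Data are synthetic
# stand-ins for steroid metabolite log-ratios.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
normal = rng.normal(0.0, 1.0, size=(200, 6))   # 6 ratios per "normal" sample
doped = rng.normal(1.5, 1.0, size=(40, 6))     # shifted "atypical" profiles
X = np.vstack([normal, doped])
y = np.array([0] * 200 + [1] * 40)

# scale features, then fit an RBF-kernel SVM
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)).fit(X, y)
print(model.score(X, y))   # in-sample accuracy only; real use needs CV
```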