882 results for VaR estimation methods, Statistical Methods, Risk management, Investments
Abstract:
Sparse estimation methods that utilize the l(p)-norm, with p between 0 and 1, have shown better utility in providing optimal solutions to the inverse problem in diffuse optical tomography. These l(p)-norm-based regularizations make the optimization function nonconvex, and algorithms that implement l(p)-norm minimization rely on approximations to the original l(p)-norm function. In this work, three typical methods for implementing the l(p)-norm were considered, namely iteratively reweighted l(1)-minimization (IRL1), iteratively reweighted least squares (IRLS), and the iterative thresholding method (ITM). These methods were deployed for diffuse optical tomographic image reconstruction, and a systematic comparison using three numerical and gelatin phantom cases was executed. The results indicate that these three implementations of l(p)-minimization yield similar results, with IRL1 faring marginally better in the cases considered here in terms of shape recovery and quantitative accuracy of the reconstructed diffuse optical tomographic images. (C) 2014 Optical Society of America
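The reweighting idea behind one of the three schemes, IRLS, can be sketched for a generic l(p)-regularized least-squares problem. This is a minimal numpy illustration with a standard epsilon-continuation, not the paper's diffuse-optical-tomography solver; the operator, data, and parameters below are invented for the demo:

```python
import numpy as np

def irls_lp(A, y, p=0.5, lam=1e-3, n_iter=60):
    """Iteratively reweighted least squares for the l_p-regularized problem
    min ||A x - y||^2 + lam * ||x||_p^p, with p in (0, 1).  Each step solves
    a weighted ridge system; eps is annealed so early iterations stay smooth."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    eps = 1.0
    for _ in range(n_iter):
        # Smoothed l_p weights: w_i = p * (x_i^2 + eps)^(p/2 - 1)
        w = p * (x ** 2 + eps) ** (p / 2.0 - 1.0)
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)
        eps = max(0.7 * eps, 1e-8)
    return x

# Sparse-recovery demo on a random underdetermined system.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.5, -2.0, 1.0]
y = A @ x_true
x_hat = irls_lp(A, y, p=0.5)
```

Because the system is underdetermined, the weighted ridge step effectively picks the interpolating solution with the smallest weighted norm, which is what drives the iterates toward sparsity.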
Abstract:
The Electromagnetic Articulography (EMA) technique is used to record the kinematics of different articulators while a person speaks. EMA data often contain missing segments due to sensor failure. In this work, we propose maximum a posteriori (MAP) estimation with a continuity constraint to recover the missing samples in articulatory trajectories recorded using EMA. This approach combines the benefits of statistical MAP estimation with the temporal continuity of articulatory trajectories. Experiments on an articulatory corpus using different missing-segment durations show that the proposed continuity constraint yields a 30% reduction in average root mean squared error relative to statistical estimation of missing segments without any continuity constraint.
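The flavor of combining a statistical prior with temporal continuity can be sketched as a quadratic MAP problem: a Gaussian prior on each missing sample plus a penalty on the trajectory's second derivative, solved as one linear system. This is a simplified illustration, not the paper's estimator; the trajectory, penalty weight, and prior below are invented:

```python
import numpy as np

def recover_missing(traj, missing, lam=1e-3, dt=1.0):
    """Fill missing samples by MAP-style estimation: a Gaussian prior on each
    missing sample (mean/variance taken from the observed samples) combined
    with a quadratic penalty on the second derivative that enforces temporal
    continuity.  A simplified sketch of the idea, not the paper's estimator."""
    traj = np.asarray(traj, float)
    n = len(traj)
    obs = ~missing
    mu, var = traj[obs].mean(), traj[obs].var() + 1e-12
    # Second-derivative operator: (D x)_t = (x_{t-1} - 2 x_t + x_{t+1}) / dt^2
    D = np.zeros((n - 2, n))
    for t in range(n - 2):
        D[t, t:t + 3] = np.array([1.0, -2.0, 1.0]) / dt ** 2
    P = lam * D.T @ D
    m, o = np.flatnonzero(missing), np.flatnonzero(obs)
    # MAP normal equations for the missing block, conditioned on the
    # observed samples (which stay exactly as recorded).
    A = P[np.ix_(m, m)] + np.eye(len(m)) / var
    b = -P[np.ix_(m, o)] @ traj[o] + mu / var
    out = traj.copy()
    out[m] = np.linalg.solve(A, b)
    return out

# Drop a 40-sample segment from a smooth trajectory and recover it.
t = np.linspace(0.0, 1.0, 200)
x = np.sin(4.0 * np.pi * t)
missing = np.zeros(200, bool)
missing[80:120] = True
x_hat = recover_missing(x, missing, lam=1e-3, dt=t[1] - t[0])
```

With the continuity term dominating, the recovered segment behaves like a smoothing-spline interpolant through the surrounding observed samples rather than a plain fill with the prior mean.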
Abstract:
Restricted Boltzmann Machines (RBMs) can be used either as classifiers or as generative models. The quality of a generative RBM is measured through the average log-likelihood on test data. Due to the high computational complexity of evaluating the partition function, exact calculation of the test log-likelihood is very difficult. In recent years, several estimation methods have been suggested for approximate computation of the test log-likelihood. In this paper we present an empirical comparison of the main estimation methods, namely the AIS algorithm for estimating the partition function, the CSL method for directly estimating the log-likelihood, and the RAISE algorithm, which combines these two ideas.
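The AIS idea can be illustrated on an RBM small enough that the exact partition function is available by enumerating the visible states. This is a toy sketch with made-up parameters, using a uniform base distribution and single-bit Metropolis transitions; it is not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(1)
nv, nh = 6, 4
W = 0.3 * rng.standard_normal((nv, nh))
a = 0.1 * rng.standard_normal(nv)   # visible biases
b = 0.1 * rng.standard_normal(nh)   # hidden biases

def log_pstar(v):
    """Unnormalized log-probability of visible vector(s), hidden units summed out."""
    return v @ a + np.sum(np.logaddexp(0.0, b + v @ W), axis=-1)

# Exact log partition function by enumerating all 2^nv visible states.
all_v = np.array([[(i >> k) & 1 for k in range(nv)] for i in range(2 ** nv)], float)
log_Z_exact = np.logaddexp.reduce(log_pstar(all_v))

# AIS from the uniform distribution (log Z_0 = nv * log 2) to the RBM,
# with single-bit-flip Metropolis transitions at each temperature.
n_chains, betas = 500, np.linspace(0.0, 1.0, 101)
v = (rng.random((n_chains, nv)) < 0.5).astype(float)
log_w = np.full(n_chains, nv * np.log(2.0))
for b0, b1 in zip(betas[:-1], betas[1:]):
    log_w += (b1 - b0) * log_pstar(v)
    for _ in range(2):  # a couple of Metropolis sweeps at inverse temperature b1
        for i in range(nv):
            v_new = v.copy()
            v_new[:, i] = 1.0 - v_new[:, i]
            accept = np.log(rng.random(n_chains)) < b1 * (log_pstar(v_new) - log_pstar(v))
            v[accept] = v_new[accept]
log_Z_ais = np.logaddexp.reduce(log_w) - np.log(n_chains)
```

On a real RBM the visible enumeration is infeasible, which is exactly why AIS (and RAISE, which runs the annealing per test example) is needed.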
Abstract:
The desired species identified in this survey include mullets, catfishes, fast-growing fish predators, species for the control of weeds and grass in ponds, cichlids, and shrimps. Five coastal states were covered in the studies: Lagos, Ondo, Bendel, Rivers, and Cross River. Investigations were also carried out into the major rivers and their tributaries. A combination of the estimation methods of Le Cren (1962) and Pitcher and MacDonald (1973) was employed in the analysis of the data. The detailed data collected from 1978 to 1985 indicate that about 100 million fish seeds can be collected annually from Nigerian waters using appropriate gear: seine nets, cast nets, and fish traps. Of this number, 60% is available along the coastal belt of the country while 40% is in the major rivers, their tributaries, and swamps. At the present level of fish-culture development in Nigeria, this is more than enough, even after allowing for 50% mortality due to handling and transportation stress.
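The survey's headline figures reduce to one line of arithmetic each (integer arithmetic over the quoted percentages; the figures themselves are the abstract's, the variable names are ours):

```python
total_seed = 100_000_000            # fish seeds collectable annually from Nigerian waters
coastal = total_seed * 60 // 100    # share along the coastal belt
inland = total_seed * 40 // 100     # share in major rivers, tributaries, and swamps
surviving = total_seed // 2         # after allowing 50% handling/transport mortality
```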
Abstract:
This dissertation analyzes the behavior of the Brazilian trade balance from 1999 to 2006 and seeks to understand the factors that contributed positively to the growth of exports, offsetting the negative effects of the exchange-rate appreciation that began in 2003. To this end, starting from an adaptation of the export supply-and-demand model of Goldstein and Khan (1978), two estimation methods are used to obtain the elasticities with respect to the model's explanatory variables. The first consists of estimating a simultaneous model of export supply and demand; the second is the cointegration approach proposed by Engle and Granger. In both cases, exports were disaggregated by product class (manufactured, semi-manufactured, and basic goods), in addition to total exports. On the demand side, the results estimated under both methods, for both the long and the short run, confirm the hypotheses raised throughout the study: growth in the prices of exported products, as well as growth in world income, was highly relevant to export growth in all product classes analyzed. Regarding export supply, the capacity-utilization rate and the prices of exported products were positively correlated with the quantity supplied, whereas the exchange rate, contrary to expectations, showed negative elasticities.
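The Engle-Granger two-step procedure used in the second method can be sketched on synthetic data: regress one series on the other, then test the residuals for stationarity. This is a numpy-only illustration with a plain Dickey-Fuller t-statistic rather than the full critical-value machinery, and the data are simulated, not the dissertation's:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Synthetic cointegrated pair: x is a random walk, y = 2x + stationary noise.
x = np.cumsum(rng.standard_normal(n))
y = 2.0 * x + rng.standard_normal(n)

# Step 1: OLS of y on x (with intercept) estimates the long-run relationship.
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

# Step 2: Dickey-Fuller regression on the residuals,
# delta u_t = rho * u_{t-1} + e_t; a strongly negative t-statistic on rho
# indicates stationary residuals, i.e. cointegration.
du, u1 = np.diff(resid), resid[:-1]
rho = (u1 @ du) / (u1 @ u1)
se = np.sqrt(np.mean((du - rho * u1) ** 2) / (u1 @ u1))
t_stat = rho / se
```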
Abstract:
In 2006, the National Marine Fisheries Service, NOAA, initiated development of a national bycatch report that would provide bycatch estimates for U.S. commercial fisheries at the fishery and species levels for fishes, marine mammals, sea turtles, and seabirds. As part of this project, the need to quantify the relative quality of available bycatch data and estimation methods was identified. Working collaboratively with fisheries managers and scientists across the nation, we developed a system of evaluation. Herein we describe the development of this system (the "tier system"), its components, and its application. We also discuss the value of the tier system in allowing fisheries managers to identify research needs and efficiently allocate limited resources toward those areas that will result in the greatest improvement to bycatch data and estimation quality.
Abstract:
Skeletochronological data on growth changes in humerus diameter were used to estimate the age of Hawaiian green sea turtles ranging from 28.7 to 96.0 cm straight carapace length. Two age estimation methods, correction factor and spline integration, were compared, giving age estimates ranging from 4.1 to 34.6 and from 3.3 to 49.4 yr, respectively, for the sample data. Mean growth rates of Hawaiian green sea turtles are 4-5 cm/yr in early juveniles, decline to a relatively constant rate of about 2 cm/yr by age 10 yr, then decline again to less than 1 cm/yr as turtles near age 30 yr. On average, age estimates from the two techniques differed by just a few years for juvenile turtles, but by wider margins for mature turtles. The spline-integration method models the curvilinear relationship between humerus diameter and the width of periosteal growth increments within the humerus, and offers several advantages over the correction-factor approach.
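The spline-integration idea, age as the integral of inverse growth rate over humerus diameter, can be sketched as follows. Linear interpolation stands in for the fitted spline, and the increment data are hypothetical, not the paper's measurements:

```python
import numpy as np

def age_by_integration(d0, d1, diam_knots, growth_knots, n=2000):
    """Estimate age as the integral of 1/growth-rate over humerus diameter,
    a simplified stand-in for the spline-integration method (the growth-rate
    curve is linearly interpolated here instead of spline-fitted)."""
    d = np.linspace(d0, d1, n)
    g = np.interp(d, diam_knots, growth_knots)  # growth rate (cm/yr) vs diameter
    inv = 1.0 / g
    return float(np.sum((inv[:-1] + inv[1:]) * np.diff(d)) / 2.0)  # trapezoid rule

# Hypothetical increment data: humerus growth slows as the bone gets larger.
diam_knots = np.array([1.0, 2.0, 3.0, 4.0])      # humerus diameter, cm
growth_knots = np.array([0.4, 0.2, 0.1, 0.05])   # diameter growth rate, cm/yr
age = age_by_integration(1.0, 4.0, diam_knots, growth_knots)
```

With a constant growth rate the integral reduces to (diameter change) / (rate), which is essentially the correction-factor logic; the integration only pays off when the rate varies with diameter, as it does for maturing turtles.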
Abstract:
The contact angles θ of some liquids on ethylene-propylene copolymer-grafted glycidyl methacrylate (EPM-g-GMA) were measured. The critical surface tensions γ_c of EPM-g-GMA were evaluated from the Zisman plot (cos θ versus γ_L), the Young-Dupre-Good-Girifalco plot (1 + cos θ versus 1/γ_L^0.5), and the log(1 + cos θ) versus log γ_L plot. The following results were obtained: the γ_c values varied significantly with the estimation method, and the critical surface tension γ_c decreased with increasing degree of grafting of EPM-g-GMA.
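A Zisman-style estimate of the critical surface tension can be sketched in a few lines: cos θ is fitted linearly against γ_L and extrapolated to cos θ = 1 (complete wetting). The liquid data below are hypothetical, chosen only to show the procedure:

```python
import numpy as np

# Hypothetical test liquids: surface tension gamma_L (mN/m) and contact angle.
gamma_L = np.array([22.1, 27.5, 33.0, 44.0, 58.2])
theta_deg = np.array([5.0, 21.0, 38.0, 55.0, 70.0])

# Zisman plot: fit cos(theta) linearly against gamma_L and extrapolate to
# cos(theta) = 1; the gamma_L value there is the critical surface tension.
cos_theta = np.cos(np.radians(theta_deg))
slope, intercept = np.polyfit(gamma_L, cos_theta, 1)
gamma_c = (1.0 - intercept) / slope
```

The other two plots mentioned in the abstract follow the same pattern with different coordinates, which is why the extrapolated γ_c can differ between methods.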
Abstract:
Camera pose estimation based on a body of revolution is a rarely addressed and relatively difficult problem in monocular localization with cooperative targets; traditional methods based on point, line, and curve primitives all run into difficulties when applied to bodies of revolution. This paper designs a geometric model composed of four mutually tangent ellipses wrapped around the surface of a cylinder. Using the fact that the projection of a conic is still a conic, together with the relevant properties of ellipses, three coordinate points that uniquely determine the position of the model can be obtained, which converts the pose estimation of the body of revolution into a P3P problem. After analyzing the solution-mode regions of P3P, a criterion for determining the solution mode of the P3P problem from the bending of the visible curves on the model is derived, and a proof is given. Simulation experiments demonstrate the effectiveness of this model-based localization method. Finally, the model is used to guide a robotic manipulator through a target-localization experiment.
Abstract:
A method is proposed for estimating the vertex normals of a two-manifold triangle mesh: over the one-ring neighborhood of a vertex, each triangle's face normal is weighted by the ratio of the triangle's angle at that vertex to the triangle's area. Five existing vertex-normal estimation methods are reviewed, and the new method is then presented. An experiment is designed that uses the angle between the theoretical normal and the estimated normal as the error criterion, and the performance of the six estimation methods is analyzed on sphere and ellipsoid models.
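The proposed angle-to-area weighting can be sketched directly. This is a minimal implementation of the scheme described above; the flat test mesh is chosen so the exact normal is known:

```python
import numpy as np

def vertex_normals(verts, faces):
    """Per-vertex normals of a triangle mesh, weighting each incident face
    normal by (angle at the vertex) / (face area)."""
    normals = np.zeros_like(verts)
    for f in faces:
        p = verts[f]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        area = 0.5 * np.linalg.norm(n)
        n = n / (np.linalg.norm(n) + 1e-12)
        for k in range(3):
            e1 = p[(k + 1) % 3] - p[k]
            e2 = p[(k + 2) % 3] - p[k]
            cosang = e1 @ e2 / (np.linalg.norm(e1) * np.linalg.norm(e2) + 1e-12)
            angle = np.arccos(np.clip(cosang, -1.0, 1.0))
            normals[f[k]] += (angle / (area + 1e-12)) * n
    lens = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.maximum(lens, 1e-12)

# Flat 2x2 grid of triangles: every vertex normal should be (0, 0, 1).
verts = np.array([[x, y, 0.0] for y in range(3) for x in range(3)])
faces = []
for j in range(2):
    for i in range(2):
        v0 = 3 * j + i
        faces.append([v0, v0 + 1, v0 + 4])
        faces.append([v0, v0 + 4, v0 + 3])
normals = vertex_normals(verts, np.array(faces))
```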
Abstract:
The capacity factors of a series of hydrophobic organic compounds (HOCs) were measured in soil leaching column chromatography (SLCC) on a soil column, and in reversed-phase liquid chromatography on a C-18 column, with different volumetric fractions (φ) of methanol in methanol-water mixtures. A general linear solvation energy relationship, log(XYZ) = XYZ_0 + mV_1/100 + sπ* + bβ_m + aα_m, was applied to analyze the capacity factors (k'), soil organic partition coefficients (K_oc), and octanol-water partition coefficients (P). The analyses exhibited high accuracy. The chief solute factors that control log K_oc, log P, and log k' (on soil and on C-18) are the solute size (V_1/100) and hydrogen-bond basicity (β_m); less important are the dipolarity/polarizability (π*) and hydrogen-bond acidity (α_m). Log k' on soil and log K_oc have the same signs in the four fitting coefficients (m, s, b, and a) and similar ratios m:s:b:a, while log k' on C-18 and log P likewise share coefficient signs and ratios. Consequently, log k' values on C-18 correlate well with log P (r > 0.97), while log k' values on soil correlate well with log K_oc (r > 0.98). Two K_oc estimation methods were developed: one through solute solvatochromic parameters, the other through correlations with k' on soil. For HOCs, a linear relationship between the logarithmic capacity factor and the methanol composition of methanol-water mixtures could also be derived in SLCC. (C) 2002 Elsevier Science Ltd. All rights reserved.
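Fitting the coefficients of such a linear solvation energy relationship is ordinary least squares on the solute descriptors. A sketch with synthetic descriptors and coefficients (all values invented, not the paper's fits):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
# Hypothetical solute descriptors: V_1/100, pi*, beta_m, alpha_m.
desc = rng.uniform([0.3, 0.0, 0.0, 0.0], [1.5, 1.2, 1.0, 0.8], size=(n, 4))
true_coef = np.array([-0.5, 3.2, -0.4, -2.8, -0.3])  # XYZ_0, m, s, b, a
X = np.column_stack([np.ones(n), desc])
log_k = X @ true_coef + 0.02 * rng.standard_normal(n)  # noisy "measurements"

# Fit the LSER coefficients (intercept, m, s, b, a) by ordinary least squares.
coef = np.linalg.lstsq(X, log_k, rcond=None)[0]
```

Comparing the fitted sign patterns and m:s:b:a ratios across two retention systems is then exactly the kind of analysis the abstract describes.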
Abstract:
We first examine the model of Hobson and Rogers for the volatility of a financial asset such as a stock or share. The main feature of this model is the specification of volatility in terms of past price returns. The volatility process and the underlying price process share the same source of randomness, and so the model is said to be complete. Complete models are advantageous as they allow a unique, preference-independent price for options on the underlying price process. One of the main objectives of the model is to reproduce the `smiles' and `skews' seen in market implied volatilities, and it produces the desired effect. In the first main piece of work we numerically calibrate the model of Hobson and Rogers for comparison with existing literature. We also develop parameter estimation methods based on the calibration of a GARCH model. We examine alternative specifications of the volatility and show an improvement of model fit to market data based on these specifications. We also show how to process market data in order to take account of inter-day movements in the volatility surface. In the second piece of work, we extend the Hobson and Rogers model in a way that better reflects market structure. We extend the model to take into account both first- and second-order effects. We derive and numerically solve the PDE that describes the price of options under this extended model. We show that this extension allows for a better fit to the market data. Finally, we analyse the parameters of this extended model in order to understand intuitively the role of these parameters in the volatility surface.
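The GARCH side of the calibration rests on the GARCH(1,1) conditional-variance recursion, which can be sketched as follows (illustrative parameters; an actual calibration would maximize a likelihood over them, and this is not the thesis's code):

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """GARCH(1,1) conditional-variance recursion:
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1},
    initialized at the unconditional variance omega / (1 - alpha - beta).
    A calibration would choose (omega, alpha, beta) to maximize the Gaussian
    likelihood of the observed returns under this recursion."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = omega / (1.0 - alpha - beta)
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# Simulate returns from the model itself and recover the exact variance path.
rng = np.random.default_rng(0)
omega, alpha, beta = 1e-6, 0.08, 0.90
n = 2000
r, s2_path = np.empty(n), np.empty(n)
s2 = omega / (1.0 - alpha - beta)
for t in range(n):
    s2_path[t] = s2
    r[t] = np.sqrt(s2) * rng.standard_normal()
    s2 = omega + alpha * r[t] ** 2 + beta * s2
sigma2 = garch11_variance(r, omega, alpha, beta)
```

Like the Hobson-Rogers specification, the recursion makes today's volatility a function of past returns, which is what makes the two calibrations comparable.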
Abstract:
Nova V458 Vul erupted on 2007 August 8 and reached a visual magnitude of 8.1 a few days later. Hα images obtained 6 weeks before the outburst as part of the IPHAS Galactic plane survey reveal an 18th-magnitude progenitor surrounded by an extended nebula. Subsequent images and spectroscopy of the nebula reveal an inner nebular knot increasing rapidly in brightness due to flash ionization by the nova event. We derive a distance of 13 kpc based on light-travel-time considerations, which is supported by two other distance-estimation methods. The nebula has an ionized mass of 0.2 M⊙ and a low expansion velocity: this rules it out as ejecta from a previous nova eruption and is consistent with it being a ~14,000-year-old planetary nebula, probably the product of a prior common-envelope (CE) phase of evolution of the binary system. The large derived distance means that the mass of the erupting white dwarf (WD) component of the binary is high. We identify two possible evolutionary scenarios, in at least one of which the system is massive enough to produce a Type Ia supernova upon merging.
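The light-travel-time argument is simple geometry: if the nova flash reaches a knot at angular separation θ after a delay Δt, and the knot lies near the plane of the sky, then its physical offset is r = c·Δt and the distance is D = r/θ. The delay and angular separation below are invented placeholders that merely illustrate the arithmetic scale; they are not the paper's measurements:

```python
import math

# All numbers below are hypothetical, chosen only to show the calculation.
c_km_s = 2.998e5                # speed of light, km/s
dt_days = 100.0                 # assumed delay between flash and knot brightening
theta_arcsec = 1.33             # assumed angular separation of the knot

r_km = c_km_s * dt_days * 86400.0                # flash-to-knot distance, km
theta_rad = theta_arcsec * math.pi / (180.0 * 3600.0)
d_kpc = (r_km / theta_rad) / 3.086e16            # 1 kpc = 3.086e16 km
```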
Abstract:
Background
Inferring gene regulatory networks from large-scale expression data is an important problem that has received much attention in recent years. These networks have the potential to yield insights into the causal molecular interactions of biological processes. Hence, from a methodological point of view, reliable estimation methods based on observational data are needed to approach this problem practically.
Results
In this paper, we introduce a novel gene regulatory network inference (GRNI) algorithm called C3NET. We compare C3NET with four well-known methods, ARACNE, CLR, MRNET, and RN, conducting in-depth numerical ensemble simulations, and demonstrate, also for biological expression data from E. coli, that C3NET performs consistently better than the best known GRNI methods in the literature. In addition, it has low computational complexity. Since C3NET is based on estimates of mutual information values in conjunction with a maximization step, our numerical investigations demonstrate that our inference algorithm exploits causal structural information in the data efficiently.
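The core C3NET step, every gene contributing at most one edge, to the partner with which it shares maximal mutual information, can be sketched as follows. This is a bare illustration with a histogram MI estimator; the published algorithm also tests MI values for significance:

```python
import numpy as np

def mutual_info(x, y, bins=8):
    """Histogram estimate of the mutual information between two expression profiles."""
    c_xy, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = c_xy / c_xy.sum()
    p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)
    nz = p_xy > 0
    return float(np.sum(p_xy[nz] * np.log(p_xy[nz] / np.outer(p_x, p_y)[nz])))

def c3net_sketch(data, bins=8):
    """Maximization step of the C3NET idea: each gene keeps a single edge,
    to its maximal-mutual-information partner (no significance test here)."""
    g = data.shape[0]
    mi = np.zeros((g, g))
    for i in range(g):
        for j in range(i + 1, g):
            mi[i, j] = mi[j, i] = mutual_info(data[i], data[j], bins)
    edges = set()
    for i in range(g):
        j = int(np.argmax(mi[i]))
        edges.add((min(i, j), max(i, j)))
    return edges

# Three genes: gene 1 is a noisy copy of gene 0, gene 2 is independent.
rng = np.random.default_rng(0)
g0 = rng.standard_normal(2000)
data = np.vstack([g0, g0 + 0.1 * rng.standard_normal(2000),
                  rng.standard_normal(2000)])
edges = c3net_sketch(data)
```

Because each of the g genes contributes at most one edge, the inferred network has at most g edges, which is one source of the method's low computational and structural complexity.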
Conclusions
For systems biology to succeed in the long run, it is of crucial importance to establish methods that extract large-scale gene networks from high-throughput data that reflect the underlying causal interactions among genes or gene products. Our method can contribute to this endeavor by demonstrating that an inference algorithm with a neat design not only permits a more intuitive and possibly biological interpretation of its working mechanism but can also deliver superior results.