970 results for "online confidence estimation"


Relevance: 30.00%

Abstract:

Asset correlations are of critical importance in quantifying portfolio credit risk and economic capital in financial institutions. Estimation of asset correlation with rating transition data has focused on the point estimation of the correlation, without giving any consideration to the uncertainty around these point estimates. In this article we use Bayesian methods to estimate a dynamic factor model for default risk using rating data (McNeil et al., 2005; McNeil and Wendin, 2007). Bayesian methods allow us to formally incorporate human judgement in the estimation of asset correlation, through the prior distribution, and to fully characterize a confidence set for the correlations. Results indicate that: i) a two-factor model, rather than the one-factor model proposed by the Basel II framework, better represents the historical default data; ii) the importance of unobserved factors in this type of model is reinforced, and the levels of the implied asset correlations critically depend on the latent state variable used to capture the dynamics of default, as well as on other assumptions of the statistical model; iii) the posterior distributions of the asset correlations show that the bounds recommended by Basel for this parameter understate the level of systemic risk.
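As a rough illustration of this kind of inference (a minimal sketch, not the authors' dynamic factor model), the code below computes a grid posterior for the asset correlation rho in a static one-factor Vasicek-type default model, using hypothetical yearly default counts; the systematic factor is integrated out with Gauss-Hermite quadrature.

```python
import numpy as np
from scipy import stats

# Hypothetical yearly cohort sizes and default counts.
n_obligors = np.array([900, 950, 1000, 1000, 980, 1020])
n_defaults = np.array([12, 8, 30, 15, 9, 22])
pd_bar = n_defaults.sum() / n_obligors.sum()        # unconditional PD

# Gauss-Hermite nodes to integrate out the systematic factor Z ~ N(0, 1).
nodes, weights = np.polynomial.hermite_e.hermegauss(40)
w = weights / weights.sum()

def log_lik(rho):
    """Log-likelihood of the default counts under a one-factor model."""
    c = stats.norm.ppf(pd_bar)
    p_z = stats.norm.cdf((c - np.sqrt(rho) * nodes) / np.sqrt(1.0 - rho))
    return sum(np.log(np.sum(w * stats.binom.pmf(d, n, p_z)))
               for n, d in zip(n_obligors, n_defaults))

# Flat prior on a grid of correlations -> posterior by normalisation.
rhos = np.linspace(0.001, 0.4, 200)
logp = np.array([log_lik(r) for r in rhos])
post = np.exp(logp - logp.max())
post /= np.trapz(post, rhos)

cdf = np.cumsum(post) * (rhos[1] - rhos[0])
lo, hi = rhos[np.searchsorted(cdf, 0.05)], rhos[np.searchsorted(cdf, 0.95)]
print(f"posterior mode {rhos[post.argmax()]:.3f}, 90% credible set [{lo:.3f}, {hi:.3f}]")
```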

Relevance: 30.00%

Abstract:

This paper considers the problem of estimation when one of a number of populations, assumed normal with known common variance, is selected on the basis of it having the largest observed mean. Conditional on selection of the population, the observed mean is a biased estimate of the true mean. This problem arises in the analysis of clinical trials in which selection is made between a number of experimental treatments that are compared with each other either with or without an additional control treatment. Attempts to obtain approximately unbiased estimates in this setting have been proposed by Shen [2001. An improved method of evaluating drug effect in a multiple dose clinical trial. Statist. Medicine 20, 1913–1929] and Stallard and Todd [2005. Point estimates and confidence regions for sequential trials involving selection. J. Statist. Plann. Inference 135, 402–419]. This paper explores the problem in the simple setting in which two experimental treatments are compared in a single analysis. It is shown that in this case the estimate of Stallard and Todd is the maximum-likelihood estimate (m.l.e.), and this is compared with the estimate proposed by Shen. In particular, it is shown that the m.l.e. has infinite expectation whatever the true value of the mean being estimated. We show that there is no conditionally unbiased estimator, and propose a new family of approximately conditionally unbiased estimators, comparing these with the estimators suggested by Shen.
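A quick Monte Carlo makes the selection bias concrete: with two treatments of equal true mean, the reported mean of the selected (larger) arm is biased upward. All numbers below are hypothetical, and this illustrates only the bias, not the paper's proposed estimators.

```python
import numpy as np

rng = np.random.default_rng(1)
mu = np.array([0.3, 0.3])          # equal true means (hypothetical values)
sigma, n, reps = 1.0, 25, 200_000  # known SD, per-arm sample size, replications

# Observed means for two treatments; report the arm with the larger mean.
xbar = rng.normal(mu, sigma / np.sqrt(n), size=(reps, 2))
selected = xbar.max(axis=1)

# Conditional on selection, the naive estimate overshoots the true mean.
print("true mean of selected arm :", mu[0])
print("mean of reported estimate :", round(float(selected.mean()), 4))
```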

Relevance: 30.00%

Abstract:

Undirected graphical models are widely used in statistics, physics and machine vision. However, Bayesian parameter estimation for undirected models is extremely challenging, since evaluation of the posterior typically involves the calculation of an intractable normalising constant. This problem has received much attention, but very little of it has focussed on the important practical case where the data consist of noisy or incomplete observations of the underlying hidden structure. This paper specifically addresses this problem, comparing two alternative methodologies. In the first of these approaches, particle Markov chain Monte Carlo (Andrieu et al., 2010) is used to efficiently explore the parameter space, combined with the exchange algorithm (Murray et al., 2006) for avoiding the calculation of the intractable normalising constant (a proof showing that this combination targets the correct distribution is found in a supplementary appendix online). This approach is compared with approximate Bayesian computation (Pritchard et al., 1999). Applications to estimating the parameters of Ising models and exponential random graphs from noisy data are presented. Each algorithm used in the paper targets an approximation to the true posterior, due to the use of MCMC to simulate from the latent graphical model in lieu of being able to do this exactly in general. The supplementary appendix also describes the nature of the resulting approximation.
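For intuition about the second methodology, here is a minimal ABC rejection sampler for the inverse temperature of a small Ising model, with a crude Gibbs sampler as the simulator and the sum of neighbour products as the summary statistic. This is only a sketch of ABC, deliberately small and slow, not the paper's particle MCMC/exchange construction.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 12  # lattice side

def gibbs_ising(beta, sweeps=100):
    """Crude Gibbs sampler for a +/-1 Ising model on an L x L torus."""
    s = rng.choice([-1, 1], size=(L, L))
    for _ in range(sweeps):
        for i in range(L):
            for j in range(L):
                nb = s[(i-1) % L, j] + s[(i+1) % L, j] + s[i, (j-1) % L] + s[i, (j+1) % L]
                p = 1.0 / (1.0 + np.exp(-2.0 * beta * nb))
                s[i, j] = 1 if rng.random() < p else -1
    return s

def suff_stat(s):
    """Sum of neighbour products: the Ising sufficient statistic."""
    return (s * np.roll(s, 1, axis=0)).sum() + (s * np.roll(s, 1, axis=1)).sum()

beta_true = 0.35
s_obs = suff_stat(gibbs_ising(beta_true))   # pretend this is the observed data

# ABC rejection: draw beta from the prior, keep it if the simulated
# statistic lands close enough to the observed one.
accepted = [beta for beta in rng.uniform(0.0, 0.7, 200)
            if abs(suff_stat(gibbs_ising(beta)) - s_obs) < 0.1 * abs(s_obs)]
print(f"ABC posterior mean for beta: {np.mean(accepted):.3f} (accepted {len(accepted)})")
```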

Relevance: 30.00%

Abstract:

Optimal estimation (OE) improves sea surface temperature (SST) estimated from satellite infrared imagery in the “split-window”, in comparison to SST retrieved using the usual multi-channel (MCSST) or non-linear (NLSST) estimators. This is demonstrated using three months of observations of the Advanced Very High Resolution Radiometer (AVHRR) on the first Meteorological Operational satellite (Metop-A), matched in time and space to drifter SSTs collected on the global telecommunications system. There are 32,175 matches. The prior for the OE is forecast atmospheric fields from the Météo-France global numerical weather prediction system (ARPEGE), the forward model is RTTOV8.7, and a reduced state vector comprising SST and total column water vapour (TCWV) is used. Operational NLSST coefficients give mean and standard deviation (SD) of the difference between satellite and drifter SSTs of 0.00 and 0.72 K. The “best possible” NLSST and MCSST coefficients, empirically regressed on the data themselves, give zero mean difference and SDs of 0.66 K and 0.73 K respectively. Significant contributions to the global SD arise from regional systematic errors (biases) of several tenths of kelvin in the NLSST. With no bias corrections to either prior fields or forward model, the SSTs retrieved by OE minus drifter SSTs have mean and SD of −0.16 and 0.49 K respectively. The reduction in SD below the “best possible” regression results shows that OE deals with structural limitations of the NLSST and MCSST algorithms. Using simple empirical bias corrections to improve the OE, retrieved minus drifter SSTs are obtained with mean and SD of −0.06 and 0.44 K respectively. Regional biases are greatly reduced, such that the absolute bias is less than 0.1 K in 61% of 10°-latitude by 30°-longitude cells. OE also allows a statistic of the agreement between modelled and measured brightness temperatures to be calculated. We show that this measure is more efficient than the current system of confidence levels at identifying reliable retrievals, and that the best 75% of satellite SSTs by this measure have negligible bias and retrieval error of order 0.25 K.
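A minimal sketch of the linear optimal-estimation update over a reduced state vector [SST, TCWV], with a hypothetical two-channel Jacobian K standing in for a linearized radiative transfer model such as RTTOV; all numbers are illustrative, not the operational Metop-A configuration. The final chi-square statistic is the kind of model-measurement agreement measure the abstract refers to.

```python
import numpy as np

# Reduced state [SST (K), TCWV (kg m^-2)] with an NWP-style prior.
x_a = np.array([290.0, 30.0])
S_a = np.diag([1.5**2, 6.0**2])             # prior covariance

# Hypothetical split-window Jacobian (stand-in for a real forward model)
# and channel noise covariance.
K = np.array([[0.95, -0.08],
              [0.90, -0.15]])
S_e = np.diag([0.15**2, 0.18**2])

y = np.array([288.1, 287.0])                # measured brightness temperatures (K)
y_fwd = np.array([288.6, 287.8])            # forward model at the prior, F(x_a)

# Posterior covariance, gain, and MAP update x = x_a + G (y - F(x_a)).
S_hat = np.linalg.inv(K.T @ np.linalg.inv(S_e) @ K + np.linalg.inv(S_a))
G = S_hat @ K.T @ np.linalg.inv(S_e)
x_hat = x_a + G @ (y - y_fwd)

# Chi-square agreement of measurements with the model: a retrieval-quality
# statistic of the kind the abstract describes.
r = y - y_fwd
chi2 = r @ np.linalg.inv(K @ S_a @ K.T + S_e) @ r
print("retrieved [SST, TCWV]:", x_hat.round(2), " chi2:", round(float(chi2), 2))
```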

Relevance: 30.00%

Abstract:

The use of Bayesian inference in the estimation of time-frequency representations has, thus far, been limited to offline analysis of signals, using a smoothing spline based model of the time-frequency plane. In this paper we introduce a new framework that allows the routine use of Bayesian inference for online estimation of the time-varying spectral density of a locally stationary Gaussian process. The core of our approach is the use of a likelihood inspired by a local Whittle approximation. This choice, along with the use of a recursive algorithm for non-parametric estimation of the local spectral density, permits the use of a particle filter for estimating the time-varying spectral density online. We provide demonstrations of the algorithm through the tracking of chirps and the analysis of musical data.
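A toy version of the idea, under simplifying assumptions: the log spectral level at a single frequency follows a random walk, periodogram-like ordinates are exponentially distributed around it (a Whittle-type likelihood), and a bootstrap particle filter tracks it online. This is an illustration only, not the paper's local Whittle construction.

```python
import numpy as np

rng = np.random.default_rng(2)
T, N = 300, 500                                  # time steps, particles

# Simulate a slowly drifting log spectral level and exponential ordinates.
true_logS = np.cumsum(0.03 * rng.normal(size=T))
I_obs = rng.exponential(np.exp(true_logS))

theta = rng.normal(0.0, 1.0, N)                  # initial particles
est = np.empty(T)
for t in range(T):
    theta = theta + 0.05 * rng.normal(size=N)    # random-walk propagation
    S = np.exp(theta)
    logw = -np.log(S) - I_obs[t] / S             # Whittle-type log-likelihood
    w = np.exp(logw - logw.max()); w /= w.sum()
    est[t] = np.sum(w * theta)                   # filtered posterior mean
    theta = theta[rng.choice(N, N, p=w)]         # multinomial resampling

print("RMSE of filtered log-spectrum:",
      round(float(np.sqrt(np.mean((est - true_logS) ** 2))), 3))
```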

Relevance: 30.00%

Abstract:

In this article, we evaluate different parameter estimation strategies for a multiple linear regression model. The model parameters were estimated using data from a clinical trial whose aim was to verify whether the mechanical property of maximum force (EM-FM) is associated with femoral mass, femoral diameter, and the experimental group of ovariectomized rats of the species Rattus norvegicus albinus, Wistar variety. Three methodologies are compared for estimating the model parameters: the classical methodology, based on the least squares method; the Bayesian methodology, based on Bayes' theorem; and the bootstrap method, based on resampling processes.
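A sketch of the three estimation routes on synthetic stand-in data (the trial data are not reproduced in the abstract): ordinary least squares, a conjugate Bayesian posterior mean with a normal prior, and case-resampling bootstrap.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in data: response vs two covariates plus an intercept.
n = 60
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
y = X @ np.array([2.0, 1.5, -0.8]) + rng.normal(0.0, 1.0, n)

# 1) Classical least squares.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# 2) Bayesian posterior mean under a N(0, tau^2 I) prior with known sigma^2
#    (a conjugate, ridge-type estimate).
tau2, sigma2 = 10.0, 1.0
beta_bayes = np.linalg.solve(X.T @ X / sigma2 + np.eye(3) / tau2, X.T @ y / sigma2)

# 3) Bootstrap: resample cases, refit, summarise the spread.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    b, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    boot.append(b)
boot = np.array(boot)

print("OLS      :", beta_ols.round(3))
print("Bayes    :", beta_bayes.round(3))
print("Bootstrap:", boot.mean(axis=0).round(3), "+/-", boot.std(axis=0).round(3))
```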

Relevance: 30.00%

Abstract:

The online services industry was characterized by a high volume of mergers and acquisitions (M&A) in the period from 2005 to 2015. The market leaders, Apple, Google, and Microsoft, incorporated this form of inorganic growth into their corporate strategies. This thesis examines the M&A activities of these three companies, focusing on two main aspects. First, it aims to address a gap in the academic literature by establishing a connection between these companies' corporate strategies and their M&A decisions. Second, it aims to anticipate possible future developments in the sector. Case studies were developed through qualitative content analysis of company publications, market analysis reports, and other third-party material. The results trace the strategic positioning of Apple, Google, and Microsoft within the online services market between 2005 and 2015. Their recurring mergers and acquisitions are analyzed with respect to the companies' corporate strategies and their responsiveness to competitors' activities. The results reveal aggressive M&A activity in strategic groups common to the three companies, especially in the markets for mobile communication devices and communication services.

Relevance: 30.00%

Abstract:

From experimental population profiles of strains of the phorid fly Megaselia scalaris, the minimum number of sample profiles that must be replicated was determined via bootstrap simulation, in order to obtain a reliable estimate of the mean population profile and to report standard-error estimates as a measure of the precision of the simulations. The original data come from experimental populations founded with the SR and R4 strains, with three replicates each, maintained for 33 weeks by the serial transfer technique in a chamber at constant temperature (25 ± 1.0 °C). The variable used was population size, and the model adopted for each profile was that of a stationary stochastic process. Through the simulations, the profiles of three experimental populations were amplified, thereby determining the minimum sample size. With the sample size fixed, bootstrap simulations were run to construct confidence intervals and to compare the mean population profiles of the two strains. The results show that with a sample size of 50 the mean values begin to stabilize.
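The resampling logic can be sketched as follows, with a synthetic stationary profile standing in for the Megaselia scalaris data: bootstrap the profile mean at increasing resample sizes and watch the estimate settle while the standard error quantifies precision, mirroring the stabilization the authors report around size 50.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in for a 33-week population-size profile (stationary process).
profile = 200.0 + 10.0 * rng.normal(size=33)

for m in (10, 25, 50, 100, 200):
    # Bootstrap-amplify the profile to resample size m, 2000 times.
    means = [rng.choice(profile, size=m, replace=True).mean() for _ in range(2000)]
    print(f"resample size {m:3d}: mean = {np.mean(means):6.2f}, SE = {np.std(means):.2f}")
```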

Relevance: 30.00%

Abstract:

A flow injection spectrophotometric system is proposed for phosphite determination in fertilizers by the molybdenum blue method, after processing each sample twice online, without and with an oxidizing step. The flow system was designed to add sulfuric acid or permanganate solutions alternately into the system by simply displacing the injector-commutator from one resting position to another, allowing the determination of phosphate and total phosphate, respectively. The phosphite concentration is then obtained by the difference between the two measurements. The influence of flow rates, sample volume, and the dimensions of the flow line connecting the injector-commutator to the main analytical channel was evaluated. The proposed method was applied to phosphite determination in commercial liquid fertilizers. Results obtained with the proposed FIA system were not statistically different from those obtained by titrimetry at the 95% confidence level. In addition, recoveries between 94 and 100% were found for spiked fertilizers. The relative standard deviation (n = 12) related to the phosphite-converted-phosphate peak alone was ≤3.5% for an 800 mg L⁻¹ P (phosphite) solution. The precision of the difference between total phosphate and phosphate was 1.1% for a 10 mg L⁻¹ P (phosphate) + 3000 mg L⁻¹ P (phosphite) solution. The sampling rate was calculated as 15 determinations per hour, and the reagent consumption was about 6.3 mg of KMnO4, 200 mg of (NH4)6Mo7O24·4H2O, and 40 mg of ascorbic acid per measurement.
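Because phosphite is computed as the difference of two measured quantities, its uncertainty combines both measurements. A small sketch of that propagation, assuming independent errors and hypothetical replicate readings:

```python
import numpy as np

# Hypothetical replicate readings (mg/L P): with and without the oxidation step.
total_p = np.array([3010.0, 2995.0, 3020.0, 3005.0])  # phosphate + oxidized phosphite
ortho_p = np.array([10.2, 10.0, 9.8, 10.1])           # phosphate only

phosphite = total_p.mean() - ortho_p.mean()
# Independent errors add in quadrature for a difference of means.
se = np.sqrt(total_p.std(ddof=1) ** 2 / total_p.size
             + ortho_p.std(ddof=1) ** 2 / ortho_p.size)
print(f"phosphite: {phosphite:.1f} +/- {se:.1f} mg/L P")
```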

Relevance: 30.00%

Abstract:

A flow injection system with online sample preparation is proposed for the determination of phosphite in liquid fertilizers by spectrophotometry. After loop-based injection, phosphite is oxidized by an acidic permanganate solution (1.0 × 10⁻² mol L⁻¹ KMnO4 + 1.0 mol L⁻¹ H2SO4) in a heated reactor (50 °C). The phosphate generated is then determined by the molybdenum blue method. The influence of flow rates, temperature, concentration and order of addition of reagents, sample volume, and reactor configuration on the formation of the blue complex and the recorded signals was investigated. The flow system was applied to phosphite determination in commercial samples of liquid fertilizers. The proposed system handles about 80 samples per hour [0.05-0.40% (w/v) H3PO3; R = 0.9998], consuming about 80 μL of sample, 1 mg of KMnO4, 25 mg of (NH4)6Mo7O24, and 10 mg of ascorbic acid per determination. Results are precise [relative standard deviation ≤3.5% for 0.1% (w/v) H3PO3, n = 12] and in agreement with those obtained by gravimetry at the 95% confidence level.
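For context on how a working curve like the reported one is used, the sketch below fits a linear calibration over the stated range and inverts it to read an unknown sample; the absorbance values are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical calibration: % (w/v) H3PO3 standards vs peak absorbance.
conc = np.array([0.05, 0.10, 0.20, 0.30, 0.40])
absorbance = np.array([0.062, 0.118, 0.231, 0.349, 0.462])

fit = stats.linregress(conc, absorbance)
print(f"slope = {fit.slope:.3f}, intercept = {fit.intercept:.3f}, R = {fit.rvalue:.4f}")

# Invert the working curve to read an unknown sample.
a_sample = 0.205
print(f"sample: {(a_sample - fit.intercept) / fit.slope:.3f} % (w/v) H3PO3")
```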

Relevance: 30.00%

Abstract:

In this article we examine an inverse heat convection problem of estimating the unknown parameters of a parameterized, variable boundary heat flux. The physical problem is a hydrodynamically developed, thermally developing, three-dimensional steady-state laminar flow of a Newtonian fluid inside a circular sector duct, insulated at the flat walls and subject to an unknown wall heat flux at the curved wall. Results are presented for polynomial and sinusoidal trial functions, and the unknown parameters as well as the surface heat fluxes are determined. Depending on the nature of the flow and on the position of the experimental points, the inverse problem sometimes could not be solved; an identification condition is therefore defined to specify when the inverse problem can be solved. Once the parameters have been computed, it is possible to assess the statistical significance of the inverse-problem solution. Approximate confidence bounds for the estimated parameters, based on the standard linear statistical procedure, are therefore analyzed and presented.
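The standard linearised procedure for such bounds is typically: fit the parameters by least squares, then approximate the parameter covariance by s²(JᵀJ)⁻¹ with J the Jacobian at the optimum. A sketch on a hypothetical polynomial-plus-sinusoid heat flux model (not the paper's duct problem):

```python
import numpy as np
from scipy import stats
from scipy.optimize import least_squares

rng = np.random.default_rng(5)

# Hypothetical trial function: q(z) = p0 + p1*z + p2*sin(pi*z).
z = np.linspace(0.0, 1.0, 40)
def model(p, z):
    return p[0] + p[1] * z + p[2] * np.sin(np.pi * z)
y = model([1.0, 0.5, 0.3], z) + 0.02 * rng.normal(size=z.size)  # noisy "measurements"

res = least_squares(lambda p: model(p, z) - y, x0=np.ones(3))
dof = z.size - res.x.size
s2 = 2.0 * res.cost / dof                      # residual variance (cost = 0.5*SSR)
cov = s2 * np.linalg.inv(res.jac.T @ res.jac)  # linearised parameter covariance

half_width = stats.t.ppf(0.975, dof) * np.sqrt(np.diag(cov))
for name, est, h in zip(("p0", "p1", "p2"), res.x, half_width):
    print(f"{name}: {est:.3f} +/- {h:.3f}")
```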

Relevance: 30.00%

Abstract:

The aim of this study is to survey dentists' confidence in radiographic measurement estimation for the assessment of dental implant length. A 19-point questionnaire with closed-ended questions was used by two graduate students to interview 69 dentists during a dental implant meeting. Included were 12 questions related to over- and underestimation in three radiographic modalities: panoramic (P), conventional tomography (T), and computerized tomography (CT). The database was analyzed with the Epi-Info 6.04 software, and the values from the two radiographic modalities P and T were compared using a chi-square test. The results showed that 38.24% of the dentists believed measurements were overestimated in P, 30.56% in T, and 0% in CT. For underestimated measurements, the percentages were 47.06% in P, 33.33% in T, and 1.92% in CT. The difference in the frequencies of under- and overestimation between P and T was statistically significant (chi2 = 6.32; P = .0425). CT was the radiographic modality with the highest measurement precision according to the dentists' confidence. In conclusion, the interviewed dentists felt that CT was the best radiographic modality when considering measurement estimation precision in preoperative dental implant assessment.
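The comparison reported above is a standard contingency-table chi-square test; a sketch with hypothetical counts, since the study's raw table is not given in the abstract:

```python
from scipy.stats import chi2_contingency

# Hypothetical counts (over, under, accurate) for panoramic vs tomography;
# not the study's raw data.
table = [[13, 16, 5],
         [11, 12, 13]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p:.4f}")
```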

Relevance: 30.00%

Abstract:

Defining pharmacokinetic parameters and depletion intervals for antimicrobials used in fish provides important guidelines for the future regulation, by Brazilian agencies, of the use of these substances in fish farming. This article presents a depletion study for oxytetracycline (OTC) in tilapias (Oreochromis niloticus) farmed under tropical conditions during the winter season. A high-performance liquid chromatography method with fluorescence detection for the quantitation of OTC in tilapia fillets and medicated feed was developed and validated. The depletion study was carried out under monitored environmental conditions. OTC was administered in the feed for five consecutive days at daily dosages of 80 mg/kg body weight. Groups of ten fish were slaughtered at 1, 2, 3, 4, 5, 8, 10, 15, 20, and 25 days after medication. After the 8th day post-treatment, OTC concentrations in the tilapia fillets were below the limit of quantitation (13 ng/g) of the method. The linear regression of the mathematical model used in the data analysis presented a coefficient of 0.9962. The elimination half-life of OTC in tilapia fillet and the withdrawal period were 1.65 and 6 days, respectively, considering the 99th percentile with 95% confidence and a maximum residue limit of 100 ng/g. Even though the study was carried out in the winter under practical conditions where the water temperature varied, the results obtained are similar to those from studies conducted under controlled temperature.
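A withdrawal period of this kind is commonly obtained by regressing log residue concentration on time and finding when a 95%-confidence upper tolerance limit for the 99th percentile falls below the maximum residue limit. A sketch of that construction on synthetic depletion data (illustrative, not the study's numbers):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Synthetic depletion data: log10 residue (ng/g) vs days post-treatment.
days = np.repeat([1.0, 2.0, 3.0, 4.0, 5.0, 8.0], 8)
logc = 3.2 - 0.18 * days + rng.normal(0.0, 0.15, days.size)

X = np.column_stack([np.ones_like(days), days])
beta, *_ = np.linalg.lstsq(X, logc, rcond=None)
dof = days.size - 2
s = np.sqrt(np.sum((logc - X @ beta) ** 2) / dof)
xtx_inv = np.linalg.inv(X.T @ X)

def upper_tolerance(t):
    """95%-confidence upper limit for the 99th percentile of log residue at time t."""
    x0 = np.array([1.0, t])
    n_eff = 1.0 / (x0 @ xtx_inv @ x0)        # effective sample size at t
    k = stats.nct.ppf(0.95, dof, stats.norm.ppf(0.99) * np.sqrt(n_eff)) / np.sqrt(n_eff)
    return x0 @ beta + k * s

mrl = np.log10(100.0)                         # maximum residue limit, 100 ng/g
t = 1.0
while upper_tolerance(t) > mrl:
    t += 0.1
print(f"withdrawal period ~ {int(np.ceil(t))} days")
```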

Relevance: 30.00%

Abstract:

Under a two-level hierarchical model, suppose that the distribution of the random parameter is known or can be estimated well. Data are generated via a fixed, but unobservable, realization of this parameter. In this paper, we derive the smallest confidence region of the random parameter under a joint Bayesian/frequentist paradigm. On average this optimal region can be much smaller than the corresponding Bayesian highest posterior density region. The new estimation procedure is appealing when one deals with data generated under a highly parallel structure, for example, data from a trial with a large number of clinical centers involved, or genome-wide gene-expression data for estimating individual gene- or center-specific parameters simultaneously. The new proposal is illustrated with a typical microarray data set, and its performance is examined via a small simulation study.
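A toy normal-normal example shows why the joint Bayesian/frequentist view matters here (an illustration of the setting, not the paper's optimal region): the Bayesian credible interval attains its nominal coverage on average over the prior, but not conditionally at a fixed extreme parameter value.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
tau, sigma = 2.0, 1.0
z = stats.norm.ppf(0.975)
b = tau**2 / (tau**2 + sigma**2)        # posterior: theta | x ~ N(b*x, b*sigma^2)

def interval(x):
    half = z * np.sqrt(b) * sigma
    return b * x - half, b * x + half

# Average coverage over the prior equals the nominal 95% level...
theta = rng.normal(0.0, tau, 100_000)
x = rng.normal(theta, sigma)
lo, hi = interval(x)
print("average coverage:", np.mean((lo <= theta) & (theta <= hi)).round(3))

# ...but conditional coverage at a fixed extreme theta drops below it.
theta0 = 4.0
x = rng.normal(theta0, sigma, 100_000)
lo, hi = interval(x)
print(f"coverage at theta = {theta0}:", np.mean((lo <= theta0) & (theta0 <= hi)).round(3))
```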

Relevance: 30.00%

Abstract:

Amplifications and deletions of chromosomal DNA, as well as copy-neutral loss of heterozygosity, have been associated with disease processes. High-throughput single nucleotide polymorphism (SNP) arrays are useful for making genome-wide estimates of copy number and genotype calls. Because neighboring SNPs in high-throughput SNP arrays are likely to have dependent copy number and genotype, owing to the underlying haplotype structure and linkage disequilibrium, hidden Markov models (HMMs) may be useful for improving genotype calls and copy number estimates that do not incorporate information from nearby SNPs. We improve previous approaches that utilize an HMM framework for inference in high-throughput SNP arrays by integrating copy number, genotype calls, and the corresponding confidence scores when available. Using simulated data, we demonstrate how confidence scores control smoothing in a probabilistic framework. Software for fitting HMMs to SNP array data is available in the R package ICE.
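An illustration of confidence scores controlling smoothing (a toy model, not the ICE package's actual implementation): a forward-backward smoother over copy-number states in which the emission variance is inflated for low-confidence SNPs, so they borrow more strength from their neighbours.

```python
import numpy as np

rng = np.random.default_rng(8)

# Simulated track: mostly copy number 2, with a deletion (CN = 1) in the middle.
states = np.array([1, 2, 3])                     # candidate copy numbers
mu = np.log2(states / 2.0)                       # expected log2 ratios
true_cn = np.full(200, 2); true_cn[80:120] = 1
conf = rng.uniform(0.3, 1.0, 200)                # per-SNP confidence scores
obs = np.log2(true_cn / 2.0) + rng.normal(0.0, 0.25 / conf)

# Sticky transitions and confidence-scaled emission variances.
A = np.full((3, 3), 0.005); np.fill_diagonal(A, 0.99)
pi = np.array([0.1, 0.8, 0.1])
sd = (0.25 / conf)[:, None]
em = np.exp(-0.5 * ((obs[:, None] - mu) / sd) ** 2) / sd

# Forward-backward smoothing (normalised per step).
T = len(obs)
alpha = np.zeros((T, 3)); beta = np.ones((T, 3))
alpha[0] = pi * em[0]; alpha[0] /= alpha[0].sum()
for t in range(1, T):
    alpha[t] = (alpha[t-1] @ A) * em[t]; alpha[t] /= alpha[t].sum()
for t in range(T - 2, -1, -1):
    beta[t] = A @ (em[t+1] * beta[t+1]); beta[t] /= beta[t].sum()
post = alpha * beta; post /= post.sum(axis=1, keepdims=True)

called = states[post.argmax(axis=1)]
print("accuracy of smoothed copy-number calls:", np.mean(called == true_cn).round(3))
```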