954 results for ROBUST ESTIMATION
Abstract:
Among synthetic vaccines, virus-like particles (VLPs) are used for their ability to induce strong humoral responses. Very little has been reported on CD4(+) T-cell responses induced by VLP-based vaccines, despite the requirement of helper T cells for antibody isotype switching. Further knowledge on helper T cells is also needed for optimization of CD8(+) T-cell vaccination. Here, we analysed human CD4(+) T-cell responses to vaccination with MelQbG10, which is a Qβ-VLP covalently linked to a long peptide derived from the melanoma self-antigen Melan-A. In all analysed patients, we found strong antibody responses of mainly IgG1 and IgG3 isotypes, and concomitant Th1-biased CD4(+) T-cell responses specific for Qβ. Although less strong, comparable B- and CD4(+) T-cell responses specific for the Melan-A cargo peptide were also found. Further optimization is required to shift the response more towards the cargo peptide. Nevertheless, the data demonstrate the high potential of VLPs for inducing humoral and cellular immune responses by mounting powerful CD4(+) T-cell help.
Abstract:
The Aitchison vector space structure for the simplex is generalized to a Hilbert space structure A2(P) for distributions and likelihoods on arbitrary spaces. Central notions of statistics, such as information or likelihood, can be identified in the algebraic structure of A2(P), along with their corresponding notions in compositional data analysis, such as the Aitchison distance or the centered log-ratio transform. In this way, rather elaborate aspects of mathematical statistics can be understood easily in the light of a simple vector space structure and of compositional data analysis. For example, combinations of statistical information, such as Bayesian updating, the combination of likelihoods, and robust M-estimation functions, are simple additions/perturbations in A2(Pprior). Weighting observations corresponds to a weighted addition of the corresponding evidence. Likelihood-based statistics for general exponential families turns out to have a particularly easy interpretation in terms of A2(P). Regular exponential families form finite-dimensional linear subspaces of A2(P), and they correspond to finite-dimensional subspaces formed by their posteriors in the dual information space A2(Pprior). The Aitchison norm can be identified with mean Fisher information. The closing constant itself is identified with a generalization of the cumulant function and shown to be the Kullback-Leibler directed information. Fisher information is the local geometry of the manifold induced by the A2(P) derivative of the Kullback-Leibler information, and the space A2(P) can therefore be seen as the tangential geometry of statistical inference at the distribution P. The discussion of A2(P)-valued random variables, such as estimation functions or likelihoods, gives a further interpretation of Fisher information as the expected squared norm of evidence and a scale-free understanding of unbiased reasoning.
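To make the perturbation view above concrete, here is a minimal sketch in generic notation (p for the prior density, \ell for the likelihood; the symbols and normalization are illustrative, not taken from the abstract). Bayesian updating is simply an addition/perturbation of log-densities,

\[
\pi_{\mathrm{post}} = p \oplus \ell, \qquad \pi_{\mathrm{post}}(x) \propto p(x)\,\ell(x), \qquad \log \pi_{\mathrm{post}}(x) = \log p(x) + \log \ell(x) - \mathrm{const},
\]

and weighting an observation by a factor $w$ corresponds to scalar multiplication in this geometry, i.e. adding $w \log \ell(x)$ instead of $\log \ell(x)$.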
Abstract:
Many definitions and debates exist about the core characteristics of social and solidarity economy (SSE) and its actors. Among others, legal forms, profit, geographical scope, and size as criteria for identifying SSE actors often reveal dissents among SSE scholars. Instead of using a dichotomous, either-in-or-out definition of SSE actors, this paper presents an assessment tool that takes into account multiple dimensions to offer a more comprehensive and nuanced view of the field. We first define the core dimensions of the assessment tool by synthesizing the multiple indicators found in the literature. We then empirically test these dimensions and their interrelatedness and seek to identify potential clusters of actors. Finally we discuss the practical implications of our model.
Abstract:
BACKGROUND: Creatinine clearance is the most common method used to assess glomerular filtration rate (GFR). In children, GFR can also be estimated without urine collection, using the formula GFR (mL/min x 1.73 m2) = K x height [cm]/Pcr [µmol/L], where Pcr represents the plasma creatinine concentration. K is usually calculated using creatinine clearance (Ccr) as an index of GFR. The aim of the present study was to evaluate the reliability of the formula, using the standard UV/P inulin clearance to calculate K. METHODS: Clearance data obtained in 200 patients (1 month to 23 years) during the years 1988-1994 were used to calculate the factor K as a function of age. Forty-four additional patients were studied prospectively in conditions of either hydropenia or water diuresis in order to evaluate the possible variation of K as a function of urine flow rate. RESULTS: When GFR was estimated by the standard inulin clearance, the calculated values of K were 39 (infants less than 6 months), 44 (1-2 years) and 47 (2-12 years). The correlation between the values of GFR estimated by the formula and the values measured by the standard clearance of inulin was highly significant; the scatter of individual values was, however, substantial. When K was calculated using Ccr, the formula overestimated Cin at all urine flow rates. When calculated from Ccr, K varied as a function of urine flow rate (K = 50 at urine flow rates of 3.5 and K = 64 at urine flow rates of 8.5 mL/min x 1.73 m2). When calculated from Cin, in the same conditions, K remained constant with a value of 50. CONCLUSIONS: The formula GFR = K x H/Pcr can be used to estimate GFR. The scatter of values, however, precludes the use of the formula in pathophysiological studies. The formula should only be used when K is calculated from Cin, and the plasma creatinine concentration is measured in well defined conditions of hydration.
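A minimal sketch of the height/creatinine formula above, using the inulin-derived K values reported in the abstract (39, 44, 47); the age cut-offs between the reported brackets, the function names, and the worked example are illustrative assumptions:

```python
def k_factor(age_years: float) -> float:
    """K derived from inulin clearance (values quoted in the abstract).
    The boundary handling between age brackets is an assumption for illustration."""
    if age_years < 0.5:        # infants less than 6 months
        return 39.0
    elif age_years < 2.0:      # roughly the 1-2 year bracket
        return 44.0
    else:                      # 2-12 years
        return 47.0

def estimated_gfr(height_cm: float, plasma_creatinine_umol_l: float, age_years: float) -> float:
    """GFR (mL/min per 1.73 m2) = K * height [cm] / Pcr [umol/L]."""
    return k_factor(age_years) * height_cm / plasma_creatinine_umol_l

# Example: a 5-year-old, 110 cm tall, with Pcr = 45 umol/L
print(round(estimated_gfr(110, 45, 5), 1))  # ~114.9 mL/min per 1.73 m2
```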
Abstract:
Nonlinear regression problems can often be reduced to linearity by transforming the response variable (e.g., using the Box-Cox family of transformations). The classic estimates of the parameter defining the transformation as well as of the regression coefficients are based on the maximum likelihood criterion, assuming homoscedastic normal errors for the transformed response. These estimates are nonrobust in the presence of outliers and can be inconsistent when the errors are nonnormal or heteroscedastic. This article proposes new robust estimates that are consistent and asymptotically normal for any unimodal and homoscedastic error distribution. For this purpose, a robust version of conditional expectation is introduced for which the prediction mean squared error is replaced with an M scale. This concept is then used to develop a nonparametric criterion to estimate the transformation parameter as well as the regression coefficients. A finite sample estimate of this criterion based on a robust version of smearing is also proposed. Monte Carlo experiments show that the new estimates compare favorably with respect to the available competitors.
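As an illustration of the general idea (not the article's estimator, which is built on a robust conditional expectation and a robust version of smearing), one can choose the Box-Cox parameter by replacing the usual residual sum of squares of the scale-adjusted transform with a robust M-scale of the residuals; function names, the grid of candidate parameters, and the tuning constants below are assumptions:

```python
import numpy as np

def boxcox_normalized(y, lam):
    """Scale-adjusted Box-Cox transform of a positive response, comparable across lam."""
    gm = np.exp(np.mean(np.log(y)))                  # geometric mean of y
    if abs(lam) < 1e-8:
        return gm * np.log(y)
    return (y ** lam - 1.0) / (lam * gm ** (lam - 1.0))

def m_scale(r, c=1.547, b=0.5, n_iter=50):
    """M-scale of residuals with Tukey's bisquare rho (50% breakdown point)."""
    rho = lambda u: 1.0 - (1.0 - np.minimum(np.abs(u) / c, 1.0) ** 2) ** 3
    s = np.median(np.abs(r)) / 0.6745 + 1e-12        # MAD-based starting value
    for _ in range(n_iter):                          # fixed-point iteration for mean(rho(r/s)) = b
        s *= np.sqrt(np.mean(rho(r / s)) / b)
    return s

def robust_boxcox_fit(X, y, lambdas=np.linspace(-1.0, 2.0, 61)):
    """Pick the transformation parameter whose least-squares fit has the smallest robust
    residual scale (an illustrative simplification, not the article's criterion)."""
    Xd = np.column_stack([np.ones(len(y)), X])
    results = []
    for lam in lambdas:
        z = boxcox_normalized(y, lam)
        beta, *_ = np.linalg.lstsq(Xd, z, rcond=None)
        results.append((m_scale(z - Xd @ beta), lam, beta))
    best = min(results, key=lambda t: t[0])
    return best[1], best[2]                          # (lambda_hat, beta_hat)
```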
Abstract:
PURPOSE: To suppress the noise, by sacrificing some of the signal homogeneity for numerical stability, in uniform T1-weighted (T1w) images obtained with the magnetization-prepared 2 rapid gradient echoes sequence (MP2RAGE), and to compare the clinical utility of these robust T1w images against the uniform T1w images. MATERIALS AND METHODS: 8 healthy subjects (29.0±4.1 years; 6 male), who provided written consent, underwent two scan sessions within a 24-hour period on a 7T head-only scanner. The uniform and robust T1w image volumes were calculated inline on the scanner. Two experienced radiologists qualitatively rated the images for general image quality, 7T-specific artefacts, and local structure definition. Voxel-based and volume-based morphometry packages were used to compare the segmentation quality between the uniform and robust images. Statistical differences were evaluated using a positive-sided Wilcoxon rank test. RESULTS: The robust image suppresses background noise inside and outside the skull. The inhomogeneity introduced was ranked as mild. The robust image was ranked significantly higher than the uniform image by both observers (observer 1/2, p-value = 0.0006/0.0004). In particular, an improved delineation of the pituitary gland and cerebellar lobes was observed in the robust versus the uniform T1w image. The reproducibility of the segmentation results between repeat scans improved (p-value = 0.0004), from an average volumetric difference across structures of ≈6.6% for the uniform image to ≈2.4% for the robust T1w image. CONCLUSIONS: The robust T1w image enables MP2RAGE to produce clinically familiar T1w images, in addition to T1 maps, which can be readily used in uniform morphometry packages.
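The abstract does not give the combination formula; a commonly used form of the noise-suppressed ("robust") MP2RAGE combination adds a regularization term β to the standard uniform ratio of the two inversion images. The sketch below rests on that assumption (variable names are illustrative):

```python
import numpy as np

def mp2rage_uniform(inv1, inv2):
    """Standard uniform T1w combination of the two complex MP2RAGE inversion images."""
    return np.real(np.conj(inv1) * inv2) / (np.abs(inv1) ** 2 + np.abs(inv2) ** 2)

def mp2rage_robust(inv1, inv2, beta):
    """Noise-suppressed ("robust") combination: the regularization term beta trades a mild
    signal inhomogeneity for numerical stability in low-signal (background) voxels.
    The exact form used inline on the scanner is an assumption, not quoted from the abstract."""
    return (np.real(np.conj(inv1) * inv2) - beta) / (np.abs(inv1) ** 2 + np.abs(inv2) ** 2 + 2 * beta)
```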
Abstract:
This paper proposes to estimate the covariance matrix of stock returns by an optimally weighted average of two existing estimators: the sample covariance matrix and the single-index covariance matrix. This method is generally known as shrinkage, and it is standard in decision theory and in empirical Bayesian statistics. Our shrinkage estimator can be seen as a way to account for extra-market covariance without having to specify an arbitrary multi-factor structure. For NYSE and AMEX stock returns from 1972 to 1995, it can be used to select portfolios with significantly lower out-of-sample variance than a set of existing estimators, including multi-factor models.
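A minimal sketch of the shrinkage combination described above: a weighted average of the sample covariance matrix and a single-index (market model) target. The shrinkage weight `delta` is left as a user input here; the paper derives an optimal weight, which this sketch does not reproduce:

```python
import numpy as np

def single_index_target(returns, market):
    """Covariance implied by a single-index (market) model:
    F = var(m) * beta beta' + diag(residual variances).
    `returns` is a T x N array of asset returns, `market` a length-T market return series."""
    mkt = market - market.mean()
    dem = returns - returns.mean(axis=0)
    betas = dem.T @ mkt / (mkt @ mkt)                 # OLS market betas
    resid = dem - np.outer(mkt, betas)                # single-index residuals
    return np.var(market, ddof=1) * np.outer(betas, betas) + np.diag(resid.var(axis=0, ddof=1))

def shrunk_covariance(returns, market, delta):
    """Weighted average of the single-index target F and the sample covariance S.
    `delta` in [0, 1]; delta = 0 gives the sample covariance, delta = 1 the target."""
    S = np.cov(returns, rowvar=False)
    F = single_index_target(returns, market)
    return delta * F + (1.0 - delta) * S
```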
Abstract:
In this article we propose using small area estimators to improve the estimates of both the small and large area parameters. When the objective is to estimate parameters at both levels accurately, optimality is achieved by a mixed sample design of fixed and proportional allocations. In the mixed sample design, once a sample size has been determined, one fraction of it is distributed proportionally among the different small areas while the rest is evenly distributed among them. We use Monte Carlo simulations to assess the performance of the direct estimator and two composite covariant-free small area estimators, for different sample sizes and different sample distributions. Performance is measured in terms of the Mean Squared Error (MSE) of both small and large area parameters. It is found that the adoption of small area composite estimators opens the possibility of 1) reducing the sample size when precision is given, or 2) improving precision for a given sample size.
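A small sketch of the mixed sample design described above: part of the total sample is spread evenly across the small areas and the rest proportionally to their sizes. The 50/50 split, the rounding rule, and the example figures are illustrative assumptions, not values from the article:

```python
def mixed_allocation(total_n, area_sizes, fixed_fraction=0.5):
    """Split a total sample between small areas: an even (fixed) part plus a proportional part."""
    k = len(area_sizes)
    even_part = fixed_fraction * total_n / k
    pop = sum(area_sizes)
    return [round(even_part + (1 - fixed_fraction) * total_n * s / pop) for s in area_sizes]

# Example: 600 units over three areas of very different population sizes
print(mixed_allocation(600, [10_000, 2_000, 500]))  # [340, 148, 112]
```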
Abstract:
Agent-based computational economics is becoming widely used in practice. This paper explores the consistency of some of its standard techniques. We focus in particular on prevailing wholesale electricity trading simulation methods. We include different supply and demand representations and propose the Experience-Weighted Attraction method to include several behavioural algorithms. We compare the results across assumptions and to economic theory predictions. The match is good under best-response and reinforcement learning but not under fictitious play. The simulations perform well under flat and upward-sloping supply bidding, and also for plausible demand elasticity assumptions. Learning is influenced by the number of bids per plant and the initial conditions. The overall conclusion is that agent-based simulation assumptions are far from innocuous. We link their performance to underlying features, and identify those that are better suited to model wholesale electricity markets.
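For reference, a minimal sketch of the Experience-Weighted Attraction update in its standard Camerer-Ho form, which nests reinforcement learning and belief-based learning as special cases; the parameter values and function names are placeholders, not the calibration used in the paper:

```python
import numpy as np

def ewa_update(attractions, experience, chosen, payoffs, phi=0.9, delta=0.5, rho=0.9):
    """One EWA step: N(t) = rho*N(t-1) + 1 and
    A_j(t) = (phi*N(t-1)*A_j(t-1) + [delta + (1-delta)*1{j chosen}] * payoff_j) / N(t).
    `payoffs[j]` is the payoff action j would have earned this round; `chosen` is the action played."""
    new_experience = rho * experience + 1.0
    weights = np.full(len(attractions), delta)
    weights[chosen] = 1.0                      # delta + (1 - delta) for the action actually played
    new_attractions = (phi * experience * attractions + weights * payoffs) / new_experience
    return new_attractions, new_experience

def choice_probabilities(attractions, lam=1.0):
    """Logit response to the current attractions."""
    z = lam * (attractions - attractions.max())
    expz = np.exp(z)
    return expz / expz.sum()
```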
Abstract:
We introduce a simple new hypothesis testing procedure which, based on an independent sample drawn from a certain density, detects which of $k$ nominal densities the true density is closest to under the total variation ($L_{1}$) distance. We obtain a density-free uniform exponential bound for the probability of false detection.
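The abstract does not describe the test's construction; the sketch below shows the classical Scheffé-set tournament often used for this kind of L1 selection problem, purely to illustrate the task (it should not be read as the authors' procedure; all names are illustrative):

```python
import numpy as np

def scheffe_select(sample, densities, grid):
    """Select among k nominal 1-D densities via pairwise Scheffe sets
    A_ij = {x : f_i(x) > f_j(x)}: density i beats j if its mass on A_ij is closer
    to the empirical mass of A_ij than j's is. `densities` are callables, `grid` a
    uniform grid covering the support, `sample` a 1-D array of observations."""
    k, dx = len(densities), grid[1] - grid[0]
    f_grid = np.array([f(grid) for f in densities])        # k x m values for integration
    f_samp = np.array([f(sample) for f in densities])      # k x n values for empirical masses
    wins = np.zeros(k, dtype=int)
    for i in range(k):
        for j in range(i + 1, k):
            on_grid = f_grid[i] > f_grid[j]                 # Scheffe set, on the grid
            emp = np.mean(f_samp[i] > f_samp[j])            # empirical measure of the set
            mass_i = np.sum(f_grid[i][on_grid]) * dx        # integral of f_i over the set
            mass_j = np.sum(f_grid[j][on_grid]) * dx
            wins[i if abs(mass_i - emp) < abs(mass_j - emp) else j] += 1
    return int(np.argmax(wins))                             # index of the selected density
```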
Abstract:
We analyze the effects of neutral and investment-specific technology shocks on hours and output. Long cycles in hours are captured in a variety of ways. Hours robustly fall in response to neutral shocks and robustly increase in response to investment-specific shocks. The percentage of the variance of hours (output) explained by neutral shocks is small (large); the opposite is true for investment-specific shocks. News shocks are uncorrelated with the estimated technology shocks.