976 results for function estimation


Relevance: 30.00%

Abstract:

BACKGROUND: Creatinine clearance is the most common method used to assess glomerular filtration rate (GFR). In children, GFR can also be estimated without urine collection, using the formula GFR (mL/min × 1.73 m²) = K × height [cm] / Pcr [µmol/L], where Pcr represents the plasma creatinine concentration. K is usually calculated using creatinine clearance (Ccr) as an index of GFR. The aim of the present study was to evaluate the reliability of the formula, using the standard UV/P inulin clearance to calculate K. METHODS: Clearance data obtained in 200 patients (aged 1 month to 23 years) during the years 1988-1994 were used to calculate the factor K as a function of age. Forty-four additional patients were studied prospectively under conditions of either hydropenia or water diuresis in order to evaluate the possible variation of K as a function of urine flow rate. RESULTS: When GFR was estimated by the standard inulin clearance, the calculated values of K were 39 (infants less than 6 months), 44 (1-2 years) and 47 (2-12 years). The correlation between the values of GFR estimated by the formula and the values measured by the standard clearance of inulin was highly significant; the scatter of individual values was, however, substantial. When K was calculated using Ccr, the formula overestimated Cin at all urine flow rates. When calculated from Ccr, K varied as a function of urine flow rate (K = 50 at a urine flow rate of 3.5 and K = 64 at a urine flow rate of 8.5 mL/min × 1.73 m²). When calculated from Cin under the same conditions, K remained constant at a value of 50. CONCLUSIONS: The formula GFR = K × H/Pcr can be used to estimate GFR. The scatter of values, however, precludes the use of the formula in pathophysiological studies. The formula should only be used when K is calculated from Cin and the plasma creatinine concentration is measured under well defined conditions of hydration.
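
The estimate reduces to a one-line computation. A minimal sketch in Python, using the age-dependent K values reported above (the height and creatinine inputs in the example are made up for illustration):

```python
def estimate_gfr(height_cm: float, pcr_umol_l: float, k: float) -> float:
    """Estimate GFR (mL/min x 1.73 m2) as K * height / plasma creatinine."""
    return k * height_cm / pcr_umol_l

# K values derived from inulin clearance in the study:
# 39 (infants < 6 months), 44 (1-2 years), 47 (2-12 years).
print(estimate_gfr(height_cm=110.0, pcr_umol_l=50.0, k=47))  # ~103 mL/min x 1.73 m2
```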

Relevance: 30.00%

Abstract:

We set up a dynamic model of firm investment in which liquidity constraints enter explicitly into the firm's maximization problem. The optimal policy rules are incorporated into a maximum likelihood procedure which estimates the structural parameters of the model. Investment is positively related to the firm's internal financial position when the firm is relatively poor. This relationship disappears for wealthy firms, which can reach their desired level of investment. Borrowing is an increasing function of financial position for poor firms. This relationship is reversed as a firm's financial position improves, and large firms hold little debt. Liquidity-constrained firms may have unused credit lines and the capacity to invest further if they desire. However, the fear that liquidity constraints will become binding in the future induces them to invest only when internal resources increase. We estimate the structural parameters of the model and use them to quantify the importance of liquidity constraints for firms' investment. We find that liquidity constraints matter significantly for the investment decisions of firms. If firms can finance investment by issuing fresh equity, rather than with internal funds or debt, the average capital stock is almost 35% higher over a period of 20 years. Transitory shocks to internal funds have a sustained effect on the capital stock. This effect lasts for several periods and is more persistent for small firms than for large firms. A 10% negative shock to firm fundamentals reduces the capital stock of firms which face liquidity constraints by almost 8% over a period, as opposed to only 3.5% for firms which do not face these constraints.

Relevance: 30.00%

Abstract:

A new parametric minimum distance time-domain estimator for ARFIMA processes is introduced in this paper. The proposed estimator minimizes the sum of squared correlations of residuals obtained after filtering a series through the ARFIMA parameters. The estimator is easy to compute and is consistent and asymptotically normally distributed for fractionally integrated (FI) processes with an integration order d strictly greater than -0.75. Therefore, it can be applied to both stationary and non-stationary processes. Deterministic components are also allowed in the DGP. Furthermore, as a by-product, the estimation procedure provides an immediate check on the adequacy of the specified model, because the criterion function, when evaluated at the estimated values, coincides with the Box-Pierce goodness-of-fit statistic. Empirical applications and Monte Carlo simulations supporting the analytical results and showing the good performance of the estimator in finite samples are also provided.
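
A minimal sketch of the criterion for the special case of a pure FI(d) process (no AR or MA parts), with the fractional filter applied through its AR(∞) expansion; the full ARFIMA case would minimize the same Box-Pierce-type objective over all parameters. This is an illustration of the idea, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def frac_diff(y, d):
    """Apply the fractional difference filter (1 - L)^d to a series."""
    n = len(y)
    pi = np.ones(n)
    for j in range(1, n):
        pi[j] = pi[j - 1] * (j - 1 - d) / j   # AR(inf) weights of (1 - L)^d
    return np.array([np.dot(pi[: t + 1], y[t::-1]) for t in range(n)])

def criterion(d, y, m=20):
    """Sum of squared residual autocorrelations (Box-Pierce form)."""
    e = frac_diff(y, d)
    e = e - e.mean()
    acf = np.array([np.dot(e[k:], e[:-k]) for k in range(1, m + 1)]) / np.dot(e, e)
    return len(y) * np.sum(acf ** 2)

# toy check on white noise, for which the true d is 0
rng = np.random.default_rng(0)
y = rng.standard_normal(500)
res = minimize_scalar(criterion, bounds=(-0.49, 0.49), args=(y,), method="bounded")
print(res.x)  # estimate of d, close to 0
```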

Relevance: 30.00%

Abstract:

We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical {\sc vc} dimension, empirical {\sc vc} entropy, andmargin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
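
The label-flipping equivalence can be sketched directly: fitting a classifier to the sample with second-half labels flipped approximately maximizes the error gap between the two halves. A rough sketch using a depth-limited decision tree as the hypothesis class (synthetic data; the ERM step is only approximate, since tree fitting is greedy):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def maximal_discrepancy(X, y, make_classifier):
    """Estimate the maximal discrepancy penalty by (approximate) empirical
    risk minimization on the sample with second-half labels flipped."""
    n = len(y) // 2
    y_flip = y.copy()
    y_flip[n:] = 1 - y_flip[n:]          # flip labels of the second half
    clf = make_classifier().fit(X, y_flip)
    pred = clf.predict(X)
    err1 = np.mean(pred[:n] != y[:n])    # error on first half, true labels
    err2 = np.mean(pred[n:] != y[n:])    # error on second half, true labels
    return err2 - err1                   # discrepancy achieved by the ERM

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
y = (X[:, 0] > 0).astype(int)
print(maximal_discrepancy(X, y, lambda: DecisionTreeClassifier(max_depth=3)))
```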

Relevance: 30.00%

Abstract:

Two methods were evaluated for scaling a set of semivariograms into a unified function for kriging estimation of field-measured properties. Scaling is performed using sample variances and sills of the individual semivariograms as scale factors. Theoretical developments show that kriging weights are independent of the scaling factor, which appears simply as a constant multiplying both sides of the kriging equations. The scaling techniques were applied to four sets of semivariograms representing spatial scales of 30 x 30 m to 600 x 900 km. The experimental semivariograms in each set were successfully coalesced into a single curve by the variances and sills of the individual semivariograms. To evaluate the scaling techniques, kriged estimates derived from scaled semivariogram models were compared with those derived from unscaled models. Differences in kriged estimates on the order of 5% were found for the cases in which the scaling technique was not successful in coalescing the individual semivariograms, which also means that the spatial variability of these properties is different. The proposed scaling techniques enhance the interpretation of semivariograms when a variety of measurements are made at the same location. They also reduce computational time for kriging estimation, because kriging weights only need to be calculated for one variable; the weights remain unchanged for all other variables in the data set whose semivariograms are scaled.
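
The scale invariance of the kriging weights is easy to verify numerically: dividing both the semivariogram matrix and the right-hand side by the same constant leaves the solved weights unchanged (only the Lagrange multiplier rescales). A small sketch with an assumed one-dimensional spherical semivariogram model:

```python
import numpy as np

def ok_weights(gamma_matrix, gamma_vec):
    """Solve the ordinary kriging system (Lagrange row added) for the weights."""
    n = len(gamma_vec)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma_matrix
    A[n, n] = 0.0
    b = np.append(gamma_vec, 1.0)
    return np.linalg.solve(A, b)[:n]

def sph(h, sill=2.0, rng_=10.0):
    """Spherical semivariogram model."""
    h = np.minimum(h, rng_)
    return sill * (1.5 * h / rng_ - 0.5 * (h / rng_) ** 3)

pts = np.array([0.0, 3.0, 7.0])          # sample locations
x0 = 4.0                                 # estimation point
G = sph(np.abs(pts[:, None] - pts[None, :]))
g = sph(np.abs(pts - x0))

w_unscaled = ok_weights(G, g)
w_scaled = ok_weights(G / 2.0, g / 2.0)  # scale by the sill
print(np.allclose(w_unscaled, w_scaled)) # True: weights are scale-invariant
```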

Relevance: 30.00%

Abstract:

The purposes of this study were to characterize the performance of a 3-dimensional (3D) ordered-subset expectation maximization (OSEM) algorithm in the quantification of left ventricular (LV) function with (99m)Tc-labeled agent gated SPECT (G-SPECT), the QGS program, and a beating-heart phantom and to optimize the reconstruction parameters for clinical applications. METHODS: A G-SPECT image of a dynamic heart phantom simulating the beating left ventricle was acquired. The exact volumes of the phantom were known and were as follows: end-diastolic volume (EDV) of 112 mL, end-systolic volume (ESV) of 37 mL, and stroke volume (SV) of 75 mL; these volumes produced an LV ejection fraction (LVEF) of 67%. Tomographic reconstructions were obtained after 10-20 iterations (I) with 4, 8, and 16 subsets (S) at full width at half maximum (FWHM) gaussian postprocessing filter cutoff values of 8-15 mm. The QGS program was used for quantitative measurements. RESULTS: Measured values ranged from 72 to 92 mL for EDV, from 18 to 32 mL for ESV, and from 54 to 63 mL for SV, and the calculated LVEF ranged from 65% to 76%. Overall, the combination of 10 I, 8 S, and a cutoff filter value of 10 mm produced the most accurate results. The plot of the measures with respect to the expectation maximization-equivalent iterations (I x S product) revealed a bell-shaped curve for the LV volumes and a reverse distribution for the LVEF, with the best results in the intermediate range. In particular, FWHM cutoff values exceeding 10 mm affected the estimation of the LV volumes. CONCLUSION: The QGS program is able to correctly calculate the LVEF when used in association with an optimized 3D OSEM algorithm (8 S, 10 I, and FWHM of 10 mm) but underestimates the LV volumes. However, various combinations of technical parameters, including a limited range of I and S (80-160 expectation maximization-equivalent iterations) and low cutoff values (≤10 mm) for the gaussian postprocessing filter, produced results with similar accuracies and without clinically relevant differences in the LV volumes and the estimated LVEF.
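
The phantom's ground-truth values are tied together by two identities, SV = EDV - ESV and LVEF = SV / EDV, sketched below with the study's numbers:

```python
def lv_function(edv_ml: float, esv_ml: float):
    """Stroke volume (mL) and ejection fraction (%) from LV volumes."""
    sv = edv_ml - esv_ml
    lvef = 100.0 * sv / edv_ml
    return sv, lvef

# phantom ground truth from the study: EDV 112 mL, ESV 37 mL
print(lv_function(112.0, 37.0))  # (75.0, ~67%)
```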

Relevance: 30.00%

Abstract:

We propose an iterative procedure to minimize the sum-of-squares function which avoids the nonlinear nature of estimating the first-order moving average parameter and provides a closed form for the estimator. The asymptotic properties of the method are discussed, and the consistency of the linear least squares estimator is proved for the invertible case. We perform various Monte Carlo experiments in order to compare the sample properties of the linear least squares estimator with its nonlinear counterpart for the conditional and unconditional cases. Some examples are also discussed.
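
The abstract does not spell out the linearization. One closed-form two-step scheme in the same spirit, a Hannan-Rissanen-type regression shown here as an assumed illustration rather than the authors' exact procedure, is:

```python
import numpy as np

def ma1_linear_ls(y, p=10):
    """Two-step linear least squares for an MA(1) model y_t = e_t + theta*e_{t-1}.
    Step 1: approximate the innovations via a long AR(p) regression.
    Step 2: regress y_t on the lagged innovation estimate (closed-form OLS)."""
    y = y - y.mean()
    n = len(y)
    # Step 1: long autoregression to recover innovation estimates
    Y = np.column_stack([y[p - k: n - k] for k in range(1, p + 1)])
    phi, *_ = np.linalg.lstsq(Y, y[p:], rcond=None)
    e_hat = y[p:] - Y @ phi
    # Step 2: OLS of y_t on e_hat_{t-1}; valid since e_t and e_{t-1} are orthogonal
    return np.dot(y[p + 1:], e_hat[:-1]) / np.dot(e_hat[:-1], e_hat[:-1])

rng = np.random.default_rng(2)
e = rng.standard_normal(2001)
y = e[1:] + 0.5 * e[:-1]      # MA(1) with theta = 0.5
print(ma1_linear_ls(y))       # roughly 0.5
```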

Relevance: 30.00%

Abstract:

Although the determination of remaining phosphorus (Prem) is simple, accurate values could also be estimated with a pedotransfer function (PTF), aiming at the additional use of soil analysis data and/or the replacement of Prem by an even simpler determination. The purpose of this paper was to develop a pedotransfer function to estimate Prem values of soils of the State of São Paulo based on properties with easier or routine laboratory determination. A pedotransfer function was developed by artificial neural networks (ANN) from a database of Prem values, pH values measured in 1 mol L-1 NaF solution (pH NaF), and soil chemical and physical properties of samples collected during soil classification activities carried out in the State of São Paulo by the Agronomic Institute of Campinas (IAC). Furthermore, a pedotransfer function was developed by regressing Prem values against the same predictor variables as the ANN-based PTF. Results showed that Prem values can be calculated more accurately with the ANN-based pedotransfer function using pH NaF values along with the sum of exchangeable bases (SB) and the exchangeable aluminum (Al3+) soil content as input variables. In addition, the accuracy of the Prem estimates by the ANN-based PTF was more sensitive to increases in the size of the experimental database. Although the database used in this study was not comprehensive enough for the establishment of a definitive pedotransfer function for Prem estimation, the results indicate that including Prem and pH NaF measurements among routine soil testing evaluations is promising, as it would provide a larger database for the development of an ANN-based pedotransfer function for accurate Prem estimates from pH NaF, SB, and Al3+ values.
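
A minimal sketch of an ANN-based PTF with the three selected predictors (pH NaF, SB, Al3+); the training data below are synthetic placeholders, not the IAC database, and the network size is an arbitrary choice:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# placeholder data: columns are pH NaF, SB (mmolc/dm3), Al3+ (mmolc/dm3)
rng = np.random.default_rng(3)
X = rng.uniform([7.0, 5.0, 0.0], [11.0, 80.0, 20.0], size=(100, 3))
prem = 60 - 4 * X[:, 0] + 0.2 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(0, 2, 100)

ptf = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                                 max_iter=5000, random_state=0))
ptf.fit(X, prem)
print(ptf.predict([[9.5, 40.0, 2.0]]))   # estimated Prem for a new sample
```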

Relevance: 30.00%

Abstract:

Detailed knowledge of water percolation into the soil in irrigated areas is fundamental for solving problems of drainage, pollution and the recharge of underground aquifers. The aim of this study was to evaluate the percolation estimated by time-domain reflectometry (TDR) in a drainage lysimeter. We used Darcy's law with K(θ) functions determined by field and laboratory methods and by the change in water storage in the soil profile at 16 moisture measurement points at different time intervals. A sandy clay soil was saturated and covered with a plastic sheet to prevent evaporation, and an internal drainage trial was carried out in a drainage lysimeter. The relationship between the observed and estimated percolation values was evaluated by linear regression analysis. The results suggest that percolation in the field or laboratory can be estimated based on continuous monitoring with TDR, at short time intervals, of the variations in soil water storage. The precision and accuracy of this approach are similar to those of the lysimeter, and it has advantages over the other evaluated methods, the most relevant being the possibility of estimating percolation at short time intervals and the exemption from predetermination of soil hydraulic properties such as water retention and hydraulic conductivity. The estimates obtained by the Darcy-Buckingham equation for percolation levels, using the K(θ) function predicted by the method of Hillel et al. (1972), were compatible with those obtained in the lysimeter at time intervals greater than 1 h. The methods of Libardi et al. (1980), Sisson et al. (1980) and van Genuchten (1980) underestimated water percolation.
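
With the surface covered, the percolation flux at the bottom of the profile equals the rate of decrease of water storage above it. A small sketch under that assumption (the depth spacing and readings are illustrative, not the experimental values):

```python
import numpy as np

def percolation_flux(theta_t1, theta_t2, dz_cm, dt_h):
    """Percolation flux (cm/h) at the profile bottom, from the decrease in
    profile water storage between two TDR readings, assuming zero flux at
    the covered soil surface."""
    s1 = float(np.sum(theta_t1) * dz_cm)   # storage (cm) at the first reading
    s2 = float(np.sum(theta_t2) * dz_cm)   # storage (cm) at the second reading
    return (s1 - s2) / dt_h

# water contents (m3/m3) at 16 measurement depths, 5 cm apart, read 1 h apart
theta_a = np.linspace(0.42, 0.38, 16)
theta_b = theta_a - 0.001                  # slight drainage between readings
print(percolation_flux(theta_a, theta_b, dz_cm=5.0, dt_h=1.0))  # 0.08 cm/h
```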

Relevance: 30.00%

Abstract:

Field capacity (FC) is a parameter widely used in applied soil science. However, its in situ method of determination may be difficult to apply, generally because of the need for large supplies of water at the test sites. Ottoni Filho et al. (2014) proposed a standardized procedure for field determination of FC and showed that such in situ FC can be estimated by a linear pedotransfer function (PTF) based on the volumetric soil water content at a matric potential of -6 kPa [θ(6)] for the same soils used in the present study. The objective of this study was to use soil moisture data below a double-ring infiltrometer, measured 48 h after the end of the infiltration test, to develop PTFs for standard in situ FC. We found that such ring FC data were on average 0.03 m³ m⁻³ greater than standard FC values. The linear PTF developed for the ring FC data based only on θ(6) was nearly as accurate as the equivalent PTF reported by Ottoni Filho et al. (2014), which was developed for the standard FC data. The root mean squared residues of FC determined from both PTFs were about 0.02 m³ m⁻³. The proposed method has the advantage of estimating the soil in situ FC using the water applied in the infiltration test.
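
A hedged sketch of such a linear PTF; the coefficients below are hypothetical placeholders for illustration, not the fitted values reported by Ottoni Filho et al. (2014):

```python
def field_capacity(theta6: float, a: float = 0.05, b: float = 0.9) -> float:
    """Estimate in situ field capacity (m3/m3) from the volumetric water
    content at -6 kPa matric potential; a and b are placeholder coefficients."""
    return a + b * theta6

print(field_capacity(0.30))  # 0.32 m3/m3 with the placeholder coefficients
```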

Relevance: 30.00%

Abstract:

Estimation of soil load-bearing capacity from mathematical models that relate preconsolidation pressure (σp) to mechanical resistance to penetration (PR) and gravimetric soil water content (U) is important for defining strategies to prevent compaction of agricultural soils. Our objective was therefore to model σp and the compression index (CI) as functions of PR (measured with an impact penetrometer in the field and a static penetrometer inserted at a constant rate in the laboratory) and U in a Rhodic Eutrudox. The experiment consisted of six treatments: no-tillage system (NT); NT with chiseling; and NT with additional compaction by combine traffic (passing 4, 8, 10, and 20 times). Soil bulk density, total porosity, PR (field and laboratory measurements), U, σp, and CI values were determined in the 5.5-10.5 cm and 13.5-18.5 cm layers. Preconsolidation pressure (σp) and CI were modeled as functions of PR at different U. σp increased and CI decreased linearly with increasing PR values. The correlations between σp and PR and between CI and PR are influenced by U. From these correlations, the soil load-bearing capacity and compaction susceptibility can be estimated from PR readings taken at different U.
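
Fitting the linear σp-PR relationship at a fixed U is a one-line regression; the numbers below are illustrative, not the measured Rhodic Eutrudox data:

```python
import numpy as np

# illustrative sigma_p (kPa) versus penetration resistance PR (MPa) pairs,
# all taken at one gravimetric water content U
pr = np.array([0.8, 1.5, 2.3, 3.1, 4.0])
sigma_p = np.array([95.0, 140.0, 188.0, 230.0, 280.0])

b, a = np.polyfit(pr, sigma_p, 1)          # sigma_p = a + b * PR at this U
print(f"sigma_p = {a:.1f} + {b:.1f} * PR")
print(a + b * 2.0)                         # load-bearing estimate for PR = 2 MPa
```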

Relevance: 30.00%

Abstract:

Preface. The starting point for this work, and eventually the subject of the whole thesis, was the question of how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, has made them the models of choice for many theoretical constructions and practical applications. At the same time, estimation of the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem comes from the variance process, which is non-observable. There are several estimation methodologies that deal with the estimation of latent variables. One appeared particularly interesting: the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function, which, in contrast to the other methods, requires neither discretization nor simulation of the process. However, the procedure had been derived only for stochastic volatility models without jumps. Thus, it became the subject of my research. This thesis consists of three parts. Each is written as an independent and self-contained article; at the same time, the questions answered by the second and third parts arise naturally from the issues investigated and the results obtained in the first. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, and of the whole thesis, is a closed-form expression for the joint unconditional characteristic function of the stochastic volatility jump-diffusion models. The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equation are relevant for modelling returns of the S&P500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is: what jump process should be used to model returns of the S&P500? The decision about the jump process in the framework of affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, a constant or some function of the state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for the asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either the exponential or the double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data arose as soon as the first estimation results of the first chapter were obtained. In the absence of a benchmark or any ground for comparison, there is no way to be sure that our parameter estimates and the true parameters of the models coincide.
The conclusion of the second chapter provides one more reason for such a test. Thus, the third part of this thesis concentrates on the estimation of parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets. The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of finding the true parameters, and the third chapter proves that our estimator indeed has this ability. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question follows immediately: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used in its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure is. In practice, however, this relationship is not so straightforward, owing to increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated; the computational effort can thus be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of the estimators with bi- and three-dimensional unconditional characteristic functions on simulated data. It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, due to the limitations of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for the estimation of parameters of stochastic volatility jump-diffusion models.
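
The ECF principle (match the empirical characteristic function to the model's characteristic function over a grid of frequencies) can be sketched on a toy model where the characteristic function is elementary; here a Gaussian stands in for the closed-form stochastic volatility jump-diffusion characteristic function derived in the thesis, and the weight function is an arbitrary choice:

```python
import numpy as np
from scipy.optimize import minimize

def ecf(u, x):
    """Empirical characteristic function evaluated on a grid u."""
    return np.exp(1j * np.outer(u, x)).mean(axis=1)

def cecf_objective(params, u, x):
    """Weighted L2 distance between empirical and model CF (toy Gaussian
    model; an SV jump-diffusion would supply its own closed-form CF)."""
    mu, sigma = params
    model_cf = np.exp(1j * u * mu - 0.5 * (sigma * u) ** 2)
    w = np.exp(-u ** 2)                  # weight damping high frequencies
    return np.sum(w * np.abs(ecf(u, x) - model_cf) ** 2)

rng = np.random.default_rng(4)
x = rng.normal(0.05, 0.2, size=2000)     # stand-in for observed log-returns
u = np.linspace(-5, 5, 101)
res = minimize(cecf_objective, x0=[0.0, 0.1], args=(u, x), method="Nelder-Mead")
print(res.x)                             # close to (0.05, 0.2)
```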

Relevance: 30.00%

Abstract:

Single amino acid substitution is the type of protein alteration most often related to human diseases. Current studies seek primarily to distinguish neutral mutations from harmful ones. Very few methods offer an explanation of the final prediction result in terms of the probable structural or functional effect on the protein. In this study, we describe the use of three novel parameters to identify experimentally verified critical residues of the TP53 protein (p53). The first two parameters make use of a surface clustering method to calculate the protein surface area of highly conserved regions or regions with a high non-local atomic interaction energy (ANOLEA) score. These parameters help identify important functional regions on the surface of a protein. The last parameter involves the use of a new method for pseudobinding free-energy estimation to specifically probe the importance of residue side-chains to the stability of the protein fold. A decision tree was designed to optimally combine these three parameters. The result was compared to the functional data stored in the International Agency for Research on Cancer (IARC) TP53 mutation database. The final prediction achieved an accuracy of 70% and a Matthews correlation coefficient of 0.45. It also showed a high specificity of 91.8%. Mutations in the 85 correctly identified important residues represented 81.7% of the total mutations recorded in the database. In addition, the method was able to correctly assign a probable functional or structural role to the residues. Such information could be critical for the interpretation and prediction of the effect of missense mutations, as it not only provides a fundamental explanation of the observed effect, but also helps design the most appropriate laboratory experiment to verify the prediction results.
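
The combination step can be sketched with a small decision tree over the three parameters; the features, labels, and thresholds below are synthetic placeholders for illustration, not the p53 data:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)
X = np.column_stack([
    rng.uniform(0, 200, 300),   # conserved surface-patch area (placeholder)
    rng.uniform(-20, 5, 300),   # ANOLEA-based surface energy score (placeholder)
    rng.uniform(-2, 8, 300),    # pseudobinding free-energy change (placeholder)
])
y = ((X[:, 0] > 120) | (X[:, 2] > 4)).astype(int)   # toy labeling rule

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(tree.predict([[150.0, -10.0, 1.0]]))  # 1 = predicted critical residue
```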

Relevance: 30.00%

Abstract:

Background: The combined serum creatinine (SCreat) and cystatin C (CysC) CKD-EPI formula constitutes a new advance for glomerular filtration rate (GFR) estimation in adults. Using inulin clearances (iGFRs), the revised SCreat and the combined Schwartz formulas, this study aims to evaluate the applicability of the combined CKD-EPI formula in children. Methods: 201 iGFRs for 201 children were analyzed and divided by chronic kidney disease (CKD) stage (iGFRs ≥90 ml/min/1.73 m², 90 > iGFRs > 60, and iGFRs ≤59) and by age group (<10, 10-15, and >15 years). Medians with 95% confidence intervals of bias, precision, and accuracy within 30% of the iGFRs, for all three formulas, were compared using the Wilcoxon signed-rank test. Results: For the entire cohort and for all CKD and age groups, medians of bias for the CKD-EPI formula were significantly higher (p < 0.001) and precision was significantly lower than for the SCreat-only and the combined SCreat and CysC Schwartz formulas. We also found that, with the CKD-EPI formula, bias decreased and accuracy increased as the age group increased, with the best formula performance above 15 years of age. However, the CKD-EPI formula's accuracy is 58%, compared with 93 and 92% for the SCreat and combined Schwartz formulas in this adolescent group. Conclusions: The performance of the combined CKD-EPI formula improves in adolescence compared with younger ages. Nevertheless, the CKD-EPI formula performs more poorly than the SCreat and the combined Schwartz formulas in the pediatric population.
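
The agreement metrics used above reduce to a few lines: bias is the median of the estimate-minus-measurement differences, precision its interquartile range, and accuracy the percentage of estimates within 30% of the measured iGFR. A sketch with illustrative numbers only, not the study data:

```python
import numpy as np

def agreement_stats(egfr, igfr):
    """Median bias, precision (IQR of the differences), and P30 accuracy
    of estimated versus measured GFR."""
    diff = egfr - igfr
    q1, med, q3 = np.percentile(diff, [25, 50, 75])
    p30 = 100.0 * np.mean(np.abs(diff) <= 0.30 * igfr)
    return med, q3 - q1, p30

rng = np.random.default_rng(6)
igfr = rng.uniform(30, 130, 200)          # measured inulin clearances
egfr = igfr + rng.normal(5, 12, 200)      # estimates with bias and noise
print(agreement_stats(egfr, igfr))        # (median bias, IQR, % within 30%)
```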