976 results for "function estimation"
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Ceramic parts are increasingly replacing metal parts due to their excellent physical, chemical and mechanical properties; however, these same properties make them difficult to manufacture by traditional machining methods. The developments carried out in this work are used to estimate tool wear during the grinding of advanced ceramics. The learning process was fed with data collected from a surface grinding machine with a tangential diamond wheel and alumina ceramic test specimens, in three cutting configurations: depths of cut of 120 µm, 70 µm and 20 µm. The grinding wheel speed was 35 m/s and the table speed 2.3 m/s. Four neural models were evaluated, namely: Multilayer Perceptron, Radial Basis Function, Generalized Regression Neural Networks and the Adaptive Neuro-Fuzzy Inference System. The models' performance evaluation routines were executed automatically, testing all possible combinations of inputs, number of neurons, number of layers, and spread. The computational results reveal that the neural models were highly successful in estimating tool wear, with errors lower than 4%.
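As a rough illustration of the kind of automated evaluation routine described above (not the authors' code), the sketch below performs an exhaustive search over input combinations and hidden-layer sizes, using scikit-learn's MLPRegressor as a stand-in for one of the four models; the data arrays and configuration lists are placeholders.

```python
# Exhaustive search over input subsets and network architectures (sketch only).
import itertools
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

X = np.random.rand(200, 4)   # placeholder grinding signals (e.g. monitored process variables)
y = np.random.rand(200)      # placeholder tool-wear measurements

input_subsets = [[0, 1], [0, 1, 2], [0, 1, 2, 3]]    # candidate input combinations
layer_sizes = [(10,), (20,), (10, 10), (20, 10)]     # neurons per hidden layer

best = None
for cols, layers in itertools.product(input_subsets, layer_sizes):
    model = MLPRegressor(hidden_layer_sizes=layers, max_iter=2000, random_state=0)
    score = cross_val_score(model, X[:, cols], y,
                            scoring="neg_mean_absolute_error", cv=5).mean()
    if best is None or score > best[0]:
        best = (score, cols, layers)

print("best configuration:", best)
```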
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
This study provides further developments of the evaluation procedure for J and CTOD in SE(T) fracture specimens based on plastic eta-factors and load separation analysis. Non-linear finite element analyses for plane-strain and 3-D models provide the relationship between plastic work and crack driving forces which define the eta-values. Further analyses based on the load separation method define alternative eta-values for the analyzed specimen configurations. Overall, the present results provide improved estimation equations for J and CTOD as a function of loading condition (pin load vs. clamp ends), crack geometry and strain hardening properties. (C) 2011 Elsevier Ltd. All rights reserved.
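For context, the η-factor approach referenced above is usually written in the following textbook form; the specimen-specific η-values and their dependence on loading condition, crack geometry and strain hardening are what this study calibrates, and the symbols below are the conventional ones rather than necessarily those of the paper.

```latex
J = \frac{K^2\,(1-\nu^2)}{E} + \frac{\eta_{pl}\, A_{pl}}{B\, b_0}
```

where A_pl is the plastic area under the load-displacement record, B the specimen thickness, b_0 the initial ligament length, K the stress intensity factor, E Young's modulus and ν Poisson's ratio.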
Abstract:
The extraction of information about neural activity timing from BOLD signal is a challenging task as the shape of the BOLD curve does not directly reflect the temporal characteristics of electrical activity of neurons. In this work, we introduce the concept of neural processing time (NPT) as a parameter of the biophysical model of the hemodynamic response function (HRF). Through this new concept we aim to infer more accurately the duration of neuronal response from the highly nonlinear BOLD effect. The face validity and applicability of the concept of NPT are evaluated through simulations and analysis of experimental time series. The results of both simulation and application were compared with summary measures of HRF shape. The experiment that was analyzed consisted of a decision-making paradigm with simultaneous emotional distracters. We hypothesize that the NPT in primary sensory areas, like the fusiform gyrus, is approximately the stimulus presentation duration. On the other hand, in areas related to processing of an emotional distracter, the NPT should depend on the experimental condition. As predicted, the NPT in fusiform gyrus is close to the stimulus duration and the NPT in dorsal anterior cingulate gyrus depends on the presence of an emotional distracter. Interestingly, the NPT in right but not left dorsal lateral prefrontal cortex depends on the stimulus emotional content. The summary measures of HRF obtained by a standard approach did not detect the variations observed in the NPT. Hum Brain Mapp, 2012. (C) 2010 Wiley Periodicals, Inc.
Abstract:
Background: Measurement of vital capacity (VC) by spirometry is the most widely used technique for lung function evaluation; however, this form of assessment is costly, and further investigation of other reliable, lower-cost methods is needed. Objective: To analyze the correlation between direct vital capacity measured with a ventilometer and with an incentive spirometer in patients before and after cardiac surgery. Methods: Cross-sectional comparative study with patients undergoing cardiac surgery. Respiratory parameters were evaluated through the measurement of VC performed with the ventilometer and the incentive spirometer. Data normality was assessed with the Kolmogorov-Smirnov test, correlation with the Pearson correlation coefficient, and comparison of variables between the pre- and postoperative periods with Student's t test. The significance level was set at 5%. Data are presented as mean, standard deviation and relative frequency where appropriate. Results: We studied 52 patients undergoing cardiac surgery: 20 preoperative patients with VC-ventilometer 32.95 ± 11.4 ml/kg and VC-spirometer 28.9 ± 11 ml/kg (r = 0.7, p < 0.001), and 32 postoperative patients with VC-ventilometer 28.27 ± 12.48 ml/kg and VC-spirometer 26.98 ± 11 ml/kg (r = 0.95, p < 0.001), indicating a very high correlation between the two assessment methods. Conclusion: There was a high correlation between direct VC measured with the ventilometer and with the incentive spirometer before and after CABG surgery. Nevertheless, further studies are needed to evaluate the impact of this method on reducing hospital costs.
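A minimal sketch (with placeholder values, not the study data) of the correlation analysis described in the Methods:

```python
import numpy as np
from scipy import stats

# Placeholder VC values (ml/kg) for one patient group
vc_ventilometer = np.array([31.2, 28.5, 40.1, 22.8, 35.6])
vc_spirometer = np.array([27.9, 25.4, 37.3, 20.1, 31.8])

# Kolmogorov-Smirnov normality check on standardized values
print(stats.kstest(stats.zscore(vc_ventilometer), "norm"))
# Pearson correlation between the two measurement methods
r, p = stats.pearsonr(vc_ventilometer, vc_spirometer)
print(f"r = {r:.2f}, p = {p:.3f}")
```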
Abstract:
The aim of this work is to propose a new method for estimating the backward flow directly from the optical flow. We assume that the optical flow has already been computed and we need to estimate the inverse mapping. This mapping is not bijective due to the presence of occlusions and disocclusions, so it is not possible to estimate the inverse function over the whole domain; values in these regions have to be guessed from the available information. We propose an accurate algorithm to calculate the backward flow uniquely from the optical flow, using a simple relation. Occlusions are filled by selecting the maximum motion, and disocclusions are filled with two different strategies: a min-fill strategy, which fills each disoccluded region with the minimum value around the region, and a restricted min-fill approach, which selects the minimum value in a close neighborhood. In the experimental results, we show the accuracy of the method and compare the results obtained with these two strategies.
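A minimal sketch of the general idea, assuming a dense forward flow stored as an (H, W, 2) array; this illustrates push-forward inversion with a crude hole-filling pass and is not the authors' implementation.

```python
import numpy as np

def backward_flow(u):
    """u: forward flow of shape (H, W, 2) with (dx, dy) per pixel."""
    H, W, _ = u.shape
    bwd = np.full((H, W, 2), np.nan)
    best_mag = np.full((H, W), -1.0)
    for y in range(H):
        for x in range(W):
            dx, dy = u[y, x]
            tx, ty = int(round(x + dx)), int(round(y + dy))
            if 0 <= tx < W and 0 <= ty < H:
                mag = dx * dx + dy * dy
                if mag > best_mag[ty, tx]:      # occlusion: keep the maximum motion
                    best_mag[ty, tx] = mag
                    bwd[ty, tx] = (-dx, -dy)
    # Disocclusions: fill remaining holes with the minimum-magnitude flow among
    # valid 8-neighbours (a crude stand-in for the min-fill strategies).
    for y, x in np.argwhere(np.isnan(bwd[..., 0])):
        neigh = [bwd[j, i] for j in range(max(0, y - 1), min(H, y + 2))
                 for i in range(max(0, x - 1), min(W, x + 2))
                 if not np.isnan(bwd[j, i, 0])]
        if neigh:
            bwd[y, x] = min(neigh, key=lambda v: v[0] ** 2 + v[1] ** 2)
    return bwd
```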
Abstract:
The objective of this thesis is the refined estimation of earthquake source parameters. For this purpose we used two different approaches, one in the frequency domain and the other in the time domain. In the frequency domain, we analyzed the P- and S-wave displacement spectra to estimate the spectral parameters, namely the corner frequencies and the low-frequency spectral amplitudes. We used a parametric modeling approach combined with a multi-step, non-linear inversion strategy that includes corrections for attenuation and site effects. The iterative multi-step procedure was applied to about 700 microearthquakes in the moment range 10^11-10^14 N·m, recorded at the dense, wide-dynamic-range seismic networks operating in the Southern Apennines (Italy). The analysis of source parameters is often complicated when the propagation cannot be modeled accurately. In this case the empirical Green function approach is a very useful tool to study seismic source properties: Empirical Green Functions (EGFs) allow the contribution of propagation and site effects to the signal to be represented without using approximate velocity models. An EGF is a recorded three-component set of time histories of a small earthquake whose source mechanism and propagation path are similar to those of the master event. Thus, in the time domain, the deconvolution method of Vallée (2004) was applied to calculate the relative source time functions (RSTFs) and to accurately estimate source size and rupture velocity. This technique was applied to: 1) a large event, the Mw 6.3 2009 L'Aquila mainshock (Central Italy); 2) moderate events, a cluster of earthquakes of the 2009 L'Aquila sequence with moment magnitudes between 3 and 5.6; and 3) a small event, the Mw 2.9 Laviano mainshock (Southern Italy).
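For reference, the spectral parameters mentioned above (low-frequency amplitude and corner frequency) are conventionally defined through an omega-square source model of the displacement spectrum with an attenuation correction; a generic form is:

```latex
S(f) = \Omega_0 \, \frac{e^{-\pi f t / Q}}{1 + \left(f / f_c\right)^{\gamma}}
```

where Ω0 is the low-frequency spectral amplitude (proportional to the seismic moment), f_c the corner frequency, t the travel time, Q the quality factor and γ the high-frequency fall-off (γ = 2 in the classical Brune model). This is the standard parameterization fitted by such multi-step inversions; the exact model used in the thesis may differ in detail.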
Abstract:
The motivating problem concerns the estimation of the growth curve of solitary corals that follow the nonlinear von Bertalanffy Growth Function (VBGF). The most common parameterization of the VBGF for corals is based on two parameters: the ultimate length L∞ and the growth rate k. One aim was to find a more reliable method for estimating these parameters, one that can capture the influence of environmental covariates. The main issue with current methods is that they force the linearization of the VBGF and neglect intra-individual variability. The idea was to use a hierarchical nonlinear model, which has the appealing features of taking into account the influence of collection sites, possible intra-site measurement correlation and variance heterogeneity, and of handling environmental factors and any other reliable information that might influence coral growth. This method was applied to two databases of different solitary corals, Balanophyllia europaea and Leptopsammia pruvoti, collected at six sites under different environmental conditions, and it introduced a decisive improvement in the results. Nevertheless, the theory of energy balance in growth establishes a linear correlation between the two parameters and the independence of the ultimate length L∞ from environmental covariates, so a further aim of the thesis was to propose a new parameterization based on the ultimate length and a parameter c that explicitly describes the part of growth ascribable to site-specific conditions such as environmental factors. We explored the possibility of estimating the parameters of this new VBGF parameterization via the nonlinear hierarchical model. Again there was a general improvement with respect to traditional methods. The results of the two parameterizations were similar, although a very slight improvement was observed with the new one, which is nevertheless more suitable from a theoretical point of view when considering environmental covariates.
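For reference, the standard von Bertalanffy growth function underlying both parameterizations is:

```latex
L(t) = L_\infty \left( 1 - e^{-k\,(t - t_0)} \right)
```

where L(t) is the length at age t, L∞ the ultimate (asymptotic) length, k the growth-rate constant and t_0 the theoretical age at zero length. The site-specific parameter c of the proposed reparameterization is described only qualitatively in the abstract and is not reproduced here.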
Abstract:
Standard methods for the estimation of the postmortem interval (PMI, time since death), based on the cooling of the corpse, are limited to about 48 h after death. As an alternative, noninvasive postmortem observation of alterations of brain metabolites by means of (1)H MRS has been suggested for estimation of the PMI at room temperature, so far without including the effect of other ambient temperatures. In order to study the temperature effect, localized (1)H MRS was used to follow brain decomposition in a sheep brain model at four different temperatures between 4 and 26°C, with repeated measurements up to 2100 h postmortem. The simultaneous determination of 25 different biochemical compounds at each measurement allowed the time courses of concentration changes to be followed. A sudden and almost simultaneous change in the concentrations of seven compounds was observed after a time span that decreased exponentially from 700 h at 4°C to 30 h at 26°C ambient temperature. As this most probably represents the onset of highly variable bacterial decomposition, and thus defines the upper limit for a reliable PMI estimation, data were analyzed only up to this start of bacterial decomposition. Thirteen compounds showed unequivocal, reproducible concentration changes during this period, and eight of them showed a linear increase with a slope that was unambiguously related to ambient temperature. Therefore, a single analytical function with PMI and temperature as variables can describe the time courses of these metabolite concentrations. Using the inverse of this function, metabolite concentrations determined from a single MR spectrum can be used, together with known ambient temperatures, to calculate the PMI of a corpse. It is concluded that the effect of ambient temperature can be reliably included in the PMI determination by (1)H MRS.
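Schematically, the analytical relation described above takes the following form for each of the linearly increasing metabolites (the coefficient names are illustrative, not the fitted values of the study):

```latex
c(\mathrm{PMI}, T) = c_0 + s(T)\,\mathrm{PMI}
\quad\Longrightarrow\quad
\widehat{\mathrm{PMI}} = \frac{c_{\mathrm{obs}} - c_0}{s(T)}
```

with c_obs the concentration measured from a single spectrum, c_0 the concentration at death and s(T) the temperature-dependent slope.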
Abstract:
A large number of proposals for estimating the bivariate survival function under random censoring have been made. In this paper we discuss nonparametric maximum likelihood estimation and the bivariate Kaplan-Meier estimator of Dabrowska. We show how these estimators are computed, present their intuitive background and compare their practical performance under different levels of dependence and censoring, based on extensive simulation results, which leads to practical advice.
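For context, the univariate Kaplan-Meier product-limit estimator that the Dabrowska estimator generalizes to two dimensions is:

```latex
\hat{S}(t) = \prod_{t_i \le t} \left( 1 - \frac{d_i}{n_i} \right)
```

where d_i is the number of events at the observed event time t_i and n_i the number of subjects at risk just before t_i.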
Abstract:
In biostatistical applications, interest often focuses on the estimation of the distribution of the time T between two consecutive events. If the initial event time is observed and the subsequent event time is only known to be larger or smaller than an observed monitoring time C, then the data are described by the well-known singly censored current status model, also known as interval-censored data, case I. We extend this current status model by allowing the presence of a time-dependent covariate process that is partly observed, and by allowing C to depend on T through the observed part of this process. Because of the high dimension of the covariate process, no globally efficient estimators exist with good practical performance at moderate sample sizes. We follow the approach of Robins and Rotnitzky (1992) by modeling the censoring variable, given the time variable and the covariate process, i.e., the missingness process, under the restriction that it satisfies coarsening at random. We propose a generalization of the simple current status estimator of the distribution of T and of smooth functionals of that distribution, based on an estimate of the missingness process. In this estimator the covariates enter only through the estimate of the missingness process. Due to the coarsening-at-random assumption, the estimator has the interesting property that if we estimate the missingness process more nonparametrically, we improve its efficiency. We show that, by local estimation of an optimal model or optimal function of the covariates for the missingness process, the generalized current status estimator for smooth functionals becomes locally efficient, meaning that it is efficient if the right model or covariate is consistently estimated, and it is consistent and asymptotically normal in general. Estimation of the optimal model requires estimation of the conditional distribution of T given the covariates. Any (prior) knowledge of this conditional distribution can be used at this stage without any risk of losing root-n consistency. We also propose locally efficient one-step estimators. Finally, we present some simulation results.
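For orientation, the simple current status estimator that the paper generalizes reduces, in the covariate-free case, to the isotonic regression (NPMLE) of the censoring indicators on the monitoring times; a minimal sketch with simulated data:

```python
# NPMLE of F for simple current status data: isotonic regression of the
# indicators Delta_i = 1{T_i <= C_i} on the monitoring times C_i.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
T = rng.exponential(1.0, size=500)      # unobserved event times
C = rng.uniform(0, 3, size=500)         # monitoring times
delta = (T <= C).astype(float)          # current status indicators

F_hat = IsotonicRegression(y_min=0.0, y_max=1.0).fit(C, delta)
print(F_hat.predict([0.5, 1.0, 2.0]))   # estimated F at selected times
```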
Abstract:
Estimation for bivariate right-censored data is a problem that has been studied extensively over the past 15 years. In this paper we propose a new class of estimators for the bivariate survival function based on locally efficient estimation. We introduce the locally efficient estimator for bivariate right-censored data, present an asymptotic theorem, report the results of simulation studies and perform a brief data analysis illustrating the use of the locally efficient estimator.
Abstract:
This paper considers a wide class of semiparametric problems with a parametric part for some covariate effects and repeated evaluations of a nonparametric function. Special cases of our approach include marginal models for longitudinal/clustered data, conditional logistic regression for matched case-control studies, multivariate measurement error models, generalized linear mixed models with a semiparametric component, and many others. We propose profile-kernel and backfitting estimation methods for these problems, derive their asymptotic distributions, and show that in likelihood problems the methods are semiparametric efficient. Although this equivalence does not hold in general, with our methods profiling and backfitting are asymptotically equivalent. We also consider pseudolikelihood methods in which some nuisance parameters are estimated by a separate algorithm. The proposed methods are evaluated using simulation studies and applied to the Kenya hemoglobin data.
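As a minimal illustration of the backfitting idea in the simplest special case, a partially linear model y = xβ + f(t) + ε, the sketch below alternates between an OLS step for β and a kernel-smoothing step for f; this is the generic algorithm on simulated data, not the profile-kernel estimator analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
x = rng.normal(size=n)
t = rng.uniform(0, 1, size=n)
y = 2.0 * x + np.sin(2 * np.pi * t) + rng.normal(scale=0.3, size=n)

def kernel_smooth(t_grid, t, r, h=0.05):
    """Nadaraya-Watson smoother with a Gaussian kernel."""
    w = np.exp(-0.5 * ((t_grid[:, None] - t[None, :]) / h) ** 2)
    return (w @ r) / w.sum(axis=1)

beta, f_hat = 0.0, np.zeros(n)
for _ in range(20):
    beta = np.sum(x * (y - f_hat)) / np.sum(x * x)   # parametric step (OLS)
    f_hat = kernel_smooth(t, t, y - beta * x)        # nonparametric step
    f_hat -= f_hat.mean()                            # center f for identifiability
print("estimated beta:", beta)
```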
Abstract:
Submicroscopic changes in chromosomal DNA copy number are common and have been implicated in many heritable diseases and cancers. Recent high-throughput technologies have a resolution that permits the detection of segmental changes in DNA copy number that span thousands of base pairs across the genome. Genome-wide association studies (GWAS) may simultaneously screen for copy number-phenotype and SNP-phenotype associations as part of the analytic strategy. However, genome-wide array analyses are particularly susceptible to batch effects, as the logistics of preparing DNA and processing thousands of arrays often involve multiple laboratories and technicians, or changes over calendar time to the reagents and laboratory equipment. Failure to adjust for batch effects can lead to incorrect inference and requires inefficient post-hoc quality control procedures that exclude regions associated with batch. Our work extends previous model-based approaches for copy number estimation by explicitly modeling batch effects and using shrinkage to improve locus-specific estimates of copy number uncertainty. Key features of this approach include the use of diallelic genotype calls from experimental data to estimate batch- and locus-specific parameters of background and signal without requiring training data. We illustrate these ideas using a study of bipolar disease and a study of chromosome 21 trisomy. The former has batch effects that dominate much of the observed variation in quantile-normalized intensities, while the latter illustrates the robustness of our approach to datasets in which as many as 25% of the samples have altered copy number. Locus-specific estimates of copy number can be plotted on the copy-number scale to investigate mosaicism and guide the choice of appropriate downstream approaches for smoothing the copy number as a function of physical position. The software is open source and implemented in the R package CRLMM, available at Bioconductor (http://www.bioconductor.org).
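A schematic illustration (simulated data, not the CRLMM model) of the two ideas mentioned above, batch-specific parameters and shrinkage of locus-level estimates toward an across-batch mean:

```python
import numpy as np

def shrink_batch_means(intensity, batch, prior_strength=10.0):
    """intensity: (samples, loci) array; batch: (samples,) batch labels.
    Per-batch locus means are pulled toward the across-batch locus mean,
    with more shrinkage for batches with fewer samples."""
    overall = intensity.mean(axis=0)
    shrunk = {}
    for b in np.unique(batch):
        sub = intensity[batch == b]
        n_b = sub.shape[0]
        raw = sub.mean(axis=0)                    # raw per-batch locus means
        w = n_b / (n_b + prior_strength)          # shrinkage weight
        shrunk[b] = w * raw + (1 - w) * overall   # shrink toward overall mean
    return shrunk

# Toy example: 60 samples, 1000 loci, 3 batches with a batch-specific offset
rng = np.random.default_rng(2)
batch = np.repeat([0, 1, 2], 20)
intensity = rng.normal(size=(60, 1000)) + batch[:, None] * 0.5
print({b: m[:3] for b, m in shrink_batch_means(intensity, batch).items()})
```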