865 results for ellipse fitting
Abstract:
The aim of this study is to develop a new, simple method for analyzing one-dimensional transcranial magnetic stimulation (TMS) mapping studies in humans. Motor evoked potentials (MEPs) were recorded from the abductor pollicis brevis (APB) muscle during stimulation at nine different positions on the scalp along a line passing through the APB hot spot and the vertex. Non-linear curve fitting according to the Levenberg-Marquardt algorithm was performed on the averaged amplitude values obtained at all points to find the best-fitting symmetrical and asymmetrical peak functions. Several peak functions could be fitted to the experimental data. Across all subjects, a symmetric, bell-shaped curve, the complementary error function (erfc), gave the best results. This function is characterized by three parameters giving its amplitude, position, and width. None of the mathematical functions tested with fewer or more than three parameters fitted better. The amplitude and position parameters of the erfc were highly correlated with the amplitude at the hot spot and with the location of the center of gravity of the TMS curve. In conclusion, non-linear curve fitting is an accurate method for the mathematical characterization of one-dimensional TMS curves. It is the first method that provides information on amplitude, position, and width simultaneously.
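As a concrete illustration of the fitting described in this abstract, the sketch below performs a Levenberg-Marquardt fit of a three-parameter erfc peak to a one-dimensional TMS curve. The functional form amplitude * erfc(|x - position| / width) is an assumption (the abstract names the three parameters but not the exact expression), and the data arrays are illustrative placeholders.

```python
# Sketch, not the paper's exact model: fit f(x) = A * erfc(|x - x0| / w)
# to mean MEP amplitudes with the Levenberg-Marquardt algorithm.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def erfc_peak(x, amplitude, position, width):
    """Symmetric, bell-shaped peak built from the complementary error function."""
    return amplitude * erfc(np.abs(x - position) / width)

# Placeholder data: mean MEP amplitude (mV) at nine scalp positions (cm along
# the line through the APB hot spot and the vertex) -- illustrative only.
x = np.linspace(-4, 4, 9)
y = np.array([0.05, 0.10, 0.40, 1.20, 2.00, 1.30, 0.45, 0.12, 0.04])

# method='lm' selects the Levenberg-Marquardt algorithm named in the abstract.
popt, _ = curve_fit(erfc_peak, x, y, p0=[y.max(), 0.0, 1.0], method='lm')
amplitude, position, width = popt
print(f"amplitude={amplitude:.2f} mV, position={position:.2f} cm, width={width:.2f} cm")
```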
Abstract:
Estimation of the number of mixture components (k) is an unsolved problem. Available methods for estimating k include bootstrapping the likelihood ratio test statistic and optimizing a variety of validity functionals such as AIC, BIC/MDL, and ICOMP. We investigate the minimization of the distance between the fitted mixture model and the true density as a method for estimating k. The distances considered are Kullback-Leibler (KL) and L2. We estimate these distances using cross validation. A reliable estimate of k is obtained by voting over B estimates of k corresponding to B cross-validation estimates of distance. This estimation method with the KL distance is very similar to the Monte Carlo cross-validated likelihood methods discussed by Smyth (2000). With a focus on univariate normal mixtures, we present simulation studies that compare the cross-validated distance method with AIC, BIC/MDL, and ICOMP. We also apply the cross-validation estimate of distance, along with the AIC, BIC/MDL, and ICOMP approaches, to data from an osteoporosis drug trial in order to find groups that respond differentially to treatment.
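A minimal sketch of the cross-validated distance approach under the KL option: held-out negative log-likelihood estimates the KL distance up to an additive constant, so the k minimizing it is recorded per fold, and the B fold-wise estimates are combined by voting. sklearn's GaussianMixture stands in here for the univariate normal mixture fitter, and the fold count and candidate range are arbitrary choices.

```python
# Sketch: choose k by cross-validated KL distance with voting over folds.
import numpy as np
from collections import Counter
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import KFold

def vote_for_k(sample, k_values=range(1, 7), n_splits=10, seed=0):
    x = sample.reshape(-1, 1)
    votes = []
    for train, test in KFold(n_splits=n_splits, shuffle=True, random_state=seed).split(x):
        # Held-out negative mean log-likelihood: KL distance up to a constant.
        scores = {k: -GaussianMixture(k, random_state=seed)
                        .fit(x[train]).score(x[test]) for k in k_values}
        votes.append(min(scores, key=scores.get))   # best k on this fold
    return Counter(votes).most_common(1)[0][0]      # majority vote over B folds

rng = np.random.default_rng(0)
sample = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 1, 300)])
print("estimated k:", vote_for_k(sample))
```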
Abstract:
Investigators interested in whether a disease aggregates in families often collect case-control family data, which consist of disease status and covariate information for families selected via case or control probands. Here, we focus on the use of case-control family data to investigate the relative contributions to the disease of additive genetic effects (A), shared family environment (C), and unique environment (E). To this end, we describe an ACE model for binary family data and then introduce an approach to fitting the model to case-control family data. The structural equation model, which has been described previously, combines a general-family extension of the classic ACE twin model with a (possibly covariate-specific) liability-threshold model for binary outcomes. Our likelihood-based approach to fitting involves conditioning on the proband's disease status, as well as setting the prevalence equal to a pre-specified value that can be estimated from the data themselves if necessary. Simulation experiments suggest that our approach yields approximately unbiased estimates of the A, C, and E variance components, provided that certain commonly made assumptions hold. These assumptions include: the usual assumptions for the classic ACE and liability-threshold models; assumptions about shared family environment for relative pairs; and assumptions about the case-control family sampling, including single ascertainment. When our approach is used to fit the ACE model to Austrian case-control family data on depression, the resulting estimate of heritability is very similar to estimates from previous analyses of twin data.
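The sketch below illustrates only the liability-threshold ACE structure the abstract builds on, computing the probability that both members of a relative pair are affected; it is not the authors' proband-conditioned likelihood for ascertained case-control families. The standardization a2 + c2 + e2 = 1 and the relatedness coefficients are the usual textbook assumptions.

```python
# Sketch: liability-threshold ACE model for one relative pair, assuming
# standardized variance components (a2 + c2 + e2 = 1) and additive
# relatedness r (1.0 for MZ twins, 0.5 for first-degree relatives).
import numpy as np
from scipy.stats import norm, multivariate_normal

def pair_concordance(a2, c2, prevalence, r):
    """P(both relatives affected); e2 = 1 - a2 - c2 is implicit."""
    t = norm.ppf(1.0 - prevalence)          # liability threshold
    rho = r * a2 + c2                       # liability correlation for the pair
    cov = [[1.0, rho], [rho, 1.0]]
    # P(L1 > t, L2 > t) = P(L1 < -t, L2 < -t) by bivariate normal symmetry.
    return multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([-t, -t])

# Example: heritability 0.4, shared environment 0.2, 8% prevalence.
print("MZ concordance:     ", pair_concordance(0.4, 0.2, 0.08, r=1.0))
print("sibling concordance:", pair_concordance(0.4, 0.2, 0.08, r=0.5))
```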
Abstract:
We are concerned with the estimation of the exterior surface of tube-shaped anatomical structures. This interest is motivated by two distinct scientific goals, one dealing with the distribution of HIV microbicide in the colon and the other with measuring degradation in white-matter tracts in the brain. Our problem is posed as the estimation of the support of a distribution in three dimensions from a sample from that distribution, possibly measured with error. We propose a novel tube-fitting algorithm to construct such estimators. Further, we conduct a simulation study to aid in the choice of a key parameter of the algorithm, and we test our algorithm with a validation study tailored to the motivating data sets. Finally, we apply the tube-fitting algorithm to a colon image produced by single photon emission computed tomography (SPECT) and to a white-matter tract image produced using diffusion tensor imaging (DTI).
Abstract:
Dr. Rossi discusses the common errors that are made when fitting statistical models to data. He focuses on the planning, data analysis, and interpretation phases of a statistical analysis and highlights the errors commonly made by researchers in each of these phases. The implications of these errors are discussed, along with the methods that can be used to prevent them from occurring. A prescription for carrying out a correct statistical analysis is also presented.
Abstract:
We consider the problem of approximating the 3D scan of a real object through an affine combination of examples. Common approaches depend either on the explicit estimation of point-to-point correspondences or on two-dimensional projections of the target mesh; both present drawbacks. We follow an approach similar to [IF03], representing the target via an implicit function whose values at the vertices of the approximation are used to define a robust cost function. The problem is approached in two steps: we first approximate a coarse implicit representation of the whole target, and then finer, local ones; the local approximations are then merged together with a Poisson-based method. We report the results of applying our method to a subset of 3D scans from the Face Recognition Grand Challenge v.1.0.
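A rough sketch of the core fitting step under stated assumptions: example meshes sharing vertex topology are blended with affine weights, and the target's implicit function is approximated by an unsigned nearest-point distance via a k-d tree. The paper uses a proper implicit representation and a robust cost function; both are simplified away here.

```python
# Sketch: fit affine weights so the blended mesh's vertices lie near the
# zero set of a (proxy) implicit function of the target point cloud.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial import cKDTree

def fit_affine_combination(examples, target_points):
    """examples: array (m, n_vertices, 3); target_points: array (N, 3)."""
    tree = cKDTree(target_points)
    m = examples.shape[0]

    def cost(w_free):
        w = np.append(w_free, 1.0 - w_free.sum())   # affine: weights sum to 1
        vertices = np.tensordot(w, examples, axes=1)
        d, _ = tree.query(vertices)                 # implicit-function proxy
        return np.sum(d ** 2)                       # robustness terms omitted

    res = minimize(cost, np.full(m - 1, 1.0 / m), method='Nelder-Mead')
    return np.append(res.x, 1.0 - res.x.sum())
```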
Abstract:
Localized short-echo-time ¹H-MR spectra of the human brain contain contributions from many low-molecular-weight metabolites and baseline contributions from macromolecules. Two approaches to modeling such spectra are compared, and the data acquisition sequence, optimized for reproducibility, is presented. Modeling relies on prior-knowledge constraints and linear combination of metabolite spectra. We investigated what can be gained by basis parameterization, i.e., description of basis spectra as sums of parametric lineshapes. The effects of basis composition and of adding experimentally measured macromolecular baselines were also investigated. Both fitting methods yielded quantitatively similar values, model deviations, error estimates, and reproducibility in the evaluation of 64 spectra of human gray and white matter from 40 subjects. Major advantages of parameterized basis functions are the possibilities to evaluate fitting parameters separately, to treat subgroup spectra as independent moieties, and to incorporate deviations from straightforward metabolite models. It was found that most of the 22 basis metabolites used may provide meaningful data when comparing patient cohorts. In individual spectra, sums of closely related metabolites are often more meaningful. Inclusion of a macromolecular basis component leads to relatively small, but significantly different, tissue content estimates for most metabolites. It provides a means to quantitate baseline contributions that may contain crucial clinical information.
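A minimal sketch of the linear-combination step common to both fitting methods compared above, with non-negative least squares standing in for the prior-knowledge-constrained optimizers. Treating the measured macromolecular baseline as one extra basis column is one simple way to include it; lineshape parameterization is omitted.

```python
# Sketch: quantify metabolites as a non-negative linear combination of
# basis spectra, optionally including a measured macromolecular baseline.
import numpy as np
from scipy.optimize import nnls

def fit_metabolites(spectrum, basis, macromolecular_baseline=None):
    """spectrum: (n_points,); basis: (n_points, n_metabolites), real-valued."""
    if macromolecular_baseline is not None:
        # The measured baseline enters as one extra basis component.
        basis = np.column_stack([basis, macromolecular_baseline])
    concentrations, residual_norm = nnls(basis, spectrum)
    return concentrations, residual_norm
```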
Abstract:
The characteristics of the power-line communication (PLC) channel are difficult to model due to the heterogeneity of the networks and the lack of common wiring practices. To capture the full variability of the PLC channel, random channel generators are of great importance for the design and testing of communication algorithms. In this respect, we propose a random channel generator that is based on the top-down approach. Basically, we describe the multipath propagation and the coupling effects with an analytical model. We introduce the variability into a restricted set of parameters and, finally, we fit the model to a set of measured channels. The proposed model enables a closed-form description of both the mean path-loss profile and the statistical correlation function of the channel frequency response. As an example of application, we apply the procedure to a set of in-home channels measured in the 2-100 MHz band whose statistics are available in the literature. The measured channels are divided into nine classes according to their channel capacity. We provide the parameters for the random generation of channels for all nine classes, and we show that the results are consistent with the experimental ones. Finally, we merge the classes to capture the entire heterogeneity of in-home PLC channels. In detail, we introduce the class occurrence probability, and we present a random channel generator that targets the ensemble of all nine classes. The statistics of the composite set of channels are also studied and compared to the results of experimental measurement campaigns in the literature.
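For illustration, the sketch below generates random channels from the classic top-down multipath form H(f) = A * sum_p g_p * exp(-(a0 + a1*f^K)*d_p) * exp(-j*2*pi*f*d_p/v), which is in the spirit of the analytical model described above. All numeric parameter values are placeholders, not the fitted class parameters from the paper.

```python
# Sketch: top-down random multipath PLC channel generator with random
# path gains g_p and path lengths d_p; parameters below are illustrative.
import numpy as np

def random_plc_channel(f, n_paths=15, A=1e-3, a0=1e-3, a1=1e-10, K=1.0,
                       v=1.5e8, max_length=300.0, rng=None):
    """f: frequency grid in Hz; returns the complex frequency response H(f)."""
    rng = np.random.default_rng() if rng is None else rng
    g = rng.uniform(-1.0, 1.0, n_paths)                   # random path gains
    d = np.sort(rng.uniform(10.0, max_length, n_paths))   # path lengths (m)
    attenuation = np.exp(-(a0 + a1 * f[:, None] ** K) * d)
    phase = np.exp(-2j * np.pi * f[:, None] * d / v)      # propagation delay
    return A * np.sum(g * attenuation * phase, axis=1)

f = np.linspace(2e6, 100e6, 1000)                         # 2-100 MHz band
H = random_plc_channel(f)
path_loss_db = 20 * np.log10(np.abs(H))
```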
Abstract:
An application of the Finite Element Method (FEM) to the solution of a geometric problem is shown. The problem is related to curve fitting, i.e., passing a curve through a set of given points even if they are irregularly spaced. Situations involving curves with cusps can be encountered in practice, and therefore smooth interpolating curves may be unsuitable. In this paper, the possibilities of the FEM for dealing with this type of problem are shown. A particular example of application to road planning is discussed. In this case, the functional to be minimized should express the unpleasant effects on the road traveller. Some comparative numerical examples are also given.
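As a discrete analogue of the variational fitting described above (not the paper's FEM formulation), the sketch below minimizes a data-misfit term plus a bending-energy penalty over curve values on a uniform mesh; relaxing the penalty locally is what would permit the cusps the paper is concerned with.

```python
# Sketch: penalized least-squares curve fit through irregularly spaced
# points -- minimize ||A y - data||^2 + lam * ||D y||^2 on a uniform grid,
# where D is a second-difference (discrete bending energy) operator.
import numpy as np

def penalized_curve_fit(x_data, y_data, n_nodes=100, lam=1e-2):
    x_nodes = np.linspace(x_data.min(), x_data.max(), n_nodes)
    h = x_nodes[1] - x_nodes[0]
    # Interpolation matrix: each data point as a linear blend of two nodes.
    idx = np.clip(np.searchsorted(x_nodes, x_data) - 1, 0, n_nodes - 2)
    t = (x_data - x_nodes[idx]) / h
    A = np.zeros((len(x_data), n_nodes))
    A[np.arange(len(x_data)), idx] = 1 - t
    A[np.arange(len(x_data)), idx + 1] = t
    # Second-difference operator approximating the bending energy.
    D = np.diff(np.eye(n_nodes), n=2, axis=0) / h**2
    y_nodes = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y_data)
    return x_nodes, y_nodes
```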
Abstract:
In nature, several types of landforms have simple shapes: as they evolve, they tend to take on an ideal, simple geometric form such as a cone, an ellipsoid, or a paraboloid. Volcanic landforms are possibly the best examples of this 'ideal' geometry, since they develop as regular surface features due to the point-like (circular) or fissure-like (linear) manifestation of volcanic activity. In this paper, we present a geomorphometric method of fitting the 'ideal' surface onto the real surface of regular-shaped volcanoes through a number of case studies (Mt. Mayon, Mt. Somma, Mt. Semeru, and Mt. Cameroon). Volcanoes with circular, as well as elliptical, symmetry are addressed. For the best surface fit, we use the minimization library MINUIT, which is made freely available by CERN (the European Organization for Nuclear Research). This library enables us to handle all the available surface data (every point of the digital elevation model) in a one-step, half-automated way regardless of the size of the dataset, and to consider simultaneously all the relevant parameters of the selected problem, such as the position of the center of the edifice, apex height, and cone slope, thanks to the high performance of the adopted procedure. Fitting the geometric surface, along with calculating the related error, demonstrates the twofold advantage of the method. Firstly, we can determine quantitatively to what extent a given volcanic landform is regular, i.e. how closely it follows an expected regular shape. Deviations from the ideal shape due to degradation (e.g. sector collapse and normal erosion) can be used in erosion rate calculations. Secondly, if we have a degraded volcanic landform whose geometry is not clear, this method of surface fitting reconstructs the original shape with maximum precision. Obviously, in addition to volcanic landforms, this method is also capable of constraining the shapes of other regular surface features such as aeolian, glacial, or periglacial landforms.
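A sketch of the surface fit for the circularly symmetric case, with scipy.optimize.least_squares standing in for MINUIT; the parameters mirror those named in the abstract (center position, apex height, and cone slope), and the RMS residual serves as a regularity measure.

```python
# Sketch: fit an 'ideal' cone z = apex_height - slope * r to DEM points.
import numpy as np
from scipy.optimize import least_squares

def fit_cone(x, y, z):
    """x, y, z: flattened DEM coordinates and elevations."""
    def residuals(p):
        x0, y0, apex_height, slope = p
        r = np.hypot(x - x0, y - y0)            # radial distance from center
        return (apex_height - slope * r) - z    # ideal surface minus real one

    p0 = [x.mean(), y.mean(), z.max(), 0.1]     # rough initial guess
    fit = least_squares(residuals, p0)
    rms = np.sqrt(np.mean(fit.fun ** 2))        # deviation from ideal shape
    return fit.x, rms
```

An elliptical-symmetry variant would replace the single slope with two axis-aligned slopes plus an orientation angle, at the cost of three extra parameters.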
Abstract:
Transverse galloping is a type of aeroelastic instability characterized by large-amplitude, low-frequency oscillations normal to the wind that appear in some elastic two-dimensional bluff bodies when subjected to a fluid flow, provided that the flow velocity exceeds a threshold critical value. Such an oscillatory motion is explained by the energy transfer from the flow to the two-dimensional bluff body. The amount of energy that can be extracted depends on the cross section of the galloping prism. Assuming that the Glauert-Den Hartog quasi-static criterion for galloping instability is satisfied in a first approximation, the suitability of a given cross section for energy harvesting is evaluated by analyzing the lateral aerodynamic force coefficient, fitting a function with a power series in tan α (α being the angle of attack) to available experimental data. In this paper, a fairly large number of simple prisms (triangle, ellipse, biconvex, and rhombus cross sections, as well as D-shaped bodies) is analyzed for suitability as energy harvesters. The influence of the fitting process on the energy harvesting efficiency evaluation is also demonstrated. The analysis shows that the most promising bodies are those with isosceles or approximately isosceles cross sections.
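A minimal sketch of the fitting step, assuming an odd power series truncated at order 7 and placeholder force-coefficient data; under the Glauert-Den Hartog criterion, the sign of the linear coefficient a1 indicates whether galloping (and hence energy extraction) is possible.

```python
# Sketch: fit the lateral force coefficient C_y(alpha) with an odd power
# series in tan(alpha). Data values and the truncation order are
# illustrative assumptions, not the paper's measurements.
import numpy as np

alpha = np.deg2rad(np.array([-10.0, -6.0, -3.0, 0.0, 3.0, 6.0, 10.0]))
cy = np.array([-0.45, -0.38, -0.22, 0.0, 0.22, 0.38, 0.45])  # placeholder data

t = np.tan(alpha)
powers = [1, 3, 5, 7]                          # C_y = sum_n a_n * tan^n(alpha)
X = np.column_stack([t ** n for n in powers])
a, *_ = np.linalg.lstsq(X, cy, rcond=None)

# Glauert-Den Hartog: a positive linear coefficient a1 signals susceptibility
# to galloping, the prerequisite for extracting energy from the flow.
a1 = a[0]
print(f"a1 = {a1:.3f} ->", "galloping possible" if a1 > 0 else "stable")
```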