940 results for linear approximation method
Abstract:
We propose a definition of classical differential cross sections for particles with essentially nonplanar orbits, such as spinning ones. We also give a method for its computation. The calculations are carried out explicitly for electromagnetic, gravitational, and short-range scalar interactions up to the linear terms in the slow-motion approximation. The contribution of the spin-spin terms is found to be at best 10⁻⁶ times the post-Newtonian ones for the gravitational interaction.
Abstract:
This paper presents a new method for analyzing time-invariant linear networks that allows for inconsistent initial conditions. The method is based on the use of distributions and state equations. Any time-invariant linear network can be analyzed, and the network can involve any kind of pure or controlled sources. The energy transfers that occur at t = 0 are also determined, and the concept of connection energy is introduced. The algorithms are easily implemented in a computer program.
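The textbook instance of an inconsistent initial condition is two capacitors with different voltages connected at t = 0: an impulsive current flows, and some stored energy is dissipated instantaneously. A minimal numeric sketch (with illustrative component values, not taken from the abstract) of the "connection energy" idea:

```python
# Two capacitors connected at t = 0 with different initial voltages: the
# classic inconsistent-initial-condition case (an impulsive current flows).
C1, C2 = 1e-6, 2e-6      # farads (illustrative values)
V1, V2 = 10.0, 1.0       # initial voltages, volts

# Charge is conserved through the impulse, so both settle at a common voltage.
V = (C1 * V1 + C2 * V2) / (C1 + C2)

# Stored energy before and after; the difference is the "connection energy"
# dissipated instantaneously at t = 0.
E_before = 0.5 * (C1 * V1**2 + C2 * V2**2)
E_after = 0.5 * (C1 + C2) * V**2
E_conn = E_before - E_after
```

The dissipated energy matches the closed form C1·C2·(V1 − V2)²/(2(C1 + C2)) regardless of the connecting resistance, which is why a distributional (impulse) treatment can recover it.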
Abstract:
We propose an iterative procedure to minimize the sum-of-squares function which avoids the nonlinear nature of estimating the first-order moving average parameter and provides a closed form for the estimator. The asymptotic properties of the method are discussed, and the consistency of the linear least squares estimator is proved for the invertible case. We perform various Monte Carlo experiments in order to compare the sample properties of the linear least squares estimator with its nonlinear counterpart for the conditional and unconditional cases. Some examples are also discussed.
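A hedged sketch of how MA(1) estimation can be linearized (a standard two-pass construction in the spirit of the abstract, not the authors' exact procedure): a long autoregression fitted by OLS supplies innovation proxies, after which a second OLS regression gives the moving-average parameter in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an invertible MA(1): y_t = eps_t + theta * eps_{t-1}
theta_true = 0.5
n = 20000
eps = rng.standard_normal(n + 1)
y = eps[1:] + theta_true * eps[:-1]

# Pass 1: fit a long autoregression by OLS to proxy the innovations.
p = 20
X = np.column_stack([y[p - k - 1:n - k - 1] for k in range(p)])
Y = y[p:]
phi, *_ = np.linalg.lstsq(X, Y, rcond=None)
ehat = Y - X @ phi            # innovation proxies

# Pass 2: closed-form OLS of y_t on the lagged innovation proxy.
yt = Y[1:]
el = ehat[:-1]
theta_hat = (el @ yt) / (el @ el)
```

Both passes are linear least squares, so no nonlinear optimization is needed and the estimator has an explicit closed form.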
Abstract:
We showed earlier how to predict the writhe of any rational knot or link in its ideal geometric configuration, or equivalently the average of the 3D writhe over statistical ensembles of random configurations of a given knot or link (Cerf and Stasiak 2000 Proc. Natl Acad. Sci. USA 97 3795). There is no general relation between the minimal crossing number of a knot and the writhe of its ideal geometric configuration. However, within individual families of knots, linear relations between minimal crossing number and writhe were observed (Katritch et al 1996 Nature 384 142). Here we present a method that allows us to express the writhe as a linear function of the minimal crossing number within Conway families of knots and links in their ideal configuration. The slope of the lines and the shift between any two lines with the same
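The 3D writhe referred to here is the Gauss double integral over a closed curve. A crude midpoint discretization over segment pairs (a sketch for intuition, not the exact-integration methods used in the writhe literature) reads:

```python
import numpy as np

def writhe(points):
    """Approximate 3D writhe of a closed polygon via the discretized
    Gauss double integral over pairs of segment midpoints."""
    r = np.asarray(points, float)
    d = np.roll(r, -1, axis=0) - r    # segment vectors dr_i
    mid = r + 0.5 * d                 # segment midpoints
    n = len(r)
    wr = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            rij = mid[i] - mid[j]
            dist = np.linalg.norm(rij)
            if dist > 0:
                wr += np.dot(np.cross(d[i], d[j]), rij) / dist**3
    return wr / (2 * np.pi)           # 1/(4*pi), each pair counted once

# Sanity check: any planar curve has zero writhe.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.column_stack([np.cos(t), np.sin(t), np.zeros_like(t)])
```

For a planar polygon every triple product in the sum vanishes exactly, so the writhe is zero; nonplanar (e.g. knotted) configurations give the nonzero values the linear relations above are stated in.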
Abstract:
We present here an unbiased probabilistic method that allows us to consistently analyze knottedness of linear random walks with up to several hundred noncorrelated steps. The method consists of analyzing the spectrum of knots formed by multiple closures of the same open walk through random points on a sphere enclosing the walk. Knottedness of individual "frozen" configurations of linear chains is therefore defined by a characteristic spectrum of realizable knots. We show that in the great majority of cases this method clearly defines the dominant knot type of a walk, i.e., the strongest component of the spectrum. In such cases, direct end-to-end closure creates a knot that usually coincides with the knot type that dominates the random closure spectrum. Interestingly, in a very small proportion of linear random walks, the knot type is not clearly defined. Such walks can be considered as residing in a border zone of the configuration space of two or more knot types. We also characterize the scaling behavior of linear random knots.
Abstract:
This article describes a method for determining the polydispersity index Ip2 = Mz/Mw of the molecular weight distribution (MWD) of linear polymeric materials from linear viscoelastic data. The method uses the Mellin transform of the relaxation modulus of a simple molecular rheological model. One of the main features of this technique is that it enables interesting MWD information to be obtained directly from dynamic shear experiments. It is not necessary to compute the relaxation spectrum, so the ill-posed problem is avoided. Furthermore, a particular shape of the continuous MWD does not have to be assumed in order to obtain the polydispersity index. The technique has been developed to deal with entangled linear polymers, whatever the form of the MWD. The rheological information required to obtain the polydispersity index consists of the storage G′(ω) and loss G″(ω) moduli, extending from the terminal zone to the plateau region. The method shows good agreement between the proposed theoretical approach and the experimental polydispersity indices of several linear polymers over a wide range of average molecular weights and polydispersity indices. It is also applicable to binary blends.
Abstract:
The role of busulfan (Bu) metabolites in the adverse events seen during hematopoietic stem cell transplantation and in drug interactions is not explored. Lack of established analytical methods limits our understanding in this area. The present work describes a novel gas chromatography-tandem mass spectrometric assay for the analysis of sulfolane (Su) in plasma of patients receiving high-dose Bu. Su and Bu were extracted from a single 100 μL plasma sample by liquid-liquid extraction. Bu was separately derivatized with the fluorinated agent 2,3,5,6-tetrafluorothiophenol. Mass spectrometric detection of the analytes was performed in the selected reaction monitoring mode on a triple quadrupole instrument after electron impact ionization. Bu and Su were analyzed with separate chromatographic programs, lasting 5 min each. The assay for Su was found to be linear in the concentration range of 20-400 ng/mL. The method has satisfactory sensitivity (lower limit of quantification, 20 ng/mL) and precision (relative standard deviation less than 15%) for all the concentrations tested, with good trueness (100 ± 5%). This method was applied to measure Su in pediatric patients, with samples collected 4 h after dose 1 (n = 46), before dose 7 (n = 56), and after dose 9 (n = 54) infusions of Bu. Su (mean ± SD) was detectable in plasma of patients 4 h after dose 1, and higher levels were observed after dose 9 (249.9 ± 123.4 ng/mL). This method may be used in clinical studies investigating the role of Su in adverse events and drug interactions associated with Bu therapy.
Abstract:
The multiscale finite-volume (MSFV) method has been derived to efficiently solve large problems with spatially varying coefficients. The fine-scale problem is subdivided into local problems that can be solved separately and are coupled by a global problem. Consequently, this algorithm shares some characteristics with two-level domain decomposition (DD) methods. However, the MSFV algorithm is different in that it incorporates a flux-reconstruction step, which delivers a fine-scale mass-conservative flux field without the need for iteration. This is achieved by the use of two overlapping coarse grids. The recently introduced correction function allows for a consistent handling of source terms, which makes the MSFV method a flexible algorithm applicable to a wide spectrum of problems. It is demonstrated that the MSFV operator, used to compute an approximate pressure solution, can be equivalently constructed by writing the Schur complement with a tangential approximation of a single-cell overlapping grid and incorporating appropriate coarse-scale mass-balance equations.
Abstract:
There are many known examples of multiple semi-independent associations at individual loci; such associations might arise either because of true allelic heterogeneity or because of imperfect tagging of an unobserved causal variant. This phenomenon is of great importance in monogenic traits but has not yet been systematically investigated and quantified in complex-trait genome-wide association studies (GWASs). Here, we describe a multi-SNP association method that estimates the effect of loci harboring multiple association signals by using GWAS summary statistics. Applying the method to a large anthropometric GWAS meta-analysis (from the Genetic Investigation of Anthropometric Traits consortium study), we show that for height, body mass index (BMI), and waist-to-hip ratio (WHR), 3%, 2%, and 1%, respectively, of additional phenotypic variance can be explained on top of the previously reported 10% (height), 1.5% (BMI), and 1% (WHR). The method also permitted a substantial increase (by up to 50%) in the number of loci that replicate in a discovery-validation design. Specifically, we identified 74 loci at which the multi-SNP, a linear combination of SNPs, explains significantly more variance than does the best individual SNP. A detailed analysis of multi-SNPs shows that most of the additional variability explained is derived from SNPs that are not in linkage disequilibrium with the lead SNP, suggesting a major contribution of allelic heterogeneity to the missing heritability.
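A minimal sketch of the summary-statistics idea on simulated data (not the consortium pipeline): with standardized genotypes, joint multi-SNP effects follow from the marginal (GWAS-reported) effects and the LD correlation matrix, and the variance explained by the multi-SNP combination is never below that of the best single SNP.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate standardized genotypes for m SNPs with some LD, and a phenotype
# driven by two causal SNPs at one locus (allelic heterogeneity).
n, m = 5000, 5
L = rng.standard_normal((m, m)) * 0.3 + np.eye(m)
G = rng.standard_normal((n, m)) @ L.T
G = (G - G.mean(0)) / G.std(0)
beta = np.array([0.2, 0.0, 0.15, 0.0, 0.0])
y = G @ beta + rng.standard_normal(n)
y = (y - y.mean()) / y.std()

# "Summary statistics": marginal per-SNP effects, as a GWAS would report.
b_marg = G.T @ y / n

# LD correlation matrix from a reference panel (here: the same genotypes).
R = G.T @ G / n

# Joint (multi-SNP) effects recovered from summary data alone.
b_joint = np.linalg.solve(R, b_marg)

# Variance explained by the multi-SNP vs. the best single SNP.
r2_multi = b_marg @ b_joint
r2_single = (b_marg**2 / np.diag(R)).max()
```

Only b_marg and R are needed, which is what makes the approach applicable to published meta-analysis summary statistics without individual-level genotypes.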
Abstract:
Polynomial constraint solving plays a prominent role in several areas of hardware and software analysis and verification, e.g., termination proving, program invariant generation, and hybrid system verification, to name a few. In this paper we propose a new method for solving non-linear constraints based on encoding the problem into an SMT problem over linear arithmetic only. Unlike other existing methods, our method focuses on proving satisfiability of the constraints rather than on proving unsatisfiability, which is more relevant in several applications, as we illustrate with several examples. Nevertheless, we also present new techniques based on the analysis of unsatisfiable cores that allow one to efficiently prove unsatisfiability too for a broad class of problems. The power of our approach is demonstrated by means of extensive experiments comparing our prototype with state-of-the-art tools on benchmarks taken from both the academic and the industrial world.
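The flavor of such an encoding can be shown with a toy sketch (not the paper's SMT construction): once one variable of a nonlinear monomial is case-split over a finite candidate domain, each branch becomes a linear constraint that can be decided directly.

```python
# Toy linearization by case-split: solve x*y + 2*x - 3*y == 11 over integers.
# Fixing x turns the product x*y into a linear term in y, so each branch
# is a linear (in fact one-variable) constraint.
def solve(domain_x):
    for x in domain_x:
        # With x fixed: (x - 3) * y == 11 - 2*x, linear in y.
        a, b = x - 3, 11 - 2 * x
        if a != 0 and b % a == 0:
            return x, b // a
    return None

sol = solve(range(-5, 6))
```

Real SMT encodings case-split symbolically inside the solver rather than enumerating in a loop, but the linearization step is the same idea.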
Abstract:
Biometric system performance can be improved by means of data fusion. Several kinds of information can be fused in order to obtain a more accurate classification (identification or verification) of an input sample. In this paper we present a method for computing the weights of a weighted-sum fusion of scores by means of a likelihood model; the maximum likelihood estimation is posed as a linear programming problem. The scores are derived from GMM classifiers, each working on a different feature extractor. Our experimental results assessed the robustness of the system against changes over time (different sessions) and against a change of microphone. The improvements obtained were significantly better (error bars of two standard deviations) than a uniform weighted sum, a uniform weighted product, or the best single classifier. The proposed method scales computationally with the number of scores to be fused as the simplex method for linear programming does.
Abstract:
This paper proposes a very fast method for blindly approximating a nonlinear mapping which transforms a sum of random variables. The estimation is surprisingly good even when the basic assumption is not satisfied. We use the method to provide a good initialization for inverting post-nonlinear mixtures and Wiener systems. Experiments show that the algorithm's speed is strongly improved and the asymptotic performance is preserved at a very low extra computational cost.
Abstract:
This paper deals with non-linear transformations for improving the performance of an entropy-based voice activity detector (VAD). The idea of using a non-linear transformation has already been applied in the field of speech linear prediction, or linear predictive coding (LPC), based on source separation techniques, where a score function is added to classical equations in order to take into account the true distribution of the signal. We explore the possibility of estimating the entropy of frames after calculating their score function, instead of using the original frames. We observe that if the signal is clean, the estimated entropy is essentially the same; if the signal is noisy, however, the frames transformed using the score function may yield entropy values that differ between voiced and nonvoiced frames. Experimental evidence is given to show that this fact enables voice activity detection under high noise, where the simple entropy method fails.
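As a baseline illustration of the entropy feature itself (a sketch only; the score-function transform that is the paper's contribution is omitted): a harmonic frame concentrates its power spectrum into few bins and yields low normalized entropy, while a noise frame spreads it out.

```python
import numpy as np

rng = np.random.default_rng(2)

def spectral_entropy(frame):
    # Normalized entropy of the frame's power spectrum (roughly
    # normalized by the log of the frame length).
    p = np.abs(np.fft.rfft(frame)) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(len(frame)))

fs, n = 8000, 256
t = np.arange(n) / fs
voiced = np.sin(2 * np.pi * 200 * t)   # stand-in for a voiced frame
noise = rng.standard_normal(n)         # stand-in for a non-speech frame

h_voiced = spectral_entropy(voiced)
h_noise = spectral_entropy(noise)
```

Thresholding this entropy per frame gives the simple entropy VAD that the abstract reports failing under high noise, which is what motivates the score-function transformation.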
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the regional scale represents a major, and as yet largely unresolved, challenge. To address this problem, we have developed an upscaling procedure based on a Bayesian sequential simulation approach. This method is then applied to the stochastic integration of low-resolution, regional-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities. Finally, the overall viability of this upscaling approach is tested and verified by performing and comparing flow and transport simulations through the original and the upscaled hydraulic conductivity fields. Our results indicate that the proposed procedure does indeed yield remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
Abstract:
A simple method using liquid chromatography-linear ion trap mass spectrometry for the simultaneous determination of testosterone glucuronide (TG), testosterone sulfate (TS), epitestosterone glucuronide (EG), and epitestosterone sulfate (ES) in urine samples was developed. For validation purposes, a urine containing no detectable amount of TG, TS, and EG was selected and fortified with steroid conjugate standards. Quantification was performed using deuterated testosterone conjugates to correct for ion suppression/enhancement during ESI. Assay validation was performed in terms of lower limit of detection (1-3 ng/mL), recovery (89-101%), intraday precision (2.0-6.8%), interday precision (3.4-9.6%), and accuracy (101-103%). Application of the method to short-term stability testing of urine samples at temperatures ranging from 4 to 37 degrees C over a storage time of one week led to the conclusion that addition of sodium azide (10 mg/mL) is required for preservation of the analytes.