976 results for numerical algorithm
Abstract:
Recently, simple limiting functions establishing upper and lower bounds on the Mittag-Leffler function were found. This paper builds on those expressions to design an efficient algorithm for the approximate calculation of expressions that commonly arise in fractional-order control systems. The numerical experiments demonstrate the superior efficiency of the proposed method.
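By way of illustration, a minimal Python sketch of the bounding idea, assuming the simple rational bounds on E_alpha(-x) for 0 < alpha < 1 that have been reported in the literature; the exact bounds and constants used in the paper may differ.

# Sketch: bracket E_alpha(-x) with simple rational bounds and use the midpoint
# as a cheap approximation (the bound constants are an assumption, not the paper's).
import math

def ml_bounds(x, alpha):
    lower = 1.0 / (1.0 + math.gamma(1.0 - alpha) * x)
    upper = 1.0 / (1.0 + x / math.gamma(1.0 + alpha))
    return lower, upper

def ml_approx(x, alpha):
    lo, up = ml_bounds(x, alpha)
    return 0.5 * (lo + up)          # midpoint estimate; error <= half the bracket width

if __name__ == "__main__":
    for x in (0.1, 1.0, 10.0):
        lo, up = ml_bounds(x, 0.5)
        print(f"x={x:5.1f}  {lo:.4f} <= E_0.5(-x) <= {up:.4f}  midpoint {ml_approx(x, 0.5):.4f}")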
Abstract:
The Electromagnetism-like (EM) algorithm is a population-based stochastic global optimization algorithm that uses an attraction-repulsion mechanism to move sample points towards the optimum. In this paper, an implementation of the EM algorithm in the Matlab environment is proposed as a useful function for practitioners and for those who want to experiment with a new global optimization solver. A set of benchmark problems is solved in order to evaluate the performance of the implemented method when compared with other stochastic methods available in the Matlab environment. The results confirm that our implementation is a competitive alternative both in terms of numerical results and performance. Finally, a case study based on a parameter estimation problem of a biological system shows that the EM implementation could be applied with promising results in the control optimization area.
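As an illustration of the attraction-repulsion mechanism (a Python sketch, not the authors' Matlab code), the charge formula follows the usual Electromagnetism-like scheme; the step-size scaling and the omission of the local-search phase are assumptions of this sketch.

import numpy as np

def em_optimize(f, lb, ub, m=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    n = lb.size
    x = rng.uniform(lb, ub, size=(m, n))            # initial population in the box
    for _ in range(iters):
        fx = np.apply_along_axis(f, 1, x)
        best = np.argmin(fx)
        denom = np.sum(fx - fx[best]) + 1e-12
        q = np.exp(-n * (fx - fx[best]) / denom)    # point "charges": better points carry more charge
        force = np.zeros_like(x)
        for i in range(m):
            for j in range(m):
                if i == j:
                    continue
                d = x[j] - x[i]
                fij = q[i] * q[j] / (d @ d + 1e-12)
                # better points attract, worse points repel
                force[i] += fij * d if fx[j] < fx[i] else -fij * d
        for i in range(m):
            if i == best:
                continue                            # keep the current best point unchanged
            fnorm = np.linalg.norm(force[i]) + 1e-12
            step = rng.uniform(size=n)
            # 0.1*(ub-lb) step scaling is an assumption of this sketch
            x[i] = np.clip(x[i] + step * force[i] / fnorm * 0.1 * (ub - lb), lb, ub)
    fx = np.apply_along_axis(f, 1, x)
    return x[np.argmin(fx)], fx.min()

# example: minimize the sphere function on [-5, 5]^2
xbest, fbest = em_optimize(lambda v: float(v @ v), [-5, -5], [5, 5])
print(xbest, fbest)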
Abstract:
"Series: Solid mechanics and its applications, vol. 226"
Abstract:
The implicit projection algorithm of isotropic plasticity is extended to an objective anisotropic, elastic, perfectly plastic model. The recursion formula developed to project the trial stress onto the yield surface is applicable to any nonlinear elastic law and any plastic yield function. A curvilinear transverse isotropic model based on a quadratic elastic potential and on Hill's quadratic yield criterion is then developed and implemented in a computer program with bone mechanics applications in view. The paper concludes with a numerical study of a schematic bone-prosthesis system to illustrate the potential of the model.
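For orientation, a Python sketch of the isotropic (von Mises) special case of such a return-mapping step, with placeholder material constants; the paper's anisotropic Hill criterion and objective formulation are not reproduced here.

import numpy as np

def radial_return(sigma_n, deps, E=10e3, nu=0.3, sigma_y=100.0):
    # one implicit projection step for linear elastic, perfectly plastic von Mises material
    mu = E / (2 * (1 + nu))
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))
    I = np.eye(3)
    sigma_tr = sigma_n + lam * np.trace(deps) * I + 2 * mu * deps   # elastic trial stress
    s_tr = sigma_tr - np.trace(sigma_tr) / 3 * I                    # deviatoric part
    norm_s = np.linalg.norm(s_tr)
    f_tr = norm_s - np.sqrt(2.0 / 3.0) * sigma_y                    # yield function at the trial state
    if f_tr <= 0.0:
        return sigma_tr                                             # elastic step, no projection
    n = s_tr / norm_s                                               # return direction
    dgamma = f_tr / (2 * mu)                                        # plastic multiplier (perfect plasticity)
    return sigma_tr - 2 * mu * dgamma * n                           # stress projected onto the yield surface

print(radial_return(np.zeros((3, 3)), np.diag([0.02, -0.01, -0.01])))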
Abstract:
PECUBE is a three-dimensional thermal-kinematic code capable of solving the heat production-diffusion-advection equation under a temporally varying surface boundary condition. It was initially developed to assess the effects of time-varying surface topography (relief) on low-temperature thermochronological datasets. Thermochronometric ages are predicted by tracking the time-temperature histories of rock particles ending up at the surface and by combining these with various age-prediction models. In the decade since its inception, the PECUBE code has been under continuous development as its use became wider and addressed different tectonic-geomorphic problems. This paper describes several major recent improvements in the code, including its integration with an inverse-modeling package based on the Neighborhood Algorithm, the incorporation of fault-controlled kinematics, several different ways to address topographic and drainage change through time, the ability to predict subsurface (tunnel or borehole) data, the prediction of detrital thermochronology data and a method to compare these with observations, and the coupling with landscape-evolution (or surface-process) models. Each new development is described together with one or several applications, so that the reader and potential user can clearly assess and make use of the capabilities of PECUBE. We end by describing some developments that are currently underway or should take place in the foreseeable future.
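A minimal sketch of the governing balance in one dimension (an explicit finite-difference toy, not the PECUBE implementation; all parameter values are illustrative): heat diffusion plus advection by exhumation plus radiogenic heat production.

import numpy as np

def geotherm_1d(depth_km=30.0, nz=301, kappa=1e-6, vel_mm_yr=0.5,
                heat_prod=1e-12, t_my=2.0, T_surf=0.0, T_base=600.0):
    # explicit 1-D solve of dT/dt = kappa*T_zz + v*T_z + H/(rho*c), z = depth
    yr = 3.15576e7
    dz = depth_km * 1e3 / (nz - 1)
    v = vel_mm_yr * 1e-3 / yr                    # exhumation rate converted to m/s
    dt = 0.2 * dz * dz / kappa                   # stable explicit diffusion time step
    nsteps = int(t_my * 1e6 * yr / dt)
    T = np.linspace(T_surf, T_base, nz)          # initial linear geotherm
    for _ in range(nsteps):
        Tn = T.copy()
        diff = kappa * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2]) / dz**2
        adv = v * (Tn[2:] - Tn[1:-1]) / dz       # upwind: exhumation carries deeper, hotter rock upward
        T[1:-1] = Tn[1:-1] + dt * (diff + adv + heat_prod)
        T[0], T[-1] = T_surf, T_base             # fixed surface and basal temperatures
    return T

print(geotherm_1d()[::30])                       # temperature profile, one value every 3 km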
Abstract:
Given the very large amount of data obtained every day through population surveys, much new research could reuse this information instead of collecting new samples. Unfortunately, relevant data are often scattered across different files obtained through different sampling designs. Data fusion is a set of methods used to combine information from different sources into a single dataset. In this article, we are interested in a specific problem: the fusion of two data files, one of which is quite small. We propose a model-based procedure combining a logistic regression with an Expectation-Maximization algorithm. Results show that despite the lack of data, this procedure can perform better than standard matching procedures.
Abstract:
Gel electrophoresis allows one to separate knotted DNA (nicked circular) of equal length according to the knot type. At low electric fields, complex knots, being more compact, drift faster than simpler knots. Recent experiments have shown that the drift velocity dependence on the knot type is inverted when changing from low to high electric fields. We present a computer simulation on a lattice of a closed, knotted, charged DNA chain drifting in an external electric field in a topologically restricted medium. Using a Monte Carlo algorithm, the dependence of the electrophoretic migration of the DNA molecules on the knot type and on the electric field intensity is investigated. The results are in qualitative and quantitative agreement with electrophoretic experiments done under conditions of low and high electric fields.
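The field-biased Metropolis step at the core of such a simulation can be illustrated with a toy single-bead version in Python; the actual model evolves a closed, knotted chain with excluded volume and topological constraints, which this sketch omits.

import numpy as np

def drift_velocity(qE_over_kT=0.2, steps=200_000, seed=1):
    # single charged bead on a 1-D lattice; field energy change for a move dx is -qE*dx (in units of kT)
    rng = np.random.default_rng(seed)
    x = 0
    for dx in rng.choice([-1, 1], size=steps):       # attempted +/-1 lattice moves
        dE = -qE_over_kT * dx
        if dE <= 0 or rng.random() < np.exp(-dE):    # Metropolis acceptance criterion
            x += dx
    return x / steps                                  # net displacement per attempted MC step

for field in (0.0, 0.1, 0.3):
    print(field, drift_velocity(qE_over_kT=field))    # drift grows with the applied field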
Abstract:
Considering that information from soil reflectance spectra is underutilized in soil classification, this paper aimed to evaluate the relationships between soil physical and chemical properties and their spectra, to identify spectral patterns for soil classes, and to evaluate the use of numerical classification of profiles combined with spectral data for soil classification. We studied 20 soil profiles from the municipality of Piracicaba, State of São Paulo, Brazil, which were morphologically described and classified up to the 3rd category level of the Brazilian Soil Classification System (SiBCS). Subsequently, soil samples were collected from pedogenetic horizons and subjected to particle size and chemical analyses. Their Vis-NIR spectra were measured, followed by principal component analysis. Pearson's linear correlation coefficients were determined between the first four principal components and the following soil properties: pH, organic matter, P, K, Ca, Mg, Al, CEC, base saturation, and Al saturation. We also interpreted the first three principal components and their relationships with the soil classes defined by SiBCS. In addition, numerical classification of the profiles based on the OSACA algorithm was performed using the spectral data. We determined the Normalized Mutual Information (NMI) and the Uncertainty Coefficient (U); these coefficients represent the similarity between the numerical classification and the soil classes from SiBCS. Pearson's correlation coefficients were significant for the principal components when compared to sand, clay, Al content and soil color. Visual analysis of the principal component scores showed differences in the spectral behavior of the soil classes, mainly between Argissolos and the other soils. The NMI and U similarity coefficients showed values of 0.74 and 0.64, respectively, suggesting good similarity between the numerical and SiBCS classes. For example, the numerical classification correctly distinguished Argissolos from Latossolos and Nitossolos. However, this mathematical technique was not able to distinguish Latossolos from Nitossolos Vermelho férricos, although Cambissolos were well differentiated from the other soil classes. The numerical technique proved to be effective and applicable to the soil classification process.
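The two similarity coefficients can be computed from any pair of label assignments; a short Python sketch with placeholder labels, assuming scikit-learn's mutual-information utilities.

from collections import Counter
import numpy as np
from sklearn.metrics import normalized_mutual_info_score, mutual_info_score

def uncertainty_coefficient(true_labels, pred_labels):
    # Theil's U = I(true; pred) / H(true), both measured in nats
    I = mutual_info_score(true_labels, pred_labels)
    counts = np.array(list(Counter(true_labels).values()), float)
    p = counts / counts.sum()
    return I / -(p * np.log(p)).sum()

# illustrative labels only, not the study's data
sibcs = ["Argissolo", "Argissolo", "Latossolo", "Nitossolo", "Cambissolo", "Latossolo"]
numeric = [0, 0, 1, 1, 2, 1]
print(normalized_mutual_info_score(sibcs, numeric), uncertainty_coefficient(sibcs, numeric))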
Abstract:
Regulatory gene networks contain generic modules, like those involving feedback loops, which are essential for the regulation of many biological functions (Guido et al. in Nature 439:856-860, 2006). We consider a class of self-regulated genes which are the building blocks of many regulatory gene networks, and study the steady-state distribution of the associated Gillespie algorithm by providing efficient numerical algorithms. We also study a regulatory gene network of interest in gene therapy, using mean-field models with time delays. Convergence of the related time-nonhomogeneous Markov chain is established for a class of linear catalytic networks with feedback loops.
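A minimal Gillespie (stochastic simulation algorithm) sketch in Python for a single self-repressed gene, with illustrative rate constants rather than those of the networks studied in the paper.

import numpy as np

def gillespie_self_regulated(k=10.0, K=20.0, hill=2.0, gamma=0.1, t_end=500.0, seed=0):
    # protein P is produced at rate k/(1+(P/K)^hill) (negative feedback) and degraded at rate gamma*P
    rng = np.random.default_rng(seed)
    t, P = 0.0, 0
    times, counts = [0.0], [0]
    while t < t_end:
        a1 = k / (1.0 + (P / K) ** hill)      # propensity: synthesis
        a2 = gamma * P                         # propensity: degradation
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)         # waiting time to the next reaction
        P += 1 if rng.random() < a1 / a0 else -1
        times.append(t); counts.append(P)
    return np.array(times), np.array(counts)

t, P = gillespie_self_regulated()
print(np.average(P[:-1], weights=np.diff(t)))  # dwell-time-weighted mean copy number (steady-state estimate)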
Abstract:
BACKGROUND: Surveillance of multiple congenital anomalies is considered to be more sensitive for the detection of new teratogens than surveillance of all or isolated congenital anomalies. The current literature proposes the manual review of all cases for classification into isolated or multiple congenital anomalies. METHODS: Multiple anomalies were defined as two or more major congenital anomalies, excluding sequences and syndromes. A computer algorithm for classification of major congenital anomaly cases in the EUROCAT database according to International Classification of Diseases, 10th revision (ICD-10) codes was programmed, further developed, and implemented for one year's data (2004) from 25 registries. The cases classified as potential multiple congenital anomalies were manually reviewed by three geneticists to reach a final agreement on classification as "multiple congenital anomaly" cases. RESULTS: A total of 17,733 cases with major congenital anomalies were reported, giving an overall prevalence of major congenital anomalies of 2.17%. The computer algorithm classified 10.5% of all cases as potential multiple congenital anomalies. After manual review of these cases, 7% were agreed to have true multiple congenital anomalies. Furthermore, the algorithm classified 15% of all cases as having chromosomal anomalies, 2% as monogenic syndromes, and 76% as isolated congenital anomalies. The proportion of multiple anomalies varies by congenital anomaly subgroup, reaching 35% in cases with bilateral renal agenesis. CONCLUSIONS: The implementation of the EUROCAT computer algorithm is a feasible, efficient, and transparent way to improve classification of congenital anomalies for surveillance and research.
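The flow of such a rule-based classifier might look like the following Python sketch; the ICD-10 code prefixes used here are illustrative placeholders, not the EUROCAT definitions, and the monogenic-syndrome step is omitted.

def classify_case(icd10_codes,
                  chromosomal=("Q90", "Q91", "Q92", "Q93", "Q96", "Q97", "Q98", "Q99"),
                  excluded=("Q65", "Q69", "Q82")):
    # placeholder prefixes: 'chromosomal' marks chromosomal anomalies, 'excluded' marks minor anomalies
    codes = [c.upper() for c in icd10_codes]
    if any(c.startswith(chromosomal) for c in codes):
        return "chromosomal"
    major = [c for c in codes if c.startswith("Q") and not c.startswith(excluded)]
    if len(major) >= 2:
        return "potential multiple congenital anomaly"   # flagged for manual review by geneticists
    return "isolated" if major else "no major anomaly"

print(classify_case(["Q21.0", "Q79.2"]))   # two codes from the Q chapter -> flagged for review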
Abstract:
In this work, a fuzzy linear system is used to solve the Leontief input-output model with fuzzy entries. For solving this model, we assume that the consumption matrix for the different sectors of the economy and the demand are known. These assumptions depend heavily on the information obtained from the industries; hence, uncertainties are involved in this information. The aim of this work is to model these uncertainties and to address them by fuzzy entries such as fuzzy numbers and LR-type fuzzy numbers (triangular and trapezoidal). A fuzzy linear system is developed using the fuzzy data and is solved with the Gauss-Seidel algorithm. Numerical examples show the efficiency of this algorithm. The famous example from Prof. Leontief, in which he solved the production levels for the U.S. economy in 1958, is also further analyzed.
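The crisp core of the solver is an ordinary Gauss-Seidel sweep on (I - A)x = d; roughly speaking, the fuzzy version applies similar sweeps to the parameters (centres and spreads) of the LR-type numbers. A Python sketch with an illustrative consumption matrix:

import numpy as np

def gauss_seidel(M, d, x0=None, tol=1e-10, max_iter=500):
    # classic Gauss-Seidel iteration for M x = d
    n = len(d)
    x = np.zeros(n) if x0 is None else np.array(x0, float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = M[i] @ x - M[i, i] * x[i]        # sum over the other unknowns, using updated values
            x[i] = (d[i] - s) / M[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

# Leontief model (I - A) x = d with an illustrative consumption matrix A and final demand d
A = np.array([[0.2, 0.3], [0.4, 0.1]])
d = np.array([50.0, 30.0])
print(gauss_seidel(np.eye(2) - A, d))            # production levels needed to meet final demand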
Abstract:
Second-rank tensor interactions, such as quadrupolar interactions between the spin-1 deuterium nuclei and the electric field gradients created by chemical bonds, are affected by rapid random molecular motions that modulate the orientation of the molecule with respect to the external magnetic field. In biological and model membrane systems, where a distribution of dynamically averaged anisotropies (quadrupolar splittings, chemical shift anisotropies, etc.) is present and where, in addition, various parts of the sample may undergo a partial magnetic alignment, the numerical analysis of the resulting Nuclear Magnetic Resonance (NMR) spectra is a mathematically ill-posed problem. However, numerical methods (de-Pakeing, Tikhonov regularization) exist that allow for a simultaneous determination of both the anisotropy and orientational distributions. An additional complication arises when relaxation is taken into account. This work presents a method of obtaining the orientation dependence of the relaxation rates that can be used for the analysis of the molecular motions on a broad range of time scales. An arbitrary set of exponential decay rates is described by a three-term truncated Legendre polynomial expansion in the orientation dependence, as appropriate for a second-rank tensor interaction, and a linear approximation to the individual decay rates is made. Thus a severe numerical instability caused by the presence of noise in the experimental data is avoided. At the same time, enough flexibility in the inversion algorithm is retained to achieve a meaningful mapping from raw experimental data to a set of intermediate, model-free
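The three-term even-Legendre form can be fitted as an ordinary least-squares problem; a Python sketch with synthetic data (the fitted coefficients a0, a2, a4 are the kind of model-free intermediate quantities the abstract refers to).

import numpy as np
from numpy.polynomial import legendre as L

def fit_orientation_dependence(theta, rates):
    # least-squares fit of R(theta) = a0*P0 + a2*P2(cos theta) + a4*P4(cos theta)
    x = np.cos(theta)
    design = L.legvander(x, 4)[:, [0, 2, 4]]     # keep only the even Legendre terms P0, P2, P4
    coeffs, *_ = np.linalg.lstsq(design, rates, rcond=None)
    return coeffs                                 # (a0, a2, a4)

# synthetic example: true coefficients (1.0, 0.4, -0.1) plus a little noise
theta = np.linspace(0.0, np.pi / 2, 50)
x = np.cos(theta)
true = 1.0 + 0.4 * L.legval(x, [0, 0, 1]) - 0.1 * L.legval(x, [0, 0, 0, 0, 1])
rng = np.random.default_rng(0)
print(fit_orientation_dependence(theta, true + 0.01 * rng.standard_normal(theta.size)))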
Abstract:
Decision trees are very powerful tools for classification in data mining tasks that involve different types of attributes. When handling numeric data sets, the attributes are usually first converted to categorical types and then classified using information gain concepts. Information gain is a very popular and useful concept that tells whether splitting on a given attribute yields any benefit as far as information content is concerned. However, this process is computationally intensive for large data sets, and popular decision tree algorithms such as ID3 cannot handle numeric data sets directly. This paper proposes statistical variance as an alternative to information gain, together with the statistical mean as the split point, for completely numerical data sets. The new algorithm has been shown to be competitive with its information-gain counterpart C4.5 and with many existing decision tree algorithms on the standard UCI benchmark datasets, using the ANOVA test. The specific advantages of the proposed algorithm are that it avoids the computational overhead of information gain computation for large data sets with many attributes, and that it avoids the conversion of huge numeric data sets to categorical data, which is also a time-consuming task. In summary, huge numeric data sets can be submitted directly to this algorithm without any attribute mappings or information gain computations. The approach also blends the two closely related fields of statistics and data mining.
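One plausible reading of the proposed criterion, sketched in Python (this is an illustration, not the paper's exact definition): split each numeric attribute at its mean and score the split by the reduction in variance of a numerically encoded class.

import numpy as np

def best_variance_split(X, y):
    # returns the attribute whose mean-split gives the largest variance reduction of the encoded class
    y = np.asarray(y, float)
    base = y.var()
    best = (None, -np.inf)
    for j in range(X.shape[1]):
        split = X[:, j].mean()                   # "statistical mean" as the split point
        left, right = y[X[:, j] <= split], y[X[:, j] > split]
        if len(left) == 0 or len(right) == 0:
            continue
        weighted = (len(left) * left.var() + len(right) * right.var()) / len(y)
        gain = base - weighted                   # "statistical variance" reduction
        if gain > best[1]:
            best = (j, gain)
    return best                                   # (attribute index, variance reduction)

X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 15.0], [4.0, 30.0]])
print(best_variance_split(X, [0, 0, 1, 1]))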
Abstract:
The discontinuities in the solutions of systems of conservation laws are widely considered one of the main difficulties in numerical simulation. A numerical method is proposed for solving these partial differential equations with discontinuities in the solution. The method is able to track these sharp discontinuities or interfaces while still fully maintaining the conservation property. The motion of the front is obtained by solving a Riemann problem based on the state values on both of its sides, which are reconstructed using a weighted essentially non-oscillatory (WENO) scheme. The propagation of the front is coupled with the evaluation of "dynamic" numerical fluxes. Some numerical tests in 1D and preliminary results in 2D are presented.
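The Riemann step that supplies the front speed can be illustrated with the scalar Burgers equation, a toy stand-in for the systems the method targets; real systems require a full (or approximate) Riemann solver.

def burgers_front_speed(uL, uR):
    # exact Riemann solution structure for u_t + (u^2/2)_x = 0 given left/right states uL, uR
    if uL > uR:                                   # compression: a shock forms
        return ("shock", 0.5 * (uL + uR))         # Rankine-Hugoniot shock speed
    return ("rarefaction", (uL, uR))              # fan spreading between characteristic speeds uL and uR

print(burgers_front_speed(2.0, 0.0))              # -> ('shock', 1.0)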
Abstract:
A simple numerical model which calculates the kinetics of crystallization involving randomly distributed nucleation and isotropic growth is presented. The model can be applied to different thermal histories, and no restrictions are imposed on the time and temperature dependences of the nucleation and growth rates. We also develop an algorithm which evaluates the corresponding emerging grain-size distribution. The algorithm is easy to implement and particularly flexible, making it possible to simulate several experimental conditions. Its simplicity and minimal computer requirements allow high accuracy for two- and three-dimensional growth simulations. The algorithm is applied to explore the development of grain morphology during isothermal treatments for several nucleation regimes. In particular, thermal nucleation, preexisting nuclei, and the combination of both nucleation mechanisms are analyzed. For the first two cases, the universal grain-size distribution is obtained. The high accuracy of the model is established by comparison with analytical predictions. Finally, the validity of the Kolmogorov-Johnson-Mehl-Avrami model is verified for all the cases studied.
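A minimal Python sketch of the pre-existing-nuclei case in two dimensions, comparing a Monte Carlo estimate of the crystallized fraction with the corresponding Kolmogorov-Johnson-Mehl-Avrami prediction (parameters are illustrative, and the grain-size bookkeeping of the full algorithm is omitted).

import numpy as np

def transformed_fraction(n_nuclei=200, growth_rate=1.0, t=0.02, n_test=5000, seed=0):
    # nuclei placed at t=0 on the periodic unit square; each grows as a disk of radius G*t
    rng = np.random.default_rng(seed)
    nuclei = rng.random((n_nuclei, 2))             # random nucleation sites
    pts = rng.random((n_test, 2))                   # random test points for the coverage estimate
    r = growth_rate * t                             # grain radius at time t
    d = np.abs(pts[:, None, :] - nuclei[None, :, :])
    d = np.minimum(d, 1.0 - d)                      # minimum-image (periodic) distances
    covered = (np.sum(d * d, axis=2) <= r * r).any(axis=1)
    mc = covered.mean()                             # Monte Carlo crystallized fraction
    kjma = 1.0 - np.exp(-n_nuclei * np.pi * r * r)  # KJMA prediction for pre-existing nuclei in 2-D
    return mc, kjma

print(transformed_fraction())                       # the two values should agree closely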