83 results for Quasi-analytical algorithms
Abstract:
The van Genuchten expressions for the unsaturated soil hydraulic properties, first published in 1980, are used frequently in various vadose zone flow and transport applications assuming a specific relationship between the m and n soil hydraulic parameters. By comparison, probably because of the complexity of the hydraulic conductivity equations, the more general solutions with independent m and n values are rarely used. We expressed the general van Genuchten-Mualem and van Genuchten-Burdine hydraulic conductivity equations in terms of hypergeometric functions, which can be approximated by infinite series that converge rapidly for relatively large values of the van Genuchten-Mualem parameter n but only very slowly when n is close to one. Alternative equations were derived that provide very close approximations of the analytical results. The newly proposed equations allow the use of independent values of the parameters m and n in the soil water retention model of van Genuchten for subsequent prediction of the van Genuchten-Mualem and van Genuchten-Burdine hydraulic conductivity models, thus providing more flexibility in fitting experimental pressure-head-dependent water content, θ(h), and hydraulic conductivity, K(h), or K(θ) data.
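For orientation, below is a minimal Python sketch of the classical constrained (m = 1 - 1/n) van Genuchten-Mualem model that this work generalizes; the hypergeometric-series forms for independent m and n are the paper's contribution and are not reproduced here. All parameter values are illustrative.

```python
import numpy as np

def vg_theta(h, theta_r, theta_s, alpha, n, m=None):
    """van Genuchten retention curve; m may be set independently of n."""
    if m is None:
        m = 1.0 - 1.0 / n                               # classical constraint
    Se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)       # effective saturation
    return theta_r + (theta_s - theta_r) * Se

def vg_mualem_K(h, Ks, alpha, n, l=0.5):
    """Closed-form Mualem conductivity, valid only under m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    Se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)
    return Ks * Se**l * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2
```

Under the constraint, the Mualem integral has the familiar closed form used in vg_mualem_K; dropping the constraint is precisely what requires the hypergeometric machinery described in the abstract.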
Abstract:
When building genetic maps, it is necessary to choose from several marker ordering algorithms and criteria, and the choice is not always simple. In this study, we evaluate the efficiency of the algorithms try (TRY), seriation (SER), rapid chain delineation (RCD), recombination counting and ordering (RECORD) and unidirectional growth (UG), as well as the criteria PARF (product of adjacent recombination fractions), SARF (sum of adjacent recombination fractions), SALOD (sum of adjacent LOD scores) and LHMC (likelihood through hidden Markov chains), used with the RIPPLE algorithm for error verification, in the construction of genetic linkage maps. A linkage map of a hypothetical diploid and monoecious plant species was simulated containing one linkage group and 21 markers with a fixed distance of 3 cM between them. In all, 700 F2 populations were randomly simulated with 100 and 400 individuals and different combinations of dominant and co-dominant markers, as well as 10 and 20% of missing data. The simulations showed that, in the presence of co-dominant markers only, any combination of algorithm and criteria may be used, even for a reduced population size. In the case of a smaller proportion of dominant markers, any of the algorithms and criteria (except SALOD) investigated may be used. In the presence of high proportions of dominant markers and smaller samples (around 100), the probability of linkage in repulsion between markers increases and, in this case, use of the algorithms TRY and SER associated with RIPPLE and the criterion LHMC would provide better results. Heredity (2009) 103, 494-502; doi:10.1038/hdy.2009.96; published online 29 July 2009
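As a concrete illustration of two of these criteria, the sketch below scores a candidate marker order by SARF and PARF from a matrix of pairwise recombination fractions; the matrix and the brute-force enumeration are hypothetical, and a real search would use one of the algorithms named above.

```python
import numpy as np
from itertools import permutations

def sarf(order, rf):
    """Sum of adjacent recombination fractions (lower = tighter order)."""
    return sum(rf[a, b] for a, b in zip(order, order[1:]))

def parf(order, rf):
    """Product of adjacent recombination fractions (lower = tighter order)."""
    return float(np.prod([rf[a, b] for a, b in zip(order, order[1:])]))

# Hypothetical symmetric matrix of pairwise recombination fractions
rf = np.array([[0.00, 0.03, 0.06, 0.10],
               [0.03, 0.00, 0.03, 0.06],
               [0.06, 0.03, 0.00, 0.03],
               [0.10, 0.06, 0.03, 0.00]])

best = min(permutations(range(4)), key=lambda o: sarf(o, rf))
print(best, sarf(best, rf))   # e.g. (0, 1, 2, 3) with SARF = 0.09
```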
Abstract:
Pancuronium bromide is used with general anesthesia in surgery for muscle relaxation and as an aid to intubation. A high-performance liquid chromatographic method was fully validated for the quantitative determination of pancuronium bromide in pharmaceutical injectable solutions. The analytical method was performed on an amino column (Luna 150 × 4.6 mm, 5 µm). The mobile phase was composed of acetonitrile:water containing 50 mmol L⁻¹ of 1-octanesulfonic acid sodium salt (20:80, v/v), with a flow rate of 1.0 mL min⁻¹ and ultraviolet (UV) detection at 210 nm. The proposed analytical method was compared with that described in the British Pharmacopoeia.
Abstract:
Vecuronium bromide is a neuromuscular blocking agent used in anesthesia to induce skeletal muscle relaxation. HPLC and CZE analytical methods were developed and validated for the quantitative determination of vecuronium bromide. The HPLC method was performed on an amino column (Luna 150 × 4.6 mm, 5 µm) using UV detection at 205 nm. The mobile phase was composed of acetonitrile:water containing 25.0 mmol L⁻¹ of sodium phosphate monobasic (50:50, v/v), pH 4.6, with a flow rate of 1.0 mL min⁻¹. The CZE method was performed on an uncoated fused-silica capillary (40.0 cm total length, 31.5 cm effective length and 50 µm i.d.) using indirect UV detection at 230 nm. The electrolyte comprised 1.0 mmol L⁻¹ of quinine sulfate dihydrate at pH 3.3 and 8.0% acetonitrile. The results were used to compare both techniques. No significant differences were observed (p > 0.05).
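The abstract does not state which significance test produced the p > 0.05 result, so the sketch below simply illustrates one plausible way to compare the two techniques: an unpaired two-sample t-test on hypothetical replicate assay values.

```python
from scipy import stats

# Hypothetical replicate determinations (% of label claim) by each method
hplc = [99.8, 100.2, 99.5, 100.1, 99.9, 100.3]
cze  = [99.6, 100.4, 99.9, 100.0, 99.7, 100.2]

t, p = stats.ttest_ind(hplc, cze)     # unpaired two-sample t-test
print(f"t = {t:.3f}, p = {p:.3f}")    # p > 0.05 -> no significant difference
```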
Abstract:
In the protein folding problem, solvent-mediated forces are commonly represented by intra-chain pairwise contact energy. Although this approximation has proven to be useful in several circumstances, it is limited in some other aspects of the problem. Here we show that it is possible to construct two models of the chain-solvent system, one with implicit and the other with explicit solvent, such that both reproduce the same thermodynamic results. First, lattice models treated by analytical methods were used to show that the implicit and explicit representations of solvent effects can be energetically equivalent only if local solvent properties are time and spatially invariant. Then, applying the same reasoning used for the lattice models, two mutually consistent Monte Carlo off-lattice models for implicit and explicit solvent are constructed, where now, in the latter, the solvent properties are allowed to fluctuate. It is shown that the chain configurational evolution as well as the globule equilibrium conformation are significantly distinct for implicit and explicit solvent systems. Indeed, strongly contrasting with the implicit solvent version, the explicit solvent model predicts: (i) a malleable globule, in agreement with the estimated large protein-volume fluctuations; (ii) thermal conformational stability, resembling the conformational heat resistance of globular proteins, in which radii of gyration are practically insensitive to thermal effects over a relatively wide range of temperatures; and (iii) smaller radii of gyration at higher temperatures, indicating that the chain conformational entropy in the unfolded state is significantly smaller than that estimated from random coil configurations. Finally, we comment on the meaning of these results with respect to the understanding of the folding process. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
Introduction - Baccharis dracunculifolia, which has great potential for the development of new phytotherapeutic medicines, is the most important botanical source of southeastern Brazilian propolis, known as green propolis on account of its color. Objective - To develop a reliable reverse-phase HPLC method for the analysis of phenolic compounds in both B. dracunculifolia raw material and its hydroalcoholic extracts. Methodology - The method utilised a C18 CLC-ODS(M) (4.6 × 250 mm) column with nonlinear gradient elution and UV detection at 280 nm. A procedure for the extraction of phenolic compounds using 90% aqueous ethanol, with the addition of veratraldehyde as the internal standard, was developed, allowing the quantification of 10 compounds: caffeic acid, coumaric acid, ferulic acid, cinnamic acid, aromadendrin-4′-methyl ether, isosakuranetin, drupanin, artepillin C, baccharin and 2,2-dimethyl-6-carboxyethenyl-2H-1-benzopyran acid. Results - The developed method gave a good detection response, with linearity in the range 20.83-800 µg/mL and recovery in the range 81.25-93.20%, allowing the quantification of the analysed standards. Conclusion - The method presented good results for the following parameters: selectivity, linearity, accuracy, precision, robustness, as well as limit of detection and limit of quantitation. Therefore, this method can be considered an analytical tool for the quality control of B. dracunculifolia raw material and its products in both cosmetic and pharmaceutical companies. Copyright (C) 2008 John Wiley & Sons, Ltd.
Abstract:
This paper proposes the use of the q-Gaussian mutation with self-adaptation of the shape of the mutation distribution in evolutionary algorithms. The shape of the q-Gaussian mutation distribution is controlled by a real parameter q. In the proposed method, the real parameter q of the q-Gaussian mutation is encoded in the chromosome of individuals and hence is allowed to evolve during the evolutionary process. In order to test the new mutation operator, evolution strategy and evolutionary programming algorithms with self-adapted q-Gaussian mutation generated from anisotropic and isotropic distributions are presented. The theoretical analysis of the q-Gaussian mutation is also provided. In the experimental study, the q-Gaussian mutation is compared to Gaussian and Cauchy mutations in the optimization of a set of test functions. Experimental results show the efficiency of the proposed method of self-adapting the mutation distribution in evolutionary algorithms.
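As a rough illustration of the mechanism (not the paper's exact scheme), the sketch below draws q-Gaussian deviates via the generalized Box-Muller method of Thistleton et al. (2007) and lets each individual carry and perturb its own q; the additive update rule for q, the step size tau, and the clipping bounds are all illustrative assumptions.

```python
import numpy as np

def q_log(x, q):
    """Tsallis q-logarithm; reduces to ln(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def q_gaussian(q, size, rng):
    """q-Gaussian deviates via the generalized Box-Muller method
    (Thistleton et al., 2007); requires q < 3."""
    qp = (1.0 + q) / (3.0 - q)              # remapped q used inside the q-log
    u1, u2 = rng.random(size), rng.random(size)
    return np.sqrt(-2.0 * q_log(u1, qp)) * np.cos(2.0 * np.pi * u2)

def mutate(x, q, rng, tau=0.1):
    """Self-adaptation sketch: q lives in the chromosome and is perturbed
    before being used to mutate the solution vector x (scheme assumed)."""
    q_new = float(np.clip(q + tau * rng.standard_normal(), 1.0, 2.9))
    return x + q_gaussian(q_new, x.shape, rng), q_new

rng = np.random.default_rng(42)
x, q = np.zeros(5), 1.5          # q = 1 recovers Gaussian; q -> 3 is heavy-tailed
x, q = mutate(x, q, rng)
```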
Abstract:
Objective: In this study, we assessed how often patients manifesting a myocardial infarction (MI) would not be considered candidates for intensive lipid-lowering therapy based on the current guidelines. Methods: In 355 consecutive patients manifesting ST-elevation MI (STEMI), admission plasma C-reactive protein (CRP) was measured, and the Framingham risk score (FRS), PROCAM risk score, Reynolds risk score, ASSIGN risk score, QRISK, and SCORE algorithms were applied. Cardiac computed tomography and carotid ultrasound were performed to assess the coronary artery calcium score (CAC), carotid intima-media thickness (cIMT) and the presence of carotid plaques. Results: Less than 50% of STEMI patients would be identified as high risk before the event by any of these algorithms. With the exception of FRS (9%), all other algorithms would assign low risk to about half of the enrolled patients. Plasma CRP was <1.0 mg/L in 70% and >2 mg/L in 14% of the patients. The average cIMT was 0.8 ± 0.2 mm, and it was ≥1.0 mm in only 24% of patients. Carotid plaques were found in 74% of patients. CAC >100 was found in 66% of patients. By adding CAC >100 plus the presence of carotid plaque as criteria, a high-risk condition would be identified in 100% of the patients using any of the above-mentioned algorithms. Conclusion: More than half of patients manifesting STEMI would not be considered candidates for intensive preventive therapy by the current clinical algorithms. The addition of anatomical parameters such as CAC and the presence of carotid plaques can substantially reduce this underestimation of CVD risk. (C) 2010 Elsevier Ireland Ltd. All rights reserved.
Abstract:
We compared the lignin contents of tropical forages obtained by different analytical methods and evaluated their correlations with parameters related to the degradation of neutral detergent fiber (NDF). The lignin content was evaluated by five methods: cellulose solubilization in sulfuric acid [Lignin (sa)], oxidation with potassium permanganate [Lignin (pm)], the Klason lignin method (KL), solubilization in acetyl bromide from acid detergent fiber (ABLadf) and solubilization in acetyl bromide from the cell wall (ABLcw). Samples from ten grasses and ten legumes were used. The lignin content values obtained by gravimetric methods were also corrected for protein contamination, and the corrected values were referred to as Lignin (sa)p, Lignin (pm)p and KLp. The indigestible fraction of NDF (iNDF), the discrete lag (LAG) and the fractional rate of degradation (kd) of NDF were estimated using an in vitro assay. Correcting for protein resulted in reductions (P < 0.05) in the lignin contents as measured by the Lignin (sa), Lignin (pm) and, especially, the KL methods. There was an interaction (P < 0.05) between analytical method and forage group for lignin content. In general, the KLp method provided the highest (P < 0.05) lignin contents. The estimates of lignin content obtained by the Lignin (sa)p, Lignin (pm)p and KLp methods were associated (P < 0.05) with all of the NDF degradation parameters; the strongest correlation coefficients were obtained with Lignin (pm)p and KLp. The lignin content estimated by the ABLcw method did not correlate (P > 0.05) with any parameter of NDF degradation. There was a correlation (P < 0.05) between the lignin content estimated by the ABLadf method and the iNDF content; nonetheless, this correlation was weaker than those found with the gravimetric methods. From these results, we conclude that the gravimetric methods produce residues that are contaminated by nitrogenous compounds. Adjustment for these contaminants is suggested, particularly for the KL method, to express lignin content with greater accuracy. The relationships between lignin content measurements and NDF degradation parameters are better determined using the KLp and Lignin (pm)p methods. (C) 2011 Elsevier B.V. All rights reserved.
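The abstract does not give the correction formula; a common convention in forage analysis, assumed here purely for illustration, is to subtract the crude protein (Kjeldahl N × 6.25) measured in the gravimetric residue from the apparent lignin content.

```python
def protein_corrected_lignin(residue_pct, residue_n_pct, n_to_protein=6.25):
    """Assumed correction: apparent lignin minus crude protein (N x 6.25)
    found in the residue, both expressed as % of sample dry matter."""
    return residue_pct - residue_n_pct * n_to_protein

# Hypothetical: 8.0% Klason residue containing 0.20% N (as % of sample DM)
print(protein_corrected_lignin(8.0, 0.20))   # -> 6.75 (= KLp)
```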
Abstract:
The evolution of the mass of a black hole embedded in a universe filled with dark energy and cold dark matter is calculated in closed form within a test-fluid model in a Schwarzschild metric, taking into account the cosmological evolution of both fluids. The result describes exactly how accretion asymptotically switches from the matter-dominated to the Lambda-dominated regime. At early epochs, the black hole mass increases due to dark matter accretion, while at later epochs the increase in mass stops as dark energy accretion takes over. Thus, the unphysical behaviour of previous analyses is corrected in this simple exact model. (C) 2010 Elsevier B.V. All rights reserved.
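The closed-form result itself is not reproduced in the abstract; the sketch below instead numerically integrates a quasi-stationary accretion law of the Babichev form, dM/dt ∝ M²(ρ + p), under the assumptions that pressureless matter dilutes as a⁻³ in a matter-dominated background (a ∝ t^(2/3)) and that a cosmological constant has ρ + p = 0, so only matter drives the growth and the mass saturates at late times. Constants and units are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

k, rho_m0, t0 = 1.0, 1.0, 1.0             # illustrative accretion constant etc.

def dMdt(t, M):
    """dM/dt = k M^2 (rho + p): matter has p = 0 and rho ~ a^-3 ~ t^-2;
    the Lambda term has rho + p = 0 and so contributes nothing."""
    rho_m = rho_m0 * (t / t0) ** -2.0
    return k * M[0] ** 2 * rho_m

sol = solve_ivp(dMdt, (t0, 1e3), [0.01], rtol=1e-8)
print(sol.y[0][0], sol.y[0][-1])          # mass grows, then saturates
```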
Abstract:
We study the stability regions and families of periodic orbits of two planets locked in a co-orbital configuration. We consider different ratios of planetary masses and orbital eccentricities; we also assume that both planets share the same orbital plane. Initially, we perform numerical simulations over a grid of osculating initial conditions to map the regions of stable/chaotic motion and identify equilibrium solutions. These results are later analysed in more detail using a semi-analytical model. Apart from the well-known quasi-satellite orbits and the classical equilibrium Lagrangian points L4 and L5, we also find a new regime of asymmetric periodic solutions. For low eccentricities these are located at (Δλ, Δϖ) = (±60°, ∓120°), where Δλ is the difference in mean longitudes and Δϖ is the difference in longitudes of pericentre. The position of these anti-Lagrangian solutions changes with the mass ratio and the orbital eccentricities, and they are found for eccentricities as high as ~0.7. Finally, we also applied a slow mass variation to one of the planets and analysed its effect on an initially asymmetric periodic orbit. We found that the resonant solution is preserved as long as the mass variation is adiabatic, with practically no change in the equilibrium values of the angles.
Abstract:
There is increasing interest in the application of Evolutionary Algorithms (EAs) to induce classification rules. This hybrid approach can benefit areas where classical methods for rule induction have not been very successful. One example is the induction of classification rules in imbalanced domains. Imbalanced data occur when one or more classes heavily outnumber the other classes. Frequently, classical machine learning (ML) classifiers are not able to learn in the presence of imbalanced data sets, inducing classification models that always predict the most numerous classes. In this work, we propose a novel hybrid approach to deal with this problem. We create several balanced data sets containing all minority class cases and a random sample of majority class cases. These balanced data sets are fed to classical ML systems that produce rule sets. The rule sets are combined to create a pool of rules, and an EA is used to build a classifier from this pool. This hybrid approach has some advantages over undersampling, since it reduces the amount of discarded information, and some advantages over oversampling, since it avoids overfitting. The proposed approach was evaluated experimentally, and the results show an improvement in classification performance measured as the area under the receiver operating characteristic (ROC) curve.
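The balanced-resampling step described above is straightforward to sketch; the function below builds the several balanced data sets (all minority cases plus an equal-sized random sample of majority cases, a detail assumed here since the abstract does not fix the sample size), each of which would then be handed to a rule learner before the EA stage.

```python
import numpy as np

def balanced_subsets(X, y, minority_label, n_sets=5, seed=0):
    """Yield balanced (X, y) subsets: every minority case plus an
    equal-sized random sample of majority cases (size assumed)."""
    rng = np.random.default_rng(seed)
    idx_min = np.flatnonzero(y == minority_label)
    idx_maj = np.flatnonzero(y != minority_label)
    for _ in range(n_sets):
        pick = rng.choice(idx_maj, size=idx_min.size, replace=False)
        idx = np.concatenate([idx_min, pick])
        yield X[idx], y[idx]

# Each balanced subset feeds a classical rule learner; the induced rules
# are pooled, and the EA assembles the final classifier from that pool.
```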
Abstract:
J.A. Ferreira Neto, E.C. Santos Junior, U. Fra Paleo, D. Miranda Barros, and M.C.O. Moreira. 2011. Optimal subdivision of land in agrarian reform projects: an analysis using genetic algorithms. Cien. Inv. Agr. 38(2): 169-178. The objective of this manuscript is to develop a new procedure to achieve optimal land subdivision using genetic algorithms (GA). The genetic algorithm was tested in the rural settlement of Veredas, located in Minas Gerais, Brazil. The implementation was based on land aptitude and its productivity index. The sequence of tests in the study was carried out in two areas with eight different agricultural aptitude classes, including one area of 391.88 ha subdivided into 12 lots and another of 404.1763 ha subdivided into 14 lots. The effectiveness of the method was measured using the standard deviation of the productivity indices of the lots in the subdivided area. To evaluate each parameter, a sequence of 15 runs was performed to record the average fitness of the best individuals (MMI) found for each parameter variation. The best parameter combination found in testing and used to generate the new subdivision with the GA was the following: 320 generations, a population of 40 individuals, a mutation rate of 0.8, and a renewal rate of 0.3. The solution generated rather homogeneous lots in terms of productive capacity.
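A minimal sketch of how such a fitness function and parameter set might be wired together is given below; the encoding (each elementary parcel assigned to a lot) and every name are assumptions for illustration, with only the parameter values and the standard-deviation objective taken from the abstract.

```python
import numpy as np

GA_PARAMS = dict(generations=320, pop_size=40,
                 mutation_rate=0.8, renewal_rate=0.3)  # values from the study

def fitness(assignment, parcel_productivity, n_lots):
    """Lower is better: standard deviation of the lots' productivity
    indices, for a chromosome assigning each parcel to a lot."""
    lot_totals = np.bincount(assignment, weights=parcel_productivity,
                             minlength=n_lots)
    return float(lot_totals.std())

# Hypothetical toy instance: 6 parcels split between 2 lots
prod = np.array([1.0, 2.0, 3.0, 1.5, 2.5, 2.0])
print(fitness(np.array([0, 1, 0, 1, 0, 1]), prod, 2))   # -> 0.5
```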
Abstract:
Low-density lipoprotein (LDL) is known as 'bad' cholesterol. If too much LDL circulates in the blood, it can be retained in the walls of the arteries, causing atherosclerosis. In this paper, we present an alternative method to quantify LDL using the europium tetracycline (EuTc) indicator. The optical properties of the EuTc complex were investigated in aqueous solutions containing LDL. An enhancement of the europium luminescence was observed in the solutions with LDL compared to those without the lipoprotein. A method to quantify the amount of LDL in a sample, based on the enhanced EuTc luminescence, is proposed. The enhancement mechanism is also discussed. Copyright (C) 2009 John Wiley & Sons, Ltd.
Abstract:
We describe the canonical and microcanonical Monte Carlo algorithms for different systems that can be described by spin models. Sites of the lattice, chosen at random, interchange their spin values, provided they are different. The canonical ensemble is generated by performing exchanges according to the Metropolis prescription, whereas in the microcanonical ensemble exchanges are performed only as long as the total energy remains constant. A systematic finite-size analysis of intensive quantities and a comparison with results obtained from distinct ensembles are performed, and the quality of the results reveals that the present approach may be a useful tool for the study of phase transitions, especially first-order transitions. (C) 2009 Elsevier B.V. All rights reserved.
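A minimal sketch of the two exchange dynamics, written here for a one-dimensional Ising ring with conserved magnetization, is given below; the lattice, coupling, and the O(L) full-energy recomputation (kept for clarity over a local ΔE update) are illustrative simplifications, not the systems studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
L, beta, J = 100, 1.0, 1.0
s = rng.choice([-1, 1], size=L)   # Ising ring; exchanges conserve magnetization

def total_E(s):
    """Nearest-neighbour Ising energy on a ring."""
    return -J * float(np.sum(s * np.roll(s, 1)))

def exchange_step(s, microcanonical=False):
    """One attempted spin exchange between two random unlike sites."""
    i, j = rng.integers(L, size=2)
    if s[i] == s[j]:
        return                              # only unlike spins are swapped
    E0 = total_E(s)
    s[i], s[j] = s[j], s[i]
    dE = total_E(s) - E0
    if microcanonical:
        accept = (dE == 0.0)                # keep total energy constant
    else:
        accept = dE <= 0 or rng.random() < np.exp(-beta * dE)  # Metropolis
    if not accept:
        s[i], s[j] = s[j], s[i]             # revert rejected swap

for _ in range(10_000):
    exchange_step(s)                        # canonical run
```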