104 results for Anisotropic Analytical Algorithm
Abstract:
Pancuronium bromide is used with general anesthesia in surgery for muscle relaxation and as an aid to intubation. A high-performance liquid chromatographic method was fully validated for the quantitative determination of pancuronium bromide in pharmaceutical injectable solutions. The analytical method was performed on an amino column (Luna, 150 mm x 4.6 mm, 5 μm). The mobile phase was composed of acetonitrile:water containing 50 mmol L-1 of 1-octanesulfonic acid sodium salt (20:80 v/v), with a flow rate of 1.0 mL min-1 and ultraviolet (UV) detection at 210 nm. The proposed analytical method was compared with that described in the British Pharmacopoeia.
Abstract:
Vecuronium bromide is a neuromuscular blocking agent used in anesthesia to induce skeletal muscle relaxation. HPLC and CZE analytical methods were developed and validated for the quantitative determination of vecuronium bromide. The HPLC method was performed on an amino column (Luna, 150 x 4.6 mm, 5 μm) using UV detection at 205 nm. The mobile phase was composed of acetonitrile:water containing 25.0 mmol L-1 of sodium phosphate monobasic (50:50 v/v) at pH 4.6, with a flow rate of 1.0 mL min-1. The CZE method was performed on an uncoated fused-silica capillary (40.0 cm total length, 31.5 cm effective length, 50 μm i.d.) using indirect UV detection at 230 nm. The electrolyte comprised 1.0 mmol L-1 of quinine sulfate dihydrate at pH 3.3 with 8.0% acetonitrile. The results were used to compare the two techniques; no significant differences were observed (p > 0.05).
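A note on the comparison: the "no significant differences (p > 0.05)" conclusion corresponds to a paired significance test between the two methods' results. A minimal sketch, with hypothetical assay values standing in for the published data:

```python
# Paired t-test comparing two analytical methods on the same samples;
# the % assay values below are hypothetical, not the published data.
from scipy import stats

hplc = [99.8, 100.2, 99.5, 100.6, 99.9, 100.1]
cze  = [100.0, 99.7, 99.9, 100.4, 100.2, 99.8]

t_stat, p_value = stats.ttest_rel(hplc, cze)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # p > 0.05: no significant difference
```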
Abstract:
Chlorpheniramine maleate (CLOR) enantiomers were quantified by ultraviolet spectroscopy and partial least squares regression. The CLOR enantiomers were prepared as inclusion complexes with beta-cyclodextrin and 1-butanol, with mole fractions in the range from 50 to 100%. For the multivariate calibration, outliers were detected and excluded, and variable selection was performed by interval partial least squares and a genetic algorithm. Figures of merit showed accuracy, expressed as root mean square errors of calibration and prediction, of 3.63 and 2.83% (S)-CLOR, respectively. The elliptical joint confidence region included the ideal point of slope 1 and intercept 0. Precision and analytical sensitivity were 0.57 and 0.50% (S)-CLOR, respectively. The sensitivity, selectivity, adjustment, and signal-to-noise ratio were also determined. The model was validated by a paired t test against the results obtained by the high-performance liquid chromatography method of the European Pharmacopoeia and by circular dichroism spectroscopy. The results showed no significant difference between the methods at the 95% confidence level, indicating that the proposed method can be used as an alternative to standard procedures for chiral analysis.
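The core of the chemometric approach above is a partial least squares calibration relating UV spectra to enantiomeric composition. A minimal sketch with simulated spectra; it omits the outlier detection and iPLS/GA variable selection the authors performed:

```python
# PLS calibration of % (S)-enantiomer from spectra; data are simulated.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
y = rng.uniform(50, 100, 30)                     # % (S)-CLOR, simulated
pure = rng.random(200)                           # surrogate pure-component spectrum
X = np.outer(y, pure) + rng.normal(0, 0.5, (30, 200))  # spectra + noise

pls = PLSRegression(n_components=3)
pls.fit(X[:20], y[:20])                          # calibration set
y_pred = pls.predict(X[20:]).ravel()             # prediction set
rmsep = float(np.sqrt(mean_squared_error(y[20:], y_pred)))
print(f"RMSEP = {rmsep:.2f} % (S)-CLOR")
```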
Abstract:
In the protein folding problem, solvent-mediated forces are commonly represented by intra-chain pairwise contact energies. Although this approximation has proven useful in several circumstances, it is limited in other aspects of the problem. Here we show that it is possible to construct two models of the chain-solvent system, one with implicit and the other with explicit solvent, such that both reproduce the same thermodynamic results. First, lattice models treated by analytical methods were used to show that implicit and explicit representations of solvent effects can be energetically equivalent only if local solvent properties are time and spatially invariant. Next, applying the same reasoning used for the lattice models, two inter-consistent Monte Carlo off-lattice models for implicit and explicit solvent were constructed, in the latter of which the solvent properties are allowed to fluctuate. It is then shown that the chain configurational evolution, as well as the equilibrium globule conformation, are significantly distinct for the implicit and explicit solvent systems. Indeed, in strong contrast with the implicit solvent version, the explicit solvent model predicts: (i) a malleable globule, in agreement with the estimated large protein-volume fluctuations; (ii) thermal conformational stability, resembling the conformational heat resistance of globular proteins, whose radii of gyration are practically insensitive to thermal effects over a relatively wide range of temperatures; and (iii) smaller radii of gyration at higher temperatures, indicating that the chain conformational entropy in the unfolded state is significantly smaller than that estimated from random coil configurations. Finally, we comment on the meaning of these results for the understanding of the folding process. (C) 2009 Elsevier B.V. All rights reserved.
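For readers unfamiliar with the machinery: Monte Carlo chain models of this kind, lattice or off-lattice, rest on the Metropolis acceptance rule applied to trial moves of the chain (and, with explicit solvent, of the solvent particles). A minimal sketch of that rule only; the move sets and contact energies of the actual models are not reproduced here:

```python
# Metropolis acceptance rule used in Monte Carlo chain/solvent simulations;
# dE is the contact-energy change of a trial move (k_B = 1, toy values).
import math
import random

def metropolis_accept(dE, T):
    """Accept a trial move with probability min(1, exp(-dE / T))."""
    return dE <= 0 or random.random() < math.exp(-dE / T)

random.seed(1)
print(metropolis_accept(0.3, 0.5))  # uphill move, accepted with prob ~0.55
```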
Abstract:
Introduction - Baccharis dracunculifolia, which has great potential for the development of new phytotherapeutic medicines, is the most important botanical source of southeastern Brazilian propolis, known as green propolis on account of its color. Objective - To develop a reliable reversed-phase HPLC method for the analysis of phenolic compounds in both B. dracunculifolia raw material and its hydroalcoholic extracts. Methodology - The method utilised a C(18) CLC-ODS(M) (4.6 x 250 mm) column with nonlinear gradient elution and UV detection at 280 nm. A procedure for the extraction of phenolic compounds using 90% aqueous ethanol, with veratraldehyde added as the internal standard, was developed, allowing the quantification of 10 compounds: caffeic acid, coumaric acid, ferulic acid, cinnamic acid, aromadendrin-4'-methyl ether, isosakuranetin, drupanin, artepillin C, baccharin and 2,2-dimethyl-6-carboxyethenyl-2H-1-benzopyran acid. Results - The developed method gave a good detection response, with linearity in the range 20.83-800 μg/mL and recovery in the range 81.25-93.20%, allowing the quantification of the analysed standards. Conclusion - The method gave good results for the following parameters: selectivity, linearity, accuracy, precision and robustness, as well as limit of detection and limit of quantitation. It can therefore be considered an analytical tool for the quality control of B. dracunculifolia raw material and its products in both cosmetic and pharmaceutical companies. Copyright (C) 2008 John Wiley & Sons, Ltd.
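Quantification against an internal standard, as with the veratraldehyde used here, reduces to a peak-area ratio scaled by a calibration response factor. A minimal sketch with hypothetical numbers:

```python
# Internal-standard quantification: analyte concentration from the
# analyte/IS peak-area ratio and a response factor; values hypothetical.
area_analyte, area_is = 15230.0, 20480.0  # peak areas from one injection
conc_is = 100.0                           # ug/mL of internal standard added
rf = 1.18                                 # response factor from calibration

conc_analyte = (area_analyte / area_is) * conc_is / rf
print(f"analyte = {conc_analyte:.1f} ug/mL")
```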
Abstract:
A graph clustering algorithm constructs groups of closely related parts and machines separately. After the groups are matched for the fewest intercell moves, a refining process runs on the initial cell formation to decrease the number of intercell moves further. A simple modification of this main approach can handle practical constraints, such as the common constraint of bounding the maximum number of machines per cell. Our approach greatly improves computational time; more importantly, it also improves the number of intercell moves when the computational results are compared with the best known solutions from the literature. (C) 2009 Elsevier Ltd. All rights reserved.
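The objective driving both the matching and refining steps is the intercell move count: every operation a part requires on a machine outside its own cell. A minimal sketch of that count on a toy machine-part incidence matrix (not data from the paper):

```python
# Count intercell moves for a given cell formation; toy data.
import numpy as np

a = np.array([[1, 1, 0, 0],    # a[i, j] = 1 if part j visits machine i
              [1, 0, 0, 1],
              [0, 0, 1, 1],
              [0, 1, 1, 0]])
machine_cell = np.array([0, 0, 1, 1])   # cell assigned to each machine
part_cell    = np.array([0, 0, 1, 1])   # cell assigned to each part

# An operation on a machine outside the part's cell is one intercell move
moves = int((a * (machine_cell[:, None] != part_cell[None, :])).sum())
print(f"intercell moves = {moves}")      # -> 2
```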
Abstract:
Background: Although various techniques have been used for breast conservation surgery reconstruction, there are few studies describing a logical approach to reconstruction of these defects. The objectives of this study were to establish a classification system for partial breast defects and to develop a reconstructive algorithm. Methods: The authors reviewed a 7-year experience with 209 immediate breast conservation surgery reconstructions. Mean follow-up was 31 months. Type I defects include tissue resection in smaller breasts (bra size A/B): type IA involves minimal defects that do not cause distortion; type IB involves moderate defects that cause moderate distortion; and type IC involves large defects that cause significant deformities. Type II includes tissue resection in medium-sized breasts with or without ptosis (bra size C), and type III includes tissue resection in large breasts with ptosis (bra size D). Results: Eighteen percent of patients presented type I defects, where a lateral thoracodorsal flap and a latissimus dorsi flap were performed in 68 percent. Forty-five percent presented type II defects, where bilateral mastopexy was performed in 52 percent. Thirty-seven percent of patients presented type III defects, where bilateral reduction mammaplasty was performed in 67 percent. Thirty-five percent of patients presented complications, most of them minor. Conclusions: An algorithm based on breast size in relation to tumor location and extent of resection can be followed to determine the best approach to reconstruction. The authors' results demonstrate complication rates similar to those of other clinical series. Success depends on patient selection, coordinated planning with the oncologic surgeon, and careful intraoperative management.
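Since the classification is rule-based, it can be stated as a small lookup; the sketch below paraphrases the abstract's rules and is only an illustration, not a clinical tool:

```python
# Defect classification paraphrased from the abstract; illustration only.
def classify_defect(bra_size: str, distortion: str = "none") -> str:
    """Map bra size (and, for small breasts, degree of distortion) to type."""
    if bra_size in ("A", "B"):   # type I: smaller breasts
        return {"none": "IA", "moderate": "IB", "significant": "IC"}[distortion]
    if bra_size == "C":          # type II: medium-sized, with/without ptosis
        return "II"
    return "III"                 # type III: large breasts with ptosis (D)

print(classify_defect("B", "moderate"))  # -> IB
```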
Abstract:
We studied the anisotropic aggregation of spherical latex particles dispersed in a lyotropic liquid crystal presenting three nematic phases: calamitic, biaxial, and discotic. We observed that in the nematic calamitic phase aggregates of latex particles are formed, which become larger and anisotropic in the vicinity of the transition to the discotic phase owing to a coalescence process. Such aggregates are weakly anisotropic, up to 50 μm long, and tend to align parallel to the director field. At the transition to the discotic phase the aggregates dissociated, and they re-formed when the system was brought back to the calamitic phase. This shows that the aggregation is due to attractive and repulsive forces generated by the particular structure of the nematic phase. The surface-induced positional order was investigated by surface force apparatus experiments with the lyotropic system confined between mica surfaces, revealing the existence of a presmectic wetting layer around the surfaces and oscillating forces of increasing amplitude as the confinement thickness was decreased. We discuss the possible mechanisms responsible for the reversible aggregation of latex particles, and we propose that capillary condensation of the N(C) phase, induced by the confinement between the particles, could reduce or remove the gradient of the order parameter, driving the transition of aggregates from solidlike to liquidlike and gaslike.
Abstract:
We compared the lignin contents of tropical forages obtained by different analytical methods and evaluated their correlations with parameters related to the degradation of neutral detergent fiber (NDF). The lignin content was evaluated by five methods: cellulose solubilization in sulfuric acid [Lignin (sa)], oxidation with potassium permanganate [Lignin (pm)], the Klason lignin method (KL), solubilization in acetyl bromide from acid detergent fiber (ABLadf) and solubilization in acetyl bromide from the cell wall (ABLcw). Samples from ten grasses and ten legumes were used. The lignin content values obtained by the gravimetric methods were also corrected for protein contamination, and the corrected values are referred to as Lignin (sa)p, Lignin (pm)p and KLp. The indigestible fraction of NDF (iNDF), the discrete lag (LAG) and the fractional rate of degradation (kd) of NDF were estimated using an in vitro assay. Correcting for protein resulted in reductions (P < 0.05) in the lignin contents measured by the Lignin (sa), Lignin (pm) and, especially, the KL methods. There was an interaction (P < 0.05) between analytical method and forage group for lignin content. In general, the KLp method gave the highest (P < 0.05) lignin contents. The estimates of lignin content obtained by the Lignin (sa)p, Lignin (pm)p and KLp methods were associated (P < 0.05) with all of the NDF degradation parameters, with the strongest correlation coefficients obtained for Lignin (pm)p and KLp. The lignin content estimated by the ABLcw method did not correlate (P > 0.05) with any parameter of NDF degradation. There was a correlation (P < 0.05) between the lignin content estimated by the ABLadf method and the iNDF content; nonetheless, this correlation was weaker than those found with the gravimetric methods. From these results, we conclude that the gravimetric methods produce residues that are contaminated by nitrogenous compounds. Adjustment for these contaminants is suggested, particularly for the KL method, to express lignin content with greater accuracy. The relationships between lignin content and NDF degradation parameters are better captured by the KLp and Lignin (pm)p methods. (C) 2011 Elsevier B.V. All rights reserved.
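The reported relationships are correlation tests between each method's lignin values and the NDF degradation parameters. A minimal sketch of such a test, with hypothetical values in place of the study data:

```python
# Pearson correlation between lignin content and iNDF; values hypothetical.
from scipy.stats import pearsonr

lignin_klp = [2.1, 3.4, 4.0, 5.2, 6.1, 7.3]        # % DM, hypothetical
indf       = [18.0, 24.5, 27.9, 33.1, 36.4, 41.2]  # % iNDF, hypothetical

r, p = pearsonr(lignin_klp, indf)
print(f"r = {r:.3f}, P = {p:.4f}")  # P < 0.05 indicates a significant correlation
```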
Abstract:
The evolution of the mass of a black hole embedded in a universe filled with dark energy and cold dark matter is calculated in closed form within a test-fluid model in a Schwarzschild metric, taking into account the cosmological evolution of both fluids. The result describes exactly how accretion asymptotically switches from the matter-dominated to the Lambda-dominated regime: at early epochs the black hole mass increases due to dark matter accretion, and at later epochs the increase in mass stops as dark energy accretion takes over. Thus, the unphysical behaviour of previous analyses is remedied in this simple exact model. (C) 2010 Elsevier B.V. All rights reserved.
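Closed-form treatments of this problem typically start from the quasi-stationary test-fluid accretion rate onto a Schwarzschild black hole (a Babichev-Dokuchaev-Eroshenko-type result). The form below, in our own notation, is a hedged sketch of that starting point, not an equation quoted from the paper:

```latex
% Quasi-stationary test-fluid accretion onto a Schwarzschild black hole.
% A is an order-unity constant set by the flow; for dust (p = 0) the mass
% grows, while for dark energy with \rho + p < 0 the mass decreases.
\frac{dM}{dt} = 4\pi A\, M^{2} \left[ \rho + p(\rho) \right]
```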
The SARS algorithm: detrending CoRoT light curves with Sysrem using simultaneous external parameters
Abstract:
Surveys for exoplanetary transits are usually limited not by photon noise but rather by the amount of red noise in their data. In particular, although the CoRoT space-based survey data are being carefully scrutinized, significant new sources of systematic noise are still being discovered. Recently, a magnitude-dependent systematic effect was discovered in the CoRoT data by Mazeh et al., and a phenomenological correction was proposed. Here we tie the observed effect to a particular type of systematic, and in the process generalize the popular Sysrem algorithm to include external parameters in a simultaneous solution with the unknown effects. We show that a post-processing scheme based on this algorithm performs well and indeed allows the detection of new transit-like signals that were not previously detected.
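For context, Sysrem (Tamuz, Mazeh & Zucker 2005) models the systematic residual of star i in exposure j as a product c_i a_j fitted by alternating least squares; SARS generalizes this by tying the per-exposure terms to measured external parameters. A minimal numpy sketch of the plain Sysrem iteration on synthetic data:

```python
# One Sysrem-style systematic removal by alternating least squares;
# r is a (stars x exposures) residual matrix, sig its per-point errors.
import numpy as np

def sysrem(r, sig, n_iter=10):
    """Fit r_ij ~ c_i * a_j and return the residuals with it removed.
    In SARS, a_j would be constrained by external parameters instead."""
    w = 1.0 / sig**2
    a = np.ones(r.shape[1])
    for _ in range(n_iter):
        c = (w * r * a).sum(axis=1) / (w * a**2).sum(axis=1)
        a = (w * r * c[:, None]).sum(axis=0) / (w * c[:, None]**2).sum(axis=0)
    return r - np.outer(c, a)

rng = np.random.default_rng(0)
r = np.outer(rng.normal(size=50), rng.normal(size=200))  # injected systematic
r += rng.normal(0, 0.1, r.shape)                         # white noise
print(f"rms: {r.std():.3f} -> {sysrem(r, np.full_like(r, 0.1)).std():.3f}")
```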
Genetic algorithm inversion of the average 1D crustal structure using local and regional earthquakes
Abstract:
Knowing the best 1D model of the crustal and upper mantle structure is useful not only for routine hypocenter determination but also for linearized joint inversions of hypocenters and 3D crustal structure, where a good choice of the initial model can be very important. Here, we tested the combination of a simple GA inversion with the widely used HYPO71 program to find the best three-layer model (upper crust, lower crust, and upper mantle) by minimizing the overall P- and S-arrival residuals, using local and regional earthquakes in two areas of the Brazilian shield. Results from the Tocantins Province (Central Brazil) and the southern border of the Sao Francisco craton (SE Brazil) indicated average crustal thicknesses of 38 and 43 km, respectively, consistent with previous estimates from receiver functions and seismic refraction lines. The GA + HYPO71 inversion produced correct Vp/Vs ratios (1.73 and 1.71, respectively), as expected from Wadati diagrams. Tests with synthetic data showed that the method is robust for crustal thickness, Pn velocity, and Vp/Vs ratio when using events at distances up to about 400 km, despite the small number of events available (7 and 22, respectively). The velocities of the upper and lower crust, however, are less well constrained. Interestingly, in the Tocantins Province the GA + HYPO71 inversion showed a secondary solution (local minimum) for the average crustal thickness besides the global minimum, caused by the existence of two distinct domains in Central Brazil with very different crustal thicknesses. (C) 2010 Elsevier Ltd. All rights reserved.
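The inversion loop itself is a plain genetic algorithm over a handful of model parameters, with the location program acting as the forward solver. A minimal sketch in which a toy misfit stands in for running HYPO71:

```python
# GA over (crustal thickness, Vp upper/lower crust, Pn velocity, Vp/Vs);
# rms_residual is a toy stand-in for locating events with HYPO71 and
# returning the overall RMS of P- and S-arrival residuals.
import random

def rms_residual(model):
    h, v1, v2, vn, vpvs = model
    return abs(h - 40) + 10 * abs(vpvs - 1.72)   # toy misfit, minimum known

bounds = [(30, 50), (5.5, 6.5), (6.3, 7.2), (7.8, 8.4), (1.65, 1.80)]
random.seed(0)
pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(40)]
for _ in range(30):                               # generations
    pop.sort(key=rms_residual)
    parents = pop[:10]                            # elitist selection
    pop = parents + [[min(max(random.gauss(p[i], 0.05 * (b[1] - b[0])), b[0]), b[1])
                      for i, b in enumerate(bounds)]          # clamped mutation
                     for p in random.choices(parents, k=30)]
print([round(x, 2) for x in min(pop, key=rms_residual)])
```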
Abstract:
A large amount of biological data has been produced in recent years. Important knowledge can be extracted from these data through data analysis techniques. Clustering plays an important role in data analysis by organizing similar objects from a dataset into meaningful groups. Several clustering algorithms have been proposed in the literature; however, each algorithm has its own bias, being more suitable for particular datasets. This paper presents a mathematical formulation to support the creation of consistent clusters for biological data. Moreover, it presents a clustering algorithm based on GRASP (Greedy Randomized Adaptive Search Procedure) to solve this formulation. We compared the proposed algorithm with three other well-known algorithms, and it achieved the best clustering results, as confirmed statistically. (C) 2009 Elsevier Ltd. All rights reserved.
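As a reminder of the metaheuristic's shape: GRASP alternates a greedy randomized construction (choosing among a restricted candidate list) with local search, keeping the best solution found. The sketch below applies a generic GRASP to grouping 1-D points; it is an illustration of the procedure, not the paper's biological-clustering formulation:

```python
# Generic GRASP for grouping 1-D points: randomized greedy construction
# plus local search; an illustration, not the paper's formulation.
import random

def sse(clusters):
    """Sum of squared deviations from each cluster's mean."""
    return sum(sum((p - sum(c) / len(c)) ** 2 for p in c) for c in clusters)

def construct(points, k, rcl_size=2):
    """Seed k clusters, then attach each point to one of the rcl_size
    nearest clusters (the restricted candidate list)."""
    pts = random.sample(points, len(points))
    clusters = [[p] for p in pts[:k]]
    for p in pts[k:]:
        near = sorted(clusters, key=lambda c: abs(p - sum(c) / len(c)))
        random.choice(near[:rcl_size]).append(p)
    return clusters

def local_search(clusters, passes=5):
    """Repeatedly move points to the cluster with the nearest mean."""
    for _ in range(passes):
        means = [sum(c) / len(c) for c in clusters]
        for c in clusters:
            for p in c[:]:
                j = min(range(len(means)), key=lambda i: abs(p - means[i]))
                if clusters[j] is not c and len(c) > 1:
                    c.remove(p)
                    clusters[j].append(p)
    return clusters

random.seed(3)
data = [1.0, 1.2, 0.9, 5.0, 5.3, 4.8, 9.1, 9.0, 8.7]
best = min((local_search(construct(data, 3)) for _ in range(20)), key=sse)
print(sorted(sorted(c) for c in best))
```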
Abstract:
In this paper we present a genetic algorithm with new components to tackle capacitated lot sizing and scheduling problems with sequence-dependent setups, which appear in a wide range of industries, from soft drink bottling to food manufacturing. Finding a feasible solution to highly constrained problems is often a very difficult task. Various strategies have been applied to deal with infeasible solutions throughout the search. We propose a new scheme for classifying individuals based on nested domains, grading solutions according to their level of infeasibility, which in our case is expressed as bands of additional production hours (overtime). Within each band, individuals are differentiated only by their fitness function. As iterations proceed, the widths of the bands are dynamically adjusted to improve the convergence of the individuals into the feasible domain. Numerical experiments on highly capacitated instances show the effectiveness of this computationally tractable approach in guiding the search toward the feasible domain; it outperforms other state-of-the-art approaches and commercial solvers. (C) 2009 Elsevier Ltd. All rights reserved.
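The nested-domain idea reduces to a lexicographic ranking: individuals are compared first by the overtime band they fall into, and only then by fitness. A minimal sketch with an illustrative band width and population:

```python
# Rank individuals first by infeasibility band (overtime), then by fitness;
# the band width and the (overtime, cost) individuals are illustrative.
def band(overtime, width):
    """0 for feasible individuals; higher bands mean more overtime."""
    return 0 if overtime <= 0 else int(overtime // width) + 1

population = [(0.0, 120.5), (3.2, 95.0), (0.0, 110.0), (7.9, 80.0)]
width = 4.0   # dynamically shrunk across generations in the paper's scheme
ranked = sorted(population, key=lambda ind: (band(ind[0], width), ind[1]))
print(ranked)  # feasible individuals first, each band ordered by fitness
```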
Abstract:
A numerical algorithm for fully dynamical lubrication problems, based on the Elrod-Adams formulation of the Reynolds equation with mass-conserving boundary conditions, is described. A simple but effective relaxation scheme is used to update the solution while maintaining the complementarity conditions on the variables that represent the pressure and the fluid fraction. The equations of motion are discretized in time using Newmark's scheme, and the dynamical variables are updated within the same relaxation process. The good behavior of the proposed algorithm is illustrated with two examples: an oscillatory squeeze flow (for which the exact solution is available) and a dynamically loaded journal bearing. The article is accompanied by ready-to-compile source code implementing the proposed algorithm. [DOI: 10.1115/1.3142903]
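The complementarity the relaxation scheme maintains (nonnegative pressure in the full film, with the fluid fraction taking up the slack where the film cavitates) can be illustrated with a generic projected Gauss-Seidel sweep on a toy 1-D problem. This is not the authors' scheme; the stencil and right-hand side are invented for illustration:

```python
# Projected Gauss-Seidel on a toy 1-D Poisson-like problem: pressures are
# clamped at zero, marking the cavitated zone. In the full Elrod-Adams
# scheme the fluid fraction theta is then updated from mass conservation
# wherever p = 0. Stencil and right-hand side are invented.
import numpy as np

n, h = 50, 1.0 / 51
b = np.full(n, -1.0)      # toy source term: negative half cavitates
b[: n // 2] = 2.0         # positive half stays pressurized
p = np.zeros(n)

for _ in range(2000):     # relaxation sweeps
    for i in range(n):
        left = p[i - 1] if i > 0 else 0.0
        right = p[i + 1] if i < n - 1 else 0.0
        # (-p[i-1] + 2 p[i] - p[i+1]) / h^2 = b[i], projected onto p >= 0
        p[i] = max(0.0, 0.5 * (left + right + h * h * b[i]))

print(f"cavitated nodes: {int((p == 0).sum())} of {n}")
```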