Abstract:
Purpose This study investigated the influence of gestational diabetes mellitus on the kinetic disposition and stereoselective metabolism of labetalol administered intravenously or orally. Methods Thirty hypertensive women during the last trimester of pregnancy were divided into four groups: non-diabetic and diabetic women treated with intravenous or oral labetalol. Results The pharmacokinetics of labetalol was not stereoselective in diabetic or non-diabetic pregnant women receiving the drug intravenously. However, oral administration of labetalol resulted in lower values of the area under the plasma concentration versus time curve (AUC) for the beta-blocker (RR) than for the other enantiomers in both diabetic and non-diabetic women. Gestational diabetes mellitus caused changes in the kinetic disposition of the labetalol stereoisomers when administered orally. The AUC values for the less potent adrenoceptor antagonist (SS) and for the alpha-blocking (SR) isomers were higher in diabetic than in non-diabetic pregnant women. Conclusions The approximately 100% higher AUC values obtained for the (SR) isomer in diabetic pregnant women treated with oral labetalol may be of clinical relevance in terms of the alpha-blocking activity of this isomer.
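The comparisons above rest on areas under the plasma concentration versus time curve. The abstract does not describe how the AUC was computed, but a common noncompartmental approach is the linear trapezoidal rule, sketched here with hypothetical sampling times and concentrations (not the study's data):

```python
def auc_trapezoid(times, concs):
    """Area under the concentration-time curve by the linear trapezoidal rule."""
    return sum((t2 - t1) * (c1 + c2) / 2.0
               for (t1, c1), (t2, c2) in zip(zip(times, concs),
                                             zip(times[1:], concs[1:])))

# Hypothetical sampling times (h) and plasma concentrations (ng/mL)
times = [0, 0.5, 1, 2, 4, 8]
concs = [0, 120, 90, 60, 30, 10]
auc = auc_trapezoid(times, concs)  # ng*h/mL over the sampled interval
```

In practice the tail beyond the last sample is extrapolated from the terminal slope; the sketch covers only the observed interval.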
Abstract:
This study used LC-MS/MS for the first time for the analysis of mitragynine (MIT), a mu-opioid agonist with antinociceptive and antitussive properties, in rat plasma. Mitragynine and the internal standard (amitriptyline) were extracted from plasma with hexane-isoamyl alcohol and resolved on a Lichrospher(R) RP-SelectB column (retention times of 9.80 and 12.90 min, respectively). The quantification limit was 0.2 ng/mL within a linear range of 0.2-1000 ng/mL. The method was applied to quantify mitragynine in plasma samples of rats (n = 8 per sampling time) treated with a single oral dose of 20 mg/kg. The following mean pharmacokinetic parameters were obtained: maximum plasma concentration, 424 ng/mL; time to reach maximum plasma concentration, 1.26 h; elimination half-life, 3.85 h; apparent total clearance, 6.35 L/h/kg; and apparent volume of distribution, 37.90 L/kg.
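The reported parameters are linked by the standard noncompartmental relations CL/F = dose/AUC, k_el = ln 2 / t_1/2, and V/F = (CL/F)/k_el. A minimal sketch with illustrative numbers (the AUC below is assumed for the example, not taken from the study):

```python
import math

def elimination_rate(t_half):
    """First-order elimination rate constant from the half-life."""
    return math.log(2) / t_half

def apparent_clearance(dose, auc):
    """CL/F = dose / AUC (oral dose, so clearance is apparent)."""
    return dose / auc

def apparent_volume(cl, t_half):
    """V/F = (CL/F) / k_el."""
    return cl / elimination_rate(t_half)

# Illustrative values only: dose in mg/kg, AUC in mg*h/L per kg, t_half in h
dose, auc, t_half = 20.0, 3.15, 3.85
cl = apparent_clearance(dose, auc)   # L/h/kg
vd = apparent_volume(cl, t_half)     # L/kg
```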
Abstract:
Objectives: The aim of this study was to evaluate the fluoride intake of 2-6-year-old Brazilian children using a semiquantitative food frequency questionnaire (FFQ) that also estimated fluoride intake from dentifrice. Methods: The FFQ was first validated through application to 78 2-6-year-old Brazilian children and then administered to 379 children residing in an optimally fluoridated community in Brazil (Bauru, State of Sao Paulo). The FFQ was applied to the parents and used to estimate the food intake of the children. The constituents of the diet were divided into solids, water, and other beverages, and the fluoride content of the diet items was analyzed with a fluoride electrode. The questionnaire also estimated fluoride intake from dentifrice. Results: The average (+/- SD) fluoride intake from solids, water, other beverages, and dentifrice was 0.008 +/- 0.005, 0.011 +/- 0.004, 0.009 +/- 0.014, and 0.036 +/- 0.028 mg F/kg body weight/day, respectively, totaling 0.064 +/- 0.035 mg F/kg body weight/day. Dentifrice and diet contributed 56.3% and 43.7% of the daily fluoride intake, respectively. Among the children evaluated, 31.2% were estimated to be at risk of developing dental fluorosis (intake > 0.07 mg F/kg body weight/day). Conclusions: Dentifrice was the main source of fluoride intake for the children evaluated; however, the fluoride concentration in food items also contributed significantly to daily ingestion by 2-6-year-old children. The questionnaire seems to be a promising alternative to the duplicate diet technique for estimating fluoride intake in this age range and may have potential for use in broad epidemiological surveys.
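The reported totals and source shares follow from simple arithmetic over the per-source means. A short check using the mean values from the abstract:

```python
# Mean fluoride intake by source (mg F / kg body weight / day), from the abstract
intake = {"solids": 0.008, "water": 0.011, "other beverages": 0.009,
          "dentifrice": 0.036}

total = sum(intake.values())                     # total daily intake
share_dentifrice = intake["dentifrice"] / total  # fraction from dentifrice
share_diet = 1.0 - share_dentifrice              # fraction from the diet

risk_threshold = 0.07          # mg F/kg bw/day threshold for fluorosis risk
mean_at_risk = total > risk_threshold  # the mean intake sits below the threshold
```

The mean intake is below the 0.07 mg F/kg threshold, consistent with the abstract's finding that only a subset (31.2%) of individual children exceed it.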
Abstract:
The supervised pattern recognition methods K-nearest neighbors (KNN), stepwise discriminant analysis (SDA), and soft independent modelling of class analogy (SIMCA) were employed in this work to investigate the relationship between the molecular structure of 27 cannabinoid compounds and their analgesic activity. Previous analyses using two unsupervised pattern recognition methods (PCA, principal component analysis, and HCA, hierarchical cluster analysis) had been performed, and five descriptors were selected as the most relevant for the analgesic activity of the compounds studied: R3 (charge density on the substituent at position C(3)), Q1 (charge on atom C(1)), A (surface area), log P (logarithm of the partition coefficient), and MR (molecular refractivity). The supervised methods (SDA, KNN, and SIMCA) were employed to construct a reliable model able to predict the analgesic activity of new cannabinoid compounds and to validate our previous study. The results obtained with the SDA, KNN, and SIMCA methods agree with our previous model. Comparing the SDA, KNN, and SIMCA results with those of PCA and HCA, we observed that all the multivariate statistical methods classified the cannabinoid compounds studied into the same three groups: active, moderately active, and inactive.
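Of the three supervised methods, KNN is the simplest to illustrate: a query compound is assigned the majority class among its k nearest neighbors in descriptor space. A minimal sketch with hypothetical descriptor vectors (R3, Q1, A, log P, MR), not the study's data:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.
    `train` is a list of (descriptor_vector, class_label) pairs."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical (R3, Q1, A, logP, MR) descriptor vectors
train = [
    ((0.12, -0.30, 310.0, 6.8, 95.0), "active"),
    ((0.10, -0.28, 305.0, 6.5, 93.0), "active"),
    ((0.05, -0.15, 280.0, 5.0, 80.0), "moderately active"),
    ((0.04, -0.14, 275.0, 4.8, 78.0), "moderately active"),
    ((0.01, -0.05, 250.0, 3.0, 60.0), "inactive"),
    ((0.02, -0.06, 255.0, 3.2, 62.0), "inactive"),
]
label = knn_predict(train, (0.11, -0.29, 308.0, 6.6, 94.0), k=3)
```

Note that with raw Euclidean distance the large-magnitude descriptors (A, MR) dominate; in a real application the descriptors would be autoscaled first, as is standard in chemometric pattern recognition.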
Abstract:
This paper critically assesses several loss allocation methods based on the type of competition each method promotes. This understanding assists in determining which method will promote more efficient network operations when implemented in deregulated electricity industries. The methods addressed in this paper include the pro rata [1], proportional sharing [2], loss formula [3], incremental [4], and a new method proposed by the authors of this paper, which is loop-based [5]. These methods are tested on a modified Nordic 32-bus network, where different case studies of different operating points are investigated. The varying results obtained for each allocation method at different operating points make it possible to distinguish methods that promote unhealthy competition from those that encourage better system operation.
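The pro rata method [1] is the simplest of the allocation schemes discussed: each participant is charged a share of total losses proportional to its demand (or generation), regardless of network location. A minimal sketch for the load side, with hypothetical bus demands:

```python
def pro_rata_losses(total_loss, demands):
    """Allocate total network losses to loads in proportion to their demand
    (the pro rata method); network topology is deliberately ignored."""
    total_demand = sum(demands.values())
    return {bus: total_loss * d / total_demand for bus, d in demands.items()}

# Hypothetical total losses (MW) and bus demands (MW)
alloc = pro_rata_losses(6.0, {"bus1": 100.0, "bus2": 50.0, "bus3": 50.0})
```

Because location is ignored, a load far from generation pays the same per-MW charge as one next to it, which is precisely the kind of distortion the paper's competition-based comparison is designed to expose.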
Abstract:
We propose quadrature rules for the approximation of line integrals possessing logarithmic singularities and show their convergence. In some instances a superconvergence rate is demonstrated.
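The abstract's specific rules are not given here, but the standard idea for integrands with a logarithmic singularity is singularity subtraction: split off the singular part, integrate it analytically, and apply an ordinary rule to the smooth remainder. A sketch for \int_0^1 f(x) ln(x) dx, using the fact that \int_0^1 ln(x) dx = -1:

```python
import math

def log_singular_integral(f, n=2000):
    """Approximate the integral of f(x)*ln(x) over [0, 1] by subtracting the
    singularity: (f(x) - f(0))*ln(x) is regular and handled by the midpoint
    rule, while f(0) * integral of ln(x) = -f(0) is added back exactly."""
    h = 1.0 / n
    regular = h * sum((f((i + 0.5) * h) - f(0.0)) * math.log((i + 0.5) * h)
                      for i in range(n))
    return regular - f(0.0)

# Example: f(x) = 1 + x, whose exact integral against ln(x) is -1 - 1/4
approx = log_singular_integral(lambda x: 1.0 + x)
```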
Abstract:
The present study details new turbulence field measurements conducted continuously at high frequency for 50 hours in the upper zone of a small subtropical estuary with semi-diurnal tides. Acoustic Doppler velocimetry was used, and the signal was post-processed thoroughly. The suspended sediment concentration was further deduced from the acoustic backscatter intensity. The field data set demonstrated some unique flow features of the upstream estuarine zone, including low-frequency longitudinal oscillations induced by internal and external resonance. A striking feature of the data set is the large fluctuations in all turbulence properties and in suspended sediment concentration during the tidal cycle; this feature has rarely been documented.
Abstract:
There are many techniques for electricity market price forecasting; however, most are designed for expected-price analysis rather than price spike forecasting, and an effective method of predicting the occurrence of spikes has not yet been reported in the literature. In this paper, a data mining based approach is presented to give a reliable forecast of the occurrence of price spikes. Combined with the spike value prediction techniques developed by the same authors, the proposed approach aims at providing a comprehensive tool for price spike forecasting. Feature selection techniques are first described to identify the attributes relevant to the occurrence of spikes, and a brief introduction to the classification techniques is given for completeness. Two algorithms, a support vector machine and a probability classifier, are chosen as the spike occurrence predictors and are discussed in detail. Realistic market data are used to test the proposed model, with promising results.
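The paper's classifiers are not specified here beyond their names, but the probability-classifier idea can be sketched as estimating the empirical spike probability within discretized feature buckets from historical data. The class, bucket width, and demand figures below are all hypothetical:

```python
from collections import defaultdict

class SpikeProbabilityClassifier:
    """Toy probability classifier: estimates P(spike) per discretized feature
    bucket from historical observations. A stand-in illustration, not the
    paper's actual algorithm."""
    def __init__(self, bucket_width=500.0):
        self.bucket_width = bucket_width
        self.counts = defaultdict(lambda: [0, 0])  # bucket -> [spikes, total]

    def _bucket(self, demand):
        return int(demand // self.bucket_width)

    def fit(self, demands, spike_flags):
        for d, s in zip(demands, spike_flags):
            c = self.counts[self._bucket(d)]
            c[0] += int(s)
            c[1] += 1

    def predict_proba(self, demand):
        spikes, total = self.counts[self._bucket(demand)]
        return spikes / total if total else 0.0

clf = SpikeProbabilityClassifier()
# Hypothetical (demand MW, spike?) history: spikes cluster at high demand
clf.fit([4100, 4200, 4300, 8100, 8200, 8300], [0, 0, 0, 1, 1, 0])
p_low = clf.predict_proba(4150)   # no spikes observed in this bucket
p_high = clf.predict_proba(8250)  # 2 of 3 high-demand periods spiked
```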
Abstract:
The artificial dissipation effects in some solutions obtained with Navier-Stokes flow solvers are demonstrated. The solvers were used to calculate the flow of an artificially dissipative fluid, that is, a fluid whose dissipative properties arise entirely from the solution method itself. This was done by setting the viscosity and heat conduction coefficients in the Navier-Stokes solvers to zero everywhere inside the flow, while at the same time applying the usual no-slip and thermally conducting boundary conditions at solid boundaries. The result is an artificially dissipative flow solution whose dissipation depends entirely on the solver itself. If the difference between the solutions obtained with the viscosity and thermal conductivity set to zero and those obtained with their correct values is small, it is clear that the artificial dissipation is dominating and the solutions are unreliable.
Abstract:
Conferences that deliver interactive sessions designed to enhance physician participation, such as role play, small discussion groups, workshops, hands-on training, problem- or case-based learning and individualised training sessions, are effective for physician education.
Abstract:
An investigation was undertaken to test the effectiveness of two procedures for recording boundaries and plot positions for scientific studies on farms on Leyte Island, the Philippines. The accuracy of a Garmin 76 Global Positioning System (GPS) unit and of a compass and chain was checked under the same conditions. Tree canopies interfered with the ability of the satellite signal to reach the GPS, and the GPS survey was therefore less accurate than the compass and chain survey. Where a high degree of accuracy is required, a compass and chain survey remains the most effective method of surveying land underneath tree canopies, provided that operator error is minimised. For a large number of surveys, and thus large amounts of data, a GPS is more appropriate than a compass and chain survey because the data are easily uploaded into a Geographic Information System (GIS). However, under dense canopies where satellite signals cannot reach the GPS, it may be necessary to revert to a compass survey or a combination of both methods.
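A compass and chain survey records each leg as a magnetic bearing and a chained distance; converting the legs to coordinates (and checking the closure error of a closed boundary) is straightforward trigonometry. A sketch with a hypothetical four-leg boundary:

```python
import math

def traverse(legs, start=(0.0, 0.0)):
    """Convert compass-and-chain legs (bearing in degrees clockwise from
    north, distance) into easting/northing coordinates of each station."""
    x, y = start
    points = [start]
    for bearing, dist in legs:
        rad = math.radians(bearing)
        x += dist * math.sin(rad)  # easting
        y += dist * math.cos(rad)  # northing
        points.append((x, y))
    return points

# A hypothetical closed four-leg boundary (degrees, metres)
pts = traverse([(90.0, 50.0), (180.0, 30.0), (270.0, 50.0), (0.0, 30.0)])
closure_error = math.dist(pts[0], pts[-1])  # ~0 for a perfectly closed traverse
```

In the field the closure error quantifies the accumulated operator error the abstract refers to, and is typically distributed back over the stations before the boundary is loaded into a GIS.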
Abstract:
Despite the increasing prevalence of salinity world-wide, the measurement of exchangeable cation concentrations in saline soils remains problematic. Two soil types (Mollisol and Vertisol) were equilibrated with a range of sodium adsorption ratio (SAR) solutions at various ionic strengths. The concentrations of exchangeable cations were then determined using several different types of methods, and the measured exchangeable cation concentrations were compared to reference values. At low ionic strength (low salinity), the concentration of exchangeable cations can be accurately estimated from the total soil extractable cations. In saline soils, however, the presence of soluble salts in the soil solution precludes the use of this method. Leaching the soil with a pre-wash solution (such as alcohol) was found to effectively remove the soluble salts, thus allowing accurate measurement of the effective cation exchange capacity (ECEC). However, the dilution associated with this pre-washing increased the exchangeable Ca concentrations while simultaneously decreasing exchangeable Na. In contrast, when the exchangeable cation concentrations were calculated as the difference between the total extractable cations and the soil solution cations, good correlations were found between the calculated concentrations and the reference values for both Na (Mollisol: y=0.873x and Vertisol: y=0.960x) and Ca (Mollisol: y=0.901x and Vertisol: y=1.05x). Therefore, for soils with a soil solution ionic strength greater than 50 mM (electrical conductivity of 4 dS/m), in which exchangeable cation concentrations are overestimated by the assumption that they equal the total extractable cations, concentrations can be calculated as the difference between total extractable cations and soluble cations.
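The recommended difference method is a simple per-ion subtraction; a sketch with hypothetical concentrations (the units and figures below are illustrative, not the study's measurements):

```python
def exchangeable_by_difference(total_extractable, soluble):
    """Exchangeable cation concentration estimated as total extractable cations
    minus soil-solution (soluble) cations, the approach recommended for
    saline soils."""
    return {ion: total_extractable[ion] - soluble.get(ion, 0.0)
            for ion in total_extractable}

# Hypothetical concentrations (cmolc/kg) for a saline soil
exch = exchangeable_by_difference(
    total_extractable={"Na": 6.0, "Ca": 12.0},
    soluble={"Na": 2.5, "Ca": 1.0},
)
```

For a non-saline soil the soluble term is negligible and the estimate collapses to the total extractable cations, matching the abstract's low-ionic-strength case.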