8 results for Dissolution rate (DR)

in BORIS: Bern Open Repository and Information System - Bern - Switzerland


Relevance:

90.00%

Publisher:

Abstract:

OBJECTIVE: To determine the formation and dissolution of calcium fluoride on the enamel surface after application of two fluoride gel-saliva mixtures. METHOD AND MATERIALS: From each of 80 bovine incisors, two enamel specimens were prepared and subjected to two different treatment procedures. In group 1, 80 specimens were treated with a mixture of an amine fluoride gel (1.25% F⁻; pH 5.2; 5 minutes) and human saliva. In group 2, 80 enamel blocks were subjected to a mixture of sodium fluoride gel (1.25% F⁻; pH 5.5; 5 minutes) and human saliva. Subsequent to fluoride treatment, 40 specimens from each group were stored in human saliva and sterile water, respectively. Ten specimens were removed after each of 1 hour, 24 hours, 2 days, and 5 days and analyzed for potassium hydroxide-soluble fluoride. RESULTS: Application of amine fluoride gel resulted in a higher amount of potassium hydroxide-soluble fluoride than did sodium fluoride gel 1 hour after application. Saliva exerted an inhibitory effect on the dissolution rate of calcium fluoride. However, after 5 days, more than 90% of the precipitated calcium fluoride had dissolved in the amine fluoride group, and almost all potassium hydroxide-soluble fluoride was lost in the sodium fluoride group. Calcium fluoride apparently dissolves rapidly, even at almost neutral pH. CONCLUSION: Considering the limitations of an in vitro study, it is concluded that highly concentrated fluoride gels should be applied at an adequate frequency to re-establish a calcium fluoride-like layer.

Relevance:

90.00%

Publisher:

Abstract:

The aims of this study were to determine the effects of pH and acid concentration on the dissolution of enamel, dentine, and compressed hydroxyapatite (HA) in citric acid solutions (15.6 and 52.1 mmol l⁻¹; pH 2.45, 3.2, and 3.9), using a pH-stat system. After an initial adjustment period, the dissolution rates of enamel and HA were constant, while that of dentine decreased with time. The dissolution rate increased as the pH decreased, and this was most marked for enamel. To compare substrates, the rate of mineral dissolution was normalized to the area occupied by mineral at the specimen surface. For a given acid concentration, the normalized dissolution rate of HA was always less than that for either dentine or enamel. The dissolution rate for dentine mineral was similar to that for enamel at pH 2.45 and greater at pH 3.2 and pH 3.9. The concentration of acid significantly affected the enamel dissolution rate at pH 2.45 and pH 3.2, but not at pH 3.9, and did not significantly affect the dissolution rates of dentine or HA at any pH. The variation in response of the dissolution rate to acid concentration/buffer capacity with respect to pH and tissue type might complicate attempts to predict erosive potential from solution composition.
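The area normalization described above, dividing the measured bulk rate by the fraction of the specimen surface occupied by mineral, can be sketched in a few lines. The surface fractions used in the example are hypothetical illustrations, not values reported in the study.

```python
def normalized_rate(measured_rate, mineral_area_fraction):
    """Dissolution rate per unit area of exposed mineral.

    measured_rate: bulk rate per unit specimen area
    mineral_area_fraction: fraction of the specimen surface occupied
        by mineral, in (0, 1]
    """
    if not 0 < mineral_area_fraction <= 1:
        raise ValueError("mineral_area_fraction must be in (0, 1]")
    return measured_rate / mineral_area_fraction

# Hypothetical surface fractions: dentine is less mineralized than enamel,
# so the same bulk rate implies a higher per-mineral-area rate for dentine.
print(normalized_rate(1.0, 0.45))  # dentine-like surface
print(normalized_rate(1.0, 0.95))  # enamel-like surface
```

This is why the normalized dentine rate can equal or exceed the enamel rate even when the raw rates look similar.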

Relevance:

90.00%

Publisher:

Abstract:

To quantify the relationships between buffering properties and acid erosion, and hence improve models of the erosive potential of acidic drinks, a pH-stat was used to measure the rate of enamel dissolution in solutions of citric, malic and lactic acids, with pH 2.4-3.6 and with acid concentrations adjusted to give buffer capacities (β) of 2-40 (mmol·l⁻¹)·pH⁻¹ for each pH. The corresponding undissociated acid concentrations, [HA], and titratable acidity to pH 5.5 (TA5.5) were calculated. In relation to β, the dissolution rate and the strength of response to β varied with acid type (lactic > malic ≥ citric) and decreased as pH increased. The patterns of variation of the dissolution rate with TA5.5 were qualitatively similar to those for β, except that increasing pH above 2.8 had less effect on dissolution in citric and malic acids and none on dissolution in lactic acid. Variations of the dissolution rate with [HA] showed no systematic dependence on acid type but some dependence on pH. The results suggest that [HA], rather than buffering per se, is a major rate-controlling factor, probably owing to the importance of undissociated acid as a readily diffusible source of H⁺ ions in maintaining near-surface dissolution within the softened layer of enamel. TA5.5 was more closely correlated with [HA] than was β, and seems to be the preferred practical measure of buffering. The relationship between [HA] and TA5.5 differs between mono- and polybasic acids, so a separate analysis of products according to predominant acid type could improve multivariate models of erosive potential.
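The undissociated acid concentration [HA] that the abstract identifies as rate-controlling follows from the Henderson-Hasselbalch relation. The sketch below is a minimal monoprotic approximation (shown for lactic acid, pKa ≈ 3.86, a standard literature value); polybasic acids such as citric and malic require full speciation across all their pKa values, which is exactly why the abstract notes that mono- and polybasic acids behave differently.

```python
def undissociated_fraction(pH, pKa):
    """Fraction of a monoprotic acid present as undissociated HA.

    Henderson-Hasselbalch: [A-]/[HA] = 10**(pH - pKa), so
    [HA]/([HA] + [A-]) = 1 / (1 + 10**(pH - pKa)).
    """
    return 1.0 / (1.0 + 10 ** (pH - pKa))

def undissociated_concentration(total_acid, pH, pKa):
    """[HA] in the same units as total_acid."""
    return total_acid * undissociated_fraction(pH, pKa)

# Lactic acid, pKa ~3.86: well below the pKa nearly all acid is
# undissociated; at the pKa exactly half is.
for pH in (2.4, 3.0, 3.6):
    print(pH, round(undissociated_concentration(10.0, pH, 3.86), 2))
```

The steep fall in [HA] with rising pH (at fixed total acid) is consistent with the abstract's observation that the dissolution-rate response weakens as pH increases.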

Relevance:

90.00%

Publisher:

Abstract:

The aims were to investigate the effect of monoalkyl phosphates (MAPs) and fluoride on dissolution rate of native and saliva-coated hydroxyapatite (HA). Fluoride at 300 mg/l (as NaF) inhibited dissolution of native HA by 12%, while potassium and sodium dodecyl phosphates (PDP, SDP), at 0.1% or higher, inhibited dissolution by 26-34%. MAPs, but not fluoride, also showed persistence of action. MAPs at 0.5% and fluoride at 300 mg/l were then tested separately against HA pre-treated with human saliva for 2 or 18 h. Agents were applied with brushing to half the specimens, and without brushing to the other half. In control (water-treated) specimens, pre-treatment of HA with human saliva reduced dissolution rate on average by 41% (2 h) and 63% (18 h). Brushing did not have a statistically significant effect on dissolution rate of saliva-coated specimens. In brushed specimens, fluoride significantly increased the inhibition due to 2- or 18-hour saliva pre-treatment. It is hypothesised that brushing partially removes the salivary film and allows KOH-soluble calcium fluoride formation at the surfaces of HA particles. Inhibition was reduced by PDP in 2-hour/non-brushed specimens and in 18-hour/brushed specimens. PDP did not affect dissolution rates in the remaining groups and SDP did not affect dissolution rate in any group. Possible reasons for these variable results are discussed. The experiments show that pre-treatment with saliva can significantly modify results of tests on potential anti-erosive agents and it is recommended that saliva pre-treatment should be a routine part of testing such agents.
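The inhibition percentages quoted above are relative reductions against an untreated control; a quick sketch makes the arithmetic explicit. The control and treated rates below are hypothetical, chosen only so the outputs fall in the ranges the abstract reports.

```python
def percent_inhibition(control_rate, treated_rate):
    """Inhibition of dissolution relative to an untreated control, in %."""
    if control_rate <= 0:
        raise ValueError("control_rate must be positive")
    return 100.0 * (control_rate - treated_rate) / control_rate

# Hypothetical rates (arbitrary units):
print(percent_inhibition(100.0, 88.0))  # 12%, like fluoride at 300 mg/l
print(percent_inhibition(100.0, 70.0))  # 30%, within the 26-34% MAP range
```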

Relevance:

80.00%

Publisher:

Abstract:

The intensive use of nano-sized titanium dioxide (TiO2) particles in many different applications necessitates studies on their risk assessment as there are still open questions on their safe handling and utilization. For reliable risk assessment, the interaction of TiO2 nanoparticles (NP) with biological systems ideally needs to be investigated using physico-chemically uniform and well-characterized NP. In this article, we describe the reproducible production of TiO2 NP aerosols using spark ignition technology. Because currently no data are available on inhaled NP in the 10–50 nm diameter range, the emphasis was to generate NP as small as 20 nm for inhalation studies in rodents. For anticipated in vivo dosimetry analyses, TiO2 NP were radiolabeled with 48V by proton irradiation of the titanium electrodes of the spark generator. The dissolution rate of the 48V label was about 1% within the first day. The highly concentrated, polydisperse TiO2 NP aerosol (3–6 × 10⁶ cm⁻³) proved to be constant over several hours in terms of its count median mobility diameter, its geometric standard deviation, and number concentration. Extensive characterization of NP chemical composition, physical structure, morphology, and specific surface area was performed. The originally generated amorphous TiO2 NP were converted into crystalline anatase TiO2 NP by thermal annealing at 950 °C. Both crystalline and amorphous 20-nm TiO2 NP were chain agglomerated/aggregated, consisting of primary particles in the range of 5 nm. Disintegration of the deposited TiO2 NP in lung tissue was not detectable within 24 h.

Relevance:

80.00%

Publisher:

Abstract:

The intensive use of nano-sized particles in many different applications necessitates studies on their risk assessment as there are still open questions on their safe handling and utilization. For reliable risk assessment, the interaction of nanoparticles (NP) with biological systems after various routes of exposure needs to be investigated using well-characterized NP. We report here on the generation of gold-NP (Au-NP) aerosols for inhalation studies with the spark ignition technique, and their characterization in terms of chemical composition, physical structure, morphology, and specific surface area, and on interaction with lung tissues and lung cells after 1 h inhalation by mice. The originally generated agglomerated Au-NP were converted into compact spherical Au-NP by thermal annealing at 600 °C, providing particles of similar mass, but different size and specific surface area. Since there are currently no translocation data available on inhaled Au-NP in the 10–50 nm diameter range, the emphasis was to generate NP as small as 20 nm for inhalation in rodents. For anticipated in vivo systemic translocation and dosimetry analyses, radiolabeled Au-NP were created by proton irradiation of the gold electrodes of the spark generator, thus forming gamma-ray-emitting 195Au with a 186-day half-life, allowing long-term biokinetic studies. The dissolution rate of 195Au from the NP was below detection limits. The highly concentrated, polydisperse Au-NP aerosol (1–2 × 10⁷ NP/cm³) proved to be constant over several hours in terms of its count median mobility diameter, its geometric standard deviation, and number concentration. After collection on filters, particles can be re-suspended and used for instillation or ingestion studies.
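The 186-day half-life quoted above is what makes long-term biokinetic studies feasible: a simple exponential-decay sketch shows how much of the initial 195Au activity remains at later time points. The time points below are illustrative, not study design values.

```python
T_HALF_DAYS = 186.0  # half-life of 195Au, as stated in the abstract

def activity_remaining(t_days, a0=1.0):
    """Activity after t_days, from A(t) = A0 * 2**(-t / T_half)."""
    return a0 * 2 ** (-t_days / T_HALF_DAYS)

# Illustrative time points: even after three months, most of the
# label is still present, enabling long-term follow-up.
for t in (1, 28, 90, 186):
    print(t, round(activity_remaining(t), 3))
```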

Relevance:

80.00%

Publisher:

Abstract:

The 39Ar-40Ar technique is often used to date the metamorphic evolution of basement rocks. The present review article examines systematic aspects of the K-Ar decay system in different mineral chronometers frequently found in mono- and polymetamorphic basements (amphibole, biotite, muscovite/phengite, K-feldspar). A key observation is that the measured dissolution rate of silicates in aqueous fluids is many orders of magnitude faster, and has a much lower activation energy, than the rate of Fickian diffusion of Ar. The effects of this inequality are patchy age zonations, very much like those observed in many U-Pb chronometers, unaccompanied by intra-crystalline bell-shaped Ar loss profiles. Recognizing the importance of the respective rate constants in field situations leads to re-evaluating the ages and the interpretive paradigms in classic examples such as the Central Alpine "Lepontine" amphibolite event and the Western Alpine eclogitic event.
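The contrast drawn above between fast, low-activation-energy dissolution and slow, high-activation-energy Ar diffusion can be illustrated with the Arrhenius relation k = A·exp(-Ea/RT). The pre-exponential factors and activation energies below are hypothetical placeholders, chosen only to show how strongly the ratio of the two rate constants depends on the activation energies, not values from the review.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius(A, Ea_kJ_per_mol, T_K):
    """Rate constant from the Arrhenius relation k = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea_kJ_per_mol * 1000.0 / (R * T_K))

# Hypothetical parameters: a low-Ea process (dissolution-like) versus a
# high-Ea process (volume-diffusion-like), same pre-exponential factor.
for T in (500.0, 700.0, 900.0):  # kelvin
    k_low = arrhenius(1.0, 60.0, T)
    k_high = arrhenius(1.0, 250.0, T)
    print(T, k_low / k_high)
```

The ratio is enormous at low temperature and shrinks as temperature rises, which is the mechanism behind fluid-mediated resetting producing patchy zonation without diffusion profiles.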

Relevance:

30.00%

Publisher:

Abstract:

The ATLS program of the American College of Surgeons is probably the most important globally active training organization dedicated to improving trauma management. Detection of acute haemorrhagic shock is among the key issues in clinical practice and thus also in medical teaching. In this issue of the journal, William Schulz and Ian McConachrie critically review the ATLS shock classification (Table 1), which has been criticized after several attempts at validation have failed [1]. The main problem is that distinct ranges of heart rate are related to ranges of uncompensated blood loss, and that the heart rate decrease observed in severe haemorrhagic shock is ignored [2].

Table 1. Estimated blood loss based on patient's initial presentation (ATLS Student Course Manual, 9th Edition, American College of Surgeons 2012).

                             Class I           Class II         Class III              Class IV
Blood loss (ml)              Up to 750         750–1500         1500–2000              >2000
Blood loss (% blood volume)  Up to 15%         15–30%           30–40%                 >40%
Pulse rate (bpm)             <100              100–120          120–140                >140
Systolic blood pressure      Normal            Normal           Decreased              Decreased
Pulse pressure               Normal or ↑       Decreased        Decreased              Decreased
Respiratory rate             14–20             20–30            30–40                  >35
Urine output (ml/h)          >30               20–30            5–15                   Negligible
CNS/mental status            Slightly anxious  Mildly anxious   Anxious, confused      Confused, lethargic
Initial fluid replacement    Crystalloid       Crystalloid      Crystalloid and blood  Crystalloid and blood

In a retrospective evaluation of the Trauma Audit and Research Network (TARN) database, blood loss was estimated from the injuries in nearly 165,000 adult trauma patients, and each patient was allocated to one of the four ATLS shock classes [3]. Although heart rate increased and systolic blood pressure decreased from class I to class IV, respiratory rate and GCS were similar. The median heart rate in class IV patients was substantially lower than the value of 140 min⁻¹ postulated by ATLS.
Moreover, deterioration of the different parameters does not necessarily run in parallel, as the ATLS shock classification suggests [4] and [5]. In all these studies, injury severity score (ISS) and mortality increased with increasing shock class [3] and with increasing heart rate and decreasing blood pressure [4] and [5]. This supports the general concept that the higher the heart rate and the lower the blood pressure, the sicker the patient. A prospective study attempted to validate a shock classification derived from the ATLS shock classes [6]. The authors used a combination of heart rate, blood pressure, clinically estimated blood loss, and response to fluid resuscitation to classify trauma patients (Table 2) [6]. In their initial assessment of 715 predominantly blunt trauma patients, 78% were classified as normal (Class 0), 14% as Class I, 6% as Class II, and only 1% each as Class III and Class IV. This corresponds to the results of the previous retrospective studies [4] and [5]. The main endpoint used in the prospective study was therefore the presence or absence of significant haemorrhage, defined as chest tube drainage >500 ml, evidence of >500 ml of blood loss in the peritoneum, retroperitoneum, or pelvic cavity on CT scan, or requirement of any blood transfusion or of >2000 ml of crystalloid. Because of the low prevalence of class II or higher grades, statistical evaluation was limited to a comparison between Class 0 and Classes I–IV combined. As in the retrospective studies, Lawton did not find a statistically significant difference in heart rate or blood pressure among the five groups either, although there was a tendency towards a higher heart rate in Class II patients. Apparently, classification during the primary survey did not rely on vital signs but considered the rather soft criterion of "clinical estimation of blood loss" and the requirement of fluid substitution.
This suggests that allocation of an individual patient to a shock class was probably more an intuitive decision than an objective calculation of the shock classification. Nevertheless, it was a significant predictor of ISS [6].

Table 2. Shock grade categories in the prospective validation study (Lawton, 2014) [6].

                                   Normal (no haemorrhage)  Class I (mild)                  Class II (moderate)             Class III (severe)               Class IV (moribund)
Vitals                             Normal                   Normal                          HR >100 with SBP >90 mmHg       SBP <90 mmHg                     SBP <90 mmHg or imminent arrest
Response to fluid bolus (1000 ml)  NA                       Yes, no further fluid required  Yes, no further fluid required  Requires repeated fluid boluses  Declining SBP despite fluid boluses
Estimated blood loss (ml)          None                     Up to 750                       750–1500                        1500–2000                        >2000

What does this mean for clinical practice and medical teaching? All these studies illustrate the difficulty of validating a useful and accepted general physiologic concept of the response of the organism to fluid loss: a decrease of cardiac output, an increase of heart rate, and a decrease of pulse pressure occur first; hypotension and bradycardia occur only later. An increasing heart rate, an increasing diastolic blood pressure, or a decreasing systolic blood pressure should make any clinician consider hypovolaemia first, because it is treatable and deterioration of the patient is preventable. This is true for the patient on the ward, the sedated patient in the intensive care unit, and the anaesthetized patient in the OR. We will therefore continue to teach this typical pattern, but will also continue to mention the exceptions and pitfalls at a second stage. The shock classification of ATLS is primarily used to illustrate the typical pattern of acute haemorrhagic shock (tachycardia and hypotension), as opposed to the Cushing reflex (bradycardia and hypertension) in severe head injury and intracranial hypertension, or to neurogenic shock in acute tetraplegia or high paraplegia (relative bradycardia and hypotension).
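As an illustration only, and emphatically not a clinical tool, the blood-loss bands of ATLS Table 1 can be encoded as a simple lookup. The rigid band boundaries are exactly what the validation studies discussed above call into question.

```python
def atls_class_from_blood_loss(ml):
    """Map estimated blood loss (ml) to an ATLS shock class per Table 1.

    Illustration of the table's structure only; the surrounding text
    explains why such rigid bands fail validation in real patients.
    """
    if ml < 0:
        raise ValueError("blood loss cannot be negative")
    if ml <= 750:
        return "Class I"
    if ml <= 1500:
        return "Class II"
    if ml <= 2000:
        return "Class III"
    return "Class IV"

print(atls_class_from_blood_loss(600))   # Class I
print(atls_class_from_blood_loss(1800))  # Class III
```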
Schulz and McConachrie nicely summarize the various confounders and exceptions to the general pattern and explain why, in clinical reality, patients often do not present with the "typical" pictures of our textbooks [1]. ATLS refers to the pitfalls in the signs of acute haemorrhage as well: advanced age, athletes, pregnancy, medications, and pacemakers; it explicitly states that individual subjects may not follow the general pattern. Obviously, the ATLS shock classification, which is the basis for a number of questions in the written test of the ATLS student course and which has been used for decades, probably needs modification and cannot be applied literally in clinical practice. The European Trauma Course, another important trauma training program, uses the same parameters to estimate blood loss, together with the clinical examination and laboratory findings (e.g. base deficit and lactate), but does not use a shock classification tied to absolute values. In conclusion, the typical physiologic response to haemorrhage as illustrated by the ATLS shock classes remains an important issue in clinical practice and in teaching. The estimation of the severity of haemorrhage in the initial assessment of trauma patients is not (and never was) based solely on vital signs, but includes the pattern of injuries, the requirement of fluid substitution, and potential confounders. Vital signs are not obsolete, especially in the course of treatment, but must be interpreted in view of the clinical context. Conflict of interest: None declared. Member of the Swiss national ATLS core faculty.