870 results for Knowledge Representations and Controlled Vocabularies in Water Science
Abstract:
ABSTRACT Tillage systems can influence C sequestration by changing aggregate formation and C distribution within aggregates. This study explored the impact of no-tillage without straw (NT-S) and with straw (NT+S), and moldboard plowing without straw (MP-S) and with straw (MP+S), on soil aggregation and aggregate-associated C after six years of double rice cropping in a Hydragric Anthrosol in Guangxi, southwestern China. Soil samples from the 0.00-0.05, 0.05-0.20 and 0.20-0.30 m layers were wet-sieved and divided into four aggregate-size classes (>2 mm, 2.00-0.25 mm, 0.25-0.053 mm and <0.053 mm) for measuring aggregate-associated C and humic and fulvic acids. Results showed that the soil organic carbon (SOC) stock in bulk soil was 40.2-51.1 % higher in the 0.00-0.05 m layer and 11.3-17.0 % lower in the 0.05-0.20 m layer under the NT system (NT+S and NT-S) than under the MP system (MP+S and MP-S). However, no statistical difference was found across the whole 0.00-0.30 m layer. The NT system increased the proportion of the >2 mm aggregate fraction and reduced the proportion of <0.053 mm aggregates in both the 0.00-0.05 and 0.05-0.20 m layers. The SOC concentration, SOC stock, and humic and fulvic acids within the >0.25 mm macroaggregate fraction also increased significantly in the 0.00-0.05 m layer under the NT system, whereas those within the 2.00-0.25 mm aggregate fraction were significantly reduced in the 0.05-0.20 m layer. Straw incorporation increased not only the SOC stock in bulk soil but also the proportion of macroaggregates, the aggregate-associated SOC, and the concentrations of humic and fulvic acids within aggregates. The effect of straw on C sequestration may depend on the depth at which the straw is incorporated. In conclusion, the NT system increased total SOC accumulation and humic and fulvic acids within macroaggregates, thus contributing to C sequestration in the 0.00-0.05 m layer.
Abstract:
OBJECTIVE: Before a patient can be connected to a mechanical ventilator, the controls of the apparatus need to be set appropriately. Today, this is done by the intensive care professional. With the advent of closed-loop controlled mechanical ventilation, methods will be needed to select appropriate start-up settings automatically. The objective of our study was to test such a computerized method, which could eventually be used as a start-up procedure (first 5-10 minutes of ventilation) for closed-loop controlled ventilation. DESIGN: Prospective study. SETTINGS: ICUs in two adult hospitals and one children's hospital. PATIENTS: 25 critically ill adult patients (age ≥ 15 y) and 17 critically ill children, selected at random, were studied. INTERVENTIONS: To simulate 'initial connection', the patients were disconnected from their ventilator and transiently connected to a modified Hamilton AMADEUS ventilator for at most one minute. During that time they were ventilated with a fixed and standardized breath pattern (Test Breaths) based on pressure-controlled synchronized intermittent mandatory ventilation (PCSIMV). MEASUREMENTS AND MAIN RESULTS: Airway flow, airway pressure and instantaneous CO2 concentration were measured at the mouth with a mainstream CO2 analyzer during application of the Test Breaths. Test Breaths were analyzed in terms of tidal volume, expiratory time constant and series dead space. From these data an initial ventilation pattern, consisting of respiratory frequency and tidal volume, was calculated. This pattern was compared to the one measured prior to the onset of the study using a two-tailed paired t-test, and additionally to a conventional method for setting up ventilators. The computer-proposed ventilation pattern did not differ significantly from the actual pattern (p > 0.05), while the conventional method did.
However, the scatter was large, and in 6 cases deviations in minute ventilation of more than 50% were observed. CONCLUSIONS: The analysis of standardized Test Breaths allows automatic determination of an initial ventilation pattern for intubated ICU patients. While this pattern does not appear superior to the one chosen by the conventional method, it is derived fully automatically, without manual entry of patient data such as weight or height. This makes the method potentially useful as a start-up procedure for closed-loop controlled ventilation.
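The relation between frequency, tidal volume and series dead space that underlies such start-up calculations can be illustrated with the standard alveolar-ventilation formula. This is a generic textbook sketch with hypothetical numbers, not the study's actual algorithm:

```python
def alveolar_minute_ventilation(freq_per_min, tidal_volume_ml, dead_space_ml):
    """Generic textbook relation (not the study's algorithm):
    alveolar minute ventilation = f * (VT - VD),
    where VD is the series dead space (here assumed measured
    from the expired CO2 signal, as in volumetric capnography)."""
    return freq_per_min * (tidal_volume_ml - dead_space_ml)

# Hypothetical adult values: 12 breaths/min, VT 500 mL, VD 150 mL
va = alveolar_minute_ventilation(12, 500, 150)
print(va)  # 4200 (mL/min)
```

The sketch shows why a dead-space estimate matters: only the volume beyond VD contributes to gas exchange, so two patterns with equal minute ventilation can differ markedly in effective ventilation.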
Abstract:
The research reported in this series of articles aimed at (1) automating the search for questioned ink specimens in ink reference collections and (2) evaluating the strength of ink evidence in a transparent and balanced manner. These aims require that ink samples be analysed in an accurate and reproducible way and compared in an objective and automated way, the latter requirement being due to the large number of comparisons necessary in both scenarios. A research programme was designed to (a) develop a standard methodology for analysing ink samples reproducibly, (b) compare ink samples automatically and objectively, and (c) evaluate the proposed methodology in forensic contexts. This report focuses on the last of these three stages. The calibration and acquisition process and the mathematical comparison algorithms were described in previous papers [C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science-Part I: Development of a quality assurance process for forensic ink analysis by HPTLC, Forensic Sci. Int. 185 (2009) 29-37; C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science-Part II: Development and testing of mathematical algorithms for the automatic comparison of ink samples analysed by HPTLC, Forensic Sci. Int. 185 (2009) 38-50]. In this paper, the benefits and challenges of the proposed concepts are tested in two forensic contexts: (1) ink identification and (2) ink evidential value assessment. The results show that different algorithms are better suited to different tasks. This research shows that it is possible to build digital ink libraries using the most commonly used ink analytical technique, high-performance thin-layer chromatography, despite its reputation for lacking reproducibility. More importantly, it is possible to assign evidential value to ink evidence in a transparent way using a probabilistic model.
It is therefore possible to move away from the traditional subjective approach, which is entirely based on experts' opinion and is usually not very informative. While there is room for improvement, this report demonstrates the significant gains over the traditional approach in both the search for ink specimens in ink databases and the interpretation of their evidential value.
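The probabilistic assignment of evidential value described above is conventionally expressed as a likelihood ratio. The sketch below uses hypothetical score densities, not the fitted model from the cited papers:

```python
def likelihood_ratio(p_e_given_same_source, p_e_given_diff_source):
    """Evidential value of an observed ink comparison score E:
    its probability (density) under the same-source proposition
    divided by its probability under the different-source one."""
    return p_e_given_same_source / p_e_given_diff_source

# Hypothetical densities of an HPTLC similarity score under each proposition
lr = likelihood_ratio(0.80, 0.05)
print(lr)  # 16.0: the score supports the same-source proposition
```

A value above 1 supports the same-source proposition and a value below 1 the different-source proposition; this is the transparency gain over a bare expert opinion, since every input is stated explicitly.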
Abstract:
BACKGROUND: In a simulation based on a pharmacokinetic model we demonstrated that increasing the half-life of erythropoiesis-stimulating agents (ESAs) or shortening their administration interval decreases hemoglobin variability. The benefit of reducing the administration interval was, however, lessened by the variability induced by more frequent dosage adjustments. The purpose of this study was to analyze reticulocyte and hemoglobin kinetics and variability under different ESAs and administration intervals in a population of chronic hemodialysis patients. METHODS: The study was designed as an open-label, randomized, four-period cross-over investigation, including 30 patients under chronic hemodialysis at the regional hospital of Locarno (Switzerland) in February 2010 and lasting 2 years. Four subcutaneous treatment strategies (C.E.R.A. every 4 weeks [Q4W] and every 2 weeks [Q2W]; Darbepoetin alfa Q4W and Q2W) were compared with each other. The mean square successive difference of hemoglobin, reticulocyte count and ESA dose was used to quantify variability. We distinguished short- and long-term variability based on the weekly and monthly successive differences, respectively. RESULTS: No difference was found in the mean values of biological parameters (hemoglobin, reticulocytes, and ferritin) between the 4 strategies. ESA type did not affect hemoglobin and reticulocyte variability, but C.E.R.A. induced a more sustained reticulocyte response over time and increased the risk of hemoglobin overshooting (OR 2.7, p = 0.01). Shortening the administration interval lessened the amplitude of reticulocyte count fluctuations but resulted in more frequent ESA dose adjustments and in amplified reticulocyte and hemoglobin variability. The Q2W administration interval was, however, more favorable in terms of ESA dose, allowing a 38% C.E.R.A. dose reduction with no increase in the Darbepoetin alfa dose.
CONCLUSIONS: Reticulocyte dynamics were a more sensitive marker of time instability of the hemoglobin response under ESA therapy. The ESA administration interval had a greater impact on hemoglobin variability than the ESA type. The more protracted reticulocyte response induced by C.E.R.A. could explain both the observed higher risk of overshoot and the significant increase in efficacy when its administration interval was shortened. Trial registration: ClinicalTrials.gov NCT01666301.
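The mean square successive difference used above to quantify variability is straightforward to compute; the values below are hypothetical, not trial data:

```python
import numpy as np

def mssd(series):
    """Mean square successive difference: the mean of squared
    differences between consecutive observations. Unlike a plain
    variance, it specifically penalises rapid fluctuation."""
    x = np.asarray(series, dtype=float)
    return float(np.mean(np.diff(x) ** 2))

# Hypothetical monthly hemoglobin values (g/dL) for one patient
hb = [11.2, 11.8, 10.9, 11.5, 12.1, 11.0]
print(round(mssd(hb), 3))  # 0.62
```

Applied to weekly values it gives the short-term variability and to monthly values the long-term variability, matching the distinction drawn in the abstract.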
Abstract:
Capillary electrophoresis has drawn considerable attention in the past few years, particularly in the field of chiral separations, because of its high separation efficiency. However, its routine use in therapeutic drug monitoring is hampered by its low sensitivity due to the short optical path. We have developed a capillary zone electrophoresis (CZE) method using 2 mM hydroxypropyl-β-cyclodextrin as a chiral selector, which allows baseline separation of the enantiomers of mianserin (MIA), desmethylmianserin (DMIA), and 8-hydroxymianserin (OHMIA). Through the use of an on-column sample concentration step after liquid-liquid extraction from plasma, and with an internal standard, the quantitation limits were 5 ng/mL for each enantiomer of MIA and DMIA and 15 ng/mL for each enantiomer of OHMIA. To our knowledge, this is the first published CE method sensitive enough, down to the low-nanogram range, to be used for therapeutic monitoring of antidepressants. The variability of the assays, assessed by the coefficients of variation (CV) measured at two concentrations for each substance, ranged from 2 to 14% for the intraday (eight replicates) and from 5 to 14% for the interday (eight replicates) experiments. The deviations from the theoretical concentrations, which represent the accuracy of the method, were all within 12.5%. A linear response was obtained for all compounds within the range of concentrations used for the calibration curves (10-150 ng/mL for each enantiomer of MIA and DMIA and 20-300 ng/mL for each enantiomer of OHMIA). Good correlations were found between [(R) + (S)]-MIA and DMIA concentrations measured in plasma samples of 20 patients by a nonchiral gas chromatography method and by CZE, and between the (R)- and (S)-concentrations of MIA and DMIA measured in plasma samples of 37 patients by a previously described chiral high-performance liquid chromatography method and by CZE.
Finally, no interference was noted from more than 20 other psychotropic drugs. Thus, this method, which is both sensitive and selective, can be routinely used for therapeutic monitoring of the enantiomers of MIA and its metabolites. It could be very useful due to the demonstrated interindividual variability of the stereoselective metabolism of MIA.
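The intraday and interday coefficients of variation reported for this assay are computed in the usual way, as the sample standard deviation expressed as a percentage of the mean. A minimal sketch with hypothetical replicate concentrations:

```python
import statistics

def cv_percent(replicates):
    """Coefficient of variation (CV): sample standard deviation
    expressed as a percentage of the mean of the replicates."""
    return 100 * statistics.stdev(replicates) / statistics.mean(replicates)

# Hypothetical intraday replicate concentrations (ng/mL) of one enantiomer
print(round(cv_percent([5.0, 5.4, 4.8, 5.2]), 1))  # 5.1
```

A CV in this range would sit comfortably inside the 2-14% intraday window the abstract reports.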
Abstract:
Understanding the influence of pore space characteristics on the hydraulic conductivity and spectral induced polarization (SIP) response is critical for establishing relationships between the electrical and hydrological properties of surficial unconsolidated sedimentary deposits, which host the bulk of the world's readily accessible groundwater resources. Here, we present the results of laboratory SIP measurements on industrial-grade, saturated quartz samples with granulometric characteristics ranging from fine sand to fine gravel, which can be regarded as proxies for widespread alluvial deposits. We altered the pore space characteristics by changing (i) the grain size spectra, (ii) the degree of compaction, and (iii) the level of sorting. We then examined how these changes affect the SIP response, the hydraulic conductivity, and the specific surface area of the considered samples. In general, the results indicate a clear connection between the SIP response and the granulometric as well as pore space characteristics. In particular, we observe a systematic correlation between the hydraulic conductivity and the relaxation time of the Cole-Cole model describing the observed SIP effect for the entire range of considered grain sizes. The results do, however, also indicate that the detailed nature of these relations depends strongly on variations in the pore space characteristics, such as the degree of compaction. The results of this study underline the complexity of the origin of the SIP signal as well as the difficulty of relating it to a single structural factor of a studied sample, and hence raise some fundamental questions with regard to the practical use of SIP measurements as site- and/or sample-independent predictors of the hydraulic conductivity.
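The Cole-Cole model mentioned above is, in its Pelton resistivity form widely used for SIP spectra, a four-parameter function of frequency whose relaxation time tau is the quantity correlated with hydraulic conductivity. The sketch below uses illustrative parameter values, not those fitted in the study:

```python
import numpy as np

def cole_cole_resistivity(omega, rho0, m, tau, c):
    """Pelton-style Cole-Cole model of complex resistivity,
    commonly fitted to SIP spectra:
        rho(w) = rho0 * (1 - m * (1 - 1 / (1 + (i*w*tau)**c)))
    rho0: DC resistivity, m: chargeability, tau: relaxation time,
    c: frequency exponent (0 < c <= 1)."""
    iwt = (1j * omega * tau) ** c
    return rho0 * (1 - m * (1 - 1 / (1 + iwt)))

# Illustrative parameters (not from the study)
omega = 2 * np.pi * np.logspace(-2, 3, 6)   # angular frequencies, rad/s
rho = cole_cole_resistivity(omega, rho0=100.0, m=0.1, tau=0.01, c=0.5)
phase_mrad = 1000 * np.angle(rho)           # phase shift in milliradians
```

The model interpolates between the DC limit rho0 at low frequency and rho0*(1 - m) at high frequency, with the phase peak located near 1/tau; it is this tau that the abstract relates to hydraulic conductivity.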
Abstract:
Body temperature of the European water shrew Neomys fodiens was reinvestigated with intraperitoneally implanted radiotransmitters. Two animals, caged in outdoor conditions, were tested during February and March. Mean body temperature (Tb) was 37.0°C during rest and 37.5°C during activity. During the stress of capture, Tb increased to 38.4°C, and during a social confrontation mean Tb was 39.4°C. During forced swimming, Tb decreased at a rate of 1.1°C per minute in an animal with wet fur. However, when kept in adequate conditions, animals could maintain their body temperature at about 37°C in most of the tested situations. In water at 2.6°C, mean Tb after 6 min of forced swimming or diving was 37.4°C, comparable to Tb during terrestrial activity. In these animals the fur remained dry even at its surface. The pelt of these shrews thus has a hydrophobic quality that appears unique among semiaquatic mammals.
Abstract:
The objective of this work was to evaluate the effect of pond management on fish feed, growth, yield, survival, and water and effluent quality during juvenile tambaqui (Colossoma macropomum) production. Fish were distributed in nine 600 m² earthen ponds at a density of 8 fish per m²; the rearing period was 60 days. Three pond management strategies were applied: limed and fertilized (LimFer), limed (Lim), and natural (Nat). Fish were fed a commercial ration containing 34% crude protein three times daily. There were no significant differences in fish growth or yield. The three main items found in tambaqui stomachs were insects, zooplankton, and ration, with no significant difference in proportions among treatments. Alkalinity, hardness, and CO2 were greater in LimFer and Lim ponds. Chlorophyll a, transparency, ammonia, nitrite, temperature, and dissolved oxygen of pond water did not differ significantly among treatments. Biochemical oxygen demand, total phosphorus, orthophosphate, ammonia, and nitrite were significantly greater in effluents from LimFer ponds. Pond fertilization should be avoided, because growth and yield were similar among the three management systems tested, and fertilization produced a more polluting effluent.
Abstract:
In this study, we enlarged our previous investigation of the biodiversity of chlamydiae and amoebae in a drinking water treatment plant by including two additional plants and by also searching for legionellae and mycobacteria. Autochthonous amoebae were recovered on non-nutritive agar, identified by 18S rRNA gene sequencing, and screened for the presence of bacterial endosymbionts. Bacteria were also sought by Acanthamoeba co-culture. From a total of 125 samples, we recovered 38 amoebae, six of which harboured endosymbionts (three chlamydiae and three legionellae). In addition, we recovered by amoebal co-culture 11 chlamydiae, 36 legionellae (no L. pneumophila), and 24 mycobacteria (all rapid growers). Two plants presented a similar percentage of samples positive for chlamydiae (11%), mycobacteria (20%) and amoebae (27%), whereas in the third plant the number of recovered bacteria was almost twice as high. Each plant exhibited a relatively specific microbiota. Amoebae were mainly represented by various Naegleria species, Acanthamoeba species and Hartmannella vermiformis. Parachlamydiaceae were the most abundant chlamydiae (8 strains in total), and in this study we recovered a new genus-level strain along with new chlamydiae reported previously. Similarly, about 66% of the recovered legionellae and 47% of the isolated mycobacteria could represent new species. Our work highlights a high species diversity among legionellae and mycobacteria, dominated by putative new species, and confirms the presence of chlamydiae in these artificial water systems.
Abstract:
Obtaining the desired dry weight in dialysis patients is challenging once residual diuresis has disappeared, given the trend toward increasing dietary salt intake and shortening dialysis time over the last 40 years. We describe the case of a 55-year-old patient of Sudanese origin who presented with excessive interdialytic weight gain and hypertension on maintenance hemodialysis. After failure of conservative measures, daily 30-minute hot water baths on non-dialysis days were introduced. All clinical parameters improved, including the potassium profile. In this article, we review the history, pathophysiological mechanisms, efficacy and possible side effects of this interesting, somewhat forgotten technique.
Abstract:
At a time when disciplined inference and decision making under uncertainty are common aims of participants in legal proceedings, the scientific community is remarkably heterogeneous in its attitudes as to how these goals ought to be achieved. Probability and decision theory exert a considerable influence, and, we would argue, rightly so, but they run against a mainstream of thinking that does not embrace, or is not aware of, the 'normative' character of this body of theory. It is normative, in the sense understood in this article, in that it prescribes particular properties, typically (logical) coherence, to which reasoning and decision making ought to conform. Disregarding these properties can result in diverging views which are occasionally used as an argument against the theory, or as a pretext for not following it. Typical examples are objections according to which people, both in everyday life and at various levels of the judicial process, find the theory difficult to understand and to apply; a further objection is that the theory does not reflect how people actually behave. This article aims to point out in what sense such objections misinterpret the analytical framework in its normative perspective. Through examples borrowed mostly from forensic science, it is argued that so-called intuitive scientific attitudes are particularly liable to such misconceptions. These attitudes are contrasted with a statement of the actual liberties and constraints of probability and decision theory and with the view that this theory is normative.
Abstract:
This thesis examines formal methods to assist the forensic scientist in managing the factors that influence the evaluation of scientific evidence, while respecting established and acceptable inference procedures. According to a view advocated by a majority of the forensic and legal literature - adopted here without reservation as a starting point - the conceptualisation of an evaluative procedure is said to be 'coherent' when it rests on a systematic implementation of probability theory. Often, however, probabilistic reasoning cannot be applied automatically and may run into problems of complexity, due, for example, to limited knowledge of the domain in question or to the large number of factors that may come into play. To handle such complications, this work investigates a formalisation of probability theory by means of a graphical environment known as Bayesian networks. The main hypothesis examined is that Bayesian networks, together with certain accessory concepts (such as qualitative and sensitivity analyses), constitute a key resource available to the forensic scientist for approaching inference problems coherently, both conceptually and practically. From this working hypothesis, individual problems were extracted, articulated and addressed in a series of distinct but interconnected studies, whose results - published in peer-reviewed journals - are presented as appendices. Overall, this work yields three categories of results.
A first group of results shows, on the basis of numerous examples drawn from diverse forensic domains, the compatibility and complementarity between Bayesian network models and existing probabilistic evaluation procedures. Building on these indications, the two other categories of results show, respectively, that Bayesian networks also make it possible to address domains previously largely unexplored from a probabilistic point of view, and that the availability of so-called 'hard' numerical data is not an indispensable condition for implementing the approaches proposed in this work. The thesis discusses these results with respect to the current literature and concludes by proposing Bayesian networks as a means of exploring new avenues of research, such as the study of various forms of evidence combination and the analysis of decision making. For this last aspect, the assessment of probabilities constitutes, as advocated in this work, both a fundamental preliminary step and an operational means.
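For a single proposition-evidence pair, the coherent updating that a Bayesian network performs reduces to Bayes' theorem; a minimal sketch with hypothetical numbers (not drawn from the thesis):

```python
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Two-node Bayesian network H -> E: probability of the
    proposition H after observing the evidence E, obtained by
    Bayes' theorem over the two joint probabilities."""
    joint_h = prior_h * p_e_given_h
    joint_not_h = (1.0 - prior_h) * p_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Hypothetical forensic example: even prior odds, and evidence that is
# 99 times more probable if H is true than if it is false
print(posterior(0.5, 0.99, 0.01))  # 0.99
```

Real forensic networks chain many such conditional tables over multiple variables, which is precisely the complexity-management role the thesis attributes to the graphical formalism.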