994 results for Modeling Methodology
Abstract:
We formulate a new mixing model to explore the hydrological and chemical conditions under which the stream-catchment interface (SCI) influences the release of reactive solutes into stream water during storms. Physically, the SCI corresponds to the hyporheic/riparian sediments. In the new model this interface is coupled through a bidirectional water exchange to the conventional two-component mixing model. Simulations show that the influence of the SCI on stream solute dynamics during storms is detectable when the runoff event is dominated by the infiltrated groundwater component that flows through the SCI before entering the stream, and when the flux of solutes released from SCI sediments is similar to, or higher than, the solute flux carried by the groundwater. Dissolved organic carbon (DOC) and nitrate data from two small Mediterranean streams obtained during storms are compared to results from simulations using the new model to discern the circumstances under which the SCI is likely to control the dynamics of reactive solutes in streams. The simulations and the comparisons with empirical data suggest that the new mixing model may be especially appropriate for streams in which periodic, or persistent, abrupt changes in the level of riparian groundwater exert hydrologic control on the flux of biologically reactive solutes between the riparian/hyporheic compartment and the stream water.
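A minimal numerical sketch of this coupling may help fix ideas: groundwater routed through the SCI picks up solute before mixing with event water in the stream. All variable names and parameter values below are illustrative assumptions, not the paper's actual formulation.

```python
# Sketch of a two-component mixing model extended with a stream-catchment
# interface (SCI) compartment; all numbers are invented for illustration.
import numpy as np

t = np.arange(100.0)                                  # storm time steps
Q_event = 5.0 * np.exp(-((t - 30.0) / 10.0) ** 2)     # event water, m3/s
Q_gw = 1.0 + 0.5 * np.exp(-((t - 45.0) / 20.0) ** 2)  # groundwater, m3/s

C_event, C_gw = 0.5, 2.0   # end-member concentrations, mg/L
C_sci = 4.0                # concentration released by SCI sediments, mg/L
f_sci = 0.8                # fraction of groundwater routed through the SCI

# Groundwater infiltrating the SCI picks up solute before entering the stream
C_gw_out = f_sci * C_sci + (1.0 - f_sci) * C_gw

# Conventional two-component mixing, with the SCI-modified groundwater member
C_stream = (Q_event * C_event + Q_gw * C_gw_out) / (Q_event + Q_gw)
print(C_stream.round(2))
```

With these assumed numbers, the stream concentration is pulled toward the SCI signal exactly when groundwater dominates the hydrograph, which is the detectability condition stated above.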
Abstract:
Exposure to various pesticides has been characterized in workers and the general population, but the interpretation and assessment of biomonitoring data from a health risk perspective remain an issue. For workers, a Biological Exposure Index (BEI®) has been proposed for some substances, but most BEIs are based on urinary biomarker concentrations at Threshold Limit Value - Time Weighted Average (TLV-TWA) airborne exposure, while occupational exposure can potentially occur through multiple routes, particularly by skin contact (e.g. captan, chlorpyrifos, malathion). Similarly, several biomonitoring studies have been conducted to assess environmental exposure to pesticides in different populations, but dose estimates or health risks related to these environmental exposures (mainly through the diet) were rarely characterized. Recently, biological reference values (BRVs) in the form of urinary pesticide metabolite concentrations have been proposed for both occupationally exposed workers and children. These BRVs were established using toxicokinetic models developed for each substance, and correspond to safe levels of absorption in humans, regardless of the exposure scenario. The purpose of this chapter is to present a review of a toxicokinetic modeling approach used to determine biological reference values. These are then used to facilitate health risk assessments and decision-making on occupational and environmental pesticide exposures. Such models have the ability to link the absorbed dose of the parent compound to exposure biomarkers and critical biological effects. To obtain the safest BRVs for the studied population, simulations of exposure scenarios were performed using a conservative reference dose such as a no-observed-effect level (NOEL). The various examples discussed in this chapter show the importance of knowledge on urine collections (i.e. spot samples and complete 8-h, 12-h or 24-h collections), sampling strategies, metabolism, relative proportions of the different metabolites in urine, absorption fraction, route of exposure and the background contribution of prior exposures. They also show that relying on urinary measurements of specific metabolites appears more accurate when applying this approach to occupational exposures. Conversely, relying on semi-specific metabolites (metabolites common to a category of pesticides) appears more accurate for the health risk assessment of environmental exposures, given that the precise pesticides to which subjects are exposed are often unknown. In conclusion, the modeling approach to define BRVs for the relevant pesticides may be useful for public health authorities in managing issues related to health risks resulting from environmental and occupational exposures to pesticides.
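As a hedged illustration of how such a model links an absorbed dose to a urinary biomarker level, the sketch below simulates first-order excretion of a metabolite under continuous dosing at a NOEL. The one-compartment structure and every parameter value are assumptions for illustration; the chapter's substance-specific models are more detailed.

```python
# Toy toxicokinetic BRV calculation: dose continuously at an assumed NOEL,
# track body burden with first-order elimination, and read off the urinary
# metabolite concentration that a 24-h collection would see at steady state.
import numpy as np

NOEL = 0.1              # mg/kg/day, assumed reference dose
body_weight = 70.0      # kg
f_abs = 0.8             # assumed absorption fraction for the exposure route
f_met = 0.6             # assumed fraction excreted as the measured metabolite
k_e = np.log(2) / 6.0   # elimination rate constant (6-h half-life), 1/h

dose_rate = NOEL * body_weight * f_abs / 24.0   # mg absorbed per hour
dt = 0.1                                        # h
amount, excreted = 0.0, []
for _ in range(int(7 * 24 / dt)):               # one week of exposure
    amount += (dose_rate - k_e * amount) * dt   # Euler step of dA/dt
    excreted.append(k_e * amount * f_met * dt)  # metabolite to urine per step

urine_volume = 1.5                                # L/day, assumed
daily_metabolite = sum(excreted[-int(24 / dt):])  # mg over the last 24 h
print(f"BRV ~ {daily_metabolite / urine_volume:.3f} mg/L (24-h collection)")
```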
Abstract:
The present research deals with an important public health threat: the pollution created by radon gas accumulation inside dwellings. The spatial modeling of indoor radon in Switzerland is particularly complex and challenging because of the many influencing factors that should be taken into account. Indoor radon data analysis must be addressed from both a statistical and a spatial point of view. As a multivariate process, it was important at first to define the influence of each factor. In particular, it was important to define the influence of geology, as it is closely associated with indoor radon. This association was indeed observed for the Swiss data but not proved to be the sole determinant for the spatial modeling. The statistical analysis of the data, at both the univariate and multivariate level, was followed by an exploratory spatial analysis. Many tools proposed in the literature were tested and adapted, including fractality, declustering and moving-window methods. The use of the Quantité Morisita Index (QMI) as a procedure to evaluate data clustering as a function of the radon level was proposed. The existing declustering methods were revised and applied in an attempt to approach the global histogram parameters. The exploratory phase came along with the definition of multiple scales of interest for indoor radon mapping in Switzerland. The analysis was done with a top-down resolution approach, from regional to local levels, in order to find the appropriate scales for modeling. In this sense, the data partition was optimized in order to cope with the stationarity conditions of geostatistical models. Common methods of spatial modeling such as K Nearest Neighbors (KNN), variography and General Regression Neural Networks (GRNN) were proposed as exploratory tools. In the following section, different spatial interpolation methods were applied to a particular dataset. A bottom-up approach to method complexity was adopted, and the results were analyzed together in order to find common definitions of continuity and neighborhood parameters. Additionally, a data filter based on cross-validation (the CVMF) was tested with the purpose of reducing noise at the local scale. At the end of the chapter, a series of tests for data consistency and method robustness was performed. This led to conclusions about the importance of data splitting and the limitations of generalization methods for reproducing statistical distributions. The last section was dedicated to modeling methods with probabilistic interpretations. Data transformations and simulations thus allowed the use of multigaussian models and helped take the uncertainty of the indoor radon pollution data into consideration. The categorization transform was presented as a solution for extreme value modeling through classification. Simulation scenarios were proposed, including an alternative proposal for the reproduction of the global histogram based on the sampling domain. Sequential Gaussian simulation (SGS) was presented as the method giving the most complete information, while classification performed in a more robust way. An error measure was defined in relation to the decision function for data classification hardening. Within the classification methods, probabilistic neural networks (PNN) proved better adapted to the modeling of high-threshold categorization and to automation. Support vector machines (SVM), on the contrary, performed well under balanced category conditions.
In general, it was concluded that no single prediction or estimation method is better under all conditions of scale and neighborhood definitions. Simulations should be the basis, while other methods can provide complementary information to support efficient decision making on indoor radon.
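As one concrete example of the exploratory tools mentioned above, the sketch below computes a generic Morisita index of clustering over a set of point coordinates. It is only a sketch of the family of indices involved: the thesis' QMI variant, which conditions the computation on the radon level, is not reproduced here, and the point set is randomly generated.

```python
# Generic Morisita index on a 2-D point set in the unit square; values near 1
# indicate a random pattern, values above 1 indicate clustering.
import numpy as np

rng = np.random.default_rng(0)
points = rng.random((500, 2))      # stand-in for measurement coordinates

def morisita_index(points, q):
    """Morisita index computed on a q x q quadrat grid."""
    bins = np.linspace(0.0, 1.0, q + 1)
    ix = (np.digitize(points[:, 0], bins) - 1).clip(0, q - 1)
    iy = (np.digitize(points[:, 1], bins) - 1).clip(0, q - 1)
    counts = np.zeros((q, q))
    np.add.at(counts, (ix, iy), 1)  # points per quadrat
    n = counts.ravel()
    N = n.sum()
    return (q * q) * (n * (n - 1)).sum() / (N * (N - 1))

for q in (2, 4, 8, 16):            # index as a function of quadrat scale
    print(q, round(morisita_index(points, q), 3))
```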
Abstract:
The Agenda 21 for the Geneva region is the result of a broad consultation process including all local actors. Article 12 stipulates that "the State facilitates possible synergies between economic activities in order to minimize their environmental impacts", thus opening the way for Industrial Ecology (IE) and Industrial Symbiosis (IS). An Advisory Board for Industrial Ecology and Industrial Symbiosis implementation was established in 2002, involving the relevant government agencies. Regulatory and technical conditions for IS are studied in the Swiss context. Results reveal that the Swiss law on waste does not hinder by-product exchanges. Methodological and technical factors, including geographic, qualitative, quantitative and economic aspects, are detailed. The competition with waste operators in a highly developed recycling system is also tackled. The IS project develops an empirical and systematic method for detecting and implementing by-product synergies between industrial actors disseminated throughout the Geneva region. A database management tool for the treatment of input-output analysis data and GIS tools for detecting potential industrial partners are continuously improved. Potential symbioses for 17 flows (including energy, water and material flows) are currently being studied for implementation.
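The detection step lends itself to a simple illustration: match one actor's output flows to another's input flows within a geographic distance threshold. The companies, flows, coordinates and threshold below are invented placeholders, not data from the Geneva project.

```python
# Toy by-product synergy detection: an output of one company that matches an
# input of a nearby company is a candidate exchange.
from math import dist

companies = [
    {"name": "A", "xy": (0.0, 0.0), "outputs": {"waste heat"}, "inputs": {"water"}},
    {"name": "B", "xy": (2.0, 1.0), "outputs": {"treated water"}, "inputs": {"waste heat"}},
]

def synergies(companies, max_km=5.0):
    found = []
    for giver in companies:
        for taker in companies:
            if giver is taker:
                continue
            shared = giver["outputs"] & taker["inputs"]   # matching flows
            if shared and dist(giver["xy"], taker["xy"]) <= max_km:
                found.append((giver["name"], taker["name"], shared))
    return found

print(synergies(companies))   # [('A', 'B', {'waste heat'})]
```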
Abstract:
The objective of this work was to develop a low-cost portable damage detection tool to assess and predict damage areas in highway bridges. The proposed tool was based on standard vibration-based damage identification (VBDI) techniques but was extended to a new approach based on operational traffic load. The methodology was tested using numerical simulations, laboratory experiments, and field testing.
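A minimal sketch of the underlying VBDI principle, under the common simplification that damage is a local stiffness reduction that shifts a structure's natural frequencies: the lumped spring-mass chain and the 20% stiffness loss below are illustrative assumptions, not the report's bridge model.

```python
# Vibration-based damage identification in miniature: compare the natural
# frequencies of a healthy and a damaged lumped spring-mass chain.
import numpy as np

def natural_frequencies(k, m=1.0):
    """Natural frequencies (Hz) of a fixed-free spring-mass chain."""
    n = len(k)
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] += k[i]
        if i + 1 < n:                      # coupling to the next mass
            K[i, i] += k[i + 1]
            K[i, i + 1] = K[i + 1, i] = -k[i + 1]
    omega_sq = np.linalg.eigvalsh(K / m)   # eigenvalues are omega^2
    return np.sqrt(omega_sq) / (2.0 * np.pi)

healthy = [1000.0] * 5
damaged = healthy.copy()
damaged[2] *= 0.8                          # assumed 20% local stiffness loss

f_h = natural_frequencies(healthy)
f_d = natural_frequencies(damaged)
print((f_h - f_d) / f_h)                   # relative frequency drops flag damage
```

The report's extension to operational traffic loads replaces controlled excitation with ambient response data; the frequency-shift logic sketched here is only the baseline VBDI idea.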
Abstract:
A critical issue in brain energy metabolism is whether lactate produced within the brain by astrocytes is taken up and metabolized by neurons upon activation. Although there is ample evidence that neurons can efficiently use lactate as an energy substrate, at least in vitro, few experimental data exist to indicate that this is indeed the case in vivo. To address this question, we used a modeling approach to determine which mechanisms are necessary to explain the typical brain lactate kinetics observed upon activation. On the basis of a previously validated model that takes into account the compartmentalization of energy metabolism, we developed a mathematical model of brain lactate kinetics, which was applied to published data describing the changes in extracellular lactate levels upon activation. Results show that the initial dip in the extracellular lactate concentration observed at the onset of stimulation can only be satisfactorily explained by a rapid uptake within an intraparenchymal cellular compartment. In contrast, neither a blood flow increase nor an extracellular pH variation can be a major cause of the lactate initial dip, whereas tissue lactate diffusion only tends to reduce its amplitude. The kinetic properties of monocarboxylate transporter isoforms strongly suggest that neurons represent the most likely compartment for activation-induced lactate uptake and that neuronal lactate utilization occurring early after activation onset is responsible for the initial dip in brain lactate levels observed in both animals and humans.
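A toy version of the dip argument can be written in a few lines: if neuronal uptake of extracellular lactate rises faster at stimulation onset than astrocytic release, the extracellular concentration dips before overshooting. Every rate constant and time course below is an illustrative assumption, not the paper's validated model.

```python
# Single-pool ODE caricature of extracellular lactate during stimulation:
# uptake switches on immediately, release ramps up slowly, producing a dip.
import numpy as np

dt = 0.01
t = np.arange(0.0, 60.0, dt)           # seconds
L = 1.0                                # extracellular lactate, mM (baseline)
trace = []
for ti in t:
    on = 10.0 <= ti <= 40.0            # stimulation window
    ramp = 1.0 - np.exp(-(ti - 10.0) / 12.0) if on else 0.0
    release = 0.05 * (1.0 + 3.0 * ramp)           # astrocytic release, slow rise
    uptake = 0.05 * (1.0 + 2.0 * float(on)) * L   # neuronal uptake, fast rise
    L += (release - uptake) * dt
    trace.append(L)

print(min(trace), trace[-1])  # minimum below baseline is the "initial dip"
```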
Abstract:
The unifying objective of Phases I and II of this study was to determine the feasibility of the post-tensioning strengthening method and to implement the technique on two composite bridges in Iowa. Following completion of these two phases, Phase III was undertaken and is documented in this report. The basic objectives of Phase III were to further monitor bridge behavior (both during and after post-tensioning) and to develop a practical design methodology for the strengthening system under investigation. Specific objectives were: to develop strain and force transducers to facilitate the collection of field data; to investigate further the existence and effects of end restraint on the post-tensioning process; to determine the amount of post-tensioning force loss that occurred between the initial testing and the retesting of the existing bridges; to determine the significance of any temporary temperature-induced post-tensioning force change; and to develop a simplified design methodology incorporating variables such as span length, angle of skew, beam spacing, and concrete strength. Experimental field results obtained during Phases II and III were compared to the theoretical results and to each other. Conclusions from this research are as follows: (1) Strengthening single-span composite bridges by post-tensioning is a viable, economical strengthening technique. (2) Behavior of both bridges was similar to the behavior observed during the field tests conducted under Phase II. (3) The strain transducers were very accurate at measuring mid-span strain. (4) The force transducers gave excellent results under laboratory conditions, but were found to be less effective when used in actual bridge tests. (5) Loss of post-tensioning force due to temperature effects in any particular steel beam post-tensioning tendon system was found to be small. (6) Loss of post-tensioning force over a two-year period was minimal. (7) Significant end restraint was measured in both bridges, caused primarily by reinforcing steel being continuous from the deck into the abutments. This end restraint reduced the effectiveness of the post-tensioning but also reduced midspan strains due to truck loadings. (8) The SAP IV finite element model is capable of accurately modeling the behavior of a post-tensioned bridge if guardrails and end restraints are included in the model. (9) Post-tensioning distribution should be separated into distributions for the axial force and moment components of an eccentric post-tensioning force. (10) Skews of 45 deg or less have a minor influence on post-tensioning distribution. (11) For typical Iowa three-beam and four-beam composite bridges, simple regression-derived formulas for force and moment fractions can be used to estimate post-tensioning distribution at midspan. At other locations, a simple linear interpolation gives approximately correct results. (12) A simple analytical model can accurately estimate the flexural strength of an isolated post-tensioned composite beam.
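Conclusion (9) admits a short worked illustration: an eccentric post-tensioning force P is treated as an axial component plus a moment M = P·e, each distributed to a given beam by its own fraction, after which the fiber stress follows from P/A + M·c/I. The section properties and the force and moment fractions below are invented placeholders, not the report's regression formulas.

```python
# Separate an eccentric post-tensioning force into axial and moment parts,
# distribute each to one beam, then recover the midspan bottom-fiber stress.
P = 200.0      # kN, total post-tensioning force (assumed)
e = 0.45       # m, tendon eccentricity below the section centroid (assumed)
A = 0.015      # m^2, beam cross-sectional area (assumed)
I = 4.0e-4     # m^4, beam moment of inertia (assumed)
c = 0.35       # m, centroid-to-bottom-fiber distance (assumed)

FF, MF = 0.55, 0.40        # placeholder force and moment fractions for one beam

P_beam = FF * P            # axial component reaching this beam
M_beam = MF * P * e        # moment component reaching this beam
stress = P_beam / A + M_beam * c / I   # kN/m^2 at the bottom fiber, midspan
print(f"bottom-fiber stress change: {stress / 1000.0:.1f} MPa")
```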
Abstract:
Pharmacokinetic variability in drug levels represents, for some drugs, a major determinant of treatment success, since sub- or supra-therapeutic concentrations may lead to inefficacy, toxic reactions or treatment discontinuation. This is true for most antiretroviral drugs, which exhibit high inter-patient variability in their pharmacokinetics that has been partially explained by genetic and non-genetic factors. The population pharmacokinetic approach represents a very useful tool for the description of the dose-concentration relationship, the quantification of variability in the target population of patients and the identification of influencing factors. It can thus be used to make predictions and to optimize dosage adjustments based on Bayesian therapeutic drug monitoring (TDM). This approach was used to characterize the pharmacokinetics of nevirapine (NVP) in 137 HIV-positive patients followed within the frame of a TDM program. Among the tested covariates, body weight, co-administration of a cytochrome (CYP) 3A4 inducer or of boosted atazanavir, as well as elevated aspartate transaminases, showed an effect on NVP elimination. In addition, a genetic polymorphism in CYP2B6 was associated with reduced NVP clearance. Altogether, these factors could explain 26% of the variability in NVP pharmacokinetics. Model-based simulations were used to compare the adequacy of different dosage regimens in relation to the therapeutic target associated with treatment efficacy. In conclusion, the population approach is very useful to characterize the pharmacokinetic profile of drugs in a population of interest. The quantification and identification of the sources of variability is a rational approach to making optimal dosage decisions for certain drugs administered chronically.
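The kind of model-based simulation described here can be sketched with a standard one-compartment oral model at steady state, with clearance scaled by an assumed covariate effect. All parameter values below are invented for illustration and are not the published NVP estimates.

```python
# Steady-state trough concentrations under repeated oral dosing, with an
# assumed covariate effect on clearance, for two candidate regimens.
import numpy as np

def steady_state_trough(dose_mg, tau_h, CL, V, ka, F=1.0):
    """Trough (mg/L) of a one-compartment oral model at steady state."""
    ke = CL / V
    t = tau_h  # concentration just before the next dose
    return (F * dose_mg * ka / (V * (ka - ke))) * (
        np.exp(-ke * t) / (1 - np.exp(-ke * tau_h))
        - np.exp(-ka * t) / (1 - np.exp(-ka * tau_h))
    )

CL_pop, V, ka = 3.0, 100.0, 1.0   # L/h, L, 1/h (assumed typical values)
CL = CL_pop * 1.4                 # e.g. an assumed +40% clearance with an inducer

for dose, tau in [(200, 12), (400, 24)]:
    c_min = steady_state_trough(dose, tau, CL, V, ka)
    print(f"{dose} mg q{tau}h -> trough ~ {c_min:.2f} mg/L")
```

Comparing the simulated troughs against an efficacy target concentration is the regimen-adequacy comparison the abstract refers to, here reduced to its simplest form.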
Abstract:
This paper introduces a new approach to the analysis of offensive play in football. The main aim of this study was therefore to create an instrument for collecting information for the analysis of offensive actions and game interactions. The observation instrument used to accomplish this objective consists of a combination of field formats (FC) and category systems (SC). This methodology is a particular strategy of the scientific method whose objective is to analyse perceptible behaviour occurring in habitual contexts, allowing it to be formally recorded and quantified using an ad hoc instrument. Once the resulting records are transformed into quantitative data with the required level of reliability and validity, they allow analysis of the relations between these behaviours. The coding undertaken to date in various football matches has shown that the instrument serves the purposes for which it was developed, enabling further research into offensive play in football.
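One small illustration of the quantification step: coded event sequences from two observers compared with Cohen's kappa as a reliability check. The category labels and records below are invented examples, not the instrument's actual field formats or category systems.

```python
# Inter-observer reliability of categorical codings via Cohen's kappa.
from collections import Counter

observer_a = ["pass", "dribble", "pass", "shot", "pass", "cross"]
observer_b = ["pass", "dribble", "cross", "shot", "pass", "cross"]

def cohens_kappa(a, b):
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n       # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[k] * cb[k] for k in ca) / n ** 2     # chance agreement
    return (p_o - p_e) / (1 - p_e)

print(f"kappa = {cohens_kappa(observer_a, observer_b):.2f}")
```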