116 results for Generalised Linear Modelling
Abstract:
Many three-dimensional (3-D) structures in rock, formed during the deformation of the Earth's crust and lithosphere, are controlled by a difference in mechanical strength between rock units and are often the result of a geometrical instability. Examples of such structures are folds, pinch-and-swell structures (due to necking) and cuspate-lobate structures (mullions). These structures occur from the centimeter to the kilometer scale, and the related deformation processes control, for example, the formation of fold-and-thrust belts and extensional sedimentary basins, or the deformation of the basement-cover interface. The 2-D deformation processes causing these structures are relatively well studied; however, several processes during large-strain 3-D deformation are still incompletely understood. One of these 3-D processes is the lateral propagation of such structures, for example fold and cusp propagation in the direction orthogonal to the shortening direction, or neck propagation in the direction orthogonal to the extension direction. We are especially interested in fold nappes, which are recumbent folds with amplitudes usually exceeding 10 km that were presumably formed by ductile shearing. They often exhibit a constant sense of shearing and a non-linear increase of shear strain towards their overturned limb. The fold axis of the Morcles fold nappe in western Switzerland plunges to the ENE, whereas the fold axis of the more easterly Doldenhorn nappe plunges to the WSW. These opposite plunge directions characterize the Rawil depression (Wildstrubel depression). The Morcles nappe is mainly the result of layer-parallel contraction and shearing. During the compression, the massive limestones were more competent than the surrounding marls and shales, which led to the buckling characteristics of the Morcles nappe, especially in the north-dipping normal limb. The Doldenhorn nappe exhibits only a minor overturned fold limb.
There are still no 3-D numerical studies that investigate the fundamental dynamics of the formation of the large-scale 3-D structure comprising the Morcles and Doldenhorn nappes and the related Rawil depression. We study the 3-D evolution of geometrical instabilities and fold nappe formation with numerical simulations based on the finite element method (FEM). Simulating geometrical instabilities caused by sharp variations of mechanical strength between rock units requires a numerical algorithm that can accurately resolve material interfaces for large differences in material properties (e.g. between limestone and shale) and for large deformations. Therefore, our FE algorithm combines a numerical contour-line technique and a deformable Lagrangian mesh with re-meshing. With this combined method it is possible to accurately follow the initial material contours with the FE mesh and to accurately resolve the geometrical instabilities. The algorithm can simulate 3-D deformation for a visco-elastic rheology, with the viscous rheology described by a power-law flow law. The code is used to study 3-D fold nappe formation, the lateral propagation of folding, and the lateral propagation of cusps due to an initial half-graben geometry. The small initial geometrical perturbations for folding and necking are exactly followed by the FE mesh, whereas the large initial perturbation describing a half graben is defined by a contour line intersecting the finite elements. Further, the 3-D algorithm is applied to 3-D viscous necking during slab detachment. The results from various simulations are compared with 2-D results and a 1-D analytical solution. -- Many three-dimensional (3-D) structures are found in rocks that originate from deformation of the Earth's lithosphere. These structures include, for example, folds, boudins (pinch-and-swell) and mullions (cuspate-lobate), and occur at scales from centimeters to kilometers.
Mechanically, these structures can be explained by a difference in strength between the different rock units and are generally the product of a geometrical instability. These mechanical differences between units control not only the types of structures encountered, but also the style of deformation (thick-skinned, thin-skinned) and the tectonic setting (foreland basin, foreland fold belt). The two-dimensional (2-D) deformation processes forming these structures are relatively well understood. However, when the third dimension is added, several processes of large-scale deformation are not completely understood. One of these processes is the lateral propagation of structures, for example the propagation of folds or mullions in the direction perpendicular to the compression axis, or the propagation of the thinned necks of boudins perpendicular to the extension direction. We are particularly interested in fold nappes, which are thrust nappes in the form of recumbent folds with amplitudes of several kilometers, formed by ductile shearing. They usually display a constant sense of shear and a non-linear increase of strain towards the base of the overturned limb. A well-known example of fold nappes is the Helvetic domain in the western Alps. One of these nappes is the Morcles nappe, whose fold axis plunges ENE, while on the other side of the Rawil depression (or Wildstrubel depression) the Doldenhorn nappe (the equivalent of the Morcles nappe) has a fold axis plunging WSW. The particular shape of these nappes is due to the alternation of mechanically strong limestone layers and mechanically weak layers of schists and marls. These mechanical differences between the layers explain the internal folding of the nappe, particularly in the overturned limb of the Morcles nappe.
It should also be noted that the development of the overturned limb of the nappes is not the same on the two sides of the Rawil depression: the Morcles nappe has a large overturned limb, whereas the Doldenhorn nappe is almost devoid of one. To date, no 3-D numerical study has been carried out to understand the fundamental dynamics of the formation of the Morcles and Doldenhorn nappes and of the Rawil depression. This work offers the first analysis of the 3-D evolution of geometrical instabilities and of fold nappe formation using numerical simulations. Our model is based on the finite element method (FEM), which makes it possible to accurately resolve the interfaces between two materials with very different mechanical properties (for example between limestone layers and marl layers). In addition, we use a deformable Lagrangian mesh with re-meshing (generation of a new mesh). Thanks to this combined method, we can accurately follow the material interfaces and accurately resolve the geometrical instabilities during the deformation of visco-elastic materials described by a non-linear rheology (n > 1). We use this algorithm to understand the formation of fold nappes, the lateral propagation of folding, and the lateral propagation of mullion-type structures caused by a lateral variation of geometry (e.g. a graben). The algorithm is also used to understand the 3-D dynamics of viscous thinning and detachment of the subducting plate in subduction zones. The results obtained are compared with 2-D models and with the 1-D analytical solution.
-- Many three-dimensional (3-D) structures that occur in rocks and were formed by deformation of the Earth's crust and lithosphere are controlled by the different mechanical properties of the rock units and are often the result of geometrical instabilities. These structures include folds, pinch-and-swell structures, and so-called cuspate-lobate structures (also known as mullions). They occur at various scales, from a few centimeters to several kilometers. The processes associated with the formation of these structures control the development of mountain belts and sedimentary basins, as well as the deformation of the basement-cover contact. The two-dimensional (2-D) deformation processes leading to these structures have already been studied thoroughly; some processes during strong 3-D deformation, however, are still incompletely understood. One of these 3-D processes is the lateral propagation of the structures described above, such as the lateral propagation of folds and cuspate-lobate structures perpendicular to the shortening direction, and the lateral propagation of pinch-and-swell structures orthogonal to the stretching direction. We are particularly interested in fold nappes, recumbent folds with amplitudes of more than 10 km. Fold nappes presumably form by ductile shearing. They often show a constant sense of shear and a non-linear increase of shear strain in their overturned limb. The fold axes of the Morcles nappe in western Switzerland plunge towards the ENE, while the fold axes of the more easterly Doldenhorn nappe plunge towards the WSW. These opposite plunge directions characterize the Rawil depression (Wildstrubel depression). The Morcles nappe is predominantly the result of shortening and shearing parallel to the sedimentary layers.
During the shortening, the massive limestone behaved more competently than the surrounding marl and schist, which led to the folding of the Morcles nappe, above all in the north-dipping overturned limb. The Doldenhorn nappe, in contrast, exhibits a much smaller overturned limb and a stronger localization of the deformation. To date there are no 3-D numerical studies investigating the fundamental dynamics of the formation of large, strongly deformed 3-D structures such as the Morcles and Doldenhorn nappes and the associated Rawil depression. We examine the 3-D evolution of geometrical instabilities and the formation of fold nappes using numerical simulations based on the finite element method (FEM). Simulating geometrical instabilities that arise from changes in material properties between different rock units requires a numerical algorithm capable of accurately resolving material boundaries with strong contrasts in material properties (for example between limestone units and marl) under strong deformation. To meet this requirement, our FE algorithm combines a numerical contour-line technique with a deformable Lagrangian mesh and re-meshing. With this combined method it is possible to follow the initial material boundaries exactly with the FE mesh and to resolve the geometrical instabilities sufficiently. The algorithm can compute visco-elastic 3-D deformation, with the viscous rheology described by a power-law flow law. With the numerical algorithm we investigate the formation of 3-D fold nappes, the lateral propagation of folding, and the cuspate-lobate structures that form through the shortening of a sediment-filled half graben.
In doing so, the initial geometrical perturbations of the folding are resolved exactly by the FE mesh, while the material boundaries of the half graben cut through the finite elements. Furthermore, the 3-D algorithm is applied to necking during 3-D viscous slab detachment and subduction. The 3-D results are compared with 2-D results and a 1-D analytical solution.
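The power-law viscous rheology used by the FE algorithm can be illustrated with a short sketch (a minimal illustration with hypothetical parameter values, not the thesis code):

```python
def effective_viscosity(strain_rate_ii, eta_ref=1e21, rate_ref=1e-14, n=3.0):
    """Power-law effective viscosity (illustrative values for rock):
        eta_eff = eta_ref * (strain_rate_II / rate_ref) ** (1/n - 1)
    For n > 1 the effective viscosity decreases with strain rate
    (strain-rate softening), which promotes strain localization in
    folds and in pinch-and-swell necks."""
    return eta_ref * (strain_rate_ii / rate_ref) ** (1.0 / n - 1.0)

# At the reference strain rate the viscosity equals eta_ref:
print(effective_viscosity(1e-14))          # 1e+21
# A ten-times faster strain rate lowers the viscosity for n = 3:
print(effective_viscosity(1e-13) < 1e21)   # True
```

This strain-rate softening is what makes the contrast between strong and weak layers, and hence the geometrical instability, grow with deformation.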
Abstract:
OBJECTIVE: To quantify the relation between body mass index (BMI) and endometrial cancer risk, and to describe the shape of that relation. DESIGN: Pooled analysis of three hospital-based case-control studies. SETTING: Italy and Switzerland. POPULATION: A total of 1449 women with endometrial cancer and 3811 controls. METHODS: Multivariate odds ratios (OR) and 95% confidence intervals (95% CI) were obtained from logistic regression models. The shape of the relation was determined using a class of flexible regression models. MAIN OUTCOME MEASURE: The relation of BMI with endometrial cancer. RESULTS: Compared with women with BMI 18.5 to <25 kg/m², the odds ratio was 5.73 (95% CI 4.28-7.68) for women with BMI ≥35 kg/m². The odds ratios were 1.10 (95% CI 1.09-1.12) and 1.63 (95% CI 1.52-1.75), respectively, for increments of BMI of 1 and 5 units. The relation was stronger in never-users of oral contraceptives (OR 3.35, 95% CI 2.78-4.03, for BMI ≥30 versus <25 kg/m²) than in users (OR 1.22, 95% CI 0.56-2.67), and in women with diabetes (OR 8.10, 95% CI 4.10-16.01, for BMI ≥30 versus <25 kg/m²) than in those without diabetes (OR 2.95, 95% CI 2.44-3.56). The relation was best fitted by a cubic model, although after exclusion of the upper and lower 5% tails it was best fitted by a linear model. CONCLUSIONS: The results of this study confirm a role of elevated BMI in the aetiology of endometrial cancer and suggest that the risk in obese women increases in a cubic, non-linear fashion. The relation was stronger in never-users of oral contraceptives and in women with diabetes. TWEETABLE ABSTRACT: Risk of endometrial cancer increases with elevated body weight in a cubic non-linear fashion.
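As a reading aid, odds ratios for an increment of a continuous predictor follow from the usual logistic-regression arithmetic. The sketch below back-derives the per-unit coefficient from the published OR of 1.10; the standard error is illustrative, not taken from the paper:

```python
import math

def odds_ratio_ci(beta, se, scale=1.0, z=1.96):
    """Odds ratio and 95% CI for a `scale`-unit increment of a
    continuous predictor in a logistic regression model:
        OR = exp(scale * beta), CI = exp(scale * (beta -/+ z * se))."""
    return (math.exp(scale * beta),
            math.exp(scale * (beta - z * se)),
            math.exp(scale * (beta + z * se)))

beta = math.log(1.10)        # back-derived from the published per-unit OR
or1, lo, hi = odds_ratio_ci(beta, se=0.007)          # se is illustrative
or5, _, _ = odds_ratio_ci(beta, se=0.007, scale=5.0)
print(round(or1, 2))  # 1.1
print(round(or5, 2))  # 1.61
```

Note that exponentiating the per-unit estimate gives 1.10^5 ≈ 1.61, slightly below the separately fitted 5-unit OR of 1.63 reported in the abstract.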
Abstract:
The algorithmic approach to data modelling has developed rapidly in recent years; in particular, methods based on data mining and machine learning have been used in a growing number of applications. These methods follow a data-driven methodology, aiming to provide the best possible generalization and predictive ability rather than concentrating on the properties of the data model. One of the most successful groups of such methods is known as Support Vector algorithms. Following the fruitful developments in applying Support Vector algorithms to spatial data, this paper introduces a new extension of the traditional support vector regression (SVR) algorithm. This extension allows for the simultaneous modelling of environmental data at several spatial scales. The joint influence of environmental processes presenting different patterns at different scales is learned automatically from data, providing the optimum mixture of short- and large-scale models. The method is adaptive to the spatial scale of the data and can therefore provide an efficient means of modelling local anomalies, such as those that typically arise in the early phase of an environmental emergency. However, the proposed approach still requires some prior knowledge of the possible existence of such short-scale patterns, which is a potential limitation for its implementation in early warning systems. The purpose of this paper is to present the multi-scale SVR model and to illustrate its use with an application to the mapping of Cs137 activity, given measurements taken in the region of Briansk following the Chernobyl accident.
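The idea of mixing short- and large-scale models can be sketched as a convex combination of two RBF kernels with different bandwidths. This is a minimal illustration, not the authors' implementation; all parameter values are assumptions:

```python
import math

def two_scale_rbf(x, y, s_short=0.1, s_long=1.0, w=0.5):
    """Convex combination of two RBF kernels with different bandwidths.
    A sum of positive-definite kernels is itself positive definite, so
    it can be plugged into any standard SVR solver, letting the model
    capture short- and large-scale spatial structure jointly."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    k_short = math.exp(-d2 / (2 * s_short ** 2))
    k_long = math.exp(-d2 / (2 * s_long ** 2))
    return w * k_short + (1 - w) * k_long

print(two_scale_rbf((0.0, 0.0), (0.0, 0.0)))  # 1.0 at zero distance
```

In practice the mixing weight and bandwidths would be tuned on the data, which is where the "optimum mixture" of the abstract comes from.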
Abstract:
Studies evaluating the mechanical behavior of the trabecular microstructure play an important role in our understanding of pathologies such as osteoporosis and in increasing our understanding of bone fracture and bone adaptation. Understanding such behavior in bone is important for predicting fractures and providing early treatment. The objective of this study is to present a numerical model for studying the initiation and accumulation of trabecular bone microdamage in both the pre- and post-yield regions. A sub-region of human vertebral trabecular bone was analyzed using a uniformly loaded, anatomically accurate, microstructural three-dimensional finite element model. The evolution of trabecular bone microdamage was governed using a non-linear, modulus-reduction, perfect-damage approach derived from a generalized plasticity stress-strain law. The model introduced in this paper establishes a history of microdamage evolution in both the pre- and post-yield regions.
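A modulus-reduction damage update can be sketched in a few lines, assuming an element-wise yield-strain criterion; all parameter values here are hypothetical, not those of the study:

```python
def update_damage(strain, yield_strain=0.008, damage=0.0,
                  d_increment=0.1, d_max=0.95):
    """Modulus-reduction ('perfect damage') update for one trabecular
    element: once the local strain exceeds the yield strain, the damage
    variable D grows and the element modulus is scaled by (1 - D)."""
    if strain > yield_strain:
        damage = min(damage + d_increment, d_max)
    return damage

E0 = 12_000.0  # MPa, illustrative trabecular tissue modulus
D = 0.0
for strain in (0.002, 0.006, 0.010, 0.015):  # a loading history
    D = update_damage(strain, damage=D)
print(E0 * (1 - D))  # 9600.0 -- modulus reduced after two post-yield steps
```

Capping D below 1 keeps every element's stiffness positive, so the finite element system stays solvable as damage accumulates.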
Abstract:
1. Model-based approaches have been used increasingly in conservation biology over recent years. Species presence data used for predictive species distribution modelling are abundant in natural history collections, whereas reliable absence data are sparse, most notably for vagrant species such as butterflies and snakes. As predictive methods such as generalized linear models (GLM) require absence data, various strategies have been proposed to select pseudo-absence data. However, only a few studies exist that compare different approaches to generating these pseudo-absence data. 2. Natural history collection data are usually available for long periods of time (decades or even centuries), thus allowing historical considerations. However, this historical dimension has rarely been assessed in studies of species distribution, although there is great potential for understanding current patterns, i.e. the past is the key to the present. 3. We used GLM to model the distributions of three 'target' butterfly species, Melitaea didyma, Coenonympha tullia and Maculinea teleius, in Switzerland. We developed and compared four strategies for defining pools of pseudo-absence data and applied them to natural history collection data from the last 10, 30 and 100 years. Pools included: (i) sites without target species records; (ii) sites where butterfly species other than the target species were present; (iii) sites without butterfly species but with habitat characteristics similar to those required by the target species; and (iv) a combination of the second and third strategies. Models were evaluated and compared by the total deviance explained, the maximized Kappa and the area under the curve (AUC). 4. Among the four strategies, model performance was best for strategy 3. Contrary to expectations, strategy 2 resulted in even lower model performance compared with models with pseudo-absence data simulated totally at random (strategy 1). 5. 
Independent of the strategy, model performance was enhanced when sites with historical species presence data were not considered as pseudo-absence data. Therefore, the combination of strategy 3 with species records from the last 100 years achieved the highest model performance. 6. Synthesis and applications. The protection of suitable habitat for species survival or reintroduction in rapidly changing landscapes is a high priority among conservationists. Model-based approaches offer planning authorities the possibility of delimiting priority areas for species detection or habitat protection. The performance of these models can be enhanced by fitting them with pseudo-absence data drawn from large archives of natural history collection species presence data rather than with randomly sampled pseudo-absence data.
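Two pieces of the evaluation pipeline described above can be sketched with standard-library Python: a rank-based AUC, and a random pseudo-absence sampler corresponding to strategy (i). Site labels and scores below are made up for illustration:

```python
import random

def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    fraction of (presence, pseudo-absence) pairs in which the presence
    site receives the higher predicted suitability; ties count 0.5."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

def sample_pseudo_absences(all_sites, presences, k, seed=0):
    """Strategy (i), sketched: draw pseudo-absences at random from sites
    with no record of the target species. Strategies (ii)-(iv) would
    additionally filter this pool by survey or habitat criteria."""
    pool = sorted(set(all_sites) - set(presences))
    return random.Random(seed).sample(pool, k)

print(auc([0.9, 0.8], [0.2, 0.1]))              # 1.0 for a perfect ranking
print(sample_pseudo_absences(range(10), [1, 2], 3))
```

The habitat-filtered pool of strategy (iii) is what raised AUC in the study; the sampler above only shows where that filtering would slot in.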
Abstract:
1. Statistical modelling is often used to relate sparse biological survey data to remotely derived environmental predictors, thereby providing a basis for predictively mapping biodiversity across an entire region of interest. The most popular strategy for such modelling has been to model distributions of individual species one at a time. Spatial modelling of biodiversity at the community level may, however, confer significant benefits for applications involving very large numbers of species, particularly if many of these species are recorded infrequently. 2. Community-level modelling combines data from multiple species and produces information on spatial pattern in the distribution of biodiversity at a collective community level instead of, or in addition to, the level of individual species. Spatial outputs from community-level modelling include predictive mapping of community types (groups of locations with similar species composition), species groups (groups of species with similar distributions), axes or gradients of compositional variation, levels of compositional dissimilarity between pairs of locations, and various macro-ecological properties (e.g. species richness). 3. Three broad modelling strategies can be used to generate these outputs: (i) 'assemble first, predict later', in which biological survey data are first classified, ordinated or aggregated to produce community-level entities or attributes that are then modelled in relation to environmental predictors; (ii) 'predict first, assemble later', in which individual species are modelled one at a time as a function of environmental variables, to produce a stack of species distribution maps that is then subjected to classification, ordination or aggregation; and (iii) 'assemble and predict together', in which all species are modelled simultaneously, within a single integrated modelling process. 
These strategies each have particular strengths and weaknesses, depending on the intended purpose of modelling and the type, quality and quantity of data involved. 4. Synthesis and applications. The potential benefits of modelling large multispecies data sets using community-level, as opposed to species-level, approaches include faster processing, increased power to detect shared patterns of environmental response across rarely recorded species, and enhanced capacity to synthesize complex data into a form more readily interpretable by scientists and decision-makers. Community-level modelling therefore deserves to be considered more often, and more widely, as a potential alternative or supplement to modelling individual species.
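The "predict first, assemble later" strategy can be illustrated by computing a compositional dissimilarity between sites from stacked per-species predictions; Bray-Curtis is one common choice. The probabilities below are hypothetical:

```python
def bray_curtis(site_a, site_b):
    """Compositional dissimilarity between two sites computed from
    stacked per-species predictions: 0 = identical composition,
    1 = nothing shared. Such pairwise dissimilarities are one of the
    community-level outputs listed in the abstract."""
    num = sum(abs(a - b) for a, b in zip(site_a, site_b))
    den = sum(a + b for a, b in zip(site_a, site_b))
    return num / den if den else 0.0

# Predicted occurrence probabilities for three species at two sites
# (hypothetical values, e.g. from per-species GLMs):
site1 = [0.9, 0.1, 0.4]
site2 = [0.8, 0.2, 0.5]
print(bray_curtis(site1, site1))           # 0.0
print(round(bray_curtis(site1, site2), 3)) # 0.103
```

Classifying sites on such a dissimilarity matrix is the "assemble later" step; the "assemble first" strategy would instead model the classified community types directly.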
Abstract:
The role of land cover change as a significant component of global change has become increasingly recognized in recent decades. Large databases measuring land cover change, and the data which can potentially be used to explain the observed changes, are also becoming more commonly available. When developing statistical models to investigate observed changes, it is important to be aware that the chosen sampling strategy and modelling techniques can influence results. We present a comparison of three sampling strategies and two forms of grouped logistic regression models (multinomial and ordinal) in the investigation of patterns of successional change after agricultural land abandonment in Switzerland. Results indicated that both ordinal and nominal transitional change occurs in the landscape and that the use of different sampling regimes and modelling techniques as investigative tools yields different results. Synthesis and applications. Our multimodel inference successfully identified a set of consistently selected indicators of land cover change, which can be used to predict further change, including annual average temperature, the number of already overgrown neighbouring areas of land and distance to historically destructive avalanche sites. This allows for more reliable decision making and planning with respect to landscape management. Although both model approaches gave similar results, ordinal regression yielded more parsimonious models that identified the important predictors of land cover change more efficiently. This approach is thus preferable where the land cover change pattern can be interpreted as an ordinal process; otherwise, multinomial logistic regression is a viable alternative.
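The parsimony of the ordinal model comes from its shared slope: in a proportional-odds formulation, only the cut-points differ between categories, whereas a multinomial model fits a separate coefficient vector per category. A minimal sketch with illustrative thresholds and linear predictor:

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def ordinal_probs(eta, thresholds):
    """Proportional-odds (ordinal logistic) model: for ordered
    categories k = 1..K with cut-points theta_1 < ... < theta_{K-1},
        P(Y <= k) = logistic(theta_k - eta),
    where the linear predictor eta is shared across categories.
    Category probabilities are differences of adjacent cumulatives."""
    cum = [logistic(t - eta) for t in thresholds] + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

# Four successional stages, hypothetical cut-points and predictor:
p = ordinal_probs(eta=0.5, thresholds=[-1.0, 0.0, 1.0])
print(round(sum(p), 10))  # 1.0 -- probabilities sum to one
```

With K categories and m predictors, the ordinal model fits m + (K - 1) parameters against the multinomial model's m * (K - 1), which is the efficiency gain the abstract refers to.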
Abstract:
Computer simulations on a new model of the alpha1b-adrenergic receptor, based on the crystal structure of rhodopsin, have been combined with experimental mutagenesis to investigate the role of residues in the cytosolic half of helix 6 in receptor activation. Our results support the hypothesis that a salt bridge between the highly conserved arginine (R143(3.50)) of the E/DRY motif of helix 3 and a conserved glutamate (E289(6.30)) on helix 6 constrains the alpha1b-AR in the inactive state. In fact, mutations of E289(6.30) that weakened the R143(3.50)-E289(6.30) interaction constitutively activated the receptor. The functional effect of mutating other amino acids on helix 6 (F286(6.27), A292(6.33), L296(6.37), V299(6.40), V300(6.41) and F303(6.44)) correlates with the extent of their interaction with helix 3, and in particular with R143(3.50) of the E/DRY sequence.
Abstract:
Résumé: Occupational exposure assessment is an important step in the analysis of a workplace. Direct measurements are rarely carried out in the workplace itself, and exposure is often estimated on the basis of expert judgement. There is therefore a strong need to develop simple, transparent tools that can help industrial hygiene specialists in their decisions about exposure levels. The objective of this research is to develop and improve modelling tools intended to predict exposure. As a first step, a survey was conducted among occupational hygienists in Switzerland to identify their needs (types of results, models and potential observable parameters). It was found that exposure models are hardly used in practice in Switzerland, exposure being mainly estimated on the basis of the expert's experience. In addition, pollutant emissions and their dispersion around the source were considered fundamental parameters. To test the flexibility and accuracy of classical exposure models, modelling experiments were carried out in concrete situations. In particular, predictive models were used to assess occupational exposure to carbon monoxide and to compare it with the exposure levels reported in the literature for similar situations. Likewise, exposure to waterproofing sprays was assessed in the context of an epidemiological study of a Swiss cohort. In this case, experiments were undertaken to characterize the emission rate of waterproofing sprays. A classical two-zone model was then used to assess the aerosol dispersion in the near and far field during spraying.
Further experiments were also carried out to gain a better understanding of the emission and dispersion processes of a tracer, focusing on the characterization of near-field exposure. An experimental design was developed to take simultaneous measurements at several points in an exposure chamber using direct-reading instruments. It was found that, from a statistical point of view, the compartment-based theory is sound, although the assignment to a given compartment could not be made on the basis of simple geometrical considerations. In a next step, experimental data were collected from observations made in about 100 different workplaces: information on the observed determinants was associated with the exposure measurements. These data were used to improve the two-zone exposure model. A tool was therefore developed to include specific determinants in the choice of the compartment, thus strengthening the reliability of the predictions. All these investigations served to improve our understanding of modelling tools and of their limitations. Integrating determinants better adapted to the experts' needs should encourage them to use such tools in their practice. Moreover, by increasing the quality of modelling tools, this research will not only encourage their systematic use, but may also improve exposure assessment based on expert judgement and, consequently, the protection of workers' health. Abstract: Occupational exposure assessment is an important stage in the management of chemical exposures. Few direct measurements are carried out in workplaces, and exposures are often estimated based on expert judgements.
There is therefore a major need for simple, transparent tools to help occupational health specialists define exposure levels. The aim of the present research is to develop and improve modelling tools in order to predict exposure levels. In a first step, a survey was made among professionals to define their expectations of modelling tools (types of results, models and potential observable parameters). It was found that models are rarely used in Switzerland and that exposures are mainly estimated from the past experience of the expert. Moreover, chemical emissions and their dispersion near the source were also considered key parameters. Experimental and modelling studies were also performed in some specific cases in order to test the flexibility and drawbacks of existing tools. In particular, models were applied to assess occupational exposure to CO in different situations, and the results were compared with the exposure levels found in the literature for similar situations. Further, exposure to waterproofing sprays was studied as part of an epidemiological study on a Swiss cohort. In this case, laboratory investigations were undertaken to characterize the waterproofing overspray emission rate. A classical two-zone model was used to assess the aerosol dispersion in the near and far field during spraying. Experiments were also carried out to better understand the processes of emission and dispersion of tracer compounds, focusing on the characterization of near-field exposure. An experimental set-up was developed to perform simultaneous measurements at several points with direct-reading instruments. It was found that, from a statistical point of view, the compartmental theory makes sense, but that the attribution to a given compartment could not be done by simple geometric considerations.
In a further step, the experimental data were complemented by observations made in about 100 different workplaces, including exposure measurements and observation of predefined determinants. The various data obtained were used to improve an existing two-compartment exposure model. A tool was developed to include specific determinants in the choice of the compartment, thus largely improving the reliability of the predictions. All these investigations helped to improve our understanding of modelling tools and to identify their limitations. The integration of more accessible determinants, in accordance with experts' needs, may indeed enhance model application in field practice. Moreover, by increasing the quality of modelling tools, this research will not only encourage their systematic use, but might also improve the conditions in which expert judgements take place, and therefore workers' health protection.
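The two-zone (near-field/far-field) model referred to above has a simple steady-state form; a sketch with illustrative parameter values (not data from this study):

```python
def two_zone_steady_state(G, Q, beta):
    """Steady-state concentrations of the classical two-zone
    (near-field / far-field) exposure model:
        C_far  = G / Q
        C_near = G / Q + G / beta
    G    : contaminant emission rate (mg/min)
    Q    : room supply air flow (m3/min)
    beta : near-field/far-field inter-zone air flow (m3/min)
    The worker near the source sees the extra G/beta term, which is
    why near-field exposures exceed the well-mixed-room estimate."""
    c_far = G / Q
    c_near = c_far + G / beta
    return c_near, c_far

# Illustrative values only:
c_near, c_far = two_zone_steady_state(G=10.0, Q=20.0, beta=5.0)
print(c_near, c_far)  # 2.5 0.5  (mg/m3)
```

Determinants observed in the workplaces (source type, local air movement, worker position) are what the improved tool uses to decide which compartment a measurement belongs to.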
Abstract:
In this paper, we present and apply a new three-dimensional model for the prediction of canopy-flow and turbulence dynamics in open-channel flow. The approach uses a dynamic immersed boundary technique that is coupled in a sequentially staggered manner to a large eddy simulation. Two different biomechanical models are developed depending on whether the vegetation is dominated by bending or tensile forces. For bending plants, a model structured on the Euler-Bernoulli beam equation has been developed, whilst for tensile plants, an N-pendula model has been developed. Validation against flume data shows good agreement and demonstrates that for a given stem density, the models are able to simulate the extraction of energy from the mean flow at the stem-scale which leads to the drag discontinuity and associated mixing layer.
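For the bending-dominated biomechanical model, Euler-Bernoulli beam theory gives closed-form deflections in the simplest static case. The sketch below treats a stem as a uniformly loaded cantilever, a crude static stand-in for the paper's dynamic model; all parameter values are hypothetical:

```python
import math

def cantilever_tip_deflection(q, L, E, I):
    """Tip deflection of a cantilever under a uniformly distributed
    load, from Euler-Bernoulli beam theory:
        delta = q * L**4 / (8 * E * I)
    q : load per unit length (N/m), here a stand-in for drag
    L : stem length (m);  E : Young's modulus (Pa)
    I : second moment of area of the cross-section (m^4)"""
    return q * L ** 4 / (8.0 * E * I)

d = 0.005                    # stem diameter (m), hypothetical
I = math.pi * d ** 4 / 64.0  # circular cross-section
delta = cantilever_tip_deflection(q=2.0, L=0.3, E=1e9, I=I)
print(round(delta, 4))       # tip deflection in metres
```

In the coupled simulation the load q is not prescribed but comes from the resolved flow, and the reconfigured stem geometry feeds back into the immersed boundary, which is the dynamic coupling the paper validates against flume data.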
Abstract:
Background: It has been suggested that a low dose of valganciclovir can be as effective as the standard dose for cytomegalovirus (CMV) prophylaxis after kidney transplantation. The aim of our study was to determine the ganciclovir exposure observed under a routine daily dosage of 450 mg valganciclovir in kidney transplant recipients with a wide range of renal function. Methods: In this prospective study, kidney transplant recipients with a GFRMDRD above 25 mL/min at risk for CMV (donor or recipient seropositive for CMV) received valganciclovir (450 mg daily) prophylaxis for 3 months. Ganciclovir levels at trough (Ctrough) and at peak (C3h) were measured monthly. Ganciclovir exposure (AUC0-24) was estimated using Bayesian non-linear mixed-effect modelling (NONMEM) and compared between three groups of patients according to their kidney function: GFRMDRD 26-39 mL/min (Group 1), GFRMDRD 40-59 mL/min (Group 2) and GFRMDRD 60-90 mL/min (Group 3). CMV DNAemia was assessed during and after prophylaxis using PCR. Results: Thirty-six patients received 450 mg daily of valganciclovir for 3 months. Median ganciclovir C3h was 3.9 mg/L (range 1.3-7.1) and Ctrough was 0.4 mg/L (range 0.1-2.7). Median (range) AUC0-24 of ganciclovir was 59.3 mg.h/L (39.0-85.3) in Group 1, 35.8 mg.h/L (24.9-55.8) in Group 2 and 29.6 mg.h/L (22.0-43.2) in Group 3 (p<0.001). Anemia was more common in Group 1 patients than in the other groups (p=0.01). No differences in other adverse events according to ganciclovir exposure were observed. CMV DNAemia was not detected during prophylaxis. After discontinuation of prophylaxis, CMV DNAemia was seen in 8/34 patients (23.5%) and 4/36 patients (11%) developed CMV disease. Conclusion: A routine dosage of valganciclovir achieved plasma levels of ganciclovir in patients with GFR >60 mL/min similar to those previously reported using oral ganciclovir.
A daily dose of 450 mg valganciclovir appears to be acceptable for CMV prophylaxis in most kidney transplant recipients.
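For readers unfamiliar with AUC0-24: the study estimated it by Bayesian NONMEM modelling from sparse trough/peak samples, but the quantity itself is simply the area under the concentration-time curve over the dosing interval. A textbook trapezoidal sketch with hypothetical concentration data:

```python
def auc_trapezoid(times, concs):
    """Non-compartmental AUC by the linear trapezoidal rule. This is
    only the textbook alternative to the Bayesian estimation used in
    the study; it needs a richly sampled profile, not just Ctrough
    and C3h."""
    pairs = list(zip(times, concs))
    return sum((t2 - t1) * (c1 + c2) / 2.0
               for (t1, c1), (t2, c2) in zip(pairs, pairs[1:]))

# Hypothetical ganciclovir profile over 24 h (mg/L), loosely shaped
# around the reported median Ctrough (0.4) and C3h (3.9):
times = [0, 3, 8, 16, 24]
conc = [0.4, 3.9, 2.0, 0.8, 0.4]
print(round(auc_trapezoid(times, conc), 1))  # 37.2 mg.h/L
```

Sparse-sampling designs like the study's rely on a population pharmacokinetic model to reconstruct the full curve, which is exactly what the Bayesian NONMEM step provides.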