971 results for 2-compartment Dispersion Model
Abstract:
The long-term adverse health effects associated with air pollution exposure can be estimated using either cohort or spatio-temporal ecological designs. In a cohort study, the health status of a cohort of people is assessed periodically over a number of years and then related to estimated ambient pollution concentrations in the cities in which they live. However, such cohort studies are expensive and time-consuming to implement, due to the long-term follow-up required for the cohort. Therefore, spatio-temporal ecological studies are also being used to estimate the long-term health effects of air pollution, as they are easy to implement due to the routine availability of the required data. Spatio-temporal ecological studies estimate the health impact of air pollution by utilising geographical and temporal contrasts in air pollution and disease risk across $n$ contiguous small areas, such as census tracts or electoral wards, for multiple time periods. The disease data are counts of the numbers of disease cases occurring in each areal unit and time period, and thus Poisson log-linear models are typically used for the analysis. The linear predictor includes pollutant concentrations and known confounders such as socio-economic deprivation. However, as the disease data typically contain residual spatial or spatio-temporal autocorrelation after the covariate effects have been accounted for, these known covariates are augmented by a set of random effects. One key problem in these studies is estimating spatially representative pollution concentrations in each areal unit, which are typically estimated by applying Kriging to data from a sparse monitoring network, or by computing averages over modelled (grid-level) concentrations from an atmospheric dispersion model. The aim of this thesis is to investigate the health effects of long-term exposure to Nitrogen Dioxide (NO2) and Particulate Matter (PM10) in mainland Scotland, UK. To provide an initial impression of the air pollution health effects in mainland Scotland, chapter 3 presents a standard epidemiological study using a benchmark method. The remaining main chapters (4, 5, 6) cover the main methodological focus of this thesis, which is threefold: (i) how to better estimate pollution by developing a multivariate spatio-temporal fusion model that relates monitored and modelled pollution data over space, time and pollutant; (ii) how to simultaneously estimate the joint effects of multiple pollutants; and (iii) how to allow for the uncertainty in the estimated pollution concentrations when estimating their health effects. Specifically, chapters 4 and 5 address (i), while chapter 6 focuses on (ii) and (iii). In chapter 4, I propose an integrated model for estimating the long-term health effects of NO2 that fuses modelled and measured pollution data to provide improved predictions of areal-level pollution concentrations and hence health effects. The proposed air pollution fusion model is a Bayesian space-time linear regression model relating the measured concentrations to the modelled concentrations for a single pollutant, whilst allowing for additional covariate information such as site type (e.g. roadside, rural) and temperature. However, pollutants are often correlated because they may be generated by common processes or driven by similar factors such as meteorology. This correlation can be exploited to predict one pollutant by borrowing strength from the others.
Therefore, in chapter 5, I propose a multi-pollutant model: a multivariate spatio-temporal fusion model that extends the single-pollutant model of chapter 4 by relating monitored and modelled pollution data over space, time and pollutant to predict pollution across mainland Scotland. Because the air we breathe contains a complex mixture of particle- and gas-phase pollutants, we are exposed to multiple pollutants simultaneously, and chapter 6 therefore investigates the health effects of exposure to multiple pollutants; this is a natural extension of the single-pollutant health effect analysis in chapter 4. Given that NO2 and PM10 are highly correlated in my data (a multicollinearity issue), I first propose a temporally-varying linear model to regress one pollutant (e.g. NO2) against another (e.g. PM10), and then use the residuals in the disease model alongside PM10, thus investigating the health effects of exposure to both pollutants simultaneously. Another issue considered in chapter 6 is how to allow for the uncertainty in the estimated pollution concentrations when estimating their health effects; in total, four approaches are developed to adjust for this exposure uncertainty. Finally, chapter 7 summarises the work contained within this thesis and discusses the implications for future research.
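As a rough illustration of the Poisson log-linear framework described above (the notation here is generic and not necessarily that used in the thesis), the disease count $Y_{kt}$ in areal unit $k$ and time period $t$ might be modelled as

$$Y_{kt} \sim \mathrm{Poisson}(E_{kt}\theta_{kt}), \qquad \log(\theta_{kt}) = \mathbf{x}_{kt}^{\top}\boldsymbol{\beta} + \gamma \hat{z}_{kt} + \phi_{kt},$$

where $E_{kt}$ is the expected count, $\mathbf{x}_{kt}$ contains known confounders such as socio-economic deprivation, $\hat{z}_{kt}$ is the estimated (Kriged or fused) pollutant concentration whose effect $\gamma$ is of primary interest, and $\phi_{kt}$ is a spatio-temporal random effect, commonly given a conditional autoregressive prior, that absorbs the residual autocorrelation.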
Abstract:
This paper presents the general framework of an ecological model of the English Channel. The model results from combining a physical sub-model with a biological one. In the physical sub-model, the Channel is divided into 71 boxes and the water fluxes between them are calculated automatically. A 2-layer vertical thermohaline model was then linked with the horizontal circulation scheme. This physical sub-model exhibits thermal stratification in the western Channel during spring and summer, and haline stratification in the Bay of Seine due to high flow rates from the river. The biological sub-model takes two elements, nitrogen and silicon, into account and divides phytoplankton into diatoms and dinoflagellates. Results from this ecological model emphasize the influence of stratification on chlorophyll a concentrations as well as on primary production. Stratified waters appear to be much less productive than well-mixed ones. Nevertheless, when simulated production values are compared with literature data, the calculated production is shown to be underestimated. This could be attributed to a lack of refinement of the 2-layer box model or to processes omitted from the biological model, such as production by nanoplankton.
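For context, the mass balance underlying this kind of box model can be sketched in generic form (not the authors' exact equations) as

$$V_i \frac{dC_i}{dt} = \sum_{j} Q_{ji} C_j - \sum_{j} Q_{ij} C_i + V_i B_i(C_i),$$

where $C_i$ is the concentration of a state variable (e.g. nitrogen or silicon) in box $i$, $V_i$ the box volume, $Q_{ij}$ the water flux from box $i$ to box $j$, and $B_i$ the net biological source and sink terms (uptake by diatoms and dinoflagellates, regeneration, sedimentation, etc.).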
Abstract:
Social network sites (SNS), such as Facebook, Google+ and Twitter, have attracted hundreds of millions of daily users since their appearance. Within SNS, users connect to each other, express their identity, disseminate information and cooperate by interacting with their connected peers. The increasing popularity and ubiquity of SNS usage, and the invaluable record of user behaviours and connections, have given rise to many applications and business models. We look into several important problems within the social network ecosystem. The first is the SNS advertisement allocation problem. The other two concern trust mechanism design in the social network setting: local trust inference and global trust evaluation. In SNS advertising, we study the problem of advertisement allocation from the ad platform's perspective, and discuss how it differs from the advertising model in the search engine setting. By leveraging the connection between social networks and hyperbolic geometry, we propose to solve the problem approximately using hyperbolic embedding and convex optimization. A hyperbolic embedding method, \hcm, is designed for the SNS ad allocation problem, and several components are introduced to realize the optimization formulation. We show the advantages of our new approach compared to the baseline integer programming (IP) formulation. In studying trust mechanisms in social networks, we consider the existence of distrust (i.e. negative trust) relationships, and differentiate between the concepts of local trust and global trust in the social network setting. For local trust inference, we propose a 2-D trust model and, based on it, develop a semiring-based trust inference framework. For global trust evaluation, we consider a general setting with conflicting opinions and propose a consensus-based approach to solve this complex problem in signed trust networks.
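The hyperbolic embedding relies on hyperbolic distances between embedded nodes; in the Poincaré ball model, for instance, the distance between two embedded points $u$ and $v$ is

$$d(u,v) = \operatorname{arcosh}\!\left(1 + \frac{2\,\lVert u - v\rVert^{2}}{\bigl(1-\lVert u\rVert^{2}\bigr)\bigl(1-\lVert v\rVert^{2}\bigr)}\right),$$

which reproduces the tree-like, heavy-tailed structure of social graphs far more compactly than Euclidean distance. (This is the standard distance of the Poincaré model, shown purely for illustration; it is not necessarily the parametrization used by \hcm.)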
Abstract:
Energy Conservation Measure (ECM) project selection is made difficult by real-world constraints, limited resources to implement savings retrofits, various suppliers in the market and project financing alternatives. Many of these energy-efficient retrofit projects should be viewed as a series of investments with annual returns for these traditionally risk-averse agencies. Given a list of available ECMs, federal, state and local agencies must determine how to implement projects at lowest cost. The most common methods of implementation planning are suboptimal with respect to cost. Federal, state and local agencies can obtain greater returns on their energy conservation investment than with traditional methods, regardless of the implementing organization. This dissertation outlines several approaches to improve the traditional energy conservation models. Public buildings in regions with similar energy conservation goals, in the United States or internationally, can also benefit greatly from this research. Additionally, many private building owners are under mandates to conserve energy; e.g., Local Law 85 of the New York City Energy Conservation Code requires any building, public or private, to meet the most current energy code for any alteration or renovation. Thus, both public and private stakeholders can benefit from this research. The research in this dissertation advances and presents models that decision-makers can use to optimize the selection of ECM projects with respect to the total cost of implementation. A practical application of a two-level mathematical program with equilibrium constraints (MPEC) improves the current best practice for agencies seeking the most cost-effective selection when leveraging energy services companies or utilities. The two-level model maximizes savings to the agency and profit to the energy services companies (Chapter 2). An additional model leverages a single congressional appropriation to implement ECM projects (Chapter 3). Returns from implemented ECM projects are used to fund additional ECM projects. In these cases, fluctuations in energy costs and uncertainty in the estimated savings severely influence ECM project selection and the amount of the appropriation requested. A proposed risk-aversion method imposes a minimum on the number of projects completed in each stage, and a comparative method using Conditional Value at Risk is analyzed; time consistency is also addressed in this chapter. This work demonstrates how a risk-based, stochastic, multi-stage model with binary decision variables at each stage provides a much more accurate estimate for planning than the agency's traditional approach and deterministic models. Finally, in Chapter 4, a rolling-horizon model allows for subadditivity and superadditivity of the energy savings to simulate interactive effects between ECM projects. The approach uses the inequalities of McCormick (1976) to re-express constraints involving products of binary variables with an exact linearization (related to the convex hull of those constraints). This model additionally shows the benefits of learning between stages while remaining consistent with the single congressional appropriation framework.
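The exact linearization referred to in Chapter 4 can be written explicitly: for binary variables $x, y \in \{0,1\}$, the product $z = xy$ is replaced by a continuous variable constrained by the McCormick inequalities

$$z \le x, \qquad z \le y, \qquad z \ge x + y - 1, \qquad z \ge 0,$$

which describe the convex hull of the feasible points $(x, y, z)$ and therefore introduce no approximation error in the binary case. (This is the standard binary specialization of McCormick's envelopes; the dissertation's constraints may involve additional terms.)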
Abstract:
This work presents a 2.5D gravity model built from 315 new gravity stations surveyed along a 750-km-long NW-SE cross-section in the northern portion of the Borborema Province, NE Brazil. Gravity modelling was applied separately to the regional and residual components of the gravity field. The 2.5D modelling of the regional anomalies revealed that the depth of the crust-mantle interface varies from 28 to 32 km, assuming a mean density of 2.8 g/cm3 for the continental crust and 3.3 g/cm3 for the lithospheric mantle. The high-frequency residual anomalies were interpreted in terms of density contrasts within the upper crust, with a thickness of no more than 10 km and a broad lithological association, with densities ranging from 2.55 to 2.9 g/cm3. The present-day lithospheric geotectonic configuration of the Borborema Province is clearly the result of the Mesozoic break-up of the South American and African continents, during which much of the record of the deep tectonic structures formed during the Brasiliano/Pan-African orogeny was masked by this last tectonic episode, responsible for the fragmentation of Western Gondwana.
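As a rough, order-of-magnitude illustration (not the modelling procedure used in the study), the gravity effect of an undulation $\Delta h$ of the crust-mantle interface can be approximated by the infinite-slab formula

$$\Delta g \approx 2\pi G\,\Delta\rho\,\Delta h,$$

so that, with the density contrast of $\Delta\rho = 0.5$ g/cm3 adopted here (3.3 minus 2.8 g/cm3), each kilometre of Moho relief contributes roughly 21 mGal to the regional anomaly.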
Abstract:
In this study, lubrication theory is used to model flow in geological fractures and to analyse the compound effect of medium heterogeneity and complex fluid rheology. Such studies are warranted because most numerical models adopt a Newtonian rheology for its ease of use, even though non-Newtonian fluids are ubiquitous in subsurface applications. Past studies on Newtonian and non-Newtonian flow in single rock fractures are summarized in Chapter 1. Chapter 2 presents analytical and semi-analytical conceptual models for the flow of a shear-thinning fluid in rock fractures with simplified geometry, providing a first insight into their permeability. In Chapter 3, a lubrication-based 2-D numerical model is implemented to solve the flow of an Ellis fluid in rough fractures; the finite-volume model developed is computationally cheaper than full 3-D simulations and introduces an acceptable approximation as long as the flow is laminar and the fracture walls are relatively smooth. The compound effect of the shear-thinning nature of the fluid and fracture heterogeneity promotes flow localization, which in turn affects the performance of industrial activities and remediation techniques. In Chapter 4, a Monte Carlo framework is adopted to produce multiple realizations of synthetic fractures and analyse their ensemble flow statistics for a variety of real non-Newtonian fluids; the Newtonian case is used as a benchmark. In Chapters 5 and 6, a conceptual model of the hydro-mechanical aspects of backflow occurring in the last phase of hydraulic fracturing is proposed and experimentally validated, quantifying the effects of the relaxation induced by the flow.
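For reference, the Ellis constitutive law mentioned above expresses the apparent viscosity as a function of the shear stress $\tau$,

$$\eta(\tau) = \frac{\eta_0}{1 + \left(\tau/\tau_{1/2}\right)^{\alpha - 1}},$$

where $\eta_0$ is the zero-shear-rate viscosity, $\tau_{1/2}$ the stress at which the viscosity drops to $\eta_0/2$, and $\alpha > 1$ the shear-thinning index; the model recovers Newtonian behaviour at low stress and power-law thinning at high stress. (This is the conventional form of the Ellis model; the notation in the thesis may differ.)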
Abstract:
Air pollution is one of the greatest health risks in the world. At the same time, its strong interaction with climate change, as well as with the Urban Heat Island effect and heat waves, intensifies the impacts of all these phenomena. Good air quality and high levels of thermal comfort are the main goals to be reached in urban areas in the coming years. Air quality forecasts help decision makers improve air quality and public health strategies, mitigating the occurrence of acute air pollution episodes. Air quality forecasting approaches combine an ensemble of models to provide forecasts of global to regional air pollution, with downscaling for selected countries and regions. The development of models dedicated to urban air quality issues requires a good set of data on urban morphology and building material characteristics. Only a few examples of urban-scale air quality forecasting systems exist in the literature, and they are often limited to selected cities. This thesis sets up a methodology for the development of a forecasting tool. The forecasting tool can be adapted to any city and uses a new parametrization for vegetated areas. The parametrization method, based on aerodynamic parameters, produces the spatially varying urban roughness. At the core of the forecasting tool is an urban-scale dispersion model run in forecasting mode, driven by the meteorological and background concentration forecasts provided by two regional numerical weather forecasting models. The tool produces 1-day spatial forecasts of NO2, PM10 and O3 concentrations, air temperature, air humidity and BLQ-Air index values. The tool is automated to run every day, and the maps produced are displayed on the e-Globus platform and updated daily. The results obtained indicate that the forecast output is in good agreement with the observed measurements.
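As a generic illustration of how aerodynamic parameters enter such a parametrization (not the specific scheme developed in this thesis), the spatially varying roughness typically feeds into a logarithmic wind profile of the form

$$u(z) = \frac{u_*}{\kappa}\,\ln\!\left(\frac{z - d}{z_0}\right),$$

where $u_*$ is the friction velocity, $\kappa \approx 0.4$ the von Kármán constant, $d$ the displacement height and $z_0$ the roughness length, the latter two being the quantities derived from the urban and vegetation morphology.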
Abstract:
BACKGROUND CONTEXT In canine intervertebral disc (IVD) disease, a useful animal model, little is known about the inflammatory response in the epidural space. PURPOSE To determine messenger RNA (mRNA) expression of selected cytokines, chemokines, and matrix metalloproteinases (MMPs) qualitatively and semiquantitatively over the course of the disease, and to correlate the results with neurologic status and outcome. STUDY DESIGN/SETTING Prospective study using extruded IVD material from dogs with thoracolumbar IVD extrusion. PATIENT SAMPLE Seventy affected dogs and 13 control dogs (24 samples). OUTCOME MEASURES Duration of neurologic signs, pretreatment, neurologic grade, severity of pain, and outcome were recorded. After diagnostic imaging, decompressive surgery was performed. METHODS mRNA expression of interleukin (IL)-1β, IL-2, IL-4, IL-6, IL-8, IL-10, tumor necrosis factor (TNF), interferon (IFN)γ, MMP-2, MMP-9, chemokine ligand (CCL)2, CCL3, and three housekeeping genes was determined in the collected epidural material using Panomics QuantiGene Plex 2.0 technology. Relative mRNA expression and fold changes were calculated, and relative mRNA expression was correlated statistically with clinical parameters. RESULTS Fold changes of TNF, IL-1β, IL-2, IL-4, IL-6, IL-10, IFNγ, and CCL3 were clearly downregulated in all stages of the disease. MMP-9 was downregulated in the acute stage and upregulated in the subacute and chronic phases. Interleukin-8 was upregulated in acute cases. MMP-2 showed mild and CCL2 strong upregulation over the whole course of the disease. In dogs with severe pain, CCL3 and IFNγ were significantly higher compared with dogs without pain (p=.017/.020). Dogs pretreated with nonsteroidal anti-inflammatory drugs showed significantly lower mRNA expression of IL-8 (p=.017). CONCLUSIONS The high CCL2 levels and upregulated MMPs, combined with downregulated T-cell cytokines and suppressed pro-inflammatory genes in extruded canine disc material, indicate that the epidural reaction is dominated by infiltrating monocytes differentiating into macrophages with tissue-remodeling functions. These results will help in understanding the pathogenic processes that form the basis for novel therapeutic approaches, and the canine IVD disease model will be rewarding in this process.
Abstract:
Prepared for Office, Chief of Engineers, U.S. Army, Washington, D.C.
Abstract:
Objective: To investigate the association of different types of magnetic resonance imaging (MRI)-detected medial meniscal pathology with subregional cartilage loss in the medial tibiofemoral compartment. Methods: A total of 152 women aged ≥40 years, with and without knee osteoarthritis (OA), were included in a longitudinal 24-month observational study. Spoiled gradient recalled acquisitions at steady state (SPGR) and T2-weighted fat-suppressed MRI sequences were acquired. Medial meniscal status of the anterior horn (AH), body, and posterior horn (PH) was graded at baseline: 0 (normal), 1 (intrasubstance meniscal signal changes), 2 (single tears), and 3 (complex tears/maceration). Cartilage segmentation was performed at baseline and 24-month follow-up in various tibiofemoral subregions using computational software. Multiple linear regression models were applied with cartilage loss as the outcome. In a first model, the results were adjusted for age and body mass index (BMI); in a second model, for age, BMI and medial meniscal extrusion. Results: After adjusting for age, BMI, and medial meniscal extrusion, cartilage loss in the total medial tibia (MT) (0.04 mm, P=0.04) and the external medial tibia (eMT) (0.068 mm, P=0.04) increased significantly for compartments with grade 3 lesions. Cartilage loss in the total central medial femoral condyle (cMF) (0.071 mm, P=0.03) also increased significantly for compartments with grade 2 lesions. Cartilage loss at the eMT was significantly related to tears of the PH (0.074 mm; P=0.03). Cartilage loss was not significantly increased for compartments with grade 1 lesions. Conclusion: The protective function of the meniscus appears to be preserved in the presence of intrasubstance meniscal signal changes. Prevalent single tears and meniscal maceration were associated with increased cartilage loss in the same compartment, especially at the PH. (C) 2009 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
Abstract:
Diabetes mellitus (DM) is a major cause of peripheral neuropathy. More than 220 million people worldwide suffer from type 2 DM, which will, in approximately half of them, lead to the development of diabetic peripheral neuropathy (DPN). While of significant medical importance, the pathophysiological changes present in DPN are still poorly understood. To gain more insight into DPN associated with type 2 DM, we used a rodent model of this form of diabetes, the db/db mouse. During in-vivo conduction velocity studies on these animals, we observed multiple spiking following a single stimulation. This prompted us to evaluate the excitability properties of db/db peripheral nerves. Ex-vivo electrophysiological evaluation revealed a significant increase in the excitability of db/db sciatic nerves. While the shape and kinetics of the compound action potential of db/db nerves were the same as for control nerves, we observed an increase in the after-hyperpolarization phase (AHP) under diabetic conditions. Using pharmacological inhibitors, we demonstrated that both the peripheral nerve hyperexcitability (PNH) and the increased AHP were mostly mediated by decreased activity of Kv1 channels. Importantly, we corroborated these data at the molecular level. We observed a strong reduction of Kv1.2 channel presence in the juxtaparanodal regions of teased fibers from db/db mice compared to control mice. Quantification of both Kv1.2 isoforms in DRG neurons and in the endoneurial compartment of peripheral nerve by Western blotting revealed that less mature Kv1.2 was integrated into the axonal membranes at the juxtaparanodes. Our observation that the peripheral nerve hyperexcitability present in db/db mice is at least in part a consequence of changes in potassium channel distribution suggests that the same mechanism also mediates PNH in diabetic patients.
Abstract:
To investigate whether the compartment pressure of the rectus sheath (CPRS) reflects the intra-abdominal pressure (IAP) under various conditions of intra-abdominal hypertension (IAH) in a pig model.
Abstract:
The modelling of diffusive terms in particle methods is a delicate matter, and several models have been proposed in the literature to account for such terms. The diffusion velocity method (DVM), originally designed for the diffusion of passive scalars, turns diffusive terms into convective ones by expressing them as a divergence involving a so-called diffusion velocity. In this paper, DVM is extended to the diffusion of vectorial quantities in the three-dimensional Navier–Stokes equations, in their incompressible velocity–vorticity formulation. The integration of a large eddy simulation (LES) turbulence model is investigated and a general DVM formulation is proposed. Either with or without LES, a novel expression of the diffusion velocity is derived, which makes it easier to approximate and which highlights the analogy with the original formulation for scalar transport. Based on this formulation, DVM is then analysed in one dimension, both analytically and numerically on test cases, to demonstrate its good behaviour.
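The core idea of the diffusion velocity method is easiest to see for a passive scalar $c$ advected by a velocity field $\mathbf{u}$ with diffusivity $\nu$: the diffusion term is rewritten as an additional convective flux,

$$\partial_t c + \nabla\cdot(\mathbf{u}\,c) = \nabla\cdot(\nu\nabla c) \;\;\Longleftrightarrow\;\; \partial_t c + \nabla\cdot\big((\mathbf{u} + \mathbf{u}_d)\,c\big) = 0, \qquad \mathbf{u}_d = -\nu\,\frac{\nabla c}{c},$$

so that particles are simply transported with the modified velocity $\mathbf{u} + \mathbf{u}_d$. (This is the classical scalar formulation from which the paper starts; the extension to vectorial quantities such as vorticity is the contribution described above.)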
Abstract:
The objective of this study was to illustrate the applicability and significance of the novel Lewis urothelial cancer model compared to the classic Fischer 344 model. Fischer 344 and Lewis female rats, 7 weeks old, were intravesically instilled with N-methyl-N-nitrosourea 1.5 mg/kg every other week for a total of four doses. After 15 weeks, the animals were sacrificed and the bladders analyzed by histopathology (tumor grade and stage), immunohistochemistry (apoptotic and proliferative indices) and blotting (Toll-like receptor 2 [TLR2], Uroplakin III [UP III] and C-Myc). Control groups received placebo. There were macroscopic neoplastic lesions in 20 % of the Lewis strain and 70 % of the Fischer 344 strain. The Lewis strain showed hyperplasia in 50 % of animals and normal bladders in the remaining 50 %. All Fischer 344 animals had lesions: 20 % papillary hyperplasia, 30 % dysplasia, 40 % neoplasia and 10 % squamous metaplasia. Proliferative and apoptotic indices were significantly lower in the Lewis strain (p < 0.01). TLR2 and UP III protein levels were significantly higher in the Lewis strain compared to the Fischer 344 strain (70.8 and 46.5 % vs. 49.5 and 16.9 %, respectively). In contrast, C-Myc protein levels were significantly higher in Fischer 344 (22.5 %) than in the Lewis strain (13.7 %). The innovative carcinogen-resistant Lewis urothelial model represents a new strategy for translational research. Preservation of the TLR2 and UP III defense mechanisms might drive diverse urothelial phenotypes during carcinogenesis in differently susceptible individuals.
Abstract:
We describe an estimation technique for biomass burning emissions in South America based on a combination of remote-sensing fire products and field observations, the Brazilian Biomass Burning Emission Model (3BEM). For each fire pixel detected by remote sensing, the mass of the emitted tracer is calculated based on field observations of fire properties related to the type of vegetation burning. The burnt area is estimated from the instantaneous fire size retrieved by remote sensing, when available, or from statistical properties of the burn scars. The sources are then spatially and temporally distributed and assimilated daily by the Coupled Aerosol and Tracer Transport model to the Brazilian developments on the Regional Atmospheric Modeling System (CATT-BRAMS) in order to predict the related tracer concentrations. Three other biomass burning inventories, including GFEDv2 and EDGAR, are used simultaneously to compare emission strengths in terms of the resulting tracer distributions. We also assess the effect of using daily time resolution for fire emissions by including runs with monthly-averaged emissions. We evaluate the performance of the model with the different emission estimation techniques by comparing the model results with direct measurements of carbon monoxide, both near-surface and airborne, as well as with remote-sensing-derived products. The model results obtained using the 3BEM methodology introduced in this paper show relatively good agreement with the direct measurements and the MOPITT data product, suggesting the reliability of the model at local to regional scales.
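The per-fire source term in this kind of bottom-up inventory generally follows the classical Seiler and Crutzen formulation, sketched here in generic notation (the exact parameters used by 3BEM may differ):

$$M^{[\eta]} = A_{\mathrm{fire}} \times B_{\mathrm{veg}} \times \beta_{\mathrm{veg}} \times \mathrm{EF}^{[\eta]}_{\mathrm{veg}},$$

where $M^{[\eta]}$ is the mass of tracer $\eta$ emitted by the fire, $A_{\mathrm{fire}}$ the burnt area, $B_{\mathrm{veg}}$ the above-ground biomass density, $\beta_{\mathrm{veg}}$ the combustion factor and $\mathrm{EF}^{[\eta]}_{\mathrm{veg}}$ the emission factor for the vegetation type assigned to the fire pixel.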