998 results for Variability Modeling
Abstract:
This work focuses on the analysis of sea-level change over the last century, based mainly on instrumental observations. Individual components of sea-level change during this period are investigated at both global and regional scales. Some of the geophysical processes responsible for current sea-level change, such as glacial isostatic adjustment and the present-day melting of terrestrial ice sources, have been modeled and compared with observations. A new value of global mean sea-level change based on tide gauge observations has been independently assessed at 1.5 mm/year, using corrections for glacial isostatic adjustment obtained with different models as a criterion for tide gauge selection. The long-wavelength spatial variability of the main components of sea-level change has been investigated by means of traditional and new spectral methods. Complex non-linear trends and abrupt sea-level variations shown by tide gauge records have been addressed by applying different approaches to regional case studies. The Ensemble Empirical Mode Decomposition technique has been used to analyse tide gauge records from the Adriatic Sea to ascertain the existence of cyclic sea-level variations. An Early Warning approach has been adopted to detect tipping points in sea-level records of the North East Pacific and their relationship with oceanic modes. Global sea-level projections to the year 2100 have been obtained by a semi-empirical approach based on an artificial neural network method. In addition, a model-based approach has been applied to the Mediterranean Sea, yielding sea-level projections to the year 2050.
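A minimal sketch of the GIA-correction step described above, with entirely hypothetical numbers: a modeled glacial isostatic adjustment rate is removed from an annual tide-gauge series before fitting the secular trend.

```python
# Illustrative only: synthetic gauge record and an assumed GIA rate,
# not the abstract's actual data or models.

def linear_trend(years, values):
    """Ordinary least-squares slope (units of values per year)."""
    n = len(years)
    my = sum(years) / n
    mv = sum(values) / n
    num = sum((y - my) * (v - mv) for y, v in zip(years, values))
    den = sum((y - my) ** 2 for y in years)
    return num / den

def gia_corrected_trend(years, gauge_mm, gia_rate_mm_per_yr):
    """Subtract the modeled GIA signal (a linear rate) before fitting."""
    corrected = [v - gia_rate_mm_per_yr * (y - years[0])
                 for y, v in zip(years, gauge_mm)]
    return linear_trend(years, corrected)

# Synthetic record: 1.9 mm/yr apparent rise, of which 0.4 mm/yr is GIA.
years = list(range(1900, 2001))
gauge = [1.9 * (y - 1900) for y in years]
print(round(gia_corrected_trend(years, gauge, 0.4), 2))  # 1.5
```

The residual 1.5 mm/yr matches the global mean rate reported in the abstract only because the toy numbers were chosen that way.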
Abstract:
Ocular anatomy and radiation-associated toxicities provide unique challenges for external beam radiation therapy. For treatment planning, precise modeling of organs at risk and tumor volume are crucial. Development of a precise eye model and automatic adaptation of this model to patients' anatomy remain problematic because of organ shape variability. This work introduces the application of a 3-dimensional (3D) statistical shape model as a novel method for precise eye modeling for external beam radiation therapy of intraocular tumors.
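A hedged sketch of the core of a statistical shape model: principal component analysis over aligned 3-D landmark sets, so that a new organ shape can be expressed as the mean shape plus a few mode weights. The landmark data below are synthetic; a clinical eye model would use many more landmarks and aligned patient scans.

```python
import numpy as np

rng = np.random.default_rng(0)
n_shapes, n_landmarks = 20, 5
mean_shape = rng.normal(size=n_landmarks * 3)   # flattened (x, y, z) triples
mode = rng.normal(size=n_landmarks * 3)

# Training shapes vary along one dominant deformation mode plus small noise.
X = np.array([mean_shape + b * mode + 0.01 * rng.normal(size=n_landmarks * 3)
              for b in rng.normal(size=n_shapes)])

mu = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
explained = S**2 / np.sum(S**2)
print(bool(explained[0] > 0.95))  # first mode dominates by construction
```

New shapes are then reconstructed as `mu + sum(b_i * Vt[i])`, which is what allows automatic adaptation of the model to a patient's anatomy within the learned variability.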
Abstract:
Traffic particle concentrations show considerable spatial variability within a metropolitan area. We consider latent variable semiparametric regression models for modeling the spatial and temporal variability of black carbon and elemental carbon concentrations in the greater Boston area. Measurements of these pollutants, which are markers of traffic particles, were obtained from several individual exposure studies conducted at specific household locations as well as 15 ambient monitoring sites in the city. The models allow for both flexible, nonlinear effects of covariates and for unexplained spatial and temporal variability in exposure. In addition, the different individual exposure studies recorded different surrogates of traffic particles, with some recording only outdoor concentrations of black or elemental carbon, some recording indoor concentrations of black carbon, and others recording both indoor and outdoor concentrations of black carbon. A joint model for outdoor and indoor exposure that specifies a spatially varying latent variable provides greater spatial coverage in the area of interest. We propose a penalised spline formulation of the model that relates to generalised kriging of the latent traffic pollution variable and leads to a natural Bayesian Markov chain Monte Carlo algorithm for model fitting. We propose methods that allow us to control the degrees of freedom of the smoother in a Bayesian framework. Finally, we present results from an analysis that applies the model to data from summer and winter separately.
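One common penalised-spline formulation can be sketched as a truncated-linear basis with a ridge penalty on the knot coefficients. The Bayesian MCMC fitting described in the abstract is replaced here by a direct penalised least-squares solve; the data are synthetic.

```python
import numpy as np

def pspline_fit(x, y, knots, lam):
    # Design matrix: intercept, linear term, and truncated-linear knot terms.
    B = np.column_stack([np.ones_like(x), x] +
                        [np.maximum(x - k, 0.0) for k in knots])
    # Penalise only the knot coefficients, not the intercept/slope.
    D = np.diag([0.0, 0.0] + [1.0] * len(knots))
    coef = np.linalg.solve(B.T @ B + lam * D, B.T @ y)
    return B @ coef

x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x)            # smooth "pollution surface" stand-in
knots = np.linspace(0.1, 0.9, 9)
fit = pspline_fit(x, y, knots, lam=1e-4)
print(bool(np.max(np.abs(fit - y)) < 0.2))
```

Increasing `lam` shrinks the knot coefficients toward zero, which is how the effective degrees of freedom of the smoother are controlled; in the Bayesian version this corresponds to a prior on the knot coefficients.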
Abstract:
This paper proposes Poisson log-linear multilevel models to investigate population variability in sleep state transition rates. We specifically propose a Bayesian Poisson regression model that is more flexible, more scalable to larger studies, and more easily fit than other attempts in the literature. We further use hierarchical random effects to account for pairings of individuals and repeated measures within those individuals, as comparing diseased to non-diseased subjects while minimizing bias is of epidemiologic importance. We estimate essentially non-parametric piecewise constant hazards and smooth them, and allow for time-varying covariates and segment-of-the-night comparisons. The Bayesian Poisson regression is justified through a re-derivation of a classical algebraic likelihood equivalence between Poisson regression with a log(time) offset and survival regression assuming piecewise constant hazards. This relationship allows us to synthesize two methods currently used to analyze sleep transition phenomena: stratified multi-state proportional hazards models and log-linear models with GEE for transition counts. An example data set from the Sleep Heart Health Study is analyzed.
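The likelihood equivalence invoked above can be checked numerically: with a piecewise-constant hazard, the maximum-likelihood hazard in each time segment is events divided by person-time, which is exactly what Poisson regression with a log(person-time) offset recovers. The counts below are made up.

```python
import math

events = [7, 3]              # transition counts in two night segments
person_time = [140.0, 60.0]  # person-time at risk in each segment

# Piecewise-constant hazard MLE per segment:
hazards = [d / t for d, t in zip(events, person_time)]

# Poisson with offset: log(mean_j) = beta_j + log(t_j).
# Maximising sum(d_j*(beta_j + log t_j) - t_j*exp(beta_j)) over beta_j
# gives exp(beta_j) = d_j / t_j, i.e. the same hazard estimate.
betas = [math.log(d / t) for d, t in zip(events, person_time)]
print([round(math.exp(b), 3) for b in betas] ==
      [round(h, 3) for h in hazards])  # True
```

This is why transition-count log-linear models and piecewise-exponential survival models can be fit with the same Poisson machinery.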
Abstract:
The goals of the present study were to model the population kinetics of in vivo influx and efflux processes of grepafloxacin at the serum-cerebrospinal fluid (CSF) barrier and to propose a simulation-based approach to optimize the design of dose-finding trials in the meningitis rabbit model. Twenty-nine rabbits with pneumococcal meningitis receiving grepafloxacin at 15 mg/kg of body weight (intravenous administration at 0 h), 30 mg/kg (at 0 h), or 50 mg/kg twice (at 0 and 4 h) were studied. A three-compartment population pharmacokinetic model was fit to the data with the program NONMEM (Nonlinear Mixed Effects Modeling). Passive diffusion clearance (CL(diff)) and active efflux clearance (CL(active)) are transfer kinetic modeling parameters. Influx clearance is assumed to be equal to CL(diff), and efflux clearance is the sum of CL(diff), CL(active), and bulk flow clearance (CL(bulk)). The average influx clearance for the population was 0.0055 ml/min (interindividual variability, 17%). Passive diffusion clearance was greater in rabbits receiving grepafloxacin at 15 mg/kg than in those treated with higher doses (0.0088 versus 0.0034 ml/min). Assuming a CL(bulk) of 0.01 ml/min, CL(active) was estimated to be 0.017 ml/min (11%), and clearance by total efflux was estimated to be 0.032 ml/min. The population kinetic model allows us not only to quantify in vivo efflux and influx mechanisms at the serum-CSF barrier but also to analyze the effects of different dose regimens on transfer kinetic parameters in the rabbit meningitis model. The modeling-based approach also provides a tool for the simulation and prediction of various outcomes in which researchers might be interested, which holds great potential for designing dose-finding trials.
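The clearance bookkeeping stated in the abstract (influx = CL(diff); efflux = CL(diff) + CL(active) + CL(bulk)) can be written out directly, using the reported population values:

```python
# Values taken from the abstract; total efflux is their sum.
CL_diff = 0.0055    # ml/min, passive diffusion (population average)
CL_active = 0.017   # ml/min, active efflux
CL_bulk = 0.01      # ml/min, assumed bulk-flow clearance

CL_influx = CL_diff
CL_efflux = CL_diff + CL_active + CL_bulk
print(f"{CL_efflux:.4f}")  # 0.0325 ml/min, reported as 0.032 in the abstract
```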
Abstract:
Riparian zones are dynamic, transitional ecosystems between aquatic and terrestrial ecosystems with well-defined vegetation and soil characteristics. Development of an all-encompassing definition for riparian ecotones, because of their high variability, is challenging. However, there are two primary factors that all riparian ecotones depend on: the watercourse and its associated floodplain. Previous approaches to riparian boundary delineation have utilized fixed-width buffers, but this methodology has proven inadequate, as it takes only the watercourse into consideration and ignores critical geomorphology and the associated vegetation and soil characteristics. Our approach offers advantages over previously used methods by utilizing: the geospatial modeling capabilities of ArcMap GIS; a better sampling technique along the watercourse that can distinguish the 50-year floodplain, which is the optimal hydrologic descriptor of riparian ecotones; the Soil Survey Geographic (SSURGO) and National Wetland Inventory (NWI) databases to distinguish contiguous areas beyond the 50-year floodplain; and land use/cover characteristics associated with the delineated riparian zones. The model utilizes spatial data readily available from Federal and State agencies and geospatial clearinghouses. An accuracy assessment was performed to evaluate the impact of varying the 50-year flood height, changing the DEM spatial resolution (1, 3, 5 and 10 m), and positional inaccuracies in the National Hydrography Dataset (NHD) streams layer on the boundary placement of the delineated variable-width riparian ecotones. The result of this study is a robust and automated GIS-based model attached to ESRI ArcMap software to delineate and classify variable-width riparian ecotones.
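A toy sketch of the flood-height step in variable-width delineation: starting from stream cells on a DEM grid, mark cells reachable through terrain lying at or below the stream elevation plus a 50-year flood height. The actual model also merges SSURGO/NWI polygons and land-cover data, which is omitted here; the grid and heights are invented.

```python
from collections import deque

def delineate(dem, stream_cells, flood_height):
    """Breadth-first flood of cells below each seed's water-surface limit."""
    rows, cols = len(dem), len(dem[0])
    riparian = set()
    queue = deque((r, c, dem[r][c] + flood_height) for r, c in stream_cells)
    while queue:
        r, c, limit = queue.popleft()
        if (r, c) in riparian or dem[r][c] > limit:
            continue
        riparian.add((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                queue.append((rr, cc, limit))
    return riparian

dem = [[1, 2, 5],      # elevations; stream runs down the left column
       [1, 3, 6],
       [1, 4, 7]]
zone = delineate(dem, [(0, 0), (1, 0), (2, 0)], flood_height=2.5)
print(sorted(zone))  # [(0, 0), (0, 1), (1, 0), (1, 1), (2, 0)]
```

Varying `flood_height` and the DEM resolution changes the delineated width, which is exactly what the accuracy assessment in the abstract explores.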
Abstract:
This work presents a 1-D process scale model used to investigate the chemical dynamics and temporal variability of nitrogen oxides (NOx) and ozone (O3) within and above snowpack at Summit, Greenland for March-May 2009, and estimates surface exchange of NOx between the snowpack and surface layer in April-May 2009. The model assumes the surfaces of snowflakes have a Liquid-Like Layer (LLL) where aqueous chemistry occurs and interacts with the interstitial air of the snowpack. Model parameters and initialization are physically and chemically representative of snowpack at Summit, Greenland, and model results are compared to measurements of NOx and O3 collected by our group at Summit, Greenland from 2008-2010. The model, paired with measurements, confirmed the main hypothesis in the literature that photolysis of nitrate on the surface of snowflakes is responsible for nitrogen dioxide (NO2) production in the top ~50 cm of the snowpack at solar noon for the March-May period in 2009. Nighttime peaks of NO2 in the snowpack for April and May were reproduced with aqueous formation of peroxynitric acid (HNO4) in the top ~50 cm of the snowpack, with subsequent mass transfer to the gas phase, decomposition to form NO2 at nighttime, and transport of the NO2 to depths of 2 meters. Modeled production of HNO4 was hindered in March 2009 due to the low production of its precursor, the hydroperoxy radical, resulting in underestimation of nighttime NO2 in the snowpack for March 2009. The aqueous reaction of O3 with formic acid was the major sink of O3 in the snowpack for March-May 2009. Nitrogen monoxide (NO) production in the top ~50 cm of the snowpack is related to the photolysis of NO2, and the model underestimates NO in May 2009. Modeled surface exchange of NOx in April and May is on the order of 10^11 molecules m^-2 s^-1. Removing the downward fluxes of NO and NO2 from the measured fluxes resulted in agreement between measured NOx fluxes and modeled surface exchange in April, and an order of magnitude deviation in May. Modeled transport of NOx above the snowpack in May shows an order of magnitude increase of NOx fluxes in the first 50 cm of the snowpack, attributed to the production of NO2 during the day from the thermal decomposition and photolysis of peroxynitric acid, with minor contributions of NO from HONO photolysis in the early morning.
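The daytime mechanism discussed above can be caricatured as a single-box balance: nitrate photolysis produces NO2 at a rate J·[NO3-], opposed by first-order loss (ventilation and reaction). All rate constants below are hypothetical placeholders, not the model's actual chemistry.

```python
def no2_timeseries(j_photo, nitrate, k_loss, dt, steps):
    """Forward-Euler integration of d[NO2]/dt = J*[NO3-] - k_loss*[NO2]."""
    no2 = 0.0
    out = []
    for _ in range(steps):
        no2 += (j_photo * nitrate - k_loss * no2) * dt
        out.append(no2)
    return out

series = no2_timeseries(j_photo=1e-6, nitrate=1e12, k_loss=1e-3,
                        dt=1.0, steps=20000)
# Approaches the steady state J*[NO3-]/k_loss = 1e9 (arbitrary units).
print(bool(abs(series[-1] - 1e9) / 1e9 < 0.01))
```

The full 1-D model couples many such production/loss terms across snowpack layers with vertical transport; this box illustrates only the photolytic source balance.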
Abstract:
Argininosuccinic aciduria (ASA) is an autosomal recessive urea cycle disorder caused by deficiency of argininosuccinate lyase (ASL) with a wide clinical spectrum from asymptomatic to severe hyperammonemic neonatal onset life-threatening courses. We investigated the role of ASL transcript variants in the clinical and biochemical variability of ASA. Recombinant proteins for ASL wild type, mutant p.E189G, and the frequently occurring transcript variants with exon 2 or 7 deletions were (co-)expressed in human embryonic kidney 293T cells. We found that exon 2-deleted ASL forms a stable truncated protein with no relevant activity but a dose-dependent dominant negative effect on enzymatic activity after co-expression with wild type or mutant ASL, whereas exon 7-deleted ASL is unstable but seems to have, nevertheless, a dominant negative effect on mutant ASL. These findings were supported by structural modeling predictions for ASL heterotetramer/homotetramer formation. Illustrating the physiological relevance, the predominant occurrence of exon 7-deleted ASL was found in two patients who were both heterozygous for the ASL mutant p.E189G. Our results suggest that ASL transcripts can contribute to the highly variable phenotype in ASA patients if expressed at high levels. Especially, the exon 2-deleted ASL variant may form a heterotetramer with wild type or mutant ASL, causing markedly reduced ASL activity.
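A back-of-envelope illustration of the dominant-negative effect described above: if ASL is active only as a homotetramer of full-length subunits and a fraction f of subunits are exon 2-deleted, random assembly leaves (1-f)^4 of tetramers fully active. This assumes purely random tetramerisation and all-or-nothing activity, which is a simplification of the paper's structural modeling.

```python
def active_fraction(f_deleted):
    """Fraction of tetramers containing no exon 2-deleted subunit."""
    return (1.0 - f_deleted) ** 4

for f in (0.0, 0.25, 0.5):
    print(f, round(active_fraction(f), 4))
# 0.0  -> 1.0
# 0.25 -> 0.3164
# 0.5  -> 0.0625
```

The steep nonlinearity (50% deleted subunits leaving only ~6% active tetramers) is what makes a truncated-but-stable subunit markedly reduce overall activity.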
Abstract:
By means of fixed-links modeling, the present study identified different processes of visual short-term memory (VSTM) functioning and investigated how these processes are related to intelligence. We conducted an experiment where the participants were presented with a color change detection task. Task complexity was manipulated through varying the number of presented stimuli (set size). We collected hit rate and reaction time (RT) as indicators for the amount of information retained in VSTM and speed of VSTM scanning, respectively. Due to the impurity of these measures, however, the variability in hit rate and RT was assumed to consist not only of genuine variance due to individual differences in VSTM retention and VSTM scanning but also of other, non-experimental portions of variance. Therefore, we identified two qualitatively different types of components for both hit rate and RT: (1) non-experimental components representing processes that remained constant irrespective of set size and (2) experimental components reflecting processes that increased as a function of set size. For RT, intelligence was negatively associated with the non-experimental components, but was unrelated to the experimental components assumed to represent variability in VSTM scanning speed. This finding indicates that individual differences in basic processing speed, rather than in speed of VSTM scanning, differentiate between high- and low-intelligent individuals. For hit rate, the experimental component constituting individual differences in VSTM retention was positively related to intelligence. The non-experimental components of hit rate, representing variability in basal processes, however, were not associated with intelligence. By decomposing VSTM functioning into non-experimental and experimental components, significant associations with intelligence were revealed that otherwise might have been obscured.
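A simplified numeric analogue of the decomposition above: a person's RT across set sizes can be split into a constant component (intercept: processes unaffected by set size) and a component increasing with set size (slope: VSTM scanning). Actual fixed-links models are latent-variable structural equation models estimated across many participants; this per-person regression with made-up data only illustrates the constant-vs-increasing distinction.

```python
def intercept_slope(set_sizes, rts):
    """Least-squares intercept and slope of RT against set size."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(rts) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(set_sizes, rts))
             / sum((x - mx) ** 2 for x in set_sizes))
    return my - slope * mx, slope

set_sizes = [2, 4, 6, 8]
rt_person = [500, 560, 620, 680]   # ms; constant 440 ms + 30 ms per item
print(tuple(round(v, 1) for v in intercept_slope(set_sizes, rt_person)))
# (440.0, 30.0)
```

In the study's terms, individual differences in the 440 ms-like intercepts (basic speed) correlated with intelligence for RT, while differences in the 30 ms/item-like slopes (scanning speed) did not.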
Abstract:
As an initial step in establishing mechanistic relationships between environmental variability and recruitment in Atlantic cod Gadus morhua along the coast of the western Gulf of Maine, we assessed transport success of larvae from major spawning grounds to nursery areas with particle tracking using the unstructured grid model FVCOM (finite volume coastal ocean model). In coastal areas, dispersal of early planktonic life stages of fish and invertebrate species is highly dependent on the regional dynamics and its variability, which has to be captured by our models. With state-of-the-art forcing for the year 1995, we evaluate the sensitivity of particle dispersal to the timing and location of spawning, the spatial and temporal resolution of the model, and the vertical mixing scheme. A 3 d frequency for the release of particles is necessary to capture the effect of circulation variability in an averaged dispersal pattern for the spawning season. The analysis of sensitivity to model setup showed that a higher resolution mesh, tidal forcing, and current variability do not change the general pattern of connectivity, but do tend to increase within-site retention. Our results indicate strong downstream connectivity among spawning grounds and higher chances for successful transport from spawning areas closer to the coast. The model run for January egg release indicates 1 to 19% within-spawning-ground retention of initial particles, which may be sufficient to sustain local populations. A systematic sensitivity analysis still needs to be conducted to determine the minimum mesh and forcing resolution that adequately resolves the complex dynamics of the western Gulf of Maine. Other sources of variability, i.e. large-scale upstream forcing and the biological environment, also need to be considered in future studies of the interannual variability in transport and survival of the early life stages of cod.
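A toy particle-tracking sketch of the retention diagnostic: advect particles in a steady rotating (eddy-like) flow with forward-Euler steps and report the fraction still inside the release region. FVCOM integrates particles in a 3-D unstructured-mesh velocity field; the flow and numbers here are invented.

```python
import math

def advect(particles, dt, steps):
    """Euler steps in a solid-body rotation u = -0.1*y, v = 0.1*x."""
    for _ in range(steps):
        particles = [(x - 0.1 * y * dt, y + 0.1 * x * dt)
                     for x, y in particles]
    return particles

def retention(particles, radius):
    inside = sum(1 for x, y in particles if math.hypot(x, y) <= radius)
    return inside / len(particles)

release = [(0.1 * i, 0.0) for i in range(1, 11)]   # particles along a radius
final = advect(release, dt=0.1, steps=100)
print(bool(0.0 < retention(final, radius=1.0) <= 1.0))
```

In the study, this kind of within-site retention fraction (1 to 19% for January releases) is the quantity compared across mesh resolutions and forcing choices.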
Abstract:
Approximate models (proxies) can be employed to reduce the computational costs of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to a biased estimation. To avoid this problem and ensure a reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both exact and approximate solvers are run. Functional principal components analysis (FPCA) is used to investigate the variability in the two sets of curves and reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the sole proxy response. This methodology is purpose-oriented as the error model is constructed directly for the quantity of interest, rather than for the state of the system. Also, the dimensionality reduction performed by FPCA allows a diagnostic of the quality of the error model to assess the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of obtaining a prediction of the exact response for any newly generated realization suggests that the methodology can be effectively used beyond the context of uncertainty quantification, in particular for Bayesian inference and optimization.
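A compact sketch of the proxy-error-model idea: represent exact and proxy response curves by their leading principal-component scores (a discrete-grid stand-in for FPCA), then learn a map from proxy scores to exact scores on a small learning set and predict the exact curve of a new realization from its proxy response alone. The curves are synthetic and the learned map is linear; the paper's machine-learning step may be more general.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 40)
amps = rng.uniform(0.5, 2.0, size=30)                 # learning set
exact = np.array([a * np.sin(2 * np.pi * t) for a in amps])
proxy = 0.8 * exact + 0.05                            # biased, cheap "solver"

def pc_scores(X, k=1):
    """Center, SVD, and project onto the first k principal components."""
    mu = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return (X - mu) @ Vt[:k].T, mu, Vt[:k]

ps, pmu, pV = pc_scores(proxy)
es, emu, eV = pc_scores(exact)
# Linear regression from proxy scores (plus intercept) to exact scores.
coef = np.linalg.lstsq(np.column_stack([ps, np.ones_like(ps)]), es,
                       rcond=None)[0]

# Predict the exact curve of a new realization from its proxy response only.
new_exact = 1.3 * np.sin(2 * np.pi * t)
new_proxy = 0.8 * new_exact + 0.05
score = (new_proxy - pmu) @ pV.T
pred = (np.column_stack([score, np.ones_like(score)]) @ coef) @ eV + emu
print(bool(np.max(np.abs(pred - new_exact)) < 0.05))
```

Because the error model is built on the scores of the quantity of interest itself, its residuals also diagnose how informative the learning set is, as the abstract notes.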
Abstract:
SeaWiFS (Sea-viewing Wide Field-of-view Sensor) chlorophyll data revealed strong interannual variability in fall phytoplankton dynamics in the Gulf of Maine, with 3 general features in any one year: (1) rapid chlorophyll increases in response to storm events in fall; (2) gradual chlorophyll increases in response to seasonal wind- and cooling-induced mixing that gradually deepens the mixed layer; and (3) the absence of any observable fall bloom. We applied a mixed-layer box model and a 1-dimensional physical-biological numerical model to examine the influence of physical forcing (surface wind, heat flux, and freshening) on the mixed-layer dynamics and its impact on the entrainment of deep-water nutrients and thus on the appearance of fall bloom. The model results suggest that during early fall, the surface mixed-layer depth is controlled by both wind- and cooling-induced mixing. Strong interannual variability in mixed-layer depth has a direct impact on short- and long-term vertical nutrient fluxes and thus the fall bloom. Phytoplankton concentrations over time are sensitive to initial pre-bloom profiles of nutrients. The strength of the initial stratification can affect the modeled phytoplankton concentration, while the timing of intermittent freshening events is related to the significant interannual variability of fall blooms.
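The mixed-layer nutrient budget implied above can be sketched with the classic entrainment relation: as wind and cooling deepen the mixed layer, nutrient-rich deep water raises the mixed-layer concentration via dN/dt = (dh/dt)/h · (N_deep - N). The values are toy numbers, not the study's forcing.

```python
def entrain(n0, n_deep, h0, deepen_rate, dt, steps):
    """Euler integration of mixed-layer nutrient N and depth h."""
    n, h = n0, h0
    for _ in range(steps):
        n += deepen_rate / h * (n_deep - n) * dt
        h += deepen_rate * dt
    return n, h

# 30 days of deepening at 1 m/day from a 20 m summer mixed layer.
n_final, h_final = entrain(n0=0.1, n_deep=8.0, h0=20.0,
                           deepen_rate=1.0, dt=0.1, steps=300)
print(bool(0.1 < n_final < 8.0 and abs(h_final - 50.0) < 1e-9))
```

A storm corresponds to a brief spike in `deepen_rate` (rapid entrainment and chlorophyll increase), while freshening reduces it by re-stratifying the column, which is how the timing of freshening events can suppress or delay the fall bloom.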
Abstract:
Net primary production (NPP) is commonly modeled as a function of chlorophyll concentration (Chl), even though it has been long recognized that variability in intracellular chlorophyll content from light acclimation and nutrient stress confounds the relationship between Chl and phytoplankton biomass. It was suggested previously that satellite estimates of backscattering can be related to phytoplankton carbon biomass (C) under conditions of a conserved particle size distribution or a relatively stable relationship between C and total particulate organic carbon. Together, C and Chl can be used to describe physiological state (through variations in Chl:C ratios) and NPP. Here, we fully develop the carbon-based productivity model (CbPM) to include information on the subsurface light field and nitracline depths to parameterize photoacclimation and nutrient stress throughout the water column. This depth-resolved approach produces profiles of biological properties (Chl, C, NPP) that are broadly consistent with observations. The CbPM is validated using regional in situ data sets of irradiance-derived products, phytoplankton chlorophyll:carbon ratios, and measured NPP rates. CbPM-based distributions of global NPP are significantly different in both space and time from previous Chl-based estimates because of the distinction between biomass and physiological influences on global Chl fields. The new model yields annual, areally integrated water column production of ~52 Pg C a^-1 for the global oceans.
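A heavily simplified sketch of the carbon-based idea: phytoplankton carbon C is taken proportional to particulate backscattering, the Chl:C ratio serves as a physiological index, and NPP = C × growth_rate(Chl:C). The scaling constant and the growth-rate response below are placeholders for illustration, not the published CbPM parameterisation.

```python
def carbon_from_backscatter(bbp, scale=13000.0):
    """Placeholder linear scaling from backscatter to mg C m^-3."""
    return scale * bbp

def growth_rate(chl_to_c, mu_max=2.0):
    """Growth rises with Chl:C toward a saturating maximum (schematic)."""
    return mu_max * min(chl_to_c / 0.03, 1.0)

def npp(bbp, chl):
    c = carbon_from_backscatter(bbp)
    return c * growth_rate(chl / c)   # mg C m^-3 d^-1

print(round(npp(bbp=0.001, chl=0.2), 2))  # 13.33
```

The key structural point survives the simplification: two pixels with the same Chl but different backscatter-derived C get different Chl:C ratios, hence different inferred physiology and different NPP, which is what separates CbPM maps from Chl-only estimates.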
Abstract:
A critical problem in radiocarbon dating is the spatial and temporal variability of marine reservoir ages (MRAs). We assessed the MRA evolution during the last deglaciation by numerical modeling, applying a self-consistent iteration scheme in which an existing radiocarbon chronology (derived by Hughen et al., Quat. Sci. Rev., 25, pp. 3216-3227, 2006) was readjusted by transient, 3-D simulations of marine and atmospheric Δ14C. To estimate the uncertainties regarding the ocean ventilation during the last deglaciation, we considered various ocean overturning scenarios which are based on different climatic background states (PD: modern climate, GS: LGM climate conditions). Minimum and maximum MRAs are included in file 'MRAminmax_21-14kaBP.nc'. Three further files include MRAs according to equilibrium simulations of the preindustrial ocean (file 'C14age_preindustrial.nc'; this is an update of our results published in 2005) and of the glacial ocean (files 'C14age_spinupLGM_GS.nc' and 'C14age_spinupLGM_PD.nc').
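For readers unfamiliar with the quantity in these files, a marine reservoir age can be sketched from 14C activities: the conventional 14C age is -8033 · ln(F) (Libby mean life), and the MRA is the surface-ocean age minus the contemporaneous atmospheric age. The fraction-modern values below are illustrative, not output of the simulations described above.

```python
import math

LIBBY_MEAN_LIFE = 8033.0  # years, used for conventional 14C ages

def c14_age(fraction_modern):
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

def reservoir_age(f_ocean, f_atm):
    """Surface-ocean 14C age minus atmospheric 14C age at the same time."""
    return c14_age(f_ocean) - c14_age(f_atm)

# Surface ocean depleted in 14C relative to the atmosphere:
mra = reservoir_age(f_ocean=0.94, f_atm=0.99)
print(round(mra))  # a few hundred years, typical of modern surface water
```

Changes in ocean overturning alter `f_ocean` relative to `f_atm`, which is why the different ventilation scenarios (PD vs. GS) in the dataset yield different MRA fields.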