992 results for "modeling sensitivity"
Abstract:
As an initial step in establishing mechanistic relationships between environmental variability and recruitment in Atlantic cod Gadus morhua along the coast of the western Gulf of Maine, we assessed transport success of larvae from major spawning grounds to nursery areas with particle tracking using the unstructured grid model FVCOM (finite volume coastal ocean model). In coastal areas, dispersal of the early planktonic life stages of fish and invertebrate species depends strongly on the regional dynamics and its variability, which must be captured by the model. With state-of-the-art forcing for the year 1995, we evaluate the sensitivity of particle dispersal to the timing and location of spawning, the spatial and temporal resolution of the model, and the vertical mixing scheme. A 3 d release frequency for particles is necessary to integrate the effect of circulation variability into a seasonally averaged dispersal pattern of the spawning season. The analysis of sensitivity to model setup showed that a higher-resolution mesh, tidal forcing, and current variability do not change the general pattern of connectivity, but do tend to increase within-site retention. Our results indicate strong downstream connectivity among spawning grounds and higher chances of successful transport from spawning areas closer to the coast. The model run for January egg release indicates 1 to 19% within-spawning-ground retention of initial particles, which may be sufficient to sustain local populations. A systematic sensitivity analysis still needs to be conducted to determine the minimum mesh and forcing resolution that adequately resolves the complex dynamics of the western Gulf of Maine. Other sources of variability, i.e. large-scale upstream forcing and the biological environment, also need to be considered in future studies of the interannual variability in transport and survival of the early life stages of cod.
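The connectivity and within-site retention statistics described above reduce, in essence, to counting particle endpoints by release and settlement site. A minimal sketch, with hypothetical site labels and endpoints rather than output from the FVCOM runs:

```python
import numpy as np

def connectivity_matrix(release_site, settle_site, n_sites):
    """C[i, j] = fraction of particles released at site i that settle at site j."""
    C = np.zeros((n_sites, n_sites))
    for i, j in zip(release_site, settle_site):
        C[i, j] += 1
    rows = C.sum(axis=1, keepdims=True)
    # normalize each row by the number of particles released at that site
    return np.divide(C, rows, out=np.zeros_like(C), where=rows > 0)

# toy endpoints: particles 1-2 released at site 0, particles 3-4 at site 1
C = connectivity_matrix([0, 0, 1, 1], [0, 1, 1, 1], n_sites=2)
retention = np.diag(C)   # within-site retention is the diagonal
```

The diagonal of the matrix gives within-spawning-ground retention; off-diagonal entries give downstream connectivity between sites.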
Abstract:
We describe an estimation technique for biomass burning emissions in South America based on a combination of remote-sensing fire products and field observations, the Brazilian Biomass Burning Emission Model (3BEM). For each fire pixel detected by remote sensing, the mass of the emitted tracer is calculated based on field observations of fire properties related to the type of vegetation burning. The burnt area is estimated from the instantaneous fire size retrieved by remote sensing, when available, or from statistical properties of the burn scars. The sources are then spatially and temporally distributed and assimilated daily by the Coupled Aerosol and Tracer Transport model to the Brazilian developments on the Regional Atmospheric Modeling System (CATT-BRAMS) in order to perform the prognosis of related tracer concentrations. Three other biomass burning inventories, including GFEDv2 and EDGAR, are used simultaneously to compare emission strengths in terms of the resultant tracer distribution. We also assess the effect of using daily time resolution for fire emissions by including runs with monthly averaged emissions. We evaluate the performance of the model under the different emission estimation techniques by comparing the model results with direct near-surface and airborne measurements of carbon monoxide, as well as with remote-sensing-derived products. The model results obtained using the 3BEM methodology introduced in this paper show relatively good agreement with the direct measurements and the MOPITT data product, suggesting that the model is reliable at local to regional scales.
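The per-pixel bookkeeping described above, burnt area times fuel load times combustion and emission factors, can be sketched as follows; all numeric values are illustrative assumptions, not figures from 3BEM:

```python
def emitted_mass_kg(burnt_area_m2, biomass_load_kg_m2, combustion_factor, ef_g_per_kg):
    """M = A * B * beta * EF: classic Seiler & Crutzen-style emission bookkeeping.

    A    burnt area (m^2)
    B    above-ground biomass load (kg dry matter per m^2)
    beta fraction of biomass actually combusted (0-1)
    EF   emission factor for the tracer (g tracer per kg dry matter burned)
    """
    burned_biomass_kg = burnt_area_m2 * biomass_load_kg_m2 * combustion_factor
    return burned_biomass_kg * ef_g_per_kg / 1000.0

# illustrative values only: a 1 ha burn with tropical-forest-like fuel load
m_co_kg = emitted_mass_kg(1e4, 16.0, 0.5, 100.0)
```

Summing such per-pixel masses over each day and grid cell yields the daily gridded source fields assimilated by the transport model.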
Abstract:
A mathematical model is proposed to analyze the effects of acquired immunity on the transmission of schistosomiasis in the human host. From this model a prevalence curve depending on four parameters can be obtained. These parameters were estimated by fitting the data with the maximum likelihood method. The model reproduced well the real data from two endemic areas of schistosomiasis: Touros, Brazil (Schistosoma mansoni) and Misungwi, Tanzania (S. haematobium). The average worm burden per person and the dispersion of parasites per person in the community can also be obtained from the model. In this paper, the stabilizing effects of the acquired-immunity assumption in the model are assessed in terms of the epidemiological variables as follows: for the prevalence curve, we calculate the confidence interval; for the average worm burden and the worm dispersion in the community, a sensitivity analysis (the range of variation) of both variables with respect to their parameters is performed.
Abstract:
Modeling the vertical penetration of photosynthetically active radiation (PAR) through the ocean, and its utilization by phytoplankton, is fundamental to simulating marine primary production. The variation of attenuation and absorption of light with wavelength suggests that photosynthesis should be modeled at high spectral resolution, but this is computationally expensive. To model primary production in global 3d models, a balance between computer time and accuracy is necessary. We investigate the effects of varying the spectral resolution of the underwater light field and the photosynthetic efficiency of phytoplankton (α∗) on primary production using a 1d coupled ecosystem ocean turbulence model. The model is applied at three sites in the Atlantic Ocean (CIS (∼60°N), PAP (∼50°N) and ESTOC (∼30°N)) to include the effect of different meteorological forcing and parameter sets. We also investigate three different methods for modeling α∗ – as a fixed constant, varying with both wavelength and chlorophyll concentration [Bricaud, A., Morel, A., Babin, M., Allali, K., Claustre, H., 1998. Variations of light absorption by suspended particles with chlorophyll a concentration in oceanic (case 1) waters. Analysis and implications for bio-optical models. J. Geophys. Res. 103, 31033–31044], and using a non-spectral parameterization [Anderson, T.R., 1993. A spectrally averaged model of light penetration and photosynthesis. Limnol. Oceanogr. 38, 1403–1419]. After selecting the appropriate ecosystem parameters for each of the three sites, we vary the spectral resolution of light and α∗ from 1 to 61 wavebands and study the results in conjunction with the three different α∗ estimation methods. The results show that modeled estimates of ocean primary productivity are highly sensitive to the degree of spectral resolution and α∗.
For accurate simulations of primary production and chlorophyll distribution we recommend a spectral resolution of at least six wavebands if α∗ is a function of wavelength and chlorophyll, and three wavebands if α∗ is a fixed value.
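A minimal sketch of the waveband-resolved light attenuation that the spectral resolution experiments vary; the attenuation coefficients here are assumed values, not the model's bio-optical parameterization:

```python
import numpy as np

def irradiance_at_depth(E0, K, z):
    """Beer-Lambert decay per waveband: E(z, wl) = E0(wl) * exp(-K(wl) * z)."""
    return np.asarray(E0) * np.exp(-np.asarray(K) * z)

# three illustrative wavebands (blue, green, red)
E0 = np.array([1.0, 1.0, 1.0])      # relative surface irradiance per band
K  = np.array([0.02, 0.05, 0.40])   # assumed attenuation coefficients, m^-1
Ez = irradiance_at_depth(E0, K, z=10.0)
par_10m = Ez.sum()                   # broadband PAR = sum over wavebands
```

With more wavebands the spectral shape of the subsurface light field, and hence any wavelength-dependent α∗, is resolved more faithfully, at proportionally higher computational cost.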
Abstract:
To assess the validity of δ18O proxy records as indicators of past temperature change, a series of experiments was conducted using an atmospheric general circulation model fitted with water isotope tracers (Community Atmosphere Model version 3.0, IsoCAM). A pre-industrial simulation was performed as the control experiment, as well as a simulation with all boundary conditions set to Last Glacial Maximum (LGM) values. Results from the pre-industrial and LGM simulations were compared to experiments in which the individual boundary conditions (greenhouse gases, ice sheet albedo and topography, sea surface temperature (SST), and orbital parameters) were changed one at a time to assess their individual impact. The experiments were designed to analyze the spatial variations of the oxygen isotopic composition of precipitation (δ18Oprecip) in response to individual climate factors. The change in topography (due to the change in land ice cover) played a significant role in reducing the surface temperature and δ18Oprecip over North America. Exposed shelf areas and the ice sheet albedo further reduced the Northern Hemisphere surface temperature and δ18Oprecip. A global mean cooling of 4.1 °C was simulated with combined LGM boundary conditions compared to the control simulation, in agreement with previous experiments using the fully coupled Community Climate System Model (CCSM3). Large reductions in δ18Oprecip over the LGM ice sheets were strongly linked to the temperature decrease over them. The SST and ice sheet topography changes were responsible for most of the changes in climate, and hence in the δ18Oprecip distribution, among the simulations.
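For reference, δ18Oprecip in the abstract uses the standard delta notation: the per-mil deviation of a sample's oxygen isotope ratio from the VSMOW reference standard,

```latex
\delta^{18}\mathrm{O} = \left( \frac{R_{\mathrm{sample}}}{R_{\mathrm{VSMOW}}} - 1 \right) \times 1000\ \text{(per mil)}, \qquad R = {}^{18}\mathrm{O} / {}^{16}\mathrm{O}.
```

More negative values indicate isotopically lighter precipitation, the signal that grows over cold regions such as the LGM ice sheets.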
Abstract:
The successful performance of a hydrological model is usually challenged by the quality of the sensitivity analysis, calibration, and uncertainty analysis carried out in the modeling exercise and the subsequent simulation results. This is especially important under changing climatic conditions, where additional uncertainties associated with climate models and downscaling processes increase the complexity of the hydrological modeling system. In response to these challenges, and to improve the performance of hydrological models under changing climatic conditions, this research proposed five new methods for supporting hydrological modeling. First, a design of experiment aided sensitivity analysis and parameterization (DOE-SAP) method was proposed to identify significant parameters and provide a more reliable sensitivity analysis for improving parameterization in hydrological modeling. In the case study, this method achieved better calibration results along with an improved sensitivity analysis of significant parameters and their interactions. Second, a comprehensive uncertainty evaluation scheme was developed to evaluate three uncertainty analysis methods: sequential uncertainty fitting version 2 (SUFI-2), generalized likelihood uncertainty estimation (GLUE), and parameter solution (ParaSol). The results showed that SUFI-2 performed better than the other two methods based on calibration and uncertainty analysis results, demonstrating that the proposed evaluation scheme is capable of selecting the most suitable uncertainty method for a given case study. Third, a novel sequential multi-criteria based calibration and uncertainty analysis (SMC-CUA) method was proposed to improve the efficiency of calibration and uncertainty analysis and to control the phenomenon of equifinality.
The results showed that the SMC-CUA method was able to provide better uncertainty analysis results with high computational efficiency compared to the SUFI-2 and GLUE methods and control parameter uncertainty and the equifinality effect without sacrificing simulation performance. Fourth, an innovative response based statistical evaluation method (RESEM) was proposed for estimating the uncertainty propagated effects and providing long-term prediction for hydrological responses under changing climatic conditions. By using RESEM, the uncertainty propagated from statistical downscaling to hydrological modeling can be evaluated. Fifth, an integrated simulation-based evaluation system for uncertainty propagation analysis (ISES-UPA) was proposed for investigating the effects and contributions of different uncertainty components to the total propagated uncertainty from statistical downscaling. Using ISES-UPA, the uncertainty from statistical downscaling, uncertainty from hydrological modeling, and the total uncertainty from two uncertainty sources can be compared and quantified. The feasibility of all the methods has been tested using hypothetical and real-world case studies. The proposed methods can also be integrated as a hydrological modeling system to better support hydrological studies under changing climatic conditions. The results from the proposed integrated hydrological modeling system can be used as scientific references for decision makers to reduce the potential risk of damages caused by extreme events for long-term water resource management and planning.
Abstract:
One of the biggest challenges that contaminant hydrogeology is facing is how to adequately address the uncertainty associated with model predictions. Uncertainty arises from multiple sources, such as interpretative error, calibration accuracy, parameter sensitivity, and variability. This critical issue needs to be properly addressed in order to support environmental decision-making processes. In this study, we perform Global Sensitivity Analysis (GSA) on a contaminant transport model for the assessment of hydrocarbon concentration in groundwater. We provide a quantification of the environmental impact and, given the incomplete knowledge of hydrogeological parameters, we evaluate which parameters are most influential and therefore require greater accuracy in the calibration process. Parameters are treated as random variables and a variance-based GSA is performed in an optimized numerical Monte Carlo framework. The Sobol indices are adopted as sensitivity measures, and they are computed by employing meta-models to characterize the migration process while reducing the computational cost of the analysis. The proposed methodology allows us to extend the number of Monte Carlo iterations, identify the influence of uncertain parameters, and achieve considerable savings in computational time while maintaining acceptable accuracy.
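A variance-based GSA of the kind described can be sketched with a pick-freeze Monte Carlo estimator of first-order Sobol indices; the toy additive model below is a stand-in for the meta-model of the migration process, not the study's transport model:

```python
import numpy as np

def first_order_sobol(model, d, n=200_000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices.

    S_i = (E[y_A * y_Ci] - E[y_A] E[y_B]) / Var(y), where C_i equals the
    B sample with input i replaced by ("frozen at") the A sample.
    """
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    yA, yB = model(A), model(B)
    var_y = yA.var()
    S = np.empty(d)
    for i in range(d):
        C = B.copy()
        C[:, i] = A[:, i]   # freeze input i
        S[i] = (np.mean(yA * model(C)) - yA.mean() * yB.mean()) / var_y
    return S

# toy model: output dominated by the first input
S = first_order_sobol(lambda X: X[:, 0] + 0.1 * X[:, 1], d=2)
```

For this additive model the analytic indices are about 0.99 and 0.01: nearly all output variance traces to the first input, which is the kind of ranking used to prioritize calibration effort.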
Abstract:
In this study, regression models are evaluated for grouped survival data when the effect of censoring time is considered in the model and the regression structure is modeled through four link functions. The methodology for grouped survival data is based on life tables, with the times grouped into k intervals so that ties are eliminated. Data modeling is thus performed using discrete lifetime regression models. The model parameters are estimated using the maximum likelihood and jackknife methods. To detect influential observations in the proposed models, we use diagnostic measures based on case deletion, termed global influence, and influence measures based on small perturbations of the data or of the model, referred to as local influence. In addition to those measures, the total local influence estimate is also employed. Various simulation studies are performed to compare the performance of the four link functions of the regression models for grouped survival data under different parameter settings, sample sizes, and numbers of intervals. Finally, a data set is analyzed using the proposed regression models. (C) 2010 Elsevier B.V. All rights reserved.
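Life-table-based grouped survival reduces to a discrete hazard per interval whose complements multiply into a survival curve; a minimal sketch with hypothetical counts:

```python
def life_table_survival(deaths, at_risk):
    """Discrete hazard h_k = d_k / n_k; survival S_k = prod_{j<=k} (1 - h_j)."""
    surv, s = [], 1.0
    for d, n in zip(deaths, at_risk):
        h = d / n          # probability of failing in interval k, given survival so far
        s *= (1.0 - h)
        surv.append(s)
    return surv

# hypothetical 3-interval life table: deaths and numbers at risk per interval
S = life_table_survival(deaths=[10, 5, 5], at_risk=[100, 90, 80])
```

Regression structures such as the four link functions in the paper act on the interval hazard h_k rather than on the survival curve directly.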
Influence of magnetically-induced E-fields on cardiac electric activity during MRI: A modeling study
Abstract:
In modern magnetic resonance imaging (MRI), patients are exposed to strong, time-varying gradient magnetic fields that may be able to induce electric fields (E-fields)/currents in tissues approaching the level of physiological significance. In this work we present theoretical investigations into induced E-fields in the thorax, and evaluate their potential influence on cardiac electric activity under the assumption that the sites of maximum E-field correspond to the myocardial stimulation threshold (an abnormal circumstance). Whole-body cylindrical and planar gradient coils were included in the model. The calculations of the induced fields are based on an efficient, quasi-static, finite-difference scheme and an anatomically realistic, whole-body model. The potential for cardiac stimulation was evaluated using an electrical model of the heart. Twelve-lead electrocardiogram (ECG) signals were simulated and inspected for arrhythmias caused by the applied fields for both healthy and diseased hearts. The simulations show that the shape of the thorax and the conductive paths significantly influence induced E-fields. In healthy patients, these fields are not sufficient to elicit serious arrhythmias with the use of contemporary gradient sets. However, raising the strength and number of repeated switching episodes of gradients, as is certainly possible in local chest gradient sets, could expose patients to increased risk. For patients with cardiac disease, the risk factors are elevated. By the use of this model, the sensitivity of cardiac pathologies, such as abnormal conductive pathways, to the induced fields generated by an MRI sequence can be investigated. (C) 2003 Wiley-Liss, Inc.
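For intuition on the magnitude of gradient-induced E-fields, a crude uniform-field estimate from Faraday's law can be made; this is not the paper's anatomically realistic finite-difference scheme, and the radius and dB/dt below are assumed illustrative values:

```python
def induced_e_field(r_m, dBdt_T_per_s):
    """|E| = (r/2) * dB/dt at radius r inside a uniform, axially
    symmetric time-varying magnetic field (Faraday's law)."""
    return 0.5 * r_m * dBdt_T_per_s

# illustrative: body radius ~0.17 m, effective dB/dt of ~20 T/s during switching
E_v_per_m = induced_e_field(0.17, 20.0)
```

Realistic thorax geometry and heterogeneous tissue conductivity concentrate or divert these currents, which is why the full numerical model is needed to evaluate cardiac risk.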
Abstract:
Background: Brown adipose tissue (BAT) plays an important role in whole body metabolism and could potentially mediate weight gain and insulin sensitivity. Although some imaging techniques allow BAT detection, there are currently no viable methods for continuous acquisition of BAT energy expenditure. We present a non-invasive technique for long term monitoring of BAT metabolism using microwave radiometry. Methods: A multilayer 3D computational model was created in HFSS™ with 1.5 mm skin, 3-10 mm subcutaneous fat, 200 mm muscle and a BAT region (2-6 cm3) located between fat and muscle. Based on this model, a log-spiral antenna was designed and optimized to maximize reception of thermal emissions from the target (BAT). The power absorption patterns calculated in HFSS™ were combined with simulated thermal distributions computed in COMSOL® to predict the radiometric signal measured by an ultra-low-noise microwave radiometer. The power received by the antenna was characterized as a function of different levels of BAT metabolism under cold and noradrenergic stimulation. Results: The optimized frequency band was 1.5-2.2 GHz, with an averaged antenna efficiency of 19%. The simulated power received by the radiometric antenna increased by 2-9 mdBm (noradrenergic stimulus) and 4-15 mdBm (cold stimulus), corresponding to a 15-fold increase in BAT metabolism. Conclusions: The results demonstrated the ability to detect thermal radiation from small volumes (2-6 cm3) of BAT located up to 12 mm deep and to monitor small changes (0.5°C) in BAT metabolism. As such, the developed miniature radiometric antenna sensor appears suitable for non-invasive long term monitoring of BAT metabolism.
Abstract:
Among the largest resources for biological sequence data is the large number of expressed sequence tags (ESTs) available in public and proprietary databases. ESTs provide information on transcripts, but for technical reasons they often contain sequencing errors, which must be taken into account when analyzing EST sequences computationally. Earlier attempts to model error-prone coding regions have shown good performance in detecting and predicting such regions while correcting sequencing errors using codon usage frequencies. In the research presented here, we improve the detection of translation start and stop sites by integrating a more complex mRNA model with codon-usage-bias-based error correction into one hidden Markov model (HMM), thus generalizing this error correction approach to more complex HMMs. We show that our method maintains the performance in detecting coding sequences.
Abstract:
NR2E3, also called photoreceptor-specific nuclear receptor (PNR), is a transcription factor of the nuclear hormone receptor superfamily whose expression is uniquely restricted to photoreceptors. There, its physiological activity is essential for proper rod and cone photoreceptor development and maintenance. Thirty-two different mutations in NR2E3 have been identified in either the homozygous or compound heterozygous state in the recessively inherited enhanced S-cone sensitivity syndrome (ESCS), Goldmann-Favre syndrome (GFS), and clumped pigmentary retinal degeneration (CPRD). The clinical phenotype common to all these patients is night blindness, rudimentary or absent rod function, and hyperfunction of the "blue" S-cones. A single p.G56R mutation is inherited in a dominant manner and causes retinitis pigmentosa (RP). We have established a new locus-specific database for NR2E3 (www.LOVD.nl/eye), containing all reported mutations, polymorphisms, and unclassified sequence variants, including novel ones. A high proportion of mutations are located in the evolutionarily conserved DNA-binding domains (DBDs) and ligand-binding domains (LBDs) of NR2E3. Based on homology modeling of these NR2E3 domains, we propose a structural localization of the mutated residues. The high variability of clinical phenotypes observed in patients affected by NR2E3-linked retinal degenerations may be caused by different disease mechanisms, including absence of DNA binding, altered interactions with transcriptional coregulators, and differential activity of modifier genes.
Abstract:
An elevated high-sensitivity C-reactive protein (hs-CRP) concentration is associated with an increased risk of cardiovascular disease, but this association seems to be largely mediated via conventional cardiovascular risk factors. In particular, the association between hs-CRP and obesity has been extensively demonstrated, and correlations are stronger in women than in men. We used fractional polynomials, a method that allows flexible modeling of nonlinear relations, to investigate the dose-response mathematical relationship between hs-CRP and several indicators of adiposity in Caucasians (Switzerland) and Africans (Seychelles) surveyed in two population-based studies. This relationship was nonlinear, exhibiting a steeper slope at low levels of hs-CRP and a higher level in women. The observed sex difference in the relationship between hs-CRP and adiposity almost disappeared upon adjustment for leptin, suggesting that these sex differences might be partially mediated by leptin. All these relationships were similar in Caucasians and Africans. This is the first report of a nonlinear relation, stratified by gender, between hs-CRP and adiposity.
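First-degree fractional polynomials select a power transform of the predictor from a small conventional candidate set by least squares; a minimal sketch on synthetic, noise-free data (not the survey data of the study):

```python
import numpy as np

POWERS = (-2, -1, -0.5, 0, 0.5, 1, 2, 3)   # conventional FP1 candidate set

def fp1_fit(x, y):
    """Fit y = b0 + b1 * x**p for each candidate power (p = 0 means log x)
    and return the power with the lowest sum of squared errors."""
    best = None
    for p in POWERS:
        t = np.log(x) if p == 0 else x ** p
        X = np.column_stack([np.ones_like(t), t])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(((X @ beta - y) ** 2).sum())
        if best is None or sse < best[1]:
            best = (p, sse, beta)
    return best

x = np.linspace(0.5, 5, 50)
y = 2.0 + 3.0 * np.sqrt(x)        # true relation uses power 0.5
p, sse, beta = fp1_fit(x, y)
```

Because the candidate powers include sublinear transforms, the method naturally captures the steeper-at-low-values, flattening-at-high-values shape reported for hs-CRP versus adiposity.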
Abstract:
Bone marrow hematopoietic stem cells (HSCs) are responsible both for lifelong daily maintenance of all blood cells and for repair after cell loss. Until recently, the cellular mechanisms by which HSCs accomplish these two very different tasks remained an open question. Biological evidence has now been found for the existence of two related mouse HSC populations: first, a dormant HSC (d-HSC) population, which harbors the highest self-renewal potential of all blood cells but is only induced into active self-renewal in response to hematopoietic stress; and second, an active HSC (a-HSC) subset that by and large produces the progenitors and mature cells required for the maintenance of day-to-day hematopoiesis. Here we present computational analyses further supporting the d-HSC concept through extensive modeling of experimental DNA label-retaining cell (LRC) data. Our conclusion that the presence of a slowly dividing subpopulation of HSCs is the most likely explanation (amongst the various possible causes, including stochastic cellular variation) of the observed long term Bromodeoxyuridine (BrdU) retention is confirmed by the deterministic and stochastic models presented here. Moreover, modeling both HSC BrdU uptake and dilution in three stages, together with careful treatment of the BrdU detection sensitivity, permitted improved estimates of HSC turnover rates. This analysis predicts that d-HSCs cycle about once every 149-193 days and a-HSCs about once every 28-36 days. We further predict that, using LRC assays, a 75%-92.5% purification of d-HSCs can be achieved after 59-130 days of chase. Interestingly, the d-HSC proportion is now estimated to be around 30-45% of total HSCs, more than twice our previous estimate.
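The LRC purification estimates rest on the fact that BrdU signal is diluted two-fold at each cell division, so slowly cycling cells stay above the detection limit far longer; a minimal sketch contrasting dormant and active cycle times from the abstract's estimated ranges (the detection threshold and chase length are illustrative assumptions):

```python
def label_fraction(chase_days, cycle_days):
    """BrdU signal halves at every division: remaining = 2 ** -(chase / cycle),
    assuming cells divide at a constant rate of one division per cycle time."""
    return 0.5 ** (chase_days / cycle_days)

threshold = 1 / 16                    # assumed detection limit: 1/16 of initial label
dormant = label_fraction(160, 170)    # cycle time ~170 d (within the d-HSC 149-193 d range)
active  = label_fraction(160, 32)     # cycle time ~32 d (within the a-HSC 28-36 d range)
```

After a sufficiently long chase, a-HSCs have diluted their label below the threshold while d-HSCs have not, which is the basis for enriching the dormant population with an LRC assay.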