896 results for "deduced optical model parameters"


Relevance: 100.00%

Abstract:

BACKGROUND: Ritonavir inhibition of cytochrome P450 3A4 decreases the elimination clearance of fentanyl by 67%. We used a pharmacokinetic model developed from published data to simulate the effect of sample patient-controlled epidural labor analgesic regimens on plasma fentanyl concentrations in the absence and presence of ritonavir-induced cytochrome P450 3A4 inhibition. METHODS: Fentanyl absorption from the epidural space was modeled using tanks-in-series delay elements. Systemic fentanyl disposition was described using a three-compartment pharmacokinetic model. Parameters for epidural drug absorption were estimated by fitting the model to reported plasma fentanyl concentrations measured after epidural administration. The validity of the model was assessed by comparing predicted plasma concentrations after epidural administration to published data. The effect of ritonavir was modeled as a 67% decrease in fentanyl elimination clearance. Plasma fentanyl concentrations were simulated for six sample patient-controlled epidural labor analgesic regimens over 24 h using the ritonavir and control models. Simulated data were analyzed to determine whether plasma fentanyl concentrations producing a 50% decrease in minute ventilation (6.1 ng/mL) were achieved. RESULTS: Simulated plasma fentanyl concentrations in the ritonavir group were higher than those in the control group for all sample labor analgesic regimens. Maximum plasma fentanyl concentrations were 1.8 ng/mL and 3.4 ng/mL for the normal and ritonavir simulations, respectively, and did not reach concentrations associated with a 50% decrease in minute ventilation. CONCLUSION: Our model predicts that even with maximal clinical dosing regimens of epidural fentanyl over 24 h, ritonavir-induced cytochrome P450 3A4 inhibition is unlikely to produce plasma fentanyl concentrations associated with a decrease in minute ventilation.
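
As a rough illustration of the simulation strategy described above, the sketch below drives a three-compartment disposition model with a tanks-in-series epidural absorption delay and compares a control run against one with elimination clearance reduced by 67%. The rate constants, volumes, and dosing schedule are hypothetical placeholders, not the fitted values from the study.

```python
# Minimal sketch of the simulation strategy described in the abstract.
# All parameter values and the bolus schedule are hypothetical placeholders.
import numpy as np
from scipy.integrate import solve_ivp

N_TANKS = 3          # tanks-in-series delay elements for epidural absorption
K_TR = 2.0           # 1/h, transfer rate between delay tanks (hypothetical)
V1, V2, V3 = 13.0, 50.0, 270.0   # L, compartment volumes (hypothetical)
CL, Q2, Q3 = 40.0, 30.0, 100.0   # L/h, clearances (hypothetical)

def rhs(t, y, cl_elim):
    tanks, a1, a2, a3 = y[:N_TANKS], y[N_TANKS], y[N_TANKS + 1], y[N_TANKS + 2]
    dtanks = np.empty(N_TANKS)
    dtanks[0] = -K_TR * tanks[0]
    for i in range(1, N_TANKS):
        dtanks[i] = K_TR * (tanks[i - 1] - tanks[i])
    c1, c2, c3 = a1 / V1, a2 / V2, a3 / V3
    da1 = K_TR * tanks[-1] - cl_elim * c1 - Q2 * (c1 - c2) - Q3 * (c1 - c3)
    da2 = Q2 * (c1 - c2)
    da3 = Q3 * (c1 - c3)
    return np.concatenate([dtanks, [da1, da2, da3]])

def simulate(cl_elim, doses_ug, dose_times_h, t_end=24.0):
    """Simulate plasma concentration (ng/mL) for a list of epidural boluses."""
    y = np.zeros(N_TANKS + 3)
    t0, ts, cs = 0.0, [np.array([0.0])], [np.array([0.0])]
    for t_next, dose in sorted(zip(dose_times_h, doses_ug)) + [(t_end, 0.0)]:
        if t_next > t0:
            sol = solve_ivp(rhs, (t0, t_next), y, args=(cl_elim,), max_step=0.05)
            ts.append(sol.t)
            cs.append(sol.y[N_TANKS] / V1)   # central amount (ug) / V1 (L) = ng/mL
            y = sol.y[:, -1].copy()
            t0 = t_next
        y[0] += dose                         # bolus enters the first delay tank
    return np.concatenate(ts), np.concatenate(cs)

# Hypothetical regimen: 100 ug epidural fentanyl every 2 h for 24 h.
times = list(np.arange(0.0, 24.0, 2.0))
doses = [100.0] * len(times)
_, c_control = simulate(CL, doses, times)
_, c_ritonavir = simulate(CL * (1 - 0.67), doses, times)   # 67% lower clearance
print(f"peak control:   {c_control.max():.2f} ng/mL")
print(f"peak ritonavir: {c_ritonavir.max():.2f} ng/mL  (threshold 6.1 ng/mL)")
```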

Relevance: 100.00%

Abstract:

The main conclusion of this dissertation is that global H2 production within young ocean crust (<10 Mya) is higher than currently recognized, in part because current estimates of H2 production accompanying the serpentinization of peridotite may be too low (Chapter 2) and in part because a number of abiogenic H2-producing processes have heretofore gone unquantified (Chapter 3). The importance of free H2 to a range of geochemical processes makes the quantitative understanding of H2 production advanced in this dissertation pertinent to an array of open research questions across the geosciences (e.g. the origin and evolution of life and the oxidation of the Earth’s atmosphere and oceans).

The first component of this dissertation (Chapter 2) examines H2 produced within young ocean crust [e.g. near the mid-ocean ridge (MOR)] by serpentinization. In the presence of water, olivine-rich rocks (peridotites) undergo serpentinization (hydration) at temperatures of up to ~500°C but only produce H2 at temperatures up to ~350°C. A simple analytical model is presented that mechanistically ties the process to seafloor spreading and explicitly accounts for the importance of temperature in H2 formation. The model suggests that H2 production increases with the rate of seafloor spreading and the net thickness of serpentinized peridotite (S-P) in a column of lithosphere. The model is applied globally to the MOR using conservative estimates for the net thickness of lithospheric S-P, our least certain model input. Despite the large uncertainties surrounding the amount of serpentinized peridotite within oceanic crust, conservative model parameters suggest a magnitude of H2 production (~10^12 moles H2/y) that is larger than the most widely cited previous estimates (~10^11, although previous estimates range from 10^10 to 10^12 moles H2/y). Certain model relationships are also consistent with what has been established through field studies, for example that the highest H2 fluxes (moles H2/km^2 seafloor) are produced near slower-spreading ridges (<20 mm/y). Other modeled relationships are new and represent testable predictions. Principal among these is that about half of the H2 produced globally is produced off-axis beneath faster-spreading seafloor (>20 mm/y), a region where only one H2 measurement has been made thus far and which is ripe for future investigation.
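
A minimal back-of-the-envelope version of such a spreading-rate-based estimate is sketched below; the global ridge length, average spreading rate, net serpentinized-peridotite thickness, and H2 yield per kilogram of rock are illustrative assumptions, not the dissertation's fitted inputs.

```python
# Back-of-the-envelope H2 production tied to seafloor spreading (illustrative only).
RIDGE_LENGTH_KM = 6.0e4              # global mid-ocean ridge length (assumed)
FULL_SPREADING_KM_PER_Y = 5.0e-5     # ~50 mm/y average full spreading rate (assumed)
NET_SP_THICKNESS_KM = 0.5            # net serpentinized peridotite per column (assumed)
ROCK_DENSITY_KG_PER_KM3 = 3.0e12
H2_YIELD_MOL_PER_KG = 0.3            # H2 per kg of peridotite serpentinized <~350 C (assumed)

# New serpentinized peridotite created each year by spreading, and the H2 it yields.
volume_km3_per_y = RIDGE_LENGTH_KM * FULL_SPREADING_KM_PER_Y * NET_SP_THICKNESS_KM
h2_mol_per_y = volume_km3_per_y * ROCK_DENSITY_KG_PER_KM3 * H2_YIELD_MOL_PER_KG
print(f"~{h2_mol_per_y:.1e} mol H2/y")   # order of 10^12 with these assumptions
```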

In the second part of this dissertation (Chapter 3), I construct the first budget for free H2 in young ocean crust that quantifies and compares all currently recognized H2 sources and H2 sinks. First global estimates are proposed for budget components where no previous estimate could be located, provided the literature on that component was not too sparse to do so. Results suggest that the nine known H2 sources, listed in order of quantitative importance, are: crystallization (6x10^12 moles H2/y, or 61% of total H2 production), serpentinization (2x10^12 moles H2/y, or 21%), magmatic degassing (7x10^11 moles H2/y, or 7%), lava-seawater interaction (5x10^11 moles H2/y, or 5%), low-temperature alteration of basalt (5x10^11 moles H2/y, or 5%), high-temperature alteration of basalt (3x10^10 moles H2/y, or <1%), catalysis (3x10^8 moles H2/y, or <<1%), radiolysis (2x10^8 moles H2/y, or <<1%), and pyrite formation (3x10^6 moles H2/y, or <<1%). Next we consider two well-known H2 sinks, H2 lost to the ocean and H2 occluded within rock minerals; our analysis suggests that both are of similar size (~6x10^11 moles H2/y each). Budgeting results suggest a large difference between H2 sources (total production = 1x10^13 moles H2/y) and H2 sinks (total losses = ~1x10^12 moles H2/y). Assuming this difference represents H2 consumed by microbes (total consumption = ~9x10^12 moles H2/y), we explore rates of primary production by the chemosynthetic, sub-seafloor biosphere. Although the numbers presented require further examination and future modifications, the analysis suggests that the sub-seafloor H2 budget is similar to the sub-seafloor CH4 budget in the sense that globally significant quantities of both of these reduced gases are produced beneath the seafloor but never escape it due to microbial consumption.
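
The budget arithmetic quoted above can be tallied directly; the short script below simply reproduces that bookkeeping from the figures given in the text.

```python
# Tally of the H2 sources and sinks quoted in the text (moles H2 per year).
sources = {
    "crystallization": 6e12,
    "serpentinization": 2e12,
    "magmatic degassing": 7e11,
    "lava-seawater interaction": 5e11,
    "low-T alteration of basalt": 5e11,
    "high-T alteration of basalt": 3e10,
    "catalysis": 3e8,
    "radiolysis": 2e8,
    "pyrite formation": 3e6,
}
sinks = {"lost to ocean": 6e11, "occluded in minerals": 6e11}

total_sources = sum(sources.values())                        # ~1e13 mol H2/y
total_sinks = sum(sinks.values())                            # ~1e12 mol H2/y
implied_microbial_consumption = total_sources - total_sinks  # ~9e12 mol H2/y
print(f"sources: {total_sources:.1e}  sinks: {total_sinks:.1e}  "
      f"implied consumption: {implied_microbial_consumption:.1e}")
```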

The third and final component of this dissertation (Chapter 4) explores the self-organization of barchan sand dune fields. In nature, barchan dunes typically exist as members of larger dune fields that display striking, enigmatic structures that cannot be readily explained by examining the dynamics at the scale of single dunes, or by appealing to patterns in external forcing. To explore the possibility that observed structures emerge spontaneously as a collective result of many dunes interacting with each other, we built a numerical model that treats barchans as discrete entities that interact with one another according to simplified rules derived from theoretical and numerical work, and from field observations: Dunes exchange sand through the fluxes that leak from the downwind side of each dune and are captured on their upstream sides; when dunes become sufficiently large, small dunes are born on their downwind sides (“calving”); and when dunes collide directly enough, they merge. Results show that these relatively simple interactions provide potential explanations for a range of field-scale phenomena including isolated patches of dunes and heterogeneous arrangements of similarly sized dunes in denser fields. The results also suggest that (1) dune field characteristics depend on the sand flux fed into the upwind boundary, although (2) moving downwind, the system approaches a common attracting state in which the memory of the upwind conditions vanishes. This work supports the hypothesis that calving exerts a first order control on field-scale phenomena; it prevents individual dunes from growing without bound, as single-dune analyses suggest, and allows the formation of roughly realistic, persistent dune field patterns.
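
The interaction rules described above lend themselves to a compact discrete-entity simulation. The toy sketch below implements one-dimensional analogues of the three rules (flux exchange, calving, and merging on collision); all constants, the migration law, and the boundary treatment are illustrative choices, not the dissertation's calibrated model.

```python
# Toy 1-D discrete-entity barchan field: flux exchange, calving, and merging.
# All rules and constants are illustrative simplifications of the model described above.
import random

DT = 1.0
Q_IN = 1.0            # sand flux fed at the upwind boundary (assumed units)
LEAK = 0.10           # fraction of a dune's volume leaked downwind per step (assumed)
TRAP = 0.8            # fraction of incoming flux a dune captures (assumed)
CALVE_VOL = 50.0      # calving threshold (assumed)
FIELD_LEN = 500.0

def step(dunes):
    """dunes: list of [position, volume], kept sorted from upwind to downwind."""
    dunes.sort(key=lambda d: d[0])
    flux = Q_IN                                  # flux arriving from upwind
    new_dunes = []
    for pos, vol in dunes:
        captured = TRAP * flux
        leaked = LEAK * vol
        vol = vol + (captured - leaked) * DT
        flux = (1 - TRAP) * flux + leaked        # passed on to the next dune downwind
        pos += DT * 10.0 / max(vol, 1.0)         # smaller dunes migrate faster
        if vol > CALVE_VOL:                      # large dunes shed a small child dune
            vol -= 5.0
            new_dunes.append([pos + 1.0, 5.0])
        if vol > 1e-3 and pos < FIELD_LEN:
            new_dunes.append([pos, vol])
    new_dunes.sort(key=lambda d: d[0])
    merged = []                                  # direct-enough collisions -> merge
    for pos, vol in new_dunes:
        if merged and pos - merged[-1][0] < 1.0:
            merged[-1][1] += vol
        else:
            merged.append([pos, vol])
    return merged

field = [[random.uniform(0, FIELD_LEN), random.uniform(2, 20)] for _ in range(30)]
for _ in range(2000):
    field = step(field)
if field:
    vols = [v for _, v in field]
    print(f"{len(field)} dunes remain; volumes {min(vols):.1f}-{max(vols):.1f}")
else:
    print("field emptied (all dunes migrated out of the domain)")
```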

Relevance: 100.00%

Abstract:

Transcriptional regulation has been studied intensively in recent decades. One important aspect of this regulation is the interaction between regulatory proteins, such as transcription factors (TF) and nucleosomes, and the genome. Different high-throughput techniques have been invented to map these interactions genome-wide, including ChIP-based methods (ChIP-chip, ChIP-seq, etc.), nuclease digestion methods (DNase-seq, MNase-seq, etc.), and others. However, a single experimental technique often only provides partial and noisy information about the whole picture of protein-DNA interactions. Therefore, the overarching goal of this dissertation is to provide computational developments for jointly modeling different experimental datasets to achieve a holistic inference on the protein-DNA interaction landscape.

We first present a computational framework that can incorporate the protein binding information in MNase-seq data into a thermodynamic model of protein-DNA interaction. We use a correlation-based objective function to model the MNase-seq data and a Markov chain Monte Carlo method to maximize the function. Our results show that the inferred protein-DNA interaction landscape is concordant with the MNase-seq data and provides a mechanistic explanation for the experimentally collected MNase-seq fragments. Our framework is flexible and can easily incorporate other data sources. To demonstrate this flexibility, we use prior distributions to integrate experimentally measured protein concentrations.
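
A stripped-down illustration of the optimization strategy (a correlation-based objective explored with Markov chain Monte Carlo) is sketched below. The "thermodynamic" occupancy model, the footprint shape, and all tuning constants are stand-in toys, not the framework described above.

```python
# Toy illustration of maximizing a correlation-based objective with
# Metropolis-style Markov chain Monte Carlo.  The occupancy model, footprint
# shape, and tuning constants are stand-ins, not the framework in the abstract.
import numpy as np

rng = np.random.default_rng(0)
positions = np.arange(500)
SITES = np.array([120, 260, 400])               # assumed binding-site centres

def predicted_profile(log_conc_affinity):
    """Occupancy from a Boltzmann-style weight, smeared into a footprint."""
    conc = np.exp(log_conc_affinity[0])
    affinities = np.exp(log_conc_affinity[1:])
    occ = conc * affinities / (1.0 + conc * affinities)   # per-site occupancy
    profile = np.zeros_like(positions, dtype=float)
    for centre, o in zip(SITES, occ):
        profile += o * np.exp(-0.5 * ((positions - centre) / 15.0) ** 2)
    return profile

# Synthetic "observed" coverage from hidden true parameters plus noise.
true_params = np.array([0.0, 1.0, -0.5, 2.0])
observed = predicted_profile(true_params) + rng.normal(0, 0.05, positions.size)

def correlation(params):
    return np.corrcoef(predicted_profile(params), observed)[0, 1]

params = np.zeros(4)                            # initial guess
score = correlation(params)
TEMPERATURE = 0.01
for _ in range(5000):
    proposal = params + rng.normal(0, 0.1, size=params.size)
    new_score = correlation(proposal)
    if rng.random() < np.exp(min(0.0, (new_score - score) / TEMPERATURE)):
        params, score = proposal, new_score
print(f"best correlation reached: {score:.3f}")
```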

We also study the ability of DNase-seq data to position nucleosomes. Traditionally, DNase-seq has only been widely used to identify DNase hypersensitive sites, which tend to be open chromatin regulatory regions devoid of nucleosomes. We reveal for the first time that DNase-seq datasets also contain substantial information about nucleosome translational positioning, and that existing DNase-seq data can be used to infer nucleosome positions with high accuracy. We develop a Bayes-factor-based nucleosome scoring method to position nucleosomes using DNase-seq data. Our approach utilizes several effective strategies to extract nucleosome positioning signals from the noisy DNase-seq data, including jointly modeling data points across the nucleosome body and explicitly modeling the quadratic and oscillatory DNase I digestion pattern on nucleosomes. We show that our DNase-seq-based nucleosome map is highly consistent with previous high-resolution maps. We also show that the oscillatory DNase I digestion pattern is useful in revealing the nucleosome rotational context around TF binding sites.
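
The flavour of such a Bayes-factor score can be conveyed with a deliberately simplified Poisson version: cut counts across a 147-bp candidate nucleosome body are compared under a nucleosome-like rate profile (a quadratic dip plus a roughly 10-bp oscillation) versus a flat background. The profile and rates below are illustrative, not the dissertation's fitted model.

```python
# Simplified Bayes-factor score for a candidate nucleosome dyad position.
# Cut counts over the 147-bp body are modelled as Poisson under (a) a nucleosome
# profile with a quadratic dip and ~10-bp oscillation and (b) a flat background.
import numpy as np
from scipy.stats import poisson

HALF = 73                                  # half of the 147-bp nucleosome body
offsets = np.arange(-HALF, HALF + 1)

def nucleosome_rates(mean_rate):
    """Expected cuts per bp across the body: protected centre, ~10-bp wobble."""
    quadratic = 0.3 + 0.7 * (offsets / HALF) ** 2
    oscillation = 1.0 + 0.3 * np.cos(2 * np.pi * offsets / 10.3)
    shape = quadratic * oscillation
    return mean_rate * shape / shape.mean()    # same average rate as background

def log_bayes_factor(cuts, background_rate):
    """log10 BF of 'nucleosome here' versus flat background for one window."""
    lam_nuc = nucleosome_rates(background_rate)
    ll_nuc = poisson.logpmf(cuts, lam_nuc).sum()
    ll_bg = poisson.logpmf(cuts, background_rate).sum()
    return (ll_nuc - ll_bg) / np.log(10)

rng = np.random.default_rng(1)
background_rate = 2.0
with_nucleosome = rng.poisson(nucleosome_rates(background_rate))
without_nucleosome = rng.poisson(background_rate, offsets.size)
print(f"log10 BF, nucleosomal window: {log_bayes_factor(with_nucleosome, background_rate):+.1f}")
print(f"log10 BF, background window:  {log_bayes_factor(without_nucleosome, background_rate):+.1f}")
```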

Finally, we present a state-space model (SSM) for jointly modeling different kinds of genomic data to provide an accurate view of the protein-DNA interaction landscape. We also provide an efficient expectation-maximization algorithm to learn model parameters from data. We first show in simulation studies that the SSM can effectively recover underlying true protein binding configurations. We then apply the SSM to model real genomic data (both DNase-seq and MNase-seq data). Through incrementally increasing the types of genomic data in the SSM, we show that different data types can contribute complementary information for the inference of protein binding landscape and that the most accurate inference comes from modeling all available datasets.

This dissertation provides a foundation for future research by taking a step toward genome-wide inference of the protein-DNA interaction landscape through data integration.

Relevance: 100.00%

Abstract:

Forest fires can cause extensive damage to natural resources and property. They can also destroy wildlife habitat, affect the forest ecosystem and threaten human lives. In this paper, extreme wildland fires are analysed using a point process model for extremes. The model, based on a generalised Pareto distribution, is used to model data on acres of wildland burnt by extreme fires in the US since 1825. A semi-parametric smoothing approach is adopted, with the maximum likelihood method used to estimate model parameters.
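
The core fitting step (maximum-likelihood estimation of a generalised Pareto distribution for exceedances over a high threshold) might look like the sketch below; the synthetic burnt-acreage data and the threshold choice are placeholders.

```python
# Minimal sketch: maximum-likelihood fit of a generalised Pareto distribution
# to exceedances over a high threshold.  The data here are synthetic placeholders.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(42)
acres_burnt = rng.pareto(1.5, size=2000) * 1e4           # synthetic fire sizes
threshold = np.quantile(acres_burnt, 0.95)                # "extreme" fires only
exceedances = acres_burnt[acres_burnt > threshold] - threshold

# Fit GPD(shape, scale) to the exceedances with location fixed at zero.
shape, _, scale = genpareto.fit(exceedances, floc=0)
level_90 = threshold + genpareto.ppf(0.90, shape, scale=scale)   # 90th percentile of extremes
print(f"shape={shape:.2f}, scale={scale:.0f}, 90th-percentile extreme fire ~{level_90:.0f} acres")
```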

Relevance: 100.00%

Abstract:

Forest fires can cause extensive damage to natural resources and property. They can also destroy wildlife habitat, affect the forest ecosystem and threaten human lives. In this paper, incidences of extreme wildland fires are modelled by a point process model which incorporates a time trend. A model based on a generalised Pareto distribution is used to model data on acres of wildland burnt by extreme fires in the US since 1825. A semi-parametric smoothing approach, which is very useful in exploratory analysis of changes in extremes, is illustrated, with the maximum likelihood method used to estimate model parameters.

Relevance: 100.00%

Abstract:

Solder paste is the most widely used bonding material in the assembly of surface mount devices in the electronics industry. It generally has a flocculated structure (showing aggregation of solder particles) and hence is known to exhibit thixotropic behaviour. This is recognised by the decrease in apparent viscosity of the paste material with time when subjected to a constant shear rate. The proper characterisation of this time-dependent rheological behaviour of solder pastes is crucial for establishing the relationships between the pastes' structure and flow behaviour, and for correlating the physical parameters with paste printing performance. In this paper, we present a novel method which has been developed for characterising the time-dependent and non-Newtonian rheological behaviour of solder pastes as a function of shear rate. The objective of the study reported in this paper is to investigate the thixotropic build-up behaviour of solder pastes. The stretched exponential model (SEM) has been used to model the structural changes during the build-up process and to correlate model parameters with the paste printing process.
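
A minimal sketch of fitting a stretched-exponential build-up law to viscosity-recovery data is shown below; the synthetic data and parameter values are placeholders rather than measured solder-paste results.

```python
# Minimal sketch: fitting a stretched-exponential (SEM) build-up law
#   eta(t) = eta_inf - (eta_inf - eta_0) * exp(-(t/tau)**beta)
# to viscosity-recovery data after shearing.  Data and values are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def stretched_exponential(t, eta_0, eta_inf, tau, beta):
    return eta_inf - (eta_inf - eta_0) * np.exp(-(t / tau) ** beta)

# Synthetic "recovery" measurements: apparent viscosity rebuilding after high shear.
rng = np.random.default_rng(3)
t = np.linspace(1, 600, 60)                                 # s
true = stretched_exponential(t, 200.0, 900.0, 120.0, 0.6)   # Pa.s, hypothetical
measured = true * (1 + rng.normal(0, 0.02, t.size))

p0 = (measured[0], measured[-1], 100.0, 0.5)                # rough initial guess
(eta_0, eta_inf, tau, beta), _ = curve_fit(stretched_exponential, t, measured, p0=p0)
print(f"eta_0={eta_0:.0f} Pa.s  eta_inf={eta_inf:.0f} Pa.s  tau={tau:.0f} s  beta={beta:.2f}")
```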

Relevance: 100.00%

Abstract:

A review of polymer cure models used in microelectronics packaging applications reveals no clear consensus on the chemical rate constants for the cure reactions, or even on an effective model. The problem lies in the contrast between the actual cure process, which involves a sequence of distinct chemical reactions, and the models, which typically assume only one (or two, with some restrictions on the independence of their characteristic constants). The standard techniques to determine the model parameters are based on differential scanning calorimetry (DSC), which cannot distinguish between the reactions and hence yields results useful only under the same conditions, which completely misses the point of modeling. The obvious solution is for manufacturers to provide the modeling parameters, but failing that, an alternative experimental technique is required to determine individual reaction parameters, e.g. Fourier transform infra-red spectroscopy (FTIR).
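
For context, the single-reaction models referred to above are commonly written in an autocatalytic (Kamal-type) form; the sketch below integrates one such model for the degree of cure, with entirely illustrative rate constants.

```python
# Illustrative single-reaction autocatalytic (Kamal-type) cure model:
#   d(alpha)/dt = (k1 + k2 * alpha**m) * (1 - alpha)**n
# Rate constants and exponents are placeholders, not values for any real resin.
import numpy as np
from scipy.integrate import solve_ivp

k1, k2, m, n = 5e-4, 2e-2, 1.0, 2.0           # 1/s, hypothetical isothermal constants

def cure_rate(t, alpha):
    a = alpha[0]
    return [(k1 + k2 * a**m) * (1.0 - a)**n]

sol = solve_ivp(cure_rate, (0.0, 3600.0), [0.0], dense_output=True, max_step=10.0)
for t in (600, 1800, 3600):                    # degree of cure after 10, 30, 60 min
    print(f"t = {t:>4d} s  alpha = {sol.sol(t)[0]:.2f}")
```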

Relevance: 100.00%

Abstract:

The recognition that urban groundwater is a potentially valuable resource for potable and industrial uses, due to growing pressures on rural groundwater perceived to be less polluted, has led to a requirement to assess the groundwater contamination risk in urban areas from industrial contaminants such as chlorinated solvents. The development of a probabilistic, risk-based management tool that predicts groundwater quality at potential new urban boreholes is beneficial in determining the best sites for future resource development. The Borehole Optimisation System (BOS) is a custom Geographic Information System (GIS) application that has been developed with the objective of identifying the optimum locations for new abstraction boreholes. BOS can be applied to any aquifer subject to variable contamination risk. The system is described in more detail by Tait et al. [Tait, N.G., Davison, J.J., Whittaker, J.J., Leharne, S.A., Lerner, D.N., 2004a. Borehole Optimisation System (BOS) - a GIS based risk analysis tool for optimising the use of urban groundwater. Environmental Modelling and Software 19, 1111-1124]. This paper applies the BOS model to an urban Permo-Triassic Sandstone aquifer in the city centre of Nottingham, UK. The risk of pollution in potential new boreholes from the industrial chlorinated solvent tetrachloroethene (PCE) was assessed for this region. The risk model was validated against contaminant concentrations from 6 actual field boreholes within the study area. In these studies the model generally underestimated contaminant concentrations. A sensitivity analysis showed that the most responsive model parameters were recharge, effective porosity and contaminant degradation rate. Multiple simulations were undertaken across the study area in order to create surface maps indicating areas of low PCE concentrations, thus indicating the best locations to place new boreholes. Results indicate that northeastern, eastern and central regions have the lowest potential PCE concentrations in abstracted groundwater and therefore are the best sites for locating new boreholes. These locations coincide with aquifer areas that are confined by low permeability Mercia Mudstone deposits. Conversely, southern and northwestern areas are unconfined and have a shallower depth to groundwater. These areas have the highest potential PCE concentrations. These studies demonstrate the applicability of BOS as a tool for informing decision makers on the development of urban groundwater resources. (c) 2007 Elsevier Ltd. All rights reserved.

Relevance: 100.00%

Abstract:

Phytoplankton total chlorophyll concentration (TCHLa) and phytoplankton size structure are two important ecological indicators in biological oceanography. Using high performance liquid chromatography (HPLC) pigment data, collected from surface waters along the Atlantic Meridional Transect (AMT), we examine temporal changes in TCHLa and phytoplankton size class (PSC: micro-, nano- and pico-phytoplankton) between 2003 and 2010 (September to November cruises only), in three ecological provinces of the Atlantic Ocean. The HPLC data indicate no significant change in TCHLa in northern and equatorial provinces, and an increase in the southern province. These changes were not significantly different to changes in TCHLa derived using satellite ocean-colour data over the same study period. Despite no change in AMT TCHLa in northern and equatorial provinces, significant differences in PSC were observed, related to changes in key diagnostic pigments (fucoxanthin, peridinin, 19′-hexanoyloxyfucoxanthin and zeaxanthin), with an increase in small cells (nano- and pico-phytoplankton) and a decrease in larger cells (micro-phytoplankton). When a three-component model of phytoplankton size structure, designed to quantify the relationship between PSC and TCHLa, was fitted to each AMT cruise, model parameters varied over the study period. Changes in the relationship between PSC and TCHLa have wide implications in ecology and marine biogeochemistry, and provide key information for the development and use of empirical ocean-colour algorithms. Results illustrate the importance of maintaining a time-series of in-situ observations in remote regions of the ocean, such as that acquired in the AMT programme.
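
One widely used formulation of such a three-component model (in the spirit of Brewin et al.) expresses the chlorophyll in the combined pico+nano fraction and in the pico fraction as saturating functions of total chlorophyll; the sketch below uses that form with illustrative parameter values, not those fitted to the AMT cruises.

```python
# Sketch of a three-component phytoplankton size-structure model: chlorophyll in
# (pico+nano) and in pico cells saturates with total chlorophyll, and the
# micro-phytoplankton fraction takes up the remainder.  Parameters are illustrative.
import numpy as np

CM_PICO_NANO, S_PICO_NANO = 1.0, 0.9    # mg m^-3 and (mg m^-3)^-1, assumed
CM_PICO, S_PICO = 0.15, 6.0             # assumed

def size_classes(total_chl):
    """Return (micro, nano, pico) chlorophyll for a given total chlorophyll-a."""
    c_pico_nano = CM_PICO_NANO * (1.0 - np.exp(-S_PICO_NANO * total_chl))
    c_pico = CM_PICO * (1.0 - np.exp(-S_PICO * total_chl))
    return total_chl - c_pico_nano, c_pico_nano - c_pico, c_pico

for tchla in (0.05, 0.2, 1.0, 3.0):      # mg m^-3, oligotrophic to bloom
    micro, nano, pico = size_classes(tchla)
    print(f"TCHLa={tchla:>4}: micro={micro:.2f}  nano={nano:.2f}  pico={pico:.2f}")
```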

Relevance: 100.00%

Abstract:

Objective: To present a first- and second-trimester Down syndrome screening strategy, whereby second-trimester marker determination is contingent on the first-trimester results. Unlike non-disclosure sequential screening (the Integrated test), which requires all women to have markers in both trimesters, this allows a large proportion of the women to complete screening in the first trimester. Methods: Two first-trimester risk cut-offs defined three types of results: positive and referred for early diagnosis; negative with screening complete; and intermediate, needing second-trimester markers. Multivariate Gaussian modelling with Monte Carlo simulation was used to estimate the false-positive rate for a fixed 85% detection rate. The false-positive rate was evaluated for various early detection rates and early test completion rates. Model parameters were taken from the SURUSS trial. Results: Completion of screening in the first trimester for 75% of women resulted in a 30% early detection rate and a 55% second-trimester detection rate (net 85%), with a false-positive rate only 0.1% above that achievable by the Integrated test. The screen-positive rate was 0.1% in the first trimester and 4.7% for those continuing to be tested in the second trimester. If the early detection rate were to be increased to 45%, or the early completion rate were to be increased to 80%, there would be a further 0.1% increase in the false-positive rate. Conclusion: Contingent screening can achieve results comparable with the Integrated test but with earlier completion of screening for most women. Both strategies need to be evaluated in large-scale prospective studies, particularly in relation to psychological impact and practicability.
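
The decision logic of a contingent strategy with two first-trimester risk cut-offs can be illustrated with a deliberately simplified Monte Carlo sketch: one Gaussian marker score per trimester, likelihood-ratio risks, and the three-way classification described above. The marker separations, cut-offs, and prevalence below are arbitrary toys, not the SURUSS-based parameters.

```python
# Toy Monte Carlo of contingent screening with two first-trimester risk cut-offs.
# One Gaussian marker score per trimester; separations, cut-offs and prevalence
# are arbitrary, to illustrate the decision logic only.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
N = 200_000
PREVALENCE = 1 / 500
SEP1, SEP2 = 1.5, 1.2                     # marker shift in affected pregnancies (assumed)
CUT_HIGH, CUT_LOW = 1 / 50, 1 / 1500      # first-trimester risk cut-offs (assumed)
CUT_FINAL = 1 / 200                        # combined-test cut-off (assumed)

affected = rng.random(N) < PREVALENCE
x1 = rng.normal(0, 1, N) + SEP1 * affected
x2 = rng.normal(0, 1, N) + SEP2 * affected

def posterior_risk(log_lr):
    prior_odds = PREVALENCE / (1 - PREVALENCE)
    odds = prior_odds * np.exp(log_lr)
    return odds / (1 + odds)

log_lr1 = norm.logpdf(x1, SEP1) - norm.logpdf(x1, 0)
risk1 = posterior_risk(log_lr1)

positive = risk1 >= CUT_HIGH                       # referred for early diagnosis
complete_negative = risk1 < CUT_LOW                # screening completed, negative
intermediate = ~positive & ~complete_negative      # needs second-trimester markers

log_lr2 = norm.logpdf(x2, SEP2) - norm.logpdf(x2, 0)
risk_combined = posterior_risk(log_lr1 + log_lr2)
positive_late = intermediate & (risk_combined >= CUT_FINAL)

detected = (positive | positive_late) & affected
false_pos = (positive | positive_late) & ~affected
print(f"completed in first trimester: {(positive | complete_negative).mean():.0%}")
print(f"detection rate: {detected.sum() / affected.sum():.0%}, "
      f"false-positive rate: {false_pos.sum() / (~affected).sum():.1%}")
```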

Relevance: 100.00%

Abstract:

Due to the complexity and inherent instability in polymer extrusion, there is a need for process models which can be run on-line to optimise settings and control disturbances. First-principles models demand computationally intensive solution, while ‘black box’ models lack generalisation ability and physical process insight. This work examines a novel ‘grey box’ modelling technique which incorporates both prior physical knowledge and empirical data in generating intuitive models of the process. The models can be related to the underlying physical mechanisms in the extruder and have been shown to capture unpredictable effects of the operating conditions on process instability. Furthermore, model parameters can be related to material properties available from laboratory analysis and, as such, lend themselves to re-tuning for different materials without extensive remodelling work.
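
A minimal caricature of the grey-box idea is to keep a known first-principles term and let a small empirical correction, fitted to data, absorb what the physics misses; in the sketch below the "physical" relation, the data, and the fitted correction are all placeholders, not a real extrusion model.

```python
# Caricature of grey-box modelling: a fixed first-principles term plus a small
# empirical correction fitted to data.  The "physics", data, and correction
# below are placeholders, not a real extrusion model.
import numpy as np

rng = np.random.default_rng(11)
screw_speed = np.linspace(10, 100, 40)                       # rpm (synthetic)

def physics_term(speed):
    return 0.8 * speed                                        # assumed known linear law

# Synthetic measurements: the true process also has a mild nonlinearity the
# physics term does not capture, plus noise.
measured_pressure = (physics_term(screw_speed) + 0.004 * screw_speed**2
                     + rng.normal(0, 1.5, screw_speed.size))

# Empirical correction: fit a low-order polynomial to the residual only.
residual = measured_pressure - physics_term(screw_speed)
correction = np.polynomial.Polynomial.fit(screw_speed, residual, deg=2)

def grey_box(speed):
    return physics_term(speed) + correction(speed)

for s in (20, 60, 90):
    print(f"{s} rpm: physics {physics_term(s):6.1f}, grey-box {grey_box(s):6.1f}")
```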

Relevance: 100.00%

Abstract:

This paper deals with Takagi-Sugeno (TS) fuzzy model identification of nonlinear systems using fuzzy clustering. In particular, an extended fuzzy Gustafson-Kessel (EGK) clustering algorithm, using robust competitive agglomeration (RCA), is developed for automatically constructing a TS fuzzy model from system input-output data. The EGK algorithm can automatically determine the 'optimal' number of clusters from the training data set. It is shown that the EGK approach is relatively insensitive to initialization and is less susceptible to local minima, a benefit derived from its agglomerative property. This issue is often overlooked in the current literature on nonlinear identification using conventional fuzzy clustering. Furthermore, the robust statistical concepts underlying the EGK algorithm help to alleviate the difficulty of cluster identification in the construction of a TS fuzzy model from noisy training data. A new hybrid identification strategy is then formulated, which combines the EGK algorithm with a locally weighted least-squares method for the estimation of local sub-model parameters. The efficacy of this new approach is demonstrated through function approximation examples and also by application to the identification of an automatic voltage regulation (AVR) loop for a simulated 3 kVA laboratory micro-machine system.
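
The final estimation step mentioned above (locally weighted least squares for the consequent parameters of each TS rule) can be sketched as follows; the rule memberships here come from simple Gaussian validity functions around assumed centres rather than from the EGK algorithm itself.

```python
# Sketch of the locally weighted least-squares step for TS fuzzy consequents:
# given per-rule memberships, each local affine sub-model y = a*x + b is fitted
# with those memberships as weights.  Memberships come from assumed Gaussian
# validity functions, not from the EGK clustering itself.
import numpy as np

rng = np.random.default_rng(5)
x = rng.uniform(-3, 3, 300)
y = np.sin(x) + rng.normal(0, 0.05, x.size)       # nonlinear system to identify

centres, width = np.array([-2.0, 0.0, 2.0]), 1.0  # assumed cluster centres
memberships = np.exp(-0.5 * ((x[:, None] - centres) / width) ** 2)
memberships /= memberships.sum(axis=1, keepdims=True)   # normalised firing strengths

local_params = []
X = np.column_stack([x, np.ones_like(x)])         # regressors for y = a*x + b
for r in range(centres.size):
    sw = np.sqrt(memberships[:, r])               # sqrt weights for weighted LS
    a, b = np.linalg.lstsq(X * sw[:, None], sw * y, rcond=None)[0]
    local_params.append((a, b))

def ts_model(x_new):
    x_new = np.atleast_1d(x_new)
    mu = np.exp(-0.5 * ((x_new[:, None] - centres) / width) ** 2)
    mu /= mu.sum(axis=1, keepdims=True)
    locals_ = np.stack([a * x_new + b for a, b in local_params], axis=1)
    return (mu * locals_).sum(axis=1)

test = np.array([-2.5, -1.0, 0.5, 2.5])
print(np.round(ts_model(test), 2), "vs", np.round(np.sin(test), 2))
```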

Relevance: 100.00%

Abstract:

Coxian phase-type distributions are a special type of Markov model that can be used to represent survival times in terms of phases through which an individual may progress until they eventually leave the system completely. Previous research has considered the Coxian phase-type distribution to be ideal in representing patient survival in hospital. However, problems exist in fitting the distributions. This paper investigates the problems that arise with the fitting process by simulating various Coxian phase-type models for the representation of patient survival and examining the estimated parameter values and eigenvalues obtained. The results indicate that numerical methods previously used for fitting the model parameters do not always converge. An alternative technique is therefore considered. All methods are influenced by the choice of initial parameter values. The investigation uses a data set of 1439 elderly patients and models their survival time, the length of time they spend in a UK hospital.
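
A Coxian phase-type distribution is fully specified by its phase transition and exit rates; the sketch below builds the sub-generator matrix for a three-phase example, evaluates the survival function via a matrix exponential, and cross-checks it by simulation. The rates are arbitrary illustrative values, not those fitted to the hospital data.

```python
# Sketch of a 3-phase Coxian phase-type distribution: build the sub-generator,
# evaluate the survival function with a matrix exponential, and simulate times.
# Transition and exit rates are arbitrary illustrative values.
import numpy as np
from scipy.linalg import expm

lam = np.array([0.8, 0.3])        # rate of moving on to the next phase (per day)
mu = np.array([0.1, 0.05, 0.2])   # rate of leaving the system from each phase

n = mu.size
T = np.zeros((n, n))              # sub-generator over the transient phases
for i in range(n):
    T[i, i] = -(mu[i] + (lam[i] if i < n - 1 else 0.0))
    if i < n - 1:
        T[i, i + 1] = lam[i]
alpha = np.zeros(n)
alpha[0] = 1.0                    # everyone starts in phase 1

def survival(t):
    return alpha @ expm(T * t) @ np.ones(n)

def simulate(n_patients, rng=np.random.default_rng(0)):
    """Draw survival times by walking through the phases."""
    times = np.zeros(n_patients)
    for k in range(n_patients):
        phase, t = 0, 0.0
        while True:
            exit_rate = mu[phase] + (lam[phase] if phase < n - 1 else 0.0)
            t += rng.exponential(1.0 / exit_rate)
            if phase == n - 1 or rng.random() < mu[phase] / exit_rate:
                break
            phase += 1
        times[k] = t
    return times

times = simulate(5000)
for t in (5.0, 20.0, 60.0):
    print(f"S({t:>4}) model {survival(t):.3f}   empirical {(times > t).mean():.3f}")
```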

Relevance: 100.00%

Abstract:

The identification of nonlinear dynamic systems using radial basis function (RBF) neural models is studied in this paper. Given a model selection criterion, the main objective is to effectively and efficiently build a parsimonious, compact neural model that generalizes well over unseen data. This is achieved by simultaneous model structure selection and optimization of the parameters over the continuous parameter space. It is a mixed-integer hard problem, and a unified analytic framework is proposed to enable an effective and efficient two-stage mixed discrete-continuous identification procedure. This novel framework combines the advantages of an iterative discrete two-stage subset selection technique for model structure determination and the calculus-based continuous optimization of the model parameters. Computational complexity analysis and simulation studies confirm the efficacy of the proposed algorithm.
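
The structure-selection half of such a two-stage procedure can be illustrated with a greedy forward subset selection over a pool of candidate RBF centres, refitting the linear weights by least squares at each step; the continuous optimisation of centres and widths is omitted here, and all settings are illustrative.

```python
# Sketch of greedy forward subset selection for an RBF model: candidate centres
# are added one at a time, choosing whichever most reduces the training error,
# with output weights refitted by least squares at each step.  The continuous
# optimisation stage (tuning centres/widths) is omitted; settings are illustrative.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(-1, 1, 200)
y = np.sinc(3 * x) + rng.normal(0, 0.02, x.size)      # nonlinear system to identify

candidates = np.linspace(-1, 1, 40)                   # pool of candidate RBF centres
WIDTH = 0.2

def design(centres):
    return np.exp(-((x[:, None] - np.asarray(centres)) / WIDTH) ** 2)

selected, remaining = [], list(range(candidates.size))
for _ in range(8):                                    # pick a parsimonious 8-term model
    errors = []
    for j in remaining:
        Phi = design(candidates[selected + [j]])
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        errors.append(((y - Phi @ w) ** 2).mean())
    best = remaining[int(np.argmin(errors))]
    selected.append(best)
    remaining.remove(best)

Phi = design(candidates[selected])
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("chosen centres:", np.round(np.sort(candidates[selected]), 2))
print(f"training RMSE: {np.sqrt(((y - Phi @ w) ** 2).mean()):.4f}")
```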