957 results for Two fluid model
Abstract:
A mass‐balance model for Lake Superior was applied to polychlorinated biphenyls (PCBs), polybrominated diphenyl ethers (PBDEs), and mercury to determine the major routes of entry and the major mechanisms of loss from this ecosystem, as well as the time required for each contaminant class to approach steady state. A two‐box model (water column, surface sediments) incorporating seasonally adjusted environmental parameters was used. Both numerical (forward Euler) and analytical solutions were employed and compared. For validation, the model was compared with current and historical concentrations and fluxes in the lake and sediments. Results for PCBs were consistent with prior work, showing that air‐water exchange is the most rapid input and loss process. The model indicates that mercury behaves similarly to a moderately‐chlorinated PCB, with air‐water exchange being a relatively rapid input and loss process. Modeled accumulation fluxes of PBDEs in sediments agreed with measured values reported in the literature. Wet deposition rates were about three times greater than dry particulate deposition rates for PBDEs. Gas deposition was an important process for tri‐ and tetra‐BDEs (BDEs 28 and 47), but not for higher‐brominated BDEs. Sediment burial was the dominant loss mechanism for most of the PBDE congeners, while volatilization was still significant for tri‐ and tetra‐BDEs. Because volatilization is a relatively rapid loss process for both mercury and the most abundant PCBs (tri‐ through penta‐), the model predicts that similar times (2–10 yr) are required for the compounds to approach steady state in the lake. The model predicts that if inputs of Hg(II) to the lake decrease in the future, then concentrations of mercury in the lake will decrease at a rate similar to the historical decline in PCB concentrations following the ban on production and most uses in the U.S. In contrast, PBDEs are likely to respond more slowly if atmospheric concentrations are reduced in the future because loss by volatilization is a much slower process for PBDEs, leading to lower overall loss rates for PBDEs than for PCBs and mercury. Uncertainties in the chemical degradation rates and partitioning constants of PBDEs are the largest source of uncertainty in the modeled times to steady state for this class of chemicals. The modeled organic PBT loading rates are sensitive to uncertainties in scavenging efficiencies by rain and snow, dry deposition velocity, watershed runoff concentrations, and uncertainties in air‐water exchange such as the effect of atmospheric stability.
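The two-box structure lends itself to a compact numerical illustration. The sketch below is not the authors' model; it is a minimal forward-Euler integration of a generic water-column/sediment mass balance with invented rate constants and loads, compared against the analytical steady state, to show the kind of calculation the abstract describes.

```python
import numpy as np

# Minimal two-box (water column, surface sediment) mass-balance sketch.
# All rate constants (1/yr) and the external load (kg/yr) are illustrative
# placeholders, not values from the Lake Superior study.
k_volat  = 0.5    # volatilization loss from the water column
k_settle = 0.3    # particle settling from water to sediment
k_resusp = 0.05   # resuspension from sediment back to water
k_burial = 0.1    # permanent burial loss from the sediment layer
load     = 100.0  # atmospheric + tributary input to the water column

def derivs(m):
    """Mass derivatives (kg/yr) for the water (m[0]) and sediment (m[1]) boxes."""
    m_w, m_s = m
    dm_w = load - (k_volat + k_settle) * m_w + k_resusp * m_s
    dm_s = k_settle * m_w - (k_resusp + k_burial) * m_s
    return np.array([dm_w, dm_s])

# Numerical solution: forward Euler, as named in the abstract.
dt, years = 0.01, 50.0
m = np.zeros(2)                      # start from a contaminant-free lake
for _ in range(int(years / dt)):
    m = m + dt * derivs(m)

# Analytical steady state for comparison: solve K @ m_ss = -b.
K = np.array([[-(k_volat + k_settle), k_resusp],
              [k_settle, -(k_resusp + k_burial)]])
b = np.array([load, 0.0])
m_ss = np.linalg.solve(K, -b)
print("Euler after 50 yr:", m.round(1), " analytical steady state:", m_ss.round(1))
```

With loss rate constants of this order, the Euler trajectory approaches the analytical steady state within a few decades, which is the sense in which "time to approach steady state" is used above.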
Abstract:
Stereoselectivity has to be considered for the pharmacodynamic and pharmacokinetic features of ketamine. Stereoselective biotransformation of ketamine was investigated in equine microsomes in vitro. Concentration-time curves were constructed, and enzyme activity was determined for different substrate concentrations using equine liver and lung microsomes. The concentrations of R/S-ketamine and R/S-norketamine were determined by enantioselective capillary electrophoresis. A two-phase model based on Hill kinetics was used to analyze the biotransformation of R/S-ketamine into R/S-norketamine and, in a second step, into R/S-downstream metabolites. In liver and lung microsomes, levels of R-ketamine exceeded those of S-ketamine at all time points, and S-norketamine exceeded R-norketamine at time points below the maximum concentration. In liver and lung microsomes, significant differences in enzyme velocity (V(max)) were observed between S- and R-norketamine formation, and in the V(max) of S-norketamine formation when S-ketamine alone was compared with S-ketamine in the racemate. Our investigations of microsomal reactions in vitro suggest that stereoselective ketamine biotransformation in horses occurs in the liver and the lung, with slower elimination of S-ketamine in the presence of R-ketamine. Scaling of the in vitro parameters to liver and lung organ clearances provided an excellent fit with previously published in vivo data and confirmed a lung first-pass effect.
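As an illustration of the two-phase Hill-kinetics scheme described above, here is a minimal sketch (not the study's fitted model) of two sequential Hill-type steps, ketamine to norketamine to downstream metabolites, integrated with a simple Euler loop; all parameter values are placeholders.

```python
import numpy as np

def hill_rate(s, vmax, s50, n):
    """Hill-type enzyme velocity: v = Vmax * S^n / (S50^n + S^n)."""
    return vmax * s**n / (s50**n + s**n)

def derivs(y, p):
    """Two sequential Hill steps: ketamine -> norketamine -> downstream metabolites."""
    ket, nor = y
    v1 = hill_rate(ket, p["vmax1"], p["s50_1"], p["n1"])   # norketamine formation
    v2 = hill_rate(nor, p["vmax2"], p["s50_2"], p["n2"])   # norketamine elimination
    return np.array([-v1, v1 - v2])

# Placeholder parameters; in the study these would differ per enantiomer and tissue.
p = dict(vmax1=1.0, s50_1=5.0, n1=1.2, vmax2=0.4, s50_2=8.0, n2=1.0)
y = np.array([50.0, 0.0])           # initial ketamine and norketamine concentrations
dt = 0.01
for _ in range(int(120 / dt)):      # simulate 120 time units of incubation
    y = y + dt * derivs(y, p)
print("ketamine, norketamine at end of incubation:", y.round(2))
```

Fitting such a scheme separately to the R- and S-enantiomer data is what allows the V(max) comparisons reported in the abstract.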
Abstract:
In this paper we analyze a dynamic agency problem where contracting parties do not know the agent's future productivity at the beginning of the relationship. We consider a two-period model where both the agent and the principal observe the agent's second-period productivity at the end of the first period. This observation is assumed to be non-verifiable information. We compare long-term contracts with short-term contracts with respect to their suitability to motivate effort in both periods. On the one hand, short-term contracts allow for a better fine-tuning of second-period incentives as they can be aligned with the agent's second-period productivity. On the other hand, in short-term contracts first-period effort incentives might be distorted as contracts have to be sequentially optimal. Hence, the difference between long-term and short-term contracts is characterized by a trade-off between inducing effort in the first and in the second period. We analyze the determinants of this trade-off and demonstrate its implications for performance measurement and information system design.
Abstract:
OBJECTIVE: Occupational low back pain (LBP) is considered to be the most expensive form of work disability, with the socioeconomic costs of persistent LBP far exceeding the costs of acute and subacute LBP. This makes the early identification of patients at risk of developing persistent LBP essential, especially in working populations. The aim of the study was to evaluate both risk factors (for the development of persistent LBP) and protective factors (preventing the development of persistent LBP) in the same cohort. PARTICIPANTS: An inception cohort of 315 patients with acute to subacute or recurrent LBP was recruited from 14 health practitioners (12 general practitioners and 2 physiotherapists) across New Zealand. METHODS: Patients with persistent LBP at six-month follow-up were compared with patients with non-persistent LBP with respect to occupational, psychological, biomedical and demographic/lifestyle predictors at baseline, using multiple logistic regression analyses. All significant variables from the different domains were then combined into a single predictive model. RESULTS: A final two-predictor model with an overall predictive value of 78% included social support at work (OR 0.67; 95% CI 0.45 to 0.99) and somatization (OR 1.08; 95% CI 1.01 to 1.15). CONCLUSIONS: Social support at work should be considered a resource preventing the development of persistent LBP, whereas somatization should be considered a risk factor for the development of persistent LBP. Further studies are needed to determine whether addressing these factors in workplace interventions for patients suffering from acute, subacute or recurrent LBP prevents the subsequent development of persistent LBP.
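To make the reported model concrete, the sketch below fits a two-predictor logistic regression and converts the coefficients into odds ratios with 95 % confidence intervals, mirroring the OR format in the results; the data and coefficients are invented, so only the workflow is illustrative.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic illustration of a two-predictor logistic model for persistent LBP.
# The data and coefficients below are invented; only the workflow (fit, then
# report odds ratios with 95 % CIs) mirrors the analysis in the abstract.
rng = np.random.default_rng(0)
n = 315
social_support = rng.normal(0.0, 1.0, n)   # standardized social support at work
somatization   = rng.normal(20.0, 5.0, n)  # somatization score
linpred = -2.0 - 0.4 * social_support + 0.08 * somatization
persistent = rng.binomial(1, 1.0 / (1.0 + np.exp(-linpred)))

X = sm.add_constant(np.column_stack([social_support, somatization]))
fit = sm.Logit(persistent, X).fit(disp=0)
odds_ratios = np.exp(fit.params[1:])       # OR < 1 suggests protective, OR > 1 risk
conf_int    = np.exp(fit.conf_int()[1:])   # 95 % confidence intervals on the OR scale
for name, or_, (lo, hi) in zip(["social support", "somatization"], odds_ratios, conf_int):
    print(f"{name}: OR {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```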
Abstract:
Tyrosine hydroxylase (TH), the initial and rate-limiting enzyme in the catecholaminergic biosynthetic pathway, is phosphorylated on multiple serine residues by multiple protein kinases. Although it has been demonstrated that many protein kinases are capable of phosphorylating and activating TH in vitro, it is less clear which protein kinases participate in the physiological regulation of catecholamine synthesis in situ. These studies were designed to determine if protein kinase C (PK-C) plays such a regulatory role. Stimulation of intact bovine adrenal chromaffin cells with phorbol esters results in stimulation of catecholamine synthesis, tyrosine hydroxylase phosphorylation and activation. These responses are both time- and concentration-dependent, and are specific for those phorbol ester analogues which activate PK-C. RP-HPLC analysis of TH tryptic phosphopeptides indicates that PK-C phosphorylates TH on three putative sites. One of these (peptide 6) is the same as that phosphorylated by both cAMP-dependent protein kinase (PK-A) and calcium/calmodulin-dependent protein kinase (CaM-K). However, two of these sites (peptides 4 and 7) are unique and, to date, have not been shown to be phosphorylated by any other protein kinase. These peptides correspond to those which are phosphorylated with a slow time course in response to stimulation of chromaffin cells with the natural agonist acetylcholine. The activation of TH produced by PK-C is most closely correlated with the phosphorylation of peptide 6, but, as evident from pH profiles of tyrosine hydroxylase activity, phosphorylation of peptides 4 and 7 affects the expression of the activation produced by phosphorylation of peptide 6. These data support a role for PK-C in the control of TH activity and suggest a two-stage model for the physiological regulation of catecholamine synthesis by phosphorylation in response to cholinergic stimulation: an initial fast response, which appears to be mediated by CaM-K, and a slower, sustained response which appears to be mediated by PK-C. In addition, the multiple-site phosphorylation of TH provides a mechanism whereby the regulation of catecholamine synthesis appears to be under the control of multiple protein kinases, and allows for the convergence of multiple, diverse physiological and biochemical signals.
Abstract:
Retinoblastoma is a pediatric tumor which is associated with somatic and inherited mutations at the retinoblastoma susceptibility locus, RB1. Although most cases of retinoblastoma fit the previously described 'two hit' model of oncogenesis, the molecular mechanisms underlying rare instances of familial retinoblastoma with reduced penetrance are not well understood. To better understand this phenomenon, a study was undertaken to uncover the molecular cause of low penetrance retinoblastoma in a limited number of families. In one case, a unique cryptic splicing alteration was discovered in the RB1 gene and demonstrated to reduce the level of normal RB1 mRNA produced. Penetrance in the large family known to carry this mutation is less than 50%. Data about the mutation supports a theory that reduced penetrance retinoblastoma is caused by partially functional mutations in RB1. In another family, three independent causes of retinoblastoma or the related phenotype of retinoma were indicated by linkage analysis, a finding unique in retinoblastoma research. A novel polymorphism restricted to Asian populations was also described during the course of this study.
Abstract:
In order to fully describe the construct of empowerment and to determine possible measures for this construct in racially and ethnically diverse neighborhoods, a qualitative study based on Grounded Theory was conducted at both the individual and collective levels. Participants for the study included 49 grassroots experts on community empowerment who were interviewed through semi-structured interviews and focus groups. The researcher also conducted field observations as part of the research protocol. The results of the study identified benchmarks of individual and collective empowerment and hundreds of possible markers of collective empowerment applicable in diverse communities. Results also indicated that community involvement is essential in the selection and implementation of proper measures. Additional findings were that the construct of empowerment involves specific principles of empowering relationships and particular motivational factors. All of these findings lead to a two-dimensional model of empowerment based on the concepts of relationships among members of a collective body and the collective body's desire for socio-political change. These results suggest that the design, implementation, and evaluation of programs that foster empowerment must be based on collaborative ventures between the population being served and program staff because of the interactive, synergistic nature of the construct. In addition, empowering programs should embrace specific principles and processes of individual and collective empowerment in order to maximize their effectiveness and efficiency. Finally, the results suggest that collaboratively choosing markers to measure the processes and outcomes of empowerment in the main systems and populations living in today's multifaceted communities is a useful mechanism to determine change.
Abstract:
Nuclear morphometry (NM) uses image analysis to measure features of the cell nucleus which are classified as: bulk properties, shape or form, and DNA distribution. Studies have used these measurements as diagnostic and prognostic indicators of disease with inconclusive results. The distributional properties of these variables have not been systematically investigated, although much of the medical data exhibit nonnormal distributions. Measurements are done on several hundred cells per patient, so summary measurements reflecting the underlying distribution are needed. Distributional characteristics of 34 NM variables from prostate cancer cells were investigated using graphical and analytical techniques. Cells per sample ranged from 52 to 458. A small sample of patients with benign prostatic hyperplasia (BPH), representing non-cancer cells, was used for general comparison with the cancer cells. Data transformations such as log, square root and 1/x did not yield normality as measured by the Shapiro-Wilk test for normality. A modulus transformation, used for distributions having abnormal kurtosis values, also did not produce normality. Kernel density histograms of the 34 variables exhibited non-normality, and 18 variables also exhibited bimodality. A bimodality coefficient was calculated, and 3 variables (DNA concentration, shape and elongation) showed the strongest evidence of bimodality and were studied further. Two analytical approaches were used to obtain a summary measure for each variable for each patient: cluster analysis to determine significant clusters, and a mixture-model analysis using a two-component Gaussian model with equal variances. The mixture component parameters were used to bootstrap the log-likelihood ratio to determine the significant number of components, 1 or 2. These summary measures were used as predictors of disease severity in several proportional odds logistic regression models. The disease severity scale had 5 levels and was constructed from 3 components: extracapsular penetration (ECP), lymph node involvement (LN+) and seminal vesicle involvement (SV+), which represent surrogate measures of prognosis. The summary measures were not strong predictors of disease severity. There was some indication from the mixture model results that there were changes in mean levels and proportions of the components in the lower severity levels.
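The summary-measure step (one- versus two-component equal-variance Gaussian mixture, with a bootstrapped log-likelihood ratio) can be sketched as follows; the cell-level data are synthetic, and scikit-learn's GaussianMixture with a tied covariance is used as a stand-in for the equal-variance model described above.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Sketch: fit 1- vs 2-component Gaussian mixtures (shared variance via
# covariance_type="tied") to one cell-level NM variable and bootstrap the
# log-likelihood ratio. The cell measurements below are synthetic.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(1.0, 0.3, 200),
                    rng.normal(2.2, 0.3, 150)]).reshape(-1, 1)

def lr_statistic(data):
    """2 * (log-likelihood of the 2-component fit minus the 1-component fit)."""
    g1 = GaussianMixture(n_components=1, covariance_type="tied", random_state=0).fit(data)
    g2 = GaussianMixture(n_components=2, covariance_type="tied", random_state=0).fit(data)
    return 2.0 * len(data) * (g2.score(data) - g1.score(data))

observed = lr_statistic(x)

# Parametric bootstrap of the statistic under the 1-component null.
null_model = GaussianMixture(n_components=1, covariance_type="tied", random_state=0).fit(x)
boot = []
for _ in range(200):
    sim, _ = null_model.sample(len(x))
    boot.append(lr_statistic(sim))
p_value = float(np.mean(np.array(boot) >= observed))
print(f"LRT statistic {observed:.1f}, bootstrap p-value {p_value:.3f}")
```

A small bootstrap p-value supports two components for that patient's variable; the fitted means and mixing proportion then serve as the per-patient summary measures.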
Abstract:
A population-genetic analysis is performed of a two-locus two-allele model, in which the primary locus has a major effect on a quantitative trait that is under frequency-dependent disruptive selection caused by intraspecific competition for a continuum of resources. The modifier locus determines the degree of dominance at the trait level. We establish the conditions when a modifier allele can invade and when it becomes fixed if sufficiently frequent. In general, these are not equivalent because an unstable internal equilibrium may exist and the condition for successful invasion of the modifier is more restrictive than that for eventual fixation from already high frequency. However, successful invasion implies global fixation, i.e., fixation from any initial condition. Modifiers of large effect can become fixed, and also invade, in a wider parameter range than modifiers of small effect. We also study modifiers with a direct, frequency-independent deleterious fitness effect. We show that they can invade if they induce a sufficiently high level of dominance and if disruptive selection on the ecological trait is strong enough. For deleterious modifiers, successful invasion no longer implies global fixation because they can become stuck at an intermediate frequency due to a stable internal equilibrium. Although the conditions for invasion and for fixation if sufficiently frequent are independent of the linkage relation between the two loci, the rate of spread depends strongly on it. The present study provides further support to the view that evolution of dominance may be an efficient mechanism to remove unfit heterozygotes that are maintained by balancing selection. It also demonstrates that an invasion analysis of mutants of very small effect is insufficient to obtain a full understanding of the evolutionary dynamics under frequency-dependent selection.
Abstract:
We study the evolution of higher levels of dominance as a response to negative frequency-dependent selection. In contrast to previous studies, we focus on the effect of assortative mating on the evolution of dominance under frequency-dependent intraspecific competition. We analyze a two-locus two-allele model, in which the primary locus has a major effect on a quantitative trait that is under a mixture of frequency-independent stabilizing selection, density-dependent selection, and frequency-dependent selection caused by intraspecific competition for a continuum of resources. The second (modifier) locus determines the degree of dominance at the trait level. Additionally, the population mates assortatively with respect to similarities in the ecological trait. Our analysis shows that the parameter region in which dominance can be established decreases if small levels of assortment are introduced. In addition, the degree of dominance that can be established also decreases. In contrast, if assortment is intermediate, sexual selection for extreme types can be established, which leads to evolution of higher levels of dominance than under random mating. For modifiers with large effects, intermediate levels of assortative mating are most favorable for the evolution of dominance. For large modifiers, the speed of fixation can even be higher for intermediate levels of assortative mating than for random mating.
Abstract:
Competitive Market Segmentation
Abstract: In a two-firm model where each firm sells a high-quality and a low-quality version of a product, customers differ with respect to their brand preferences and their attitudes towards quality. We show that the standard result of quality-independent markups crucially depends on the assumption that the customers' valuation of quality is identical across firms. Once we relax this assumption, competition across qualities leads to second-degree price discrimination. We find that markups on low-quality products are higher if consuming a low-quality product involves a firm-specific disutility. Likewise, markups on high-quality products are higher if consuming a high-quality product creates a firm-specific surplus.
Selection upon Wage Posting
Abstract: We discuss a model of a job market where firms announce salaries and then decide, based on the evaluation of a productivity test, whether to hire applicants. Candidates for a job are locked in once they have applied to a given employer. Hence, such a market exhibits a specific form of the bargain-then-ripoff principle. With a single firm, the outcome is efficient. Under competition, what might be called "positive selection" leads to market failure. Our model thus provides a rationale for very small employment probabilities in some sectors.
Exclusivity Clauses: Enhancing Competition, Raising Prices
Abstract: In a setting where retailers and suppliers compete for each other by offering binding contracts, exclusivity clauses serve as a competitive device. As a result of these clauses, firms addressed by contracts only accept the most favorable deal. Thus the contract-issuing parties have to squeeze their final customers and transfer the surplus within the vertical supply chain. We elaborate on the extent to which the resulting allocation depends on the sequence of play and discuss the implications of a ban on exclusivity clauses.
Abstract:
Context. To date, calculations of planet formation have mainly focused on dynamics, and only a few have considered the chemical composition of refractory elements and compounds in the planetary bodies. While many studies have concentrated on the chemical composition of volatile compounds (such as H2O, CO, and CO2) incorporated in planets, only a few have also considered refractory materials, although these are of great importance for the formation of rocky planets. Aims. We computed the abundance of refractory elements in planetary bodies formed in stellar systems with a solar chemical composition by combining models of chemical composition and planet formation. We also considered the formation of refractory organic compounds, which have been ignored in previous studies on this topic. Methods. We used the commercial software package HSC Chemistry to compute the condensation sequence and chemical composition of refractory minerals incorporated into planets. The problem of refractory organic material is approached with two distinct model calculations: the first considers that the fraction of atoms used in the formation of organic compounds is removed from the system (i.e., organic compounds are formed in the gas phase and are non-reactive); the second assumes that organic compounds are formed by reactions between different compounds that had previously condensed from the gas phase. Results. Results show that refractory material represents more than 50 wt % of the mass of solids accreted by the simulated planets, with up to 30 wt % of the total mass composed of refractory organic compounds. Carbide and silicate abundances are consistent with C/O and Mg/Si elemental ratios of 0.5 and 1.02 for the Sun. Less than 1 wt % of carbides are present in the planets, and pyroxene and olivine are formed in similar quantities. The model predicts planets that are similar in composition to those of the solar system. Starting from a common initial nebula composition, it also shows that a wide variety of chemically different planets can form, which means that the differences in planetary compositions are due to differences in the planetary formation process. Conclusions. We show that a model in which refractory organic material is absent from the system is more compatible with observations. The use of a planet formation model is essential to form a wide diversity of planets in a consistent way.
Abstract:
PURPOSE To explore differential methylation of HAAO, HOXD3, LGALS3, PITX2, RASSF1 and TDRD1 as a molecular tool to predict biochemical recurrence (BCR) in patients with high-risk prostate cancer (PCa). METHODS A multiplexed nested methylation-specific PCR was applied to quantify promoter methylation of the selected markers in five cell lines, 42 benign prostatic hyperplasia (BPH) and 71 high-risk PCa tumor samples. Uni- and multivariate Cox regression models were used to assess the importance of the methylation level in predicting BCR. RESULTS The PCa-specific methylation marker HAAO, in combination with HOXD3 and the hypomethylation marker TDRD1, distinguished PCa samples (>90 % of tumor cells each) from BPH with a sensitivity of 0.99 and a specificity of 0.95. High methylation of PITX2, HOXD3 and RASSF1, as well as low methylation of TDRD1, appeared to be significantly associated with a higher risk of BCR (HR 3.96, 3.44, 2.80 and 2.85, respectively) after correcting for established risk factors. When DNA methylation was treated as a continuous variable, a two-gene model, PITX2 × 0.020677 + HOXD3 × 0.0043132, proved to be a better predictor of BCR (HR 4.85) than the individual markers. This finding was confirmed in an independent set of 52 high-risk PCa tumor samples (HR 11.89). CONCLUSIONS Differential promoter methylation of HOXD3, PITX2, RASSF1 and TDRD1 emerges as an independent predictor of BCR in high-risk PCa patients. The two-gene continuous DNA methylation model "PITX2 × 0.020677 + HOXD3 × 0.0043132" is a better predictor of BCR than the individual markers.
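The quoted two-gene score is a simple linear combination of methylation levels that is then entered into a Cox model. The sketch below uses the published weights but entirely synthetic methylation, follow-up and event data, with the lifelines package as one possible implementation.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Illustrative sketch of the two-gene continuous methylation score entered into
# a Cox model. Only the weights are taken from the abstract; the methylation
# levels, follow-up times and recurrence events below are synthetic.
rng = np.random.default_rng(2)
n = 71
df = pd.DataFrame({
    "PITX2": rng.uniform(0, 100, n),        # % promoter methylation (synthetic)
    "HOXD3": rng.uniform(0, 100, n),
    "time_to_bcr": rng.exponential(36, n),  # months to BCR or censoring (synthetic)
    "bcr": rng.binomial(1, 0.4, n),         # 1 = biochemical recurrence observed
})
df["two_gene_score"] = 0.020677 * df["PITX2"] + 0.0043132 * df["HOXD3"]

cph = CoxPHFitter()
cph.fit(df[["two_gene_score", "time_to_bcr", "bcr"]],
        duration_col="time_to_bcr", event_col="bcr")
print(cph.hazard_ratios_)  # hazard ratio per unit increase of the two-gene score
```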
Abstract:
BACKGROUND AND OBJECTIVES We aimed to study the impact of size, maturation and cytochrome P450 2D6 (CYP2D6) genotype activity score as predictors of intravenous tramadol disposition. METHODS Tramadol and O-desmethyl tramadol (M1) observations in 295 human subjects (postmenstrual age 25 weeks to 84.8 years, weight 0.5-186 kg) were pooled. A population pharmacokinetic analysis was performed using a two-compartment model for tramadol and two additional M1 compartments. Covariate analysis included weight, age, sex, disease characteristics (healthy subject or patient) and CYP2D6 genotype activity. A sigmoid maturation model was used to describe age-related changes in tramadol clearance (CLPO), M1 formation clearance (CLPM) and M1 elimination clearance (CLMO). A phenotype-based mixture model was used to identify CLPM polymorphism. RESULTS Differences in clearances were largely accounted for by maturation and size. The time to reach 50 % of adult clearance (TM50) was used to describe maturation. CLPM (TM50 39.8 weeks) and CLPO (TM50 39.1 weeks) displayed fast maturation, while CLMO matured more slowly, similar to glomerular filtration rate (TM50 47 weeks). The phenotype-based mixture model identified a slow and a faster metabolizer group. Slow metabolizers comprised 9.8 % of subjects, with a CLPM that was 19.4 % of the faster metabolizer value. Low CYP2D6 genotype activity was associated with a lower CLPM (25 %) than that of faster metabolizers, but only 32 % of those with low genotype activity were in the slow metabolizer group. CONCLUSIONS Maturation and size are key predictors of variability. A two-group polymorphism was identified based on phenotypic M1 formation clearance. Maturation of tramadol elimination occurs early (50 % of the adult value at term gestation).
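The sigmoid maturation function with the reported TM50 values can be written down directly; the sketch below combines it with standard allometric weight scaling. The Hill exponent and the adult clearance value are placeholders, not estimates from the study.

```python
def maturation(pma_weeks, tm50, hill=3.0):
    """Sigmoid (Hill) maturation: fraction of adult clearance reached at a given
    postmenstrual age. The Hill exponent here is an assumed placeholder."""
    return pma_weeks**hill / (tm50**hill + pma_weeks**hill)

def clearance(weight_kg, pma_weeks, cl_adult, tm50):
    """Adult clearance scaled allometrically to size (exponent 0.75, 70-kg
    reference) and multiplied by the maturation fraction."""
    return cl_adult * (weight_kg / 70.0) ** 0.75 * maturation(pma_weeks, tm50)

# TM50 values are quoted from the abstract; the adult clearance is a placeholder.
TM50 = {"CLPM": 39.8, "CLPO": 39.1, "CLMO": 47.0}   # weeks of postmenstrual age
CL_ADULT_PO = 30.0                                  # placeholder adult tramadol CL (L/h)

for pma, weight in [(40, 3.5), (92, 10.0), (1040, 70.0)]:   # term neonate, ~1 yr, adult
    frac = maturation(pma, TM50["CLPO"])
    print(f"PMA {pma:4d} wk, {weight:5.1f} kg: maturation {frac:.2f}, "
          f"CLPO ~ {clearance(weight, pma, CL_ADULT_PO, TM50['CLPO']):.1f} L/h")
```

With TM50 near 39-40 weeks, the function returns roughly 0.5 at term gestation, which is the "50 % of adult value at term" statement in the conclusions.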
Abstract:
Chrysophyte cysts are recognized as powerful proxies of cold-season temperatures. In this paper we use the relationship between chrysophyte assemblages and the number of days below 4 °C (DB4 °C) in the epilimnion of a lake in northern Poland to develop a transfer function and to reconstruct winter severity in Poland for the last millennium. DB4 °C is a climate variable related to the length of the winter. Multivariate ordination techniques were used to study the distribution of chrysophytes from sediment traps of 37 lowland lakes distributed along a variety of environmental and climatic gradients in northern Poland. Of all the environmental variables measured, stepwise variable selection and individual redundancy analyses (RDA) identified DB4 °C as the most important variable for chrysophytes, explaining a portion of variance independent of variables related to water chemistry (conductivity, chlorides, K, sulfates), which were also important. A quantitative transfer function was created to estimate DB4 °C from sedimentary assemblages using partial least squares regression (PLS). The two-component model (PLS-2) had a cross-validated coefficient of determination of R²cross = 0.58, with a root mean squared error of prediction (RMSEP, based on leave-one-out cross-validation) of 3.41 days. The resulting transfer function was applied to an annually varved sediment core from Lake Żabińskie, providing a new sub-decadal quantitative reconstruction of DB4 °C with high chronological accuracy for the period AD 1000–2010. During Medieval Times (AD 1180–1440), winters were generally shorter (warmer), except for a decade with very long and severe winters around AD 1260–1270 (following the AD 1258 volcanic eruption). The 16th and 17th centuries and the beginning of the 19th century experienced very long, severe winters. Comparison with other European cold-season reconstructions and atmospheric indices for this region indicates that a large part of the winter variability (reconstructed DB4 °C) is due to the interplay between the oscillations of the zonal flow controlled by the North Atlantic Oscillation (NAO) and the influence of continental anticyclonic systems (Siberian High, East Atlantic/Western Russia pattern). Differences with other European records are attributed to geographic climatological differences between Poland and Western Europe (Low Countries, Alps). The striking correspondence between the combined volcanic and solar forcing and the DB4 °C reconstruction prior to the 20th century suggests that winter climate in Poland responds mostly to natural forced variability (volcanic and solar), and that the influence of unforced variability is low.
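A two-component PLS transfer function with a leave-one-out RMSEP can be prototyped as below; the calibration data are synthetic stand-ins for the 37 lakes, and scikit-learn's PLSRegression is used as one possible implementation of the method named in the abstract.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Sketch of a two-component PLS transfer function (chrysophyte assemblages ->
# DB4 °C) with a leave-one-out RMSEP, as described in the abstract. The
# calibration data below are synthetic stand-ins for the 37 lakes.
rng = np.random.default_rng(3)
n_lakes, n_taxa = 37, 25
assemblages = rng.dirichlet(np.ones(n_taxa), size=n_lakes) * 100.0   # % abundances
db4 = (60.0 + 1.5 * assemblages[:, 0] - 0.8 * assemblages[:, 1]
       + rng.normal(0.0, 3.0, n_lakes))                              # days below 4 °C

pls2 = PLSRegression(n_components=2)
pred = cross_val_predict(pls2, assemblages, db4, cv=LeaveOneOut()).ravel()
rmsep = float(np.sqrt(np.mean((pred - db4) ** 2)))
r2_cross = 1.0 - np.sum((pred - db4) ** 2) / np.sum((db4 - db4.mean()) ** 2)
print(f"LOO RMSEP = {rmsep:.2f} days, cross-validated R^2 = {r2_cross:.2f}")

# The calibrated model would then be applied to fossil assemblages, e.g.:
pls2.fit(assemblages, db4)
# reconstruction = pls2.predict(fossil_assemblages)
```

The cross-validated R² and RMSEP computed this way correspond to the performance statistics quoted for the PLS-2 model above.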