109 results for Non-polarizable Water Models


Relevance: 30.00%

Publisher:

Abstract:

1. Identifying areas suitable for recolonization by threatened species is essential to support efficient conservation policies. Habitat suitability models (HSMs) predict species' potential distributions, but the quality of their predictions should be carefully assessed when the species-environment equilibrium assumption is violated.

2. We studied the Eurasian otter (Lutra lutra), whose numbers are recovering in southern Italy. To produce widely applicable results, we chose standard HSM procedures and examined the models' capacity to predict the suitability of a recolonization area. We used two fieldwork datasets: presence-only data, used in Ecological Niche Factor Analysis (ENFA), and presence-absence data, used in a generalized linear model (GLM). In addition to cross-validation, we evaluated the models independently with data from a recolonization event, which provided presences on a previously unoccupied river.

3. Three of the models successfully predicted the suitability of the recolonization area, but the GLM built with pre-recolonization data disagreed with these predictions, missing the recolonized river's suitability and describing the otter's niche poorly. Our results highlight three points of relevance to modelling practice: (1) absences may prevent models from correctly identifying areas suitable for a species' spread; (2) the selection of variables may introduce randomness into the predictions; and (3) the Area Under the Curve (AUC), a commonly used validation index, was not well suited to evaluating model quality, whereas the Boyce index (CBI), based on presence data only, better reflected the models' fit to the recolonization observations.

4. For species with unstable spatial distributions, presence-only models may work better than presence-absence methods in making reliable predictions of suitable areas for expansion. An iterative modelling process, using new occurrences from each step of the species' spread, may also help to progressively reduce errors.

5. Synthesis and applications. Conservation plans depend on reliable models of species' suitable habitats. In non-equilibrium situations, as is the case for threatened or invasive species, models may be negatively affected by the inclusion of absence data when predicting areas of potential expansion. Presence-only methods here provide a better basis for productive conservation management practices.
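The Boyce index favoured in point 3 above can be computed from presence data alone. A minimal sketch under stated assumptions (suitability scores scaled to [0, 1], equal-width classes, a simple rank correlation without tie averaging; all function and variable names are ours, not the paper's):

```python
import numpy as np

def boyce_index(suit_presence, suit_background, n_bins=10):
    """Sketch of a binned Boyce index: rank correlation between
    suitability classes and their predicted-to-expected (P/E) ratios.
    Values near +1 mean presences concentrate in high-suitability areas."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    p = np.histogram(suit_presence, bins=edges)[0] / len(suit_presence)
    e = np.histogram(suit_background, bins=edges)[0] / len(suit_background)
    keep = e > 0                      # skip classes absent from the study area
    pe = p[keep] / e[keep]
    bin_rank = np.arange(n_bins)[keep].astype(float)

    def ranks(a):                     # simple ranking, no tie averaging
        order = np.argsort(a, kind="stable")
        r = np.empty(len(a))
        r[order] = np.arange(len(a))
        return r

    return float(np.corrcoef(ranks(bin_rank), ranks(pe))[0, 1])

rng = np.random.default_rng(0)
background = rng.uniform(0.0, 1.0, 10000)   # suitability over available habitat
good_presences = rng.beta(5, 1, 2000)       # presences concentrated in high suitability
```

With presences concentrated in high-suitability cells the index approaches +1; presences concentrated in low-suitability cells drive it towards -1.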

Relevance: 30.00%

Publisher:

Abstract:

The lung possesses specific transport systems that maintain, intra- and extracellularly, the salt and fluid balance necessary for its function. At birth, the lungs rapidly transform into a fluid- (Na⁺-) absorbing organ to enable efficient gas exchange. Alveolar fluid clearance, which depends mainly on sodium transport in alveolar epithelial cells, is an important mechanism by which excess water in the alveoli is reabsorbed during the resolution of pulmonary edema. In this review, we focus on and summarize the role of ENaC in alveolar lung liquid clearance and discuss recent data from mouse models with altered epithelial sodium channel activity in the lung, and more specifically in alveolar fluid clearance. Recent studies of mice with ENaC hyperactivity or with reduced ENaC activity clearly illustrate the impaired lung fluid clearance in these adult mice. Further understanding of the physiological role of ENaC and of the regulatory proteins implicated in salt and water balance in alveolar cells may therefore help to develop new therapeutic strategies to improve gas exchange in pulmonary edema.

Relevance: 30.00%

Publisher:

Abstract:

BACKGROUND: The aim of this study was to explore the predictive value of longitudinal self-reported adherence data on viral rebound. METHODS: Individuals in the Swiss HIV Cohort Study on combined antiretroviral therapy (cART) with RNA <50 copies/ml over the previous 3 months and who were interviewed about adherence at least once prior to 1 March 2007 were eligible. Adherence was defined in terms of missed doses of cART (0, 1, 2 or >2) in the previous 28 days. Viral rebound was defined as RNA >500 copies/ml. Cox regression models with time-independent and time-dependent covariates were used to evaluate time to viral rebound. RESULTS: A total of 2,664 individuals and 15,530 visits were included. Across all visits, missed doses were reported as follows: 1 dose 14.7%, 2 doses 5.1%, >2 doses 3.8%; taking <95% of doses 4.5%; and missing ≥2 consecutive doses 3.2%. In total, 308 (11.6%) patients experienced viral rebound. After controlling for confounding variables, self-reported non-adherence remained significantly associated with the rate of occurrence of viral rebound (compared with zero missed doses: 1 dose, hazard ratio [HR] 1.03, 95% confidence interval [CI] 0.72-1.48; 2 doses, HR 2.17, 95% CI 1.46-3.25; >2 doses, HR 3.66, 95% CI 2.50-5.34). Several variables significantly associated with an increased risk of viral rebound irrespective of adherence were identified: being on a protease inhibitor or triple-nucleoside regimen (compared with a non-nucleoside reverse transcriptase inhibitor), >5 previous cART regimens, seeing a less experienced physician, taking co-medication, and a shorter time virally suppressed. CONCLUSIONS: A simple self-report adherence questionnaire, repeatedly administered, provides a sensitive measure of non-adherence that predicts viral rebound.

Relevance: 30.00%

Publisher:

Abstract:

Traditionally, the common reserving methods used by non-life actuaries are based on the assumption that future claims will behave in the same way as past claims. There are two main sources of variability in the claims development process: variability in the speed with which claims are settled, and variability in claims severity across accident years. Large changes in these processes generate distortions in the estimation of claims reserves. The main objective of this thesis is to provide an indicator that, first, identifies and quantifies these two influences and, second, determines which model is adequate for a specific situation. Two stochastic models were analysed and the predictive distributions of future claims were obtained. The main advantage of stochastic models is that they provide measures of the variability of reserve estimates. The first model (PDM) combines the conjugate Dirichlet-Multinomial family with the Poisson distribution. The second model (NBDM) improves on the first by combining two conjugate families: Poisson-Gamma (for the distribution of ultimate amounts) and Dirichlet-Multinomial (for the distribution of incremental claims payments). The second model was found to capture the variability in reporting speed and in the development of claims severity as a function of two parameters of the above distributions: the shape parameter of the Gamma distribution and the Dirichlet parameter. Depending on the relation between them, we can decide on the adequacy of the claims reserve estimation method. The parameters were estimated by the method of moments and by maximum likelihood. The results were tested on simulated data and then on real data from three lines of business: Property/Casualty, General Liability, and Accident Insurance.

These data include different developments and specificities. The thesis shows that when the Dirichlet parameter is greater than the shape parameter of the Gamma, the model exhibits positive correlation between past and future claims payments, which suggests that the Chain-Ladder method is appropriate for claims reserve estimation. In terms of claims reserves, if the cumulated payments are high, the positive correlation implies high expected future payments and thus high claims reserve estimates. Negative correlation appears when the Dirichlet parameter is lower than the shape parameter of the Gamma, meaning low expected future payments for the same high observed cumulated payments. This corresponds to the situation in which claims are reported rapidly and few further claims are expected. The extreme case arises when all claims are reported at the same time, leading to expected future payments of zero or equal to the aggregated amount of the ultimate paid claims. In this latter case, the Chain-Ladder method is not recommended.
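The sign of the past-future correlation described above can be checked with a small Monte Carlo sketch of the NBDM structure: a Gamma-mixed Poisson ultimate split over two development periods by a Dirichlet-Multinomial. All parameter values below are illustrative, not fitted values from the thesis:

```python
import numpy as np

def past_future_corr(gamma_shape, dirichlet_conc, n_sims=20000,
                     mean_ultimate=50.0, seed=0):
    """Correlation between past and future claim counts when the ultimate
    is Poisson with a Gamma-distributed mean (shape = gamma_shape) and is
    split across two periods by a Dirichlet(dirichlet_conc, dirichlet_conc)
    mixed Multinomial."""
    rng = np.random.default_rng(seed)
    lam = rng.gamma(gamma_shape, mean_ultimate / gamma_shape, n_sims)
    ultimate = rng.poisson(lam)                  # Poisson-Gamma ultimate count
    p = rng.dirichlet([dirichlet_conc, dirichlet_conc], n_sims)
    past = rng.binomial(ultimate, p[:, 0])       # Dirichlet-Multinomial split
    future = ultimate - past
    return float(np.corrcoef(past, future)[0, 1])

# Dirichlet parameter large relative to the Gamma shape -> positive correlation
# (Chain-Ladder-friendly); the reverse ordering -> negative correlation.
```

A large Dirichlet parameter fixes the development pattern while the Gamma mixing makes the ultimate volatile (past and future move together); a large Gamma shape with a small Dirichlet parameter does the opposite.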

Relevance: 30.00%

Publisher:

Abstract:

Laudisa (Found. Phys. 38:1110-1132, 2008) claims that experimental research on the class of non-local hidden-variable theories introduced by Leggett is misguided, because these theories are irrelevant for the foundations of quantum mechanics. I show that Laudisa's arguments fail to establish the pessimistic conclusion he draws from them. In particular, it is not the case that Leggett-inspired research is based on a mistaken understanding of Bell's theorem, nor that previous no-hidden-variable theorems already exclude Leggett's models. Finally, I argue that the framework of Bohmian mechanics brings out the importance of Leggett tests, rather than proving their irrelevance, as Laudisa supposes.

Relevance: 30.00%

Publisher:

Abstract:

During the first hours after release of petroleum at sea, crude oil hydrocarbons partition rapidly into air and water. However, limited information is available about very early evaporation and dissolution processes. We report on the composition of the oil slick during the first day after a permitted, unrestrained 4.3 m³ oil release conducted on the North Sea. Rapid mass transfers of volatile and soluble hydrocarbons were observed, with >50% of ≤C17 hydrocarbons disappearing within 25 h from this oil slick of <10 km² area and <10 μm thickness. For oil sheen, >50% losses of ≤C16 hydrocarbons were observed after 1 h. We developed a mass transfer model to describe the evolution of oil slick chemical composition and water column hydrocarbon concentrations. The model was parametrized based on environmental conditions and hydrocarbon partitioning properties estimated from comprehensive two-dimensional gas chromatography (GC×GC) retention data. The model correctly predicted the observed fractionation of petroleum hydrocarbons in the oil slick resulting from evaporation and dissolution. This is the first report on the broad-spectrum compositional changes in oil during the first day of a spill at the sea surface. Expected outcomes under other environmental conditions are discussed, as well as comparisons to other models.
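As a back-of-envelope check on the reported loss rates: if the combined evaporation and dissolution of a compound are approximated as first-order processes (a strong simplification of the paper's mass transfer model; the rate constants below are illustrative, not fitted values), losing half the slick mass in 25 h corresponds to a combined rate constant of ln 2 / 25 ≈ 0.028 h⁻¹:

```python
import math

def fraction_remaining(k_evap, k_diss, t_hours):
    """First-order loss from the slick: evaporation and dissolution
    rate constants (per hour) simply add in the exponent."""
    return math.exp(-(k_evap + k_diss) * t_hours)

# Rate constant implied by a 25 h half-life (>50% loss of light compounds)
k_half = math.log(2) / 25.0   # ~0.028 per hour
```

Any compound whose combined rate constant exceeds k_half would show the >50% loss at 25 h reported for the ≤C17 fraction.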

Relevance: 30.00%

Publisher:

Abstract:

It is well established that interactions between CD4⁺ T cells and major histocompatibility complex class II (MHCII)-positive antigen-presenting cells (APCs) of hematopoietic origin play key roles both in the maintenance of tolerance and in the initiation and development of autoimmune and inflammatory disorders. In sharp contrast, despite nearly three decades of intensive research, the functional relevance of MHCII expression by non-hematopoietic tissue-resident cells has remained obscure. The widespread assumption that MHCII expression by non-hematopoietic APCs has an impact on autoimmune and inflammatory diseases has in most instances been neither confirmed nor excluded by indisputable in vivo data. Here we review and put into perspective conflicting in vitro and in vivo results on the putative impact of MHCII expression by non-hematopoietic APCs, in both target organs and secondary lymphoid tissues, on the initiation and development of representative autoimmune and inflammatory disorders. Emphasis is placed on the gaps in our knowledge in this field. We also discuss new mouse models, developed on the basis of our understanding of the molecular mechanisms that regulate MHCII expression, that constitute valuable tools for filling these severe gaps in our knowledge of the functions of non-hematopoietic APCs in inflammatory conditions.

Relevance: 30.00%

Publisher:

Abstract:

The Cenozoic sedimentary record revealed by the Integrated Ocean Drilling Program's Arctic Coring Expedition (ACEX) to the Lomonosov Ridge microcontinent in 2004 is characterized by an unconformity attributed to the period 44-18 Ma. According to conventional thermal kinematic models, the microcontinent should have subsided to >1 km depth owing to rifting and subsequent separation from the Barents-Kara Sea margin at 56 Ma. We propose an alternative model incorporating a simple pressure-temperature (P-T) relation for mantle density. Using this model, we can explain the missing stratigraphic section by post-breakup uplift and erosion. The pattern of linear magnetic anomalies and the spreading geometry imply that the generation of oceanic crust in the central Eurasia Basin could have been restricted and confined by non-volcanic thinning of the mantle lithosphere at an early stage (ca. 56-40 Ma). In response to a rise in temperature, the mantle mineral composition may have changed through breakdown of spinel peridotite and formation of less dense plagioclase peridotite. The consequence of lithosphere heating and related mineral phase transitions would be post-breakup uplift followed by rapid subsidence to the deep-water environment observed on the Lomonosov Ridge today.

Relevance: 30.00%

Publisher:

Abstract:

Fructose is mainly consumed with added sugars (sucrose and high-fructose corn syrup), and represents up to 10% of total energy intake in the US and in several European countries. This hexose is essentially metabolized in splanchnic tissues, where it is converted into glucose, glycogen, lactate, and, to a minor extent, fatty acids. In animal models, high-fructose diets cause the development of obesity, insulin resistance, diabetes mellitus, and dyslipidemia. Ectopic lipid deposition in the liver is an early occurrence upon fructose exposure, and is tightly linked to hepatic insulin resistance. In humans, there is strong evidence, based on several intervention trials, that fructose overfeeding increases fasting and postprandial plasma triglyceride concentrations, related to stimulation of hepatic de novo lipogenesis and VLDL-TG secretion together with decreased VLDL-TG clearance. However, in contrast to animal models, fructose intakes as high as 200 g/day in humans only modestly decrease hepatic insulin sensitivity, and have no effect on whole-body (muscle) insulin sensitivity. A possible explanation may be that insulin resistance and dysglycemia develop mostly in the presence of sustained fructose exposure associated with changes in body composition. Such effects are observed with high daily fructose intakes, and there is no solid evidence that fructose, when consumed in moderate amounts, has deleterious effects. There is only limited information regarding the effects of fructose on intrahepatic lipid concentrations. In animal models, high-fructose diets clearly stimulate hepatic de novo lipogenesis and cause hepatic steatosis. In addition, some observations suggest that fructose may trigger hepatic inflammation and stimulate the development of hepatic fibrosis. This raises the possibility that fructose may promote the progression of non-alcoholic fatty liver disease to its more severe forms, i.e. non-alcoholic steatohepatitis and cirrhosis.

In humans, short-term fructose overfeeding stimulates de novo lipogenesis and significantly increases intrahepatic fat concentration, without, however, reaching the proportions encountered in non-alcoholic fatty liver disease. Whether consumption of lower amounts of fructose over prolonged periods may contribute to the pathogenesis of NAFLD has not been convincingly documented in epidemiological studies and remains to be assessed further.

Relevance: 30.00%

Publisher:

Abstract:

Intensity-modulated radiotherapy (IMRT) treatment plan verification by comparison with measured data requires access to the linear accelerator and is time-consuming. In this paper, we propose a method for monitor unit (MU) calculation and plan comparison for step-and-shoot IMRT based on the Monte Carlo code EGSnrc/BEAMnrc. The beamlets of an IMRT treatment plan are individually simulated using Monte Carlo and converted into absorbed dose to water per MU. The dose of the whole treatment can then be expressed through a linear matrix equation of the MUs and the dose per MU of every beamlet. Because the absorbed dose and the MU values must be positive, this equation is solved for the MU values using a non-negative least-squares (NNLS) optimization algorithm. The Monte Carlo plan is formed by multiplying the Monte Carlo absorbed dose to water per MU by the Monte Carlo/NNLS MUs. For validation, treatment plans for several localizations calculated with a commercial treatment planning system (TPS) are compared with the proposed method. The Monte Carlo/NNLS MUs are close to those calculated by the TPS and lead to a treatment dose distribution that is clinically equivalent to the one calculated by the TPS. This procedure can be used for IMRT QA, and further development could allow the technique to be applied to other radiotherapy techniques such as tomotherapy or volumetric modulated arc therapy.
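The MU-fitting step described above reduces to a standard non-negative least-squares problem: stack each beamlet's dose-per-MU distribution as a column of a matrix and solve for the MU vector. A hedged sketch with synthetic numbers (the matrix values and variable names are illustrative, not the paper's data):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_voxels, n_beamlets = 200, 8

# Columns: simulated absorbed dose to water per MU for each beamlet
dose_per_mu = rng.uniform(0.0, 1.0, (n_voxels, n_beamlets))
mu_true = rng.uniform(5.0, 50.0, n_beamlets)   # reference monitor units
plan_dose = dose_per_mu @ mu_true              # dose of the whole treatment

# Solve the linear matrix equation for non-negative MU values
mu_fit, residual = nnls(dose_per_mu, plan_dose)
mc_plan = dose_per_mu @ mu_fit                 # reconstructed plan dose
```

On consistent, noise-free data NNLS recovers the reference MUs exactly; with real Monte Carlo noise the non-negativity constraint is what keeps the fitted MUs physically meaningful.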

Relevance: 30.00%

Publisher:

Abstract:

Background: Patient change talk (CT) during brief motivational interventions (BMI) has been linked with subsequent changes in drinking in clinical settings, but this link has not been clearly established among young people in non-clinical populations. Objective: To determine which of several CT dimensions assessed during an effective BMI delivered in a non-clinical setting to 20-year-old men are associated with drinking 6 months later. Methods: Of 125 individuals receiving a face-to-face BMI session (15.8 ± 5.4 minutes), we recorded and coded a subsample of 42 sessions using the Motivational Interviewing Skill Code 2.1. Each patient change talk utterance was categorized as 'Reason', 'Ability', 'Desire', 'Need', 'Commitment', 'Taking steps', or 'Other'. Each utterance was graded according to its strength (absolute value from 1 to 3) and direction (i.e. towards change (positive sign) or away from change/in favor of the status quo (negative sign)). 'Ability', 'Desire', and 'Need' to change ('ADN') were grouped together, since these codes were too scarce to analyse separately. Mean strength scores over the entire session were computed for each dimension and then dichotomized into towards change (i.e. mean score > 0) and away from change/in favor of the status quo. Negative binomial regression models were used to assess the relationship between CT dimensions and drinking 6 months later, adjusting for drinking at baseline. Results: Compared to subjects with a 'Taking steps' score away from change/in favor of the status quo, subjects with a positive 'Taking steps' score reported significantly less drinking 6 months later (incidence rate ratio [IRR] for drinks per week: 0.56, 95% confidence interval [CI] 0.31, 1.00). The IRR (95% CI) for subjects with a positive 'ADN' score was 0.58 (0.32, 1.03). For subjects with positive 'Reason', 'Commitment', and 'Other' scores, the IRRs (95% CI) were 1.28 (0.77, 2.12), 1.63 (0.85, 3.14), and 1.03 (0.61, 1.72), respectively.

Conclusion: A change talk dimension reflecting steps taken towards change ('Taking steps') is associated with less drinking 6 months later among young men receiving a BMI in a non-clinical setting. Encouraging patients to take steps towards change may be a worthy objective for clinicians and may explain BMI efficacy.

Relevance: 30.00%

Publisher:

Abstract:

It has been repeatedly debated which strategies people rely on in inference. These debates have been difficult to resolve, partially because hypotheses about the decision processes assumed by these strategies have typically been formulated qualitatively, making it hard to test precise quantitative predictions about response times and other behavioral data. One way to increase the precision of strategies is to implement them in cognitive architectures such as ACT-R. Often, however, a given strategy can be implemented in several ways, with each implementation yielding different behavioral predictions. We present and report a study with an experimental paradigm that can help to identify the correct implementations of classic compensatory and non-compensatory strategies such as the take-the-best and tallying heuristics, and the weighted-linear model.
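The strategies named above can each be stated precisely in a few lines, which is what makes their implementations testable. A minimal sketch for binary cue profiles (1 = cue present, 0 = absent; the cue orders and weights are illustrative, not from the study):

```python
def take_the_best(cues_a, cues_b, validity_order):
    """Non-compensatory: check cues in descending validity and decide on
    the first cue that discriminates between the two options."""
    for i in validity_order:
        if cues_a[i] != cues_b[i]:
            return "A" if cues_a[i] > cues_b[i] else "B"
    return "guess"

def tallying(cues_a, cues_b):
    """Compensatory: count positive cues with equal weights; higher tally wins."""
    sa, sb = sum(cues_a), sum(cues_b)
    return "A" if sa > sb else "B" if sb > sa else "guess"

def weighted_linear(cues_a, cues_b, weights):
    """Compensatory: weighted sum of cue values; higher score wins."""
    sa = sum(w * c for w, c in zip(weights, cues_a))
    sb = sum(w * c for w, c in zip(weights, cues_b))
    return "A" if sa > sb else "B" if sb > sa else "guess"
```

For cues_a = [1, 0, 0] and cues_b = [0, 1, 1] with validity order [0, 1, 2], take-the-best decides "A" on the first discriminating cue while tallying compensates and picks "B" — exactly the kind of diverging prediction an experimental paradigm can exploit.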

Relevance: 30.00%

Publisher:

Abstract:

Depth-averaged velocities and unit discharges within a 30 km reach of one of the world's largest rivers, the Rio Parana, Argentina, were simulated using three hydrodynamic models with different process representations: a reduced-complexity (RC) model that neglects most of the physics governing fluid flow, a two-dimensional model based on the shallow water equations, and a three-dimensional model based on the Reynolds-averaged Navier-Stokes equations. Flow characteristics simulated using all three models were compared with data obtained by acoustic Doppler current profiler surveys at four cross sections within the study reach. This analysis demonstrates that, surprisingly, the performance of the RC model is generally equal to, and in some instances better than, that of the physics-based models in terms of the statistical agreement between simulated and measured flow properties. In addition, in contrast to previous applications of RC models, the present study demonstrates that the RC model can successfully predict measured flow velocities. The strong performance of the RC model reflects, in part, the simplicity of the depth-averaged mean flow patterns within the study reach and the dominant role of channel-scale topographic features in controlling the flow dynamics. Moreover, the very low water surface slopes that typify large sand-bed rivers enable flow depths to be estimated reliably in the RC model using a simple fixed-lid planar water surface approximation. This approach overcomes a major problem encountered in the application of RC models in environments characterised by shallow flows and steep bed gradients. The RC model is four orders of magnitude faster than the physics-based models when performing steady-state hydrodynamic calculations. However, the iterative nature of the RC model calculations implies a reduction in computational efficiency relative to some other RC models.

A further implication is that, if used to simulate channel morphodynamics, the present RC model may offer only a marginal advantage in computational efficiency over approaches based on the shallow water equations. These observations illustrate the trade-off between model realism and efficiency that is a key consideration in RC modelling. Moreover, this outcome highlights a need to rethink the use of RC morphodynamic models in fluvial geomorphology and to move away from existing grid-based approaches, such as the popular cellular automata (CA) models, that remain essentially reductionist in nature. In the case of the world's largest sand-bed rivers, this might be achieved by implementing the RC model outlined here as one element within a hierarchical modelling framework enabling computationally efficient simulation of the morphodynamics of large rivers over millennial time scales.

Relevance: 30.00%

Publisher:

Abstract:

Continuous field mapping has to address two conflicting remote sensing requirements when collecting training data. On the one hand, continuous field mapping trains fractional land cover and thus favours mixed training pixels. On the other hand, the spectral signature should preferably be distinct, which favours pure training pixels. The aim of this study was to evaluate the sensitivity of training data distribution along fractional and spectral gradients on the resulting mapping performance. We derived four continuous fields (tree, shrub/herb, bare, water) from aerial photographs as response variables and processed corresponding spectral signatures from multitemporal Landsat 5 TM data as explanatory variables. Subsequent controlled experiments along fractional cover gradients were based on generalised linear models. The resulting fractional and spectral distributions differed between individual continuous fields, but could be satisfactorily trained and mapped. Pixels with fractional cover, or without the respective cover, were much more critical than pure full-cover pixels. The error distribution of the continuous field models was non-uniform with respect to the horizontal and vertical spatial distribution of target fields. We conclude that sampling for continuous field training data should be based on extent and densities in the fractional and spectral space, rather than in the real spatial space. Consequently, adequate training plots are most probably not systematically distributed in the real spatial space, but cover the gradient and covariate structure of the fractional and spectral space well.
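Generalised linear models of the kind used here can be trained directly on fractional responses via a binomial family with a logit link, fitted by iteratively reweighted least squares (IRLS). A self-contained sketch on synthetic data (one made-up spectral band and made-up coefficients, not the study's variables):

```python
import numpy as np

def fit_fractional_glm(x, y, n_iter=25):
    """IRLS for a binomial GLM with a logit link, trained directly on
    fractional (0-1) responses -- a sketch of fitting a continuous
    field on mixed pixels. Returns [intercept, slope]."""
    X = np.column_stack([np.ones(len(x)), x])
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))      # fitted fractional cover
        w = mu * (1.0 - mu)                  # IRLS working weights
        z = eta + (y - mu) / w               # working response
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))
    return beta

# Synthetic example: noiseless fractional cover along one spectral band
rng = np.random.default_rng(2)
band = rng.uniform(-2.0, 2.0, 500)
true_beta = np.array([0.5, -1.0])
cover = 1.0 / (1.0 + np.exp(-(true_beta[0] + true_beta[1] * band)))
beta_hat = fit_fractional_glm(band, cover)
```

Because the working response is driven by the fractional values themselves, mixed pixels along the cover gradient contribute information directly, which is the trade-off against spectral purity that the study examines.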

Relevance: 30.00%

Publisher:

Abstract:

Glial cell line-derived neurotrophic factor (GDNF) is one of the candidate molecules among the neurotrophic factors proposed for potential treatment of retinitis pigmentosa (RP). It must be administered repeatedly, or through sustained-release systems, to exert prolonged neuroprotective effects. In the dystrophic Royal College of Surgeons (RCS) rat model of RP, we found that endogenous GDNF levels dropped during the time course of retinal degeneration, opening a therapeutic window for GDNF supplementation. We showed that after a single electrotransfer of 30 μg of GDNF-encoding plasmid into the rat ciliary muscle, GDNF was produced for at least 7 months. Morphometric, electroretinographic and optokinetic analyses showed that this continuous release of GDNF delayed the loss of photoreceptors (PRs), as well as of retinal function, until at least 70 days of age in RCS rats. Unexpectedly, increasing the GDNF secretion level accelerated PR degeneration and the loss of electrophysiological responses. This is the first report: (i) demonstrating the efficacy of GDNF delivery through non-viral gene therapy in RP; (ii) establishing the efficacy of intravitreal administration of GDNF in RP associated with a mutation in the retinal pigment epithelium; and (iii) warning against potential toxic effects of GDNF within the eye/retina.