973 results for Probability Weight: Rank-dependent Utility
Abstract:
We detail the automatic construction of R matrices corresponding to (the tensor products of) the (0̇_m|α̇_n) families of highest-weight representations of the quantum superalgebras Uq[gl(m|n)]. These representations are irreducible, contain a free complex parameter α, and are 2^(mn)-dimensional. Our R matrices are actually (sparse) rank 4 tensors, containing a total of 2^(4mn) components, each of which is in general an algebraic expression in the two complex variables q and α. Although the constructions are straightforward, we describe them in full here, to fill a perceived gap in the literature. As the algorithms are generally impracticable for manual calculation, we have implemented the entire process in MATHEMATICA, illustrating our results with Uq[gl(3|1)]. (C) 2002 Published by Elsevier Science B.V.
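The defining property any such construction must deliver is that the resulting R matrix satisfies the quantum Yang-Baxter equation. The sketch below is not the paper's MATHEMATICA pipeline; it is a minimal numerical check of that equation for the standard 4x4 R matrix of Uq[gl(2)], a toy stand-in for the much larger Uq[gl(m|n)] tensors, with q fixed at a generic numeric value.

```python
# Minimal numerical check of the quantum Yang-Baxter equation
# R12 R13 R23 = R23 R13 R12 for the standard Uq[gl(2)] R matrix.
# (Illustrative only; the paper's R matrices are 2^(4mn)-component tensors.)
import numpy as np

q = 1.7                               # generic numeric deformation parameter
lam = q - 1 / q

# Standard R matrix in the fundamental representation of Uq[gl(2)].
R = np.array([[q, 0,   0, 0],
              [0, 1, lam, 0],
              [0, 0,   1, 0],
              [0, 0,   0, q]], dtype=float)

I2 = np.eye(2)
P = np.eye(4)[[0, 2, 1, 3]]           # swap operator on C^2 (x) C^2
P23 = np.kron(I2, P)                  # swaps tensor factors 2 and 3

R12 = np.kron(R, I2)                  # R acting on factors 1 and 2
R23 = np.kron(I2, R)                  # R acting on factors 2 and 3
R13 = P23 @ R12 @ P23                 # conjugate R12 to act on factors 1 and 3

print(np.allclose(R12 @ R13 @ R23, R23 @ R13 @ R12))  # True
```

The same allclose-style check is how one would validate an automatically constructed R matrix, whatever its size.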
Abstract:
The effect of dietary chromium supplementation on glucose and insulin metabolism in healthy, non-obese cats was evaluated. Thirty-two cats were randomly divided into four groups and fed experimental diets consisting of a standard diet with 0 ppb (control), 150 ppb, 300 ppb, or 600 ppb added chromium as chromium tripicolinate. Intravenous glucose tolerance, insulin tolerance, and insulin sensitivity tests with minimal model analysis were performed before and after 6 weeks of feeding the test diets. During the glucose tolerance test, glucose concentrations, area under the glucose concentration-time curve, and glucose half-life (300 ppb only) were significantly lower after the trial in cats supplemented with 300 ppb and 600 ppb chromium, compared with values before the trial. Fasting glucose concentrations measured on a different day in the biochemistry profile were also significantly lower after supplementation with 600 ppb chromium. There were no significant differences in insulin concentrations or indices in either the glucose or insulin tolerance tests following chromium supplementation, nor were there any differences between groups before or after the dietary trial. Importantly, this study has shown a small but significant dose-dependent improvement in glucose tolerance in healthy, non-obese cats supplemented with dietary chromium. Further long-term studies are warranted to determine if the addition of chromium to feline diets is advantageous. Cats most likely to benefit are those with glucose intolerance and insulin resistance from lack of exercise, obesity, and old age. Healthy cats at risk of glucose intolerance and diabetes from underlying low insulin sensitivity or genetic factors may also benefit from long-term chromium supplementation. (C) 2002 ESFM and AAFP.
Abstract:
The extent to which density-dependent processes regulate natural populations is the subject of an ongoing debate. We contribute evidence to this debate showing that density-dependent processes influence the population dynamics of the ectoparasite Aponomma hydrosauri (Acari: Ixodidae), a tick species that infests reptiles in Australia. The first piece of evidence comes from an unusually long-term dataset on the distribution of ticks among individual hosts. If density-dependent processes are influencing either host mortality or vital rates of the parasite population, and those distributions can be approximated with negative binomial distributions, then general host-parasite models predict that the aggregation coefficient of the parasite distribution will increase with the average intensity of infections. We fit negative binomial distributions to the frequency distributions of ticks on hosts, and find that the estimated aggregation coefficient k increases with increasing average tick density. This pattern indirectly implies that one or more vital rates of the tick population must be changing with increasing tick density, because mortality rates of the tick's main host, the sleepy lizard, Tiliqua rugosa, are unaffected by changes in tick burdens. Our second piece of evidence is a re-analysis of experimental data on the attachment success of individual ticks to lizard hosts using generalized linear modelling. The probability of successful engorgement decreases with increasing numbers of ticks attached to a host. This is direct evidence of a density-dependent process that could lead to an increase in the aggregation coefficient of tick distributions described earlier. The population-scale increase in the aggregation coefficient is indirect evidence of a density-dependent process or processes sufficiently strong to produce a population-wide pattern, and thus also likely to influence population regulation. The direct observation of a density-dependent process is evidence of at least part of the responsible mechanism.
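The population-scale evidence rests on fitting negative binomial distributions to tick counts per host and tracking the estimated aggregation coefficient k. A minimal sketch of such a maximum-likelihood fit, assuming SciPy and using illustrative counts rather than the study's data:

```python
# Fit a negative binomial to ticks-per-host counts by maximum likelihood
# and report the aggregation coefficient k (small k = strong aggregation).
# The counts below are illustrative, not the study's data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import nbinom

counts = np.array([0, 0, 0, 1, 1, 2, 2, 3, 5, 8, 13, 21])

def neg_loglik(log_params):
    k, mean = np.exp(log_params)      # optimize on the log scale: k, mean > 0
    p = k / (k + mean)                # scipy's (n, p) parameterization
    return -nbinom.logpmf(counts, k, p).sum()

res = minimize(neg_loglik, x0=np.log([1.0, counts.mean()]), method="Nelder-Mead")
k_hat, mean_hat = np.exp(res.x)
print(f"k = {k_hat:.3f}, mean = {mean_hat:.3f}")
# The study's population-scale pattern is k increasing with mean tick density.
```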
Abstract:
Lipid homeostasis is controlled by the peroxisome proliferator-activated receptors (PPARalpha, -beta/delta, and -gamma) that function as fatty acid-dependent DNA-binding proteins that regulate lipid metabolism. In vitro and in vivo genetic and pharmacological studies have demonstrated that PPARalpha regulates lipid catabolism. In contrast, PPARgamma regulates the opposing process of lipid storage. However, relatively little is known about PPARbeta/delta in the context of target tissues, target genes, lipid homeostasis, and functional overlap with PPARalpha and -gamma. PPARbeta/delta, a very low-density lipoprotein sensor, is abundantly expressed in skeletal muscle, a major mass peripheral tissue that accounts for approximately 40% of total body weight. Skeletal muscle is a metabolically active tissue, and a primary site of glucose metabolism, fatty acid oxidation, and cholesterol efflux. Consequently, it has a significant role in insulin sensitivity, the blood-lipid profile, and lipid homeostasis. Surprisingly, the role of PPARbeta/delta in skeletal muscle has not been investigated. We utilize selective PPARalpha, -beta/delta, -gamma, and liver X receptor agonists in skeletal muscle cells to understand the functional role of PPARbeta/delta, and the complementary and/or contrasting roles of PPARs in this major mass peripheral tissue. Activation of PPARbeta/delta by GW501516 in skeletal muscle cells induces the expression of genes involved in preferential lipid utilization, beta-oxidation, cholesterol efflux, and energy uncoupling. Furthermore, we show that treatment of muscle cells with GW501516 increases apolipoprotein-A1-specific efflux of intracellular cholesterol, thus identifying this tissue as an important target of PPARbeta/delta agonists. Interestingly, fenofibrate induces genes involved in fructose uptake and glycogen formation. In contrast, rosiglitazone-mediated activation of PPARgamma induces gene expression associated with glucose uptake, fatty acid synthesis, and lipid storage. Furthermore, we show that the PPAR-dependent reporter in the muscle carnitine palmitoyltransferase-1 promoter is directly regulated by PPARbeta/delta, and not PPARalpha, in skeletal muscle cells in a PPARgamma coactivator-1-dependent manner. This study demonstrates that PPARs have distinct roles in skeletal muscle cells with respect to the regulation of lipid, carbohydrate, and energy homeostasis. Moreover, we surmise that PPARbeta/delta agonists would increase fatty acid catabolism, cholesterol efflux, and energy expenditure in muscle, and speculate that selective activators of PPARbeta/delta may have therapeutic utility in the treatment of hyperlipidemia, atherosclerosis, and obesity.
Abstract:
Given the heterogeneity of effect sizes within the population for any treatment, identifying moderators of outcomes is critical [1]. In weight management programs, there is high individual variability in weight loss and overall modest success [2]. Some people will adopt and sustain the attitudes and behaviors associated with weight loss, while others will not [3]. Being able to predict weight loss outcomes from a subject's baseline information alone would be very valuable [4,5]. It would make it possible to:
- better match treatments to individuals;
- identify the participants with a low probability of success (or potential dropouts) in a given treatment and direct them to alternative therapies;
- target limited resources to those most likely to succeed;
- increase the cost-effectiveness and improve the success rates of the programs.
Few studies have been dedicated to describing baseline predictors of treatment success. The Healthy Weight for Life (USA) study is one of the few. Its findings are now being cross-validated in Portuguese samples. This paper describes these cross-cultural comparisons.
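As a concrete illustration of the baseline-only prediction the list above motivates, the sketch below fits a logistic regression to entirely synthetic data; the feature names and effect sizes are assumptions for illustration, not the study's model.

```python
# Baseline-only prediction of treatment "success" with logistic regression.
# All features, coefficients, and labels are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 300
X = np.column_stack([
    rng.normal(31, 4, n),   # hypothetical baseline BMI
    rng.normal(3, 1, n),    # hypothetical number of prior weight-loss attempts
    rng.normal(0, 1, n),    # hypothetical motivation score (standardized)
])
logit = -4 + 0.08 * X[:, 0] - 0.3 * X[:, 1] + 0.9 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))     # synthetic success labels

model = LogisticRegression(max_iter=1000)
print(cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
```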
Abstract:
We calculate the equilibrium thermodynamic properties, percolation threshold, and cluster distribution functions for a model of associating colloids, which consists of hard spherical particles having on their surfaces three short-ranged attractive sites (sticky spots) of two different types, A and B. The thermodynamic properties are calculated using Wertheim's perturbation theory of associating fluids. This also allows us to find the onset of self-assembly, which can be quantified by the maxima of the specific heat at constant volume. The percolation threshold is derived, under the no-loop assumption, for the correlated bond model: in all cases there are two percolated phases that become identical at a critical point, when one exists. Finally, the cluster size distributions are calculated by mapping the model onto an effective model, characterized by a state-dependent functionality f̄ and a unique bonding probability p̄. The mapping is based on the asymptotic limit of the cluster distribution functions of the generic model, and the effective parameters are defined through the requirement that the equilibrium cluster distributions of the true and effective models have the same number-averaged and weight-averaged sizes at all densities and temperatures. We also study the model numerically in the case where BB interactions are missing. In this limit, AB bonds either provide branching between A-chains (Y-junctions) if ε_AB/ε_AA is small, or drive the formation of a hyperbranched polymer if ε_AB/ε_AA is large. We find that the theoretical predictions describe the numerical data quite accurately, especially in the region where Y-junctions are present. There is fairly good agreement between theoretical and numerical results for both the thermodynamic (number of bonds and phase coexistence) and the connectivity properties of the model (cluster size distributions and percolation locus).
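For intuition about the effective model with functionality f̄ and bonding probability p̄: in the loopless limit, the classic Flory-Stockmayer formulas give the number- and weight-averaged cluster sizes and the percolation threshold of an f-functional system in closed form. A short sketch of those textbook formulas (not the paper's correlated-bond result):

```python
# Flory-Stockmayer (loopless) cluster-size averages for f-functional units
# with bond probability p; textbook formulas, shown for illustration only.
def number_avg_size(f, p):
    return 1.0 / (1.0 - f * p / 2.0)        # s_n = 1 / (1 - f p / 2)

def weight_avg_size(f, p):
    return (1.0 + p) / (1.0 - (f - 1) * p)  # s_w diverges at the gel point

def percolation_threshold(f):
    return 1.0 / (f - 1)                    # p_c = 1 / (f - 1)

f = 3                                       # three sticky spots per particle
print(percolation_threshold(f))             # 0.5
for p in (0.1, 0.3, 0.49):
    print(p, number_avg_size(f, p), weight_avg_size(f, p))
```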
Abstract:
Master's degree in Cardiovascular Diagnostic and Intervention Technology. Area of specialization: Cardiovascular Ultrasonography.
Abstract:
OBJECTIVE: To examine whether the low birth weight (LBW) paradox exists in Brazil. METHODS: LBW and cesarean section rates between 1995 and 2007 were estimated based on data from SINASC (Brazilian Live Births Database). Infant mortality rates (IMRs) were obtained using an indirect method that corrects for underreporting. Schooling information was obtained from census data. Trends in the LBW rate were assessed using joinpoint regression models. The correlations between the LBW rate and other indicators were graphically assessed by lowess regression and tested using Spearman's rank correlation. RESULTS: In Brazil, LBW rate trends were non-linear and non-significant: the rate dropped from 7.9% in 1995 to 7.7% in 2000, then increased to 8.2% in 2003 and remained nearly steady thereafter, at 8.2% in 2007. However, trends varied among Brazilian regions: there were significant increases in the North from 1999 to 2003 (2.7% per year), and in the South (1.0% per year) and Central-West (0.6% per year) regions from 1995 to 2007. For the entire period studied, higher LBW rates and lower IMRs were seen in more developed compared to less developed regions. In Brazilian states in 2005, the higher the IMR, the lower the LBW rate (p=0.009); the lower the low-schooling rate, the lower the LBW rate (p=0.007); and the higher the number of neonatal intensive care beds per 1,000 live births, the higher the LBW rate (p=0.036). CONCLUSIONS: The low birth weight paradox was seen in Brazil. The LBW rate is increasing in some Brazilian regions. Regional differences in LBW rates seem to be more associated with the availability of perinatal care services than with underlying social conditions.
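The correlation tests reported above are Spearman rank correlations; a minimal sketch with SciPy, using illustrative state-level numbers rather than the study's data:

```python
# Spearman's rank correlation between state-level LBW rates and IMRs.
# Values are illustrative only; a significant negative rho is the
# signature of the low birth weight paradox.
import numpy as np
from scipy.stats import spearmanr

lbw_rate = np.array([9.1, 8.7, 8.4, 8.0, 7.6, 7.2, 6.9])    # % low birth weight
imr = np.array([14.0, 16.5, 18.0, 21.0, 24.5, 27.0, 30.0])  # deaths per 1,000

rho, p_value = spearmanr(lbw_rate, imr)
print(f"rho = {rho:.3f}, p = {p_value:.4f}")
```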
Abstract:
In this study, the concentration probability distributions of 82 pharmaceutical compounds detected in the effluents of 179 European wastewater treatment plants were computed and inserted into a multimedia fate model. The comparative ecotoxicological impact of the direct emission of these compounds from wastewater treatment plants on freshwater ecosystems, based on a potentially affected fraction (PAF) of species approach, was assessed in order to rank the compounds by priority. As many pharmaceuticals are acids or bases, the multimedia fate model accounts for regressions to estimate pH-dependent fate parameters. An uncertainty analysis was performed by means of Monte Carlo analysis, which included the uncertainty of fate and ecotoxicity model input variables, as well as the spatial variability of landscape characteristics on the European continental scale. Several pharmaceutical compounds were identified as being of greatest concern, including 7 analgesics/anti-inflammatories, 3 β-blockers, 3 psychiatric drugs, and 1 each from 6 other therapeutic classes. The fate and impact modelling relied extensively on estimated data, given that most of these compounds have little or no experimental fate or ecotoxicity data available, as well as limited reported occurrence in effluents. The contribution of estimated model input variables to the variance of the freshwater ecotoxicity impact, as well as the lack of experimental abiotic degradation data for most compounds, helped in establishing priorities for further testing. Generally, the effluent concentration and the ecotoxicity effect factor were the model input variables with the most significant effect on the uncertainty of the output results.
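A toy version of the Monte Carlo uncertainty propagation described above, assuming lognormal input distributions and a deliberately simplified impact score (concentration x fate factor / effect concentration); none of the parameters are the study's:

```python
# Toy Monte Carlo propagation of input uncertainty through a one-line
# "fate and effect" model; all distributions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

conc = rng.lognormal(np.log(1.0), 1.0, n)     # effluent concentration, ug/L
fate = rng.lognormal(np.log(0.2), 0.5, n)     # fate factor (persistence proxy)
effect = rng.lognormal(np.log(50.0), 1.2, n)  # hazardous concentration, ug/L

impact = conc * fate / effect                 # dimensionless impact score
lo, med, hi = np.percentile(impact, [2.5, 50, 97.5])
print(f"median = {med:.4f}, 95% interval = [{lo:.4f}, {hi:.4f}]")
```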
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], the spectral signature matching [21], the spectral angle mapper [22], and the subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of selecting pixels to play the role of mixed sources is not straightforward.
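Before turning to the second approach, here is a minimal sketch of the constrained least-squares unmixing of a single pixel mentioned above: nonnegativity is handled by NNLS, and the sum-to-one constraint is enforced softly by a standard row-augmentation trick. The endmember matrix and abundances are random stand-ins.

```python
# Fully constrained least-squares unmixing of one pixel under the linear
# mixing model y = M a + noise, with a >= 0 and sum(a) = 1.
# M and the true abundances are random stand-ins for illustration.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
bands, p = 50, 3                        # spectral bands, endmembers
M = rng.uniform(0, 1, (bands, p))       # endmember signatures (assumed known)
a_true = np.array([0.6, 0.3, 0.1])      # ground-truth abundance fractions
y = M @ a_true + 0.001 * rng.standard_normal(bands)

delta = 1e3                             # weight of the sum-to-one constraint
M_aug = np.vstack([M, delta * np.ones((1, p))])
y_aug = np.concatenate([y, [delta]])

a_hat, _ = nnls(M_aug, y_aug)           # nonnegative least squares
print(a_hat)                            # close to [0.6, 0.3, 0.1]
```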
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA proceeds in two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications—namely, signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL)-based algorithm [55].
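The dependence problem described above is easy to reproduce numerically: draw abundances from a Dirichlet distribution (rows sum to one, hence the sources are dependent), mix them linearly, and ask FastICA to separate them. A sketch assuming scikit-learn; recovery is typically imperfect.

```python
# Why ICA struggles with abundance sources: the sum-to-one constraint makes
# them statistically dependent. Synthetic data, illustrative parameters.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n_pixels, p, bands = 5000, 3, 10
A = rng.dirichlet([0.8, 0.8, 0.8], size=n_pixels)  # abundances, rows sum to 1
M = rng.uniform(0, 1, (bands, p))                  # endmember signatures
X = A @ M.T + 0.001 * rng.standard_normal((n_pixels, bands))

S_hat = FastICA(n_components=p, random_state=0).fit_transform(X)

# Best absolute correlation of each true abundance with any estimated source:
corr = np.abs(np.corrcoef(A.T, S_hat.T)[:p, p:])
print(corr.max(axis=1))   # typically noticeably below 1: separation is inexact
```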
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
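A sketch of the generative side of the Section 6.7 model: abundances drawn from a mixture of Dirichlet densities satisfy positivity and full additivity by construction. The parameters below are illustrative, and the EM-based inference of the mixing matrix is not shown.

```python
# Generate abundances from a two-component mixture of Dirichlet densities
# and mix them linearly; illustrative parameters only.
import numpy as np

rng = np.random.default_rng(7)
n_pixels, bands = 2000, 20
alphas = [np.array([9.0, 2.0, 1.0]),   # component 1: rich in endmember 1
          np.array([1.0, 3.0, 6.0])]   # component 2: rich in endmember 3
weights = [0.4, 0.6]

comp = rng.choice(2, size=n_pixels, p=weights)
A = np.stack([rng.dirichlet(alphas[c]) for c in comp])  # abundance fractions
assert np.allclose(A.sum(axis=1), 1.0)                  # full additivity holds

M = rng.uniform(0, 1, (bands, 3))                       # endmember signatures
X = A @ M.T + 0.001 * rng.standard_normal((n_pixels, bands))
print(X.shape)
```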
Abstract:
The intensification of agricultural productivity is an important challenge worldwide. However, environmental stressors can pose challenges to this intensification. The progressive occurrence of the cyanotoxins cylindrospermopsin (CYN) and microcystin-LR (MC-LR) as a potential consequence of eutrophication and climate change is of increasing concern in the agricultural sector, because it has been reported that these cyanotoxins exert harmful effects in crop plants. A proteomic-based approach has been shown to be a suitable tool for the detection and identification of the primary responses of organisms exposed to cyanotoxins. The aim of this study was to compare the leaf-proteome profiles of lettuce plants exposed to environmentally relevant concentrations of CYN and a MC-LR/CYN mixture. Lettuce plants were exposed to 1, 10, and 100 μg/l CYN and a MC-LR/CYN mixture for five days. The proteins of lettuce leaves were separated by two-dimensional electrophoresis (2-DE), and those that were differentially abundant were then identified by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF/TOF MS). The biological functions of the proteins that were most represented in both experiments were photosynthesis and carbon metabolism and stress/defense response. Proteins involved in protein synthesis and signal transduction were also highly represented in the MC-LR/CYN experiment. Although distinct protein abundance patterns were observed in both experiments, the effects appear to be concentration-dependent, and the effects of the mixture were clearly stronger than those of CYN alone. The obtained results highlight the putative tolerance of lettuce to CYN at concentrations up to 100 μg/l. Furthermore, the combination of CYN with MC-LR at low concentrations (1 μg/l) stimulated a significant increase in the fresh weight (fr. wt) of lettuce leaves and, at the proteomic level, resulted in an increase in the abundance of a high number of proteins. In contrast, many proteins exhibited a decrease in abundance or were absent in the gels from the simultaneous exposure to 10 and 100 μg/l MC-LR/CYN; in the latter, a significant decrease in the fr. wt of lettuce leaves was also observed. These findings provide important insights into the molecular mechanisms of the lettuce response to CYN and MC-LR/CYN and may contribute to the identification of potential protein markers of exposure and of proteins that may confer tolerance to CYN and MC-LR/CYN. Furthermore, because lettuce is an important crop worldwide, this study may improve our understanding of the potential impact of these cyanotoxins on its quality traits (e.g., presence of allergenic proteins).
Abstract:
OBJECTIVES: Nevirapine is widely used for the treatment of HIV-1 infection; however, its chronic use has been associated with severe liver and skin toxicity. Women are at increased risk for these toxic events, but the reasons for the sex-related differences are unclear. Disparities in the biotransformation of nevirapine and the generation of toxic metabolites between men and women might be the underlying cause. The present work aimed to explore sex differences in nevirapine biotransformation as a potential factor in nevirapine-induced toxicity. METHODS: All included subjects were adults who had been receiving 400 mg of nevirapine once daily for at least 1 month. Blood samples were collected and the levels of nevirapine and its phase I metabolites were quantified by HPLC. Anthropometric and clinical data, and nevirapine metabolite profiles, were assessed for sex-related differences. RESULTS: A total of 52 patients were included (63% were men). Body weight was lower in women (P = 0.028) and female sex was associated with higher alkaline phosphatase (P = 0.036) and lactate dehydrogenase (P = 0.037) levels. The plasma concentrations of nevirapine (P = 0.030) and the metabolite 3-hydroxy-nevirapine (P = 0.035), as well as the proportions of the metabolites 12-hydroxy-nevirapine (P = 0.037) and 3-hydroxy-nevirapine (P = 0.001), were higher in women, when adjusted for body weight. CONCLUSIONS: There was a sex-dependent variation in nevirapine biotransformation, particularly in the generation of the 12-hydroxy-nevirapine and 3-hydroxy-nevirapine metabolites. These data are consistent with the sex-dependent formation of toxic reactive metabolites, which may contribute to the sex-dependent dimorphic profile of nevirapine toxicity.
Abstract:
Colour polymorphism in vertebrates is usually under genetic control and may be associated with variation in physiological traits. The melanocortin 1 receptor (Mc1r) has been repeatedly implicated in melanin-based pigmentation, but it was thought to have few other physiological effects. However, recent pharmacological studies suggest that MC1R could regulate aspects of immunity. We investigated whether variation at Mc1r underpins plumage colouration in the Eleonora's falcon. We also examined whether nestlings of the different morphs differed in their inflammatory response induced by phytohemagglutinin (PHA). Variation in colouration was due to a deletion of four amino acids at the Mc1r gene. Cellular immune response was morph specific. In males, but not in females, dark nestlings mounted a lower PHA response than pale ones. Although correlative, our results raise the neglected possibility that MC1R has pleiotropic effects, suggesting a potential role of immune capacity and pathogen pressure in the maintenance of colour polymorphism in this species.
Abstract:
Although extended secondary prophylaxis with low-molecular-weight heparin was recently shown to be more effective than warfarin for cancer-related venous thromboembolism, its cost-effectiveness compared to traditional prophylaxis with warfarin is uncertain. We built a decision analytic model to evaluate the clinical and economic outcomes of a 6-month course of low-molecular-weight heparin or warfarin therapy in 65-year-old patients with cancer-related venous thromboembolism. We used probability estimates and utilities reported in the literature and published cost data. Using a US societal perspective, we compared strategies based on quality-adjusted life-years (QALYs) and lifetime costs. The incremental cost-effectiveness ratio of low-molecular-weight heparin compared with warfarin was 149,865 dollars/QALY. Low-molecular-weight heparin yielded a quality-adjusted life expectancy of 1.097 QALYs at a cost of 15,329 dollars. Overall, 46% (7,108 dollars) of the total costs associated with low-molecular-weight heparin were attributable to pharmacy costs. Although the low-molecular-weight heparin strategy achieved a higher incremental quality-adjusted life expectancy than the warfarin strategy (a difference of 0.051 QALYs), this clinical benefit was offset by a substantial cost increment of 7,609 dollars. Cost-effectiveness results were sensitive to variation of the early mortality risks associated with low-molecular-weight heparin and warfarin and to the pharmacy costs of low-molecular-weight heparin. Based on the best available evidence, secondary prophylaxis with low-molecular-weight heparin is more effective than warfarin for cancer-related venous thromboembolism. However, because of the substantial pharmacy costs of extended low-molecular-weight heparin prophylaxis in the US, this treatment is relatively expensive compared with warfarin.
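The headline figure is an incremental cost-effectiveness ratio, i.e. incremental cost divided by incremental effectiveness. Recomputing it from the abstract's rounded numbers (the reported 149,865 dollars/QALY comes from the model's unrounded outputs):

```python
# ICER = incremental cost / incremental QALYs, using the rounded figures above.
delta_cost = 7609      # dollars: LMWH strategy minus warfarin strategy
delta_qaly = 0.051     # QALYs gained with LMWH
print(f"ICER = {delta_cost / delta_qaly:,.0f} dollars/QALY")  # ~149,196
```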
Abstract:
We use basic probability theory and simple replicable electronic search experiments to evaluate some reported “myths” surrounding the origins and evolution of the QWERTY standard. The resulting evidence is strongly supportive of arguments put forward by Paul A. David (1985) and W. Brian Arthur (1989) that QWERTY was path dependent with its course of development strongly influenced by specific historical circumstances. The results also include the unexpected finding that QWERTY was as close to an optimal solution to a serious but transient problem as could be expected with the resources at the disposal of its designers in 1873.