989 results for stochastic load factor
Abstract:
The objective of the present study was to determine the prevalence of electrolyte disturbances in AIDS patients developing acute kidney injury in the hospital setting, and whether such disturbances constitute a risk factor for nephrotoxic and ischemic injury. A prospective, observational cohort study was carried out. Hospitalized AIDS patients were evaluated for age; gender; coinfection with hepatitis; diabetes mellitus; hypertension; time since HIV seroconversion; CD4 count; HIV viral load; proteinuria; serum levels of creatinine, urea, sodium, potassium and magnesium; antiretroviral use; nephrotoxic drug use; sepsis; intensive care unit (ICU) admission; and the need for dialysis. Each of these characteristics was correlated with the development of acute kidney injury, with recovery of renal function and with survival. Fifty-four patients developed acute kidney injury: 72% were males, 59% had been HIV-infected for >5 years, 72% had CD4 counts <200 cells/mm³, 87% developed electrolyte disturbances, 33% recovered renal function, and 56% survived. ICU admission, dialysis, sepsis and hypomagnesemia were all significantly associated with nonrecovery of renal function and with mortality. In the multivariate analysis, hypomagnesemia remained significantly associated with both nonrecovery of renal function and mortality. The risks of nonrecovery of renal function and of death were 6.94 and 6.92 times greater, respectively, for patients with hypomagnesemia. In hospitalized AIDS patients, hypomagnesemia is a risk factor for nonrecovery of renal function and for in-hospital mortality. To determine whether hypomagnesemia is a determinant or simply a marker of critical illness, further studies involving magnesium supplementation in AIDS patients are warranted.
Abstract:
Common variants of the transcription factor 7-like 2 (TCF7L2) gene have been found to be associated with type 2 diabetes in different ethnic groups. The Japanese-Brazilian population has one of the highest prevalence rates of diabetes. Therefore, the aim of the present study was to assess whether two single-nucleotide polymorphisms (SNPs) of TCF7L2, rs7903146 and rs12255372, could predict the development of glucose intolerance in Japanese-Brazilians. In a population-based 7-year prospective study, we genotyped 222 individuals (72 males and 150 females, aged 56.2 ± 10.5 years) with normal glucose tolerance at baseline. In the study population, the minor allele frequency was 0.05 for SNP rs7903146 and 0.03 for SNP rs12255372. No significant allele or genotype association with the incidence of glucose intolerance was found for either SNP. Haplotypes were constructed with these two SNPs, and three haplotypes were defined: CG (frequency = 0.94), TT (frequency = 0.027) and TG (frequency = 0.026). None of the haplotypes provided evidence of association with the incidence of glucose intolerance. Although neither SNP was associated with the incidence of glucose intolerance in Japanese-Brazilians, we found that carriers of the CT genotype for rs7903146 had significantly lower insulin levels 2 h after a 75-g glucose load than carriers of the CC genotype. In conclusion, in Japanese-Brazilians, a population with a high prevalence of type 2 diabetes, common TCF7L2 variants did not make major contributions to the incidence of glucose tolerance abnormalities.
Abstract:
Traditional psychometric theory and practice classify people according to broad ability dimensions but do not examine how these mental processes occur. Hunt and Lansman (1975) proposed a 'distributed memory' model of cognitive processes with emphasis on how to describe individual differences, based on the assumption that each individual possesses the same components. It is in the quality of these components that individual differences arise. Carroll (1974) expands Hunt's model to include a production system (after Newell and Simon, 1973) and a response system. He developed a framework of factor analytic (FA) factors for the purpose of describing how individual differences may arise from them. This scheme is to be used in the analysis of psychometric tests. Recent advances in the field of information processing are examined and include: 1) Hunt's development of differences between subjects designated as high or low verbal; 2) Miller's pursuit of the magic number seven, plus or minus two; 3) Ferguson's examination of transfer and abilities; and 4) Brown's discoveries concerning strategy teaching and retardates. In order to examine possible sources of individual differences arising from cognitive tasks, traditional psychometric tests were searched for a suitable perceptual task which could be varied slightly and administered to gauge learning effects produced by controlling independent variables. It also had to be suitable for analysis using Carroll's framework. The Coding Task (a symbol substitution test) found in the Performance Scale of the WISC was chosen. Two experiments were devised to test the following hypotheses: 1) High verbals should be able to complete significantly more items on the Symbol Substitution Task than low verbals (Hunt and Lansman, 1975). 2) Having previous practice on a task, where strategies involved in the task may be identified, increases the amount of output on a similar task (Carroll, 1974). 3) There should be a substantial decrease in the amount of output as the load on STM is increased (Miller, 1956). 4) Repeated measures should produce an increase in output over trials, and where individual differences in previously acquired abilities are involved, these should differentiate individuals over trials (Ferguson, 1956). 5) Teaching slow learners a rehearsal strategy would improve their learning such that it would resemble that of normals on the same task (Brown, 1974). In the first experiment 60 subjects were divided into high and low verbal groups, each further divided randomly into a practice group and a nonpractice group. Five subjects in each group were assigned randomly to work on a five-, seven- or nine-digit code throughout the experiment. The practice group was given three trials of two minutes each on the practice code (designed to eliminate transfer effects due to symbol similarity) and then three trials of two minutes each on the actual SST task. The nonpractice group was given three trials of two minutes each on the same actual SST task. Results were analyzed using a four-way analysis of variance. In the second experiment 18 slow learners were divided randomly into two groups: one group receiving planned strategy practice, the other receiving random practice. Both groups worked on the actual code to be used later in the actual task. Within each group subjects were randomly assigned to work on a five-, seven- or nine-digit code throughout. Both practice and actual tests consisted of three trials of two minutes each.
Results were analyzed using a three-way analysis of variance. It was found in the first experiment that 1) high or low verbal ability by itself did not produce significantly different results; however, when in interaction with the other independent variables, a difference in performance was noted. 2) The previous-practice variable was significant over all segments of the experiment: those who received previous practice scored significantly higher than those without it. 3) Increasing the size of the load on STM severely restricts performance. 4) The effect of repeated trials proved to be beneficial; generally, gains were made on each successive trial within each group. 5) In the second experiment, slow learners who were allowed to practice randomly performed better on the actual task than subjects who were taught the code by means of a planned strategy. Upon analysis using the Carroll scheme, individual differences were noted in the ability to develop strategies of storing, searching and retrieving items from STM, and in adopting the rehearsals necessary for retention in STM. While these strategies may benefit some, it was found that for others they may be harmful. Temporal aspects and perceptual speed were also found to be sources of variance within individuals. Generally, it was found that the largest single factor influencing learning on this task was the repeated measures. What enables gains to be made varies with individuals. Environmental factors, specific abilities, strategy development, previous learning, the amount of load on STM, and perceptual and temporal parameters all influence learning, and these have serious implications for educational programs.
Abstract:
Affiliation: Mark Daniel: Département de médecine sociale et préventive, Faculté de médecine, Université de Montréal, and Centre de recherche du Centre hospitalier de l'Université de Montréal
Abstract:
We study the workings of the factor analysis of high-dimensional data using artificial series generated from a large, multi-sector dynamic stochastic general equilibrium (DSGE) model. The objective is to use the DSGE model as a laboratory that allows us to shed some light on the practical benefits and limitations of using factor analysis techniques on economic data. We explain in what sense the artificial data can be thought of as having a factor structure, study the theoretical and finite-sample properties of the principal components estimates of the factor space, investigate the substantive reason(s) for the good performance of diffusion index forecasts, and assess the quality of the factor analysis of highly disaggregated data. In all our exercises, we explain the precise relationship between the factors and the basic macroeconomic shocks postulated by the model.
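For illustration, here is a minimal sketch of principal-components factor extraction of the kind the abstract describes (the simulated two-factor panel, dimensions, and R² check below are assumptions for the example, not the paper's DSGE-generated data):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, r = 200, 100, 2                     # time periods, series, true factors

F = rng.standard_normal((T, r))           # true common factors
L = rng.standard_normal((N, r))           # factor loadings
X = F @ L.T + rng.standard_normal((T, N)) # panel = common component + noise

# Principal components estimate of the factor space: eigenvectors of X'X
# (on standardized data) give the loadings; projecting X gives the factors.
Xs = (X - X.mean(0)) / X.std(0)
eigval, eigvec = np.linalg.eigh(Xs.T @ Xs / T)
F_hat = Xs @ eigvec[:, -r:]               # top-r principal components

# The estimated factors span the true space only up to rotation, so we
# regress the true factors on the estimates and inspect the fit.
beta, *_ = np.linalg.lstsq(F_hat, F, rcond=None)
resid = F - F_hat @ beta
r2 = 1 - resid.var(0) / F.var(0)
print("R^2 of true factors on estimated factors:", r2)
```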
Abstract:
With advances in information technology, economic and financial time-series data are increasingly available. However, if standard time-series techniques are used, this wealth of information comes with the problem of dimensionality. Since most series of interest are highly correlated, their dimension can be reduced using factor analysis, a technique that has grown increasingly popular in economics since the 1990s. Given the availability of data and the computational advances, several new questions arise. What are the effects and the transmission of structural shocks in a data-rich environment? Can the information contained in a large set of economic indicators help to better identify monetary policy shocks, given the problems encountered in applications using standard models? Can financial shocks be identified and their effects on the real economy measured? Can the existing factor methodology be improved by incorporating another dimension-reduction technique such as VARMA analysis? Does this produce better forecasts of the major macroeconomic aggregates and help with impulse-response analysis? Finally, can factor analysis be applied at the level of random parameters? For example, are there only a small number of sources of time instability in the coefficients of empirical macroeconomic models? Using structural factor analysis and VARMA modeling, my thesis answers these questions in five articles. The first two chapters study the effects of monetary and financial shocks in a data-rich environment. The third article proposes a new method combining factor models and VARMA. This approach is applied in the fourth article to measure the effects of credit shocks in Canada. The contribution of the last chapter is to impose a factor structure on time-varying parameters and to show that there is a small number of sources of this instability. The first article analyzes the transmission of monetary policy in Canada using a factor-augmented vector autoregressive (FAVAR) model. Previous studies based on VAR models found several empirical anomalies following a monetary policy shock. We estimate the FAVAR model using a large number of monthly and quarterly macroeconomic series. We find that the information contained in the factors is important for properly identifying monetary policy transmission and helps correct the standard empirical anomalies. Finally, the FAVAR framework yields impulse-response functions for every indicator in the dataset, thus producing the most comprehensive analysis to date of the effects of monetary policy in Canada. Motivated by the recent economic crisis, research on the role of the financial sector has regained importance. In the second article we examine the effects and propagation of credit shocks on the real economy using a large set of economic and financial indicators within a structural factor model.
We find that a credit shock immediately raises credit spreads, lowers the value of Treasury bonds, and causes a recession. These shocks have an important effect on measures of real activity, price indices, and leading and financial indicators. Unlike other studies, our identification procedure for the structural shock requires no timing restrictions between financial and macroeconomic factors. Moreover, it yields an interpretation of the factors without restricting their estimation. In the third article we study the relationship between VARMA and factor representations of vector stochastic processes and propose a new class of factor-augmented VARMA (FAVARMA) models. Our starting point is the observation that, in general, the multivariate series and the associated factors cannot simultaneously follow a finite-order VAR process. We show that the dynamic process of the factors, extracted as linear combinations of the observed variables, is in general a VARMA and not a VAR, as is assumed elsewhere in the literature. Second, we show that even if the factors follow a finite-order VAR, this implies a VARMA representation for the observed series. We therefore propose the FAVARMA framework, which combines these two methods of reducing the number of parameters. The model is applied in two forecasting exercises using US and Canadian data from Boivin, Giannoni and Stevanovic (2010, 2009), respectively. The results show that the VARMA component helps to better forecast the important macroeconomic aggregates relative to standard models. Finally, we estimate the effects of a monetary shock using the data and identification scheme of Bernanke, Boivin and Eliasz (2005). Our FAVARMA(2,1) model with six factors yields coherent and precise results on the effects and transmission of monetary policy in the United States. In contrast to the FAVAR model employed in that study, where 510 VAR coefficients had to be estimated, we produce similar results with only 84 parameters in the factors' dynamic process. The objective of the fourth article is to identify and measure the effects of credit shocks in Canada in a data-rich environment using a structural FAVARMA model. Within the financial-accelerator framework developed by Bernanke, Gertler and Gilchrist (1999), we approximate the external finance premium by credit spreads. On the one hand, we find that an unanticipated increase in the US external finance premium generates a significant and persistent recession in Canada, accompanied by an immediate rise in Canadian credit spreads and interest rates. The common component appears to capture the important dimensions of the cyclical fluctuations of the Canadian economy. Variance-decomposition analysis reveals that this credit shock has an important effect on different sectors of real activity, price indices, leading indicators and credit spreads. On the other hand, an unexpected increase in the Canadian external finance premium has no significant effect in Canada. We show that the effects of credit shocks in Canada are essentially driven by global conditions, approximated here by the US market.
Finally, given the identification procedure for the structural shocks, we obtain economically interpretable factors. The behavior of economic agents and of the economic environment can vary over time (e.g., changes in monetary policy strategies, shock volatility), inducing parameter instability in reduced-form models. Standard time-varying-parameter (TVP) models traditionally assume independent stochastic processes for all TVPs. In this article we show that the number of sources of time variability in the coefficients is likely very small, and we provide the first known empirical evidence of this in empirical macroeconomic models. The Factor-TVP approach, proposed in Stevanovic (2010), is applied within a standard VAR model with random coefficients (TVP-VAR). We find that a single factor explains most of the variability of the VAR coefficients, while the shock-volatility parameters vary independently. The common factor is positively correlated with the unemployment rate. The same analysis is performed with data including the recent financial crisis. The procedure now suggests two factors, and the behavior of the coefficients shows an important change since 2007. Finally, the method is applied to a TVP-FAVAR model. We find that only 5 dynamic factors govern the time instability in almost 700 coefficients.
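A minimal sketch of the FAVAR mechanics discussed in the first article (the data, dimensions and VAR(1) specification below are placeholders for illustration, not the thesis's estimates): factors are extracted from a large panel by principal components, appended to the policy instrument, and a VAR is fit so that impulse responses can be mapped back to every series in the panel.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N, r = 300, 120, 3                            # sample size, panel width, factors

X = rng.standard_normal((T, N))                  # large macro panel (placeholder)
policy_rate = rng.standard_normal(T)             # observed policy instrument

# Step 1: extract r static factors from the standardized panel by PCA (SVD).
Xs = (X - X.mean(0)) / X.std(0)
_, _, Vt = np.linalg.svd(Xs, full_matrices=False)
F_hat = Xs @ Vt[:r].T

# Step 2: fit a VAR(1) on [factors, policy rate] by OLS.
Z = np.column_stack([F_hat, policy_rate])        # (T, r+1) state vector
Y, Ylag = Z[1:], Z[:-1]
A, *_ = np.linalg.lstsq(Ylag, Y, rcond=None)     # VAR(1) coefficient matrix

# Step 3: impulse responses for every panel series, mapped back through the
# PCA loadings (Xs is approximately its rank-r projection F_hat @ Vt[:r]).
horizon, shock = 12, np.zeros(r + 1)
shock[-1] = 1.0                                  # unit policy-rate shock
irf_state = [shock]
for _ in range(horizon):
    irf_state.append(irf_state[-1] @ A)
irf_panel = np.array(irf_state)[:, :r] @ Vt[:r]  # responses of all N series
print(irf_panel.shape)                           # (horizon+1, N)
```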
Abstract:
My thesis consists of three chapters related to the estimation of state-space and stochastic volatility models. In the first article, we develop a computationally efficient procedure for state smoothing in linear Gaussian state-space models. We show how to exploit the particular structure of state-space models to draw the latent states efficiently. We analyze the computational efficiency of methods based on the Kalman filter, of the Cholesky-factor algorithm, and of our new method, using operation counts and computational experiments. We show that for many important cases our method is the most efficient. The gains are particularly large when the dimension of the observed variables is large or when repeated draws of the states are needed for the same parameter values. As an application, we consider a multivariate Poisson model with time-varying intensities, used to analyze count data on financial market transactions. In the second chapter, we propose a new technique for analyzing multivariate stochastic volatility models. The proposed method is based on efficient draws of the volatility from its conditional density given the parameters and the data. Our methodology applies to models with several types of cross-sectional dependence. We can model time-varying conditional correlation matrices by incorporating factors into the returns equation, where the factors are independent stochastic volatility processes. We can incorporate copulas to allow conditional dependence of the returns given the volatility, permitting different Student-t marginal distributions with asset-specific degrees of freedom to capture the heterogeneity of returns. We draw the volatility as a block in the time dimension, one series at a time in the cross-section. We apply the method introduced by McCausland (2012) to obtain a good approximation of the conditional posterior distribution of the volatility of one return given the volatilities of the other returns, the parameters, and the dynamic correlations. The model is evaluated using real data for ten exchange rates. We report results for univariate stochastic volatility models and two multivariate models. In the third chapter, we assess the information contributed by realized volatility measures to the estimation and forecasting of volatility when prices are measured with and without error. We use stochastic volatility models, taking the viewpoint of an investor for whom volatility is an unknown latent variable and realized volatility is a sample quantity containing information about it. We use Bayesian Markov chain Monte Carlo methods to estimate the models, which allow the formulation not only of the posterior densities of the volatility but also of the predictive densities of future volatility. We compare volatility forecasts, and the hit rates of forecasts, that do and do not use the information contained in realized volatility.
This approach differs from those existing in the empirical literature, which are most often limited to documenting the ability of realized volatility to forecast itself. We present empirical applications using daily returns on stock indices and exchange rates. The competing models are applied to the second half of 2008, a notable period of the recent financial crisis.
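For the state-space setting of the first chapter, a bare-bones Kalman filter for a scalar linear Gaussian model illustrates the basic filtering recursion (a sketch under assumed AR(1) dynamics and parameter values; the thesis's contribution is a more efficient simulation smoother, not this textbook filter):

```python
import numpy as np

def kalman_filter(y, phi=0.95, q=0.1, r=1.0):
    """Filter the latent state of x_t = phi*x_{t-1} + e_t, observed as
    y_t = x_t + u_t, with Var(e) = q and Var(u) = r."""
    n = len(y)
    x_pred, p_pred = 0.0, q / (1 - phi**2)       # stationary prior
    x_filt = np.empty(n)
    for t in range(n):
        k = p_pred / (p_pred + r)                # Kalman gain
        x = x_pred + k * (y[t] - x_pred)         # update with observation y_t
        p = (1 - k) * p_pred
        x_filt[t] = x
        x_pred, p_pred = phi * x, phi**2 * p + q # one-step-ahead prediction
    return x_filt

# Simulate a short series and filter it.
rng = np.random.default_rng(2)
x = np.zeros(100)
for t in range(1, 100):
    x[t] = 0.95 * x[t-1] + np.sqrt(0.1) * rng.standard_normal()
y = x + rng.standard_normal(100)
print(kalman_filter(y)[:5])
```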
Abstract:
To improve the welfare of the rural poor and keep them in the countryside, the government of Botswana has been spending 40% of the value of agricultural GDP on agricultural support services. But can investment make smallholder agriculture prosperous in such adverse conditions? This paper derives an answer by applying a two-output six-input stochastic translog distance function, with inefficiency effects and biased technical change, to panel data for the 18 districts and the commercial agricultural sector from 1979 to 1996. This model demonstrates that herds are the most important input, followed by draft power, land and seeds. Multilateral indices for technical change, technical efficiency and total factor productivity (TFP) show that the technology level of the commercial agricultural sector is more than six times that of traditional agriculture and that the gap has been increasing, due to technological regression in traditional agriculture and modest progress in commercial agriculture. Since the levels of efficiency are similar, the same pattern is repeated in the TFP indices. This result highlights the policy dilemma of the trade-off between efficiency and equity objectives.
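As a toy illustration of the kind of TFP comparison reported above (the output, input and cost-share numbers below are invented; the paper itself derives multilateral indices from an estimated translog distance function), a Törnqvist-style TFP gap between two producers can be computed as:

```python
import numpy as np

# Hypothetical data: one output, three inputs, for two producers A and B.
output = {"A": 100.0, "B": 620.0}
inputs = {"A": np.array([50.0, 20.0, 30.0]),   # e.g. herd, draft power, land
          "B": np.array([120.0, 60.0, 90.0])}
cost_shares = np.array([0.5, 0.3, 0.2])        # assumed identical cost shares

# Törnqvist index: log TFP gap = log output ratio minus the
# share-weighted log input ratio.
log_output_ratio = np.log(output["B"] / output["A"])
log_input_ratio = cost_shares @ np.log(inputs["B"] / inputs["A"])
tfp_gap = np.exp(log_output_ratio - log_input_ratio)
print(f"TFP of B relative to A: {tfp_gap:.2f}")  # > 1 means B is more productive
```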
Abstract:
This paper considers the use of a discrete-time deadbeat control action on systems affected by noise. Variations on the standard controller form are discussed and comparisons are made with controllers in which noise rejection is a higher-priority objective. Both load and random disturbances are considered in the system description, although the aim of the deadbeat design remains the tailoring of the response to reference input variations. Finally, the use of such a deadbeat action within a self-tuning control framework is shown to satisfy the self-tuning property under certain conditions, though generally only when an extended form of least-squares estimation is incorporated.
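To make the noise-sensitivity point concrete, here is a minimal deadbeat sketch for a hypothetical first-order plant (the plant, gains and noise level are assumptions for illustration, not the paper's system): the deadbeat law cancels the plant dynamics so the output reaches the reference in one step, but it makes no attempt to reject disturbances.

```python
import numpy as np

# Hypothetical first-order plant: y[t+1] = a*y[t] + b*u[t] + noise.
a, b = 0.8, 0.5
rng = np.random.default_rng(3)

def simulate(noise_std, steps=20, ref=1.0):
    y, out = 0.0, []
    for _ in range(steps):
        u = (ref - a * y) / b          # deadbeat law: cancel dynamics exactly
        y = a * y + b * u + noise_std * rng.standard_normal()
        out.append(y)
    return np.array(out)

print(simulate(0.0)[:3])   # noise-free: output hits the reference in one step
print(simulate(0.2)[:3])   # with noise: disturbances pass straight through
```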
Abstract:
In this paper, a power management strategy (PMS) has been developed for the control of energy storage in a system subjected to loads of random duration. The PMS minimises the costs associated with the energy consumption of specific systems powered by a primary energy source and equipped with energy storage, under the assumption that the statistical distribution of load durations is known. By including the variability of the load in the cost function, it was possible to define optimality criteria for the power flow of the storage. Numerical calculations have been performed to obtain the control strategies associated with the global minimum in energy costs for a wide range of initial conditions of the system. The resulting strategies have been tested on a MATLAB/Simulink model of a rubber-tyred gantry (RTG) crane equipped with a flywheel energy storage system (FESS) and subjected to a test cycle corresponding to the real operation of a crane in the Port of Felixstowe. The results of the model show increased energy savings and reduced peak power demand with respect to existing control strategies, indicating considerable potential savings for port operators in terms of energy and maintenance costs.
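A toy sketch of the kind of stochastic optimization such a PMS performs (the cost model, load distribution and discretization below are invented for illustration, not the paper's crane model): backward induction over a discretized state of charge picks the storage power flow that minimizes expected energy plus peak-power cost.

```python
import numpy as np

# Toy setup: discretized state of charge, candidate storage power flows, and
# a random load whose distribution is known (as the PMS assumes).
soc_grid = np.linspace(0.0, 1.0, 21)       # state of charge, 0..1
actions = np.array([-0.1, 0.0, 0.1])       # discharge / idle / charge per step
loads = np.array([0.2, 0.5, 0.9])          # possible load levels (per unit)
load_prob = np.array([0.5, 0.3, 0.2])      # assumed load distribution
price, peak_penalty, horizon = 1.0, 5.0, 10

V = np.zeros(len(soc_grid))                # terminal cost-to-go: zero
for _ in range(horizon):                   # backward induction
    V_new = np.empty_like(V)
    for i, soc in enumerate(soc_grid):
        best = np.inf
        for a in actions:
            nxt = soc + a
            if not 0.0 <= nxt <= 1.0:
                continue                   # infeasible charge level
            j = np.argmin(np.abs(soc_grid - nxt))
            # Grid power = load plus charging (a > 0) or minus discharging.
            grid = loads + a
            step_cost = load_prob @ (price * grid
                                     + peak_penalty * np.maximum(grid - 0.8, 0))
            best = min(best, step_cost + V[j])
        V_new[i] = best
    V = V_new
print(V[:5])   # expected cost-to-go at low states of charge
```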
Abstract:
Introduction: Cytokines (IL-6, IL-10 and TNF-alpha) are increased after exhaustive exercise in the rat retroperitoneal (RPAT) and mesenteric adipose tissue (MEAT) pads. On the other hand, these cytokines show decreased expression in these depots in response to a chronic exercise protocol. However, the effect of exercise with overload combined with a short recovery period on pro- and anti-inflammatory cytokine expression is unknown. In the present study, we investigated the regulation of cytokine production in the adipose tissue of rats after an overtraining-inducing exercise protocol. Methods: Male Wistar rats were divided into four groups: control (C), trained (Tr), overtrained (OT) and recovered overtrained (R). Cytokine (IL-6, TNF-alpha and IL-10) levels and Toll-like receptor 4 (TLR4), nuclear factor kB p65 (NF-kBp65), hormone-sensitive lipase (HSL) and perilipin protein expression were assessed in the adipose tissue. Furthermore, we analysed plasma lipid profile, insulin, testosterone, corticosterone and endotoxin levels, and liver triacylglycerol and cytokine content, as well as apolipoprotein B (apoB) and TLR4 expression in the liver. Results: The OT and R groups exhibited reduced performance accompanied by lower testosterone and increased corticosterone and endotoxin levels when compared with the control and trained groups. IL-6 and IL-10 protein levels were increased in the adipose tissue of the group allowed to recover, in comparison with all the other studied groups. TLR4 and NF-kBp65 were increased in this same group when compared with both the control and trained groups. The protein expression of HSL was increased, and that of perilipin decreased, in the adipose tissue of R in relation to the control. In addition, we found increased liver and serum TAG, along with reduced apoB protein expression and IL-6 and IL-10 levels in the liver of R in relation to the control and trained groups. Conclusion: We have shown that increases in pro-inflammatory cytokines in the adipose tissue after an overtraining protocol may be mediated via TLR4 and NF-kBp65 signalling, leading to an inflammatory state in this tissue.
Abstract:
Apoptosis of macrophages infected with pathogenic mycobacteria is an alternative host defence capable of removing the environment supporting bacterial growth. In this work the influence of virulence and bacterial load on apoptosis of alveolar macrophages during the initial phase of infection by Mycobacterium bovis was investigated. BALB/c mice were infected intratracheally with high or low doses of the virulent (ATCC19274) or attenuated (bacillus Calmette-Guerin Moreau) strains of M. bovis. The frequency of macrophage apoptosis, the growth of mycobacteria in macrophages, and the in situ levels of the cytokines tumour necrosis factor-alpha (TNF-alpha), interleukin-10 (IL-10) and IL-12 and of the anti-apoptotic protein Bcl-2 were measured at day 3 and day 7 post-infection. An increase of macrophage apoptosis was observed after infection with both strains, but the virulent strain induced less apoptosis than the attenuated strain. On the 3rd day after infection with the virulent strain macrophage apoptosis was reduced in the high-dose group, while on the 7th day post-infection macrophage apoptosis was reduced in the low-dose group. Inhibition of apoptosis was correlated with increased production of IL-10, reduced production of TNF-alpha and increased production of Bcl-2. In addition, the production of IL-12 was reduced at points where the lowest levels of macrophage apoptosis were observed. Our results indicate that virulent mycobacteria are able to modulate macrophage apoptosis to an extent dependent on the intracellular bacterial burden, which benefits their intracellular growth and dissemination to adjacent cells.
Abstract:
Background and Objective: Inflammatory cytokines such as tumor necrosis factor-alpha are involved in the pathogenesis of periodontal diseases. A high between-subject variation in the level of tumor necrosis factor-alpha mRNA has been verified, which may be a result of genetic polymorphisms and/or the presence of periodontopathogens such as Porphyromonas gingivalis, Tannerella forsythia, Treponema denticola (called the red complex) and Aggregatibacter actinomycetemcomitans. In this study, we investigated the effect of the tumor necrosis factor-alpha (TNFA) -308G/A gene polymorphism and of periodontopathogens on the tumor necrosis factor-alpha levels in the periodontal tissues of nonsmoking patients with chronic periodontitis (n = 127) and in control subjects (n = 177). Material and Methods: The TNFA-308G/A single nucleotide polymorphism was investigated using polymerase chain reaction-restriction fragment length polymorphism analysis, whereas the tumor necrosis factor-alpha levels and the periodontopathogen load were determined using real-time polymerase chain reaction. Results: No statistically significant differences were found in the frequency of the TNFA-308 single nucleotide polymorphism in control and chronic periodontitis groups, in spite of the higher frequency of the A allele in the chronic periodontitis group. The concomitant analyses of genotypes and periodontopathogens demonstrated that TNFA-308 GA/AA genotypes and the red-complex periodontopathogens were independently associated with increased levels of tumor necrosis factor-alpha in periodontal tissues, and no additive effect was seen when both factors were present. P. gingivalis, T. forsythia and T. denticola counts were positively correlated with the level of tumor necrosis factor-alpha. TNFA-308 genotypes were not associated with the periodontopathogen detection odds or with the bacterial load. Conclusion: Our results demonstrate that the TNFA-308 A allele and red-complex periodontopathogens are independently associated with increased levels of tumor necrosis factor-alpha in diseased tissues of nonsmoking chronic periodontitis patients and consequently are potentially involved in determining the disease outcome.
Abstract:
Multiple sclerosis (MS) is a progressive inflammatory and/or demyelinating disease of the human central nervous system (CNS). Most of the knowledge about the pathogenesis of MS has been derived from murine models, such as experimental autoimmune encephalomyelitis and viral encephalomyelitis. Here, we infected female C57BL/6 mice with a neurotropic strain of the mouse hepatitis virus (MHV-59A) to evaluate whether treatment with the multifunctional antioxidant tempol (4-hydroxy-2,2,6,6-tetramethyl-1-piperidinyloxy) affects the ensuing encephalomyelitis. In untreated animals, neurological symptoms developed quickly: 90% of infected mice died 10 days after virus inoculation and the few survivors presented neurological deficits. Treatment with tempol (24 mg/kg, ip, two doses on the first day and daily doses for 7 days, plus 2 mM tempol in the drinking water ad libitum) profoundly altered the disease outcome: neurological symptoms were attenuated, mouse survival increased up to 70%, and half of the survivors behaved as normal mice. Not surprisingly, tempol substantially preserved the integrity of the CNS, including the blood-brain barrier. Furthermore, treatment with tempol decreased CNS viral titers, macrophage and T lymphocyte infiltration, and levels of markers of inflammation, such as expression of inducible nitric oxide synthase, transcription of tumor necrosis factor-alpha and interferon-gamma, and protein nitration. The results indicate that tempol ameliorates murine viral encephalomyelitis by altering the redox status of the infectious environment, which contributes to an attenuated CNS inflammatory response. Overall, our study supports the development of therapeutic strategies based on nitroxides to manage neuroinflammatory diseases, including MS.
Abstract:
This paper tackles the problem of aggregate TFP measurement using stochastic frontier analysis (SFA). Data from Penn World Table 6.1 are used to estimate a world production frontier for a sample of 75 countries over a long period (1950-2000), taking advantage of the model offered by Battese and Coelli (1992). We also apply the decomposition of TFP suggested by Bauer (1990) and Kumbhakar (2000) to a smaller sample of 36 countries over the period 1970-2000 in order to evaluate the effects of changes in efficiency (technical and allocative), scale effects and technical change. This allows us to analyze the role of productivity and its components in the economic growth of developed and developing nations, in addition to the importance of factor accumulation. Although not much explored in the study of economic growth, frontier techniques seem to be of particular interest for that purpose, since the separation of efficiency effects and technical change has a direct interpretation in terms of the catch-up debate. The estimated technical efficiency scores reveal the efficiency of nations in the production of non-tradable goods, since the GDP series used is PPP-adjusted. We also provide a second set of efficiency scores, corrected in order to reveal efficiency in the production of tradable goods, and rank them. When compared to the rankings of productivity indexes offered by the non-frontier studies of Hall and Jones (1996) and Islam (1995), our ranking shows a somewhat more intuitive order of countries. Rankings of the technical change and scale effects components of TFP change are also very intuitive. We also show that productivity is responsible for virtually all the differences in performance between developed and developing countries in terms of rates of growth of income per worker. More important, we find that changes in allocative efficiency play a crucial role in explaining differences in the productivity of developed and developing nations, even larger than the one played by the technology gap.
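A minimal stochastic-frontier sketch (hypothetical cross-sectional data and a normal/half-normal composed error, rather than the Battese and Coelli (1992) panel specification the paper actually uses) shows how such a frontier is fit by maximum likelihood:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

# Simulate log output lying below a linear frontier: y = b0 + b1*x + v - u,
# where v is symmetric noise and u >= 0 is half-normal inefficiency.
rng = np.random.default_rng(4)
n = 200
x = rng.uniform(1, 5, n)
v = 0.2 * rng.standard_normal(n)
u = np.abs(0.3 * rng.standard_normal(n))
y = 1.0 + 0.7 * x + v - u

def neg_loglik(theta):
    b0, b1, log_sv, log_su = theta
    sv, su = np.exp(log_sv), np.exp(log_su)
    sigma = np.hypot(sv, su)
    lam = su / sv
    eps = y - b0 - b1 * x
    # Normal / half-normal composed-error density (Aigner-Lovell-Schmidt form).
    ll = (np.log(2) - np.log(sigma) + norm.logpdf(eps / sigma)
          + norm.logcdf(-lam * eps / sigma))
    return -ll.sum()

res = minimize(neg_loglik, x0=np.array([0.0, 0.5, -1.0, -1.0]), method="BFGS")
b0, b1, log_sv, log_su = res.x
print("frontier:", b0, b1, "sigma_v:", np.exp(log_sv), "sigma_u:", np.exp(log_su))
```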