Abstract:
CD6 has recently been identified and validated as a risk gene for multiple sclerosis (MS), based on the association of a single nucleotide polymorphism (SNP), rs17824933, located in intron 1. CD6 is a cell surface scavenger receptor involved in T-cell activation and proliferation, as well as in thymocyte differentiation. In this study, we performed a haplotype tag SNP screen of the CD6 gene locus using a total of thirteen tagging SNPs, of which three were non-synonymous SNPs, and replicated the recently reported GWAS SNP rs650258 in a Spanish-Basque collection of 814 controls and 823 cases. Validation of the six most strongly associated SNPs was performed in an independent collection of 2265 MS patients and 2600 healthy controls. We identified association of haplotypes composed of two non-synonymous SNPs [rs11230563 (R225W) and rs2074225 (A257V)] in the 2nd SRCR domain with susceptibility to MS (P max(T) permutation = 1×10⁻⁴). The effect of these haplotypes on CD6 surface expression and cytokine secretion was also tested. The analysis showed significantly different CD6 expression patterns in the distinct cell subsets, i.e. CD4+ naïve cells, P = 0.0001; CD8+ naïve cells, P < 0.0001; CD4+ and CD8+ central memory cells, P = 0.01 and 0.05, respectively; and natural killer T (NKT) cells, P = 0.02; with the protective haplotype (RA) showing higher expression of CD6. However, no significant changes were observed in natural killer (NK) cells, effector memory and terminally differentiated effector memory T cells. Our findings reveal that this new MS-associated CD6 risk haplotype significantly modifies expression of CD6 on CD4+ and CD8+ T cells.
Abstract:
The code STATFLUX, implementing a new and simple statistical procedure for the calculation of transfer coefficients in radionuclide transport to animals and plants, is proposed. The method is based on the general multiple-compartment model, which uses a system of linear equations involving geometrical volume considerations. Flow parameters were estimated by employing two different least-squares procedures, the Derivative and Gauss-Marquardt methods, with the available experimental data of radionuclide concentrations as the input functions of time. The solution of the inverse problem, which relates a given set of flow parameters to the time evolution of the concentration functions, is achieved via a Monte Carlo simulation procedure.
Program summary
Title of program: STATFLUX
Catalogue identifier: ADYS_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYS_v1_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: none
Computer for which the program is designed and others on which it has been tested: Micro-computer with Intel Pentium III, 3.0 GHz
Installation: Laboratory of Linear Accelerator, Department of Experimental Physics, University of São Paulo, Brazil
Operating system: Windows 2000 and Windows XP
Programming language used: Fortran-77 as implemented in Microsoft Fortran 4.0. NOTE: Microsoft Fortran includes non-standard features which are used in this program. Standard Fortran compilers such as g77, f77, ifort and NAG95 are not able to compile the code, and therefore it has not been possible for the CPC Program Library to test the program.
Memory required to execute with typical data: 8 Mbytes of RAM and 100 MB of hard disk space
No. of bits in a word: 16
No. of lines in distributed program, including test data, etc.: 6912
No. of bytes in distributed program, including test data, etc.: 229 541
Distribution format: tar.gz
Nature of the physical problem: The investigation of transport mechanisms for radioactive substances through environmental pathways is very important for the radiological protection of populations. One such pathway, associated with the food chain, is the grass-animal-man sequence. The distribution of trace elements in humans and laboratory animals has been intensively studied over the past 60 years [R.C. Pendlenton, C.W. Mays, R.D. Lloyd, A.L. Brooks, Differential accumulation of iodine-131 from local fallout in people and milk, Health Phys. 9 (1963) 1253-1262]. In addition, investigations on the incidence of cancer in humans, and a possible causal relationship to radioactive fallout, have been undertaken [E.S. Weiss, M.L. Rallison, W.T. London, W.T. Carlyle Thompson, Thyroid nodularity in southwestern Utah school children exposed to fallout radiation, Amer. J. Public Health 61 (1971) 241-249; M.L. Rallison, B.M. Dobyns, F.R. Keating, J.E. Rall, F.H. Tyler, Thyroid diseases in children, Amer. J. Med. 56 (1974) 457-463; J.L. Lyon, M.R. Klauber, J.W. Gardner, K.S. Udall, Childhood leukemia associated with fallout from nuclear testing, N. Engl. J. Med. 300 (1979) 397-402]. Of the pathways of entry of radionuclides into the human (or animal) body, ingestion is the most important because it is closely related to life-long alimentary (or dietary) habits. Those radionuclides which are able to enter living cells by metabolic or other processes give rise to localized doses which can be very high. The evaluation of these internally localized doses is of paramount importance for the assessment of radiobiological risks and for radiological protection. The time behavior of trace concentrations in organs is the principal input for the prediction of internal doses after acute or chronic exposure. The general multiple-compartment model (GMCM) is a powerful and widely accepted method for biokinetic studies; it allows the concentration of trace elements in organs to be calculated as a function of time when the flow parameters of the model are known. However, few biokinetic data exist in the literature, and the determination of flow and transfer parameters by statistical fitting for each system is an open problem.
Restrictions on the complexity of the problem: This version of the code works with the constant-volume approximation, which is valid for many situations where the biological half-life of a trace element is shorter than the volume rise time. Another restriction is related to the central-flux model: the code assumes that there is one central compartment (e.g., blood) that connects the flow to all other compartments, and flow between the other compartments is not included.
Typical running time: Depends on the choice of calculation. Using the Derivative Method the running time is very short (a few minutes) for any number of compartments. When the Gauss-Marquardt iterative method is used, the calculation time can be approximately 5-6 hours when around 15 compartments are considered. (C) 2006 Elsevier B.V. All rights reserved.
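As a rough illustration of the model structure described above, the following Python sketch sets up a constant-volume, central-flux compartment model and recovers its transfer coefficients from concentration-time curves by least squares. This is not the STATFLUX code itself (which is Fortran-77 and uses Derivative and Gauss-Marquardt fitting); the rate constants and synthetic "measured" data are illustrative assumptions.

```python
# Minimal sketch: a central compartment (e.g. blood) exchanges tracer with N
# peripheral compartments (no peripheral-peripheral flow); transfer coefficients
# are then recovered from concentration-time curves by least squares.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def central_flux_rhs(t, c, k_out, k_back):
    """dc/dt for one central plus N peripheral compartments."""
    central, periph = c[0], c[1:]
    dcentral = -np.sum(k_out) * central + np.dot(k_back, periph)
    dperiph = k_out * central - k_back * periph
    return np.concatenate(([dcentral], dperiph))

def simulate(params, t_eval, n_periph):
    k_out, k_back = params[:n_periph], params[n_periph:]
    c0 = np.zeros(n_periph + 1)
    c0[0] = 1.0                                   # unit activity injected into the central compartment
    sol = solve_ivp(central_flux_rhs, (t_eval[0], t_eval[-1]), c0,
                    t_eval=t_eval, args=(k_out, k_back))
    return sol.y                                  # concentrations, shape (n_periph + 1, len(t_eval))

# Synthetic "observed" concentration curves generated with known rate constants (assumption).
t_obs = np.linspace(0.0, 10.0, 40)
true_params = np.array([0.6, 0.3, 0.4, 0.2])      # two peripheral compartments
c_obs = simulate(true_params, t_obs, n_periph=2)

# Inverse problem: recover the transfer coefficients from the concentration curves.
def residuals(params):
    return (simulate(params, t_obs, n_periph=2) - c_obs).ravel()

fit = least_squares(residuals, x0=np.full(4, 0.1), bounds=(0.0, np.inf))
print("fitted transfer coefficients:", fit.x)
```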
Abstract:
The Multiple Pheromone Ant Clustering Algorithm (MPACA) models the collective behaviour of ants to find clusters in data and to assign objects to the most appropriate class. It is an ant colony optimisation approach that uses pheromones to mark paths linking objects that are similar and potentially members of the same cluster or class. Its novelty is in the way it uses separate pheromones for each descriptive attribute of the object rather than a single pheromone representing the whole object. Ants that encounter other ants frequently enough can combine the attribute values they are detecting, which enables the MPACA to learn influential variable interactions. This paper applies the model to real-world data from two domains. One is logistics, focusing on resource allocation rather than the more traditional vehicle-routing problem. The other is mental-health risk assessment. The task for the MPACA in each domain was to predict class membership where the classes for the logistics domain were the levels of demand on haulage company resources and the mental-health classes were levels of suicide risk. Results on these noisy real-world data were promising, demonstrating the ability of the MPACA to find patterns in the data with accuracy comparable to more traditional linear regression models. © 2013 Polish Information Processing Society.
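The abstract leaves most implementation details open; the following Python sketch is only a toy illustration of the core idea of one pheromone trail per descriptive attribute, not the published MPACA. The data set, thresholds, random-walk rule and clustering step are all assumptions chosen for brevity.

```python
# Toy illustration of per-attribute pheromone: every edge between two data objects
# carries a separate pheromone trail for each attribute.  Randomly walking "ants"
# deposit pheromone on an attribute's trail when the two endpoints are similar on
# that attribute, trails evaporate each iteration, and edges whose combined
# pheromone stays high link objects into clusters (connected components).
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 3)), rng.normal(2, 0.3, (20, 3))])  # two toy clusters
n, n_attr = X.shape

pheromone = np.zeros((n, n, n_attr))       # pheromone[i, j, attr]: one trail per attribute
SIMILARITY, EVAPORATION, DEPOSIT = 0.5, 0.05, 1.0
STEPS, N_ANTS = 200, 30

positions = rng.integers(0, n, N_ANTS)     # each ant starts on a random object
for _ in range(STEPS):
    pheromone *= 1.0 - EVAPORATION         # evaporation on every trail
    for a in range(N_ANTS):
        i, j = positions[a], rng.integers(0, n)          # ant hops to a random object
        for attr in range(n_attr):
            if abs(X[i, attr] - X[j, attr]) < SIMILARITY:
                pheromone[i, j, attr] += DEPOSIT         # attribute-specific deposit
                pheromone[j, i, attr] += DEPOSIT
        positions[a] = j

# Link objects whose pheromone summed over all attributes is above average, then
# read clusters off as connected components of the resulting graph.
strength = pheromone.sum(axis=2)
adjacency = strength > strength.mean()
labels = -np.ones(n, dtype=int)
for seed in range(n):
    if labels[seed] == -1:
        stack, labels[seed] = [seed], seed
        while stack:
            u = stack.pop()
            for v in np.flatnonzero(adjacency[u]):
                if labels[v] == -1:
                    labels[v] = seed
                    stack.append(v)
print("cluster labels:", labels)
```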
Abstract:
In the assignment game of Shapley and Shubik [Shapley, L.S., Shubik, M., 1972. The assignment game. I. The core. International Journal of Game Theory 1, 111-130] agents are allowed to form at most one partnership. That paper proves that, in the context of firms and workers, given two stable payoffs for the firms there is a stable payoff which gives each firm the larger of the two amounts and also one which gives each of them the smaller amount. An analogous result applies to the workers. Sotomayor [Sotomayor, M., 1992. The multiple partners game. In: Majumdar, M. (Ed.), Dynamics and Equilibrium: Essays in Honor of D. Gale. Macmillan, pp. 322-336] extends this analysis to the case where both types of agents may form more than one partnership and an agent's payoff is multi-dimensional. Instead, this note concentrates on the total payoff of the agents. It is then proved, rather unexpectedly, that again the maximum of any pair of stable payoffs for the firms is stable but the minimum need not be, even if we restrict the multiplicity of partnerships to one of the sides. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
Starting with an initial price vector, prices are adjusted in order to eliminate the excess demand and at the same time to keep the transfers to the sellers as low as possible. In each step of the auction, the key issue in the description of the algorithm is the set of sellers to which those transfers should be made. We assume additively separable utilities and introduce a novel distinction by considering multiple sellers owning multiple identical objects and multiple buyers with an exogenously defined quota, consuming more than one object but at most one unit of a seller's good and having multi-dimensional payoffs. This distinction requires a construction of the over-demanded sets that is necessarily more complicated than the constructions of these sets for other assignment games. Under this approach, our mechanism yields the buyer-optimal competitive equilibrium payoff, which equals the buyer-optimal stable payoff. By the symmetry of the model, the seller-optimal stable payoff and the seller-optimal competitive equilibrium payoff can also be derived.
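For intuition about price adjustment on over-demanded sets, the Python sketch below implements an exact ascending auction in the spirit of Demange, Gale and Sotomayor, specialised for brevity to unit-demand buyers and single-unit sellers; this is an assumption, as the paper treats multi-unit sellers and quota-constrained buyers with a correspondingly more involved construction of the over-demanded sets. The valuations are toy numbers.

```python
# Starting from zero prices, prices rise by one unit on a minimal over-demanded set
# of sellers until no such set remains; with integer valuations this terminates at
# the buyer-optimal competitive equilibrium prices for the unit-demand case.
from itertools import combinations

values = [          # values[b][s]: buyer b's value for seller s's object (toy numbers)
    [8, 6, 2],
    [7, 7, 3],
    [6, 5, 4],
]
n_buyers, n_sellers = len(values), len(values[0])
prices = [0] * n_sellers

def demand_set(b, prices):
    """Sellers giving buyer b the maximal non-negative surplus at the current prices."""
    surplus = [values[b][s] - prices[s] for s in range(n_sellers)]
    best = max(surplus)
    return set() if best < 0 else {s for s in range(n_sellers) if surplus[s] == best}

def minimal_overdemanded_set(prices):
    """Smallest set S demanded exclusively by more than |S| buyers (brute force over subsets)."""
    demands = [demand_set(b, prices) for b in range(n_buyers)]
    for size in range(1, n_sellers + 1):
        for subset in combinations(range(n_sellers), size):
            S = set(subset)
            exclusive_demanders = sum(1 for d in demands if d and d <= S)
            if exclusive_demanders > len(S):
                return S
    return None

while (S := minimal_overdemanded_set(prices)) is not None:
    for s in S:
        prices[s] += 1                      # raise prices only on the over-demanded set

print("buyer-optimal competitive equilibrium prices:", prices)
```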
Abstract:
The phospholipases A1 (PLA1s) from the venom of the social wasp Polybia paulista occur as a mixture of different molecular forms. To characterize the molecular origin of these structural differences, an experimental strategy was devised combining the isolation of the pool of PLAs from the wasp venom with proteomic approaches using 2-D, MALDI-TOF-TOF MS and classical protocols of protein chemistry, which included N- and C-terminal sequencing. The existence of an intact form of PLA1 and seven truncated forms was identified, apparently originating from controlled proteolysis of the intact protein; in addition, four of these truncated forms also carried carbohydrates attached to their molecules. Some of these forms are immunoreactive to specific IgE, while others are not. These observations raise the hypothesis that naturally occurring proteolysis of PLA1, combined with protein glycosylation, may create a series of different molecular forms of these proteins with different levels of allergenicity. Two forms of PLA2s, apparently related to each other, were also identified; however, it was not possible to determine the molecular origin of the differences between the two forms, except that one of them was glycosylated. None of these forms were immunoreactive to human specific IgE.
Abstract:
The outflow concentration-time profiles for lignocaine (lidocaine) and its metabolites have been measured after bolus impulse administration of [C-14]lignocaine into the perfused rat liver. Livers from female Sprague-Dawley rats were perfused in a once-through fashion with red-blood-cell-free Krebs-Henseleit buffer containing 0 or 2% bovine serum albumin. Perfusate flow rates of 20 and 30 mL min⁻¹ were used and both normal and retrograde flow directions were employed. Significant amounts of metabolite were detected in the effluent perfusate soon after lignocaine injection. The early appearance of metabolite contributed to bimodal outflow profiles observed for total C-14 radioactivity. The lignocaine outflow profiles were well characterized by the two-compartment dispersion model, with efflux rate << influx rate. The profiles for lignocaine metabolites were also characterized in terms of a simplified two-compartment dispersion model. Lignocaine was found to be extensively metabolized under the experimental conditions, with the hepatic availability ranging between 0.09 and 0.18. Generally, lignocaine and metabolite availability showed no significant change with alterations in perfusate flow rate from 20 to 30 mL min⁻¹ or protein content from 0 to 2%. A significant increase in lignocaine availability occurred when 1200 µM unlabelled lignocaine was added to the perfusate. Solute mean transit times generally decreased with increasing flow rate and with increasing perfusate protein content. The results confirm that lignocaine pharmacokinetics in the liver closely follow the predictions of the well-stirred model. The increase in lignocaine availability when 1200 µM unlabelled lignocaine was added to the perfusate is consistent with saturation of the hydroxylation pathways of lignocaine metabolism.
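As a minimal illustration of how availability and transit-time statistics are obtained from such outflow profiles, the Python sketch below integrates a synthetic outflow curve numerically using the standard moment relations F = Q·AUC/dose and MTT = ∫t·C dt / ∫C dt. The curve, dose and flow rate are assumed values rather than the study's data, and this is not the two-compartment dispersion model used in the paper.

```python
# Minimal numerical sketch: hepatic availability and mean transit time from a
# bolus outflow curve by trapezoidal integration (all numbers are illustrative).
import numpy as np

def trapz(y, x):
    """Trapezoidal rule, kept local so the sketch has no version-specific dependencies."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

t = np.linspace(0.0, 300.0, 601)                    # time after bolus injection, s
c_out = 9.0e-3 * (t / 40.0) * np.exp(-t / 40.0)     # outflow concentration, dpm/mL (toy curve)
dose = 1.0                                          # injected activity, dpm
flow = 20.0 / 60.0                                  # 20 mL/min perfusate flow, in mL/s

auc = trapz(c_out, t)                               # area under the outflow curve
availability = flow * auc / dose                    # fraction of the dose escaping the liver
mtt = trapz(t * c_out, t) / auc                     # mean transit time of recovered solute

print(f"hepatic availability F = {availability:.2f}, mean transit time = {mtt:.0f} s")
```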
Abstract:
Low participation at the employee or worksite level limits the potential public health impact of worksite-based interventions. Ecological models suggest that multiple levels of influence operate to determine participation patterns in worksite health promotion programs. Most investigations into the determinants of low participation study the intrapersonal, interpersonal, and institutional influences on employee participation. Community- and policy-level influences have not received attention, nor has consideration been given to worksite-level participation issues. The purpose of this article is to discuss one macrosocial theoretical perspective—political economy of health—that may guide practitioners and researchers interested in addressing the community- and policy-level determinants of participation in worksite health promotion programs. The authors argue that using theory to investigate the full spectrum of determinants offers a more complete range of intervention and research options for maximizing employee and worksite levels of participation.
Abstract:
The purpose of this study was to determine the pharmacokinetics of [C-14]diclofenac, [C-14]salicylate and [H-3]clonidine using a single-pass rat head perfusion preparation. The head was perfused with 3-[N-morpholino]propanesulfonic acid-buffered Ringer's solution. Tc-99m-labelled red blood cells and a drug were injected as a bolus into the internal carotid artery and collected from the posterior facial vein over 28 min. A two-barrier stochastic organ model was used to estimate the statistical moments of the solutes. Plasma, interstitial and cellular distribution volumes for the solutes ranged from 1.0 mL (diclofenac) to 1.6 mL (salicylate), 2.0 mL (diclofenac) to 4.2 mL (water) and 3.9 mL (salicylate) to 20.9 mL (diclofenac), respectively. A comparison of these volumes to water indicated some exclusion of the drugs from the interstitial space and of salicylate from the cellular space. Permeability-surface area (PS) products, calculated from plasma to interstitial fluid permeation clearances (CLPI) (range 0.02-0.40 mL s⁻¹) and fractions of solute unbound in the perfusate, were in the order diclofenac > salicylate > clonidine > sucrose (from 41.8 to 0.10 mL s⁻¹). The slow efflux of diclofenac, compared with clonidine and salicylate, may be related to its low average unbound fraction in the cells. This work accounts for the tail of disposition curves in describing pharmacokinetics in the head.
Abstract:
A multiple-partners assignment game with heterogeneous sales and multiunit demands consists of a set of sellers that own a given number of indivisible units of (potentially many different) goods and a set of buyers who value those units and want to buy at most an exogenously fixed number of units. We define a competitive equilibrium for this generalized assignment game and prove its existence by using only linear programming. In particular, we show how to compute equilibrium price vectors from the solutions of the dual linear program associated with the primal linear program defined to find optimal assignments. Using only linear programming tools, we also show (i) that the set of competitive equilibria (pairs of price vectors and assignments) has a Cartesian product structure: each equilibrium price vector is part of a competitive equilibrium with all optimal assignments, and vice versa; (ii) that the set of (restricted) equilibrium price vectors has a natural lattice structure; and (iii) how this structure is translated into the set of agents' utilities that are attainable at equilibrium.
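A small numerical sketch of the duality idea described above, using toy valuations and SciPy's HiGHS solver (an implementation choice, not the paper's): the primal LP computes an optimal assignment under the sellers' supply and the buyers' quota constraints, and the dual variables (constraint marginals) of the supply constraints give one competitive equilibrium price vector, not necessarily the buyer-optimal one.

```python
# Primal assignment LP with supply and quota constraints; equilibrium prices are
# read off the dual values of the supply constraints.  All numbers are assumptions.
import numpy as np
from scipy.optimize import linprog

values = np.array([[8.0, 6.0, 2.0],         # values[b, s]: buyer b's value per unit of seller s's good
                   [7.0, 7.0, 3.0],
                   [6.0, 5.0, 4.0]])
supply = np.array([1.0, 1.0, 1.0])          # units owned by each seller
quota = np.array([1.0, 1.0, 1.0])           # maximum units each buyer may purchase
n_buyers, n_sellers = values.shape

# Maximise total value == minimise -value, over x[b, s] >= 0 flattened row-major
# (index b * n_sellers + s).
c = -values.ravel()
A_supply = np.tile(np.eye(n_sellers), (1, n_buyers))           # sum_b x[b, s] <= supply[s]
A_quota = np.kron(np.eye(n_buyers), np.ones((1, n_sellers)))   # sum_s x[b, s] <= quota[b]
res = linprog(c, A_ub=np.vstack([A_supply, A_quota]),
              b_ub=np.concatenate([supply, quota]),
              bounds=(0, None), method="highs")

assignment = res.x.reshape(n_buyers, n_sellers)
prices = -res.ineqlin.marginals[:n_sellers]                    # duals of the supply constraints

print("optimal assignment:\n", assignment.round(2))
print("one competitive equilibrium price vector:", prices.round(2))
```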
Abstract:
A comparison of the local effects of the basis set superposition error (BSSE) on the electron densities and energy components of three representative H-bonded complexes was carried out. The electron densities were obtained with Hartree-Fock and density functional theory versions of the chemical Hamiltonian approach (CHA) methodology. It was shown that the effects of the BSSE were common to all complexes studied. The electron density difference maps and the chemical energy component analysis (CECA) confirmed that the local effects of the BSSE were different when diffuse functions were present in the calculations.
Abstract:
A conceptually new approach is introduced for the decomposition of the molecular energy calculated at the density functional theory level into a sum of one- and two-atomic energy components, and is realized in the "fuzzy atoms" framework. (Fuzzy atoms mean that the three-dimensional physical space is divided into atomic regions having no sharp boundaries but exhibiting a continuous transition from one to another.) The new scheme uses the new concept of "bond order density" to calculate the diatomic exchange energy components and gives values unexpectedly close to those calculated with the exact (Hartree-Fock) exchange for the same Kohn-Sham orbitals.
Abstract:
This article introduces a new interface for T-Coffee, a consistency-based multiple sequence alignment program. This interface provides easy and intuitive access to the most popular functionality of the package. This includes the default T-Coffee mode for protein and nucleic acid sequences, the M-Coffee mode that allows combining the output of any other aligners, and template-based modes of T-Coffee that deliver high-accuracy alignments while using structural or homology-derived templates. The three available template modes are Expresso for the alignment of proteins with a known 3D structure, R-Coffee to align RNA sequences with conserved secondary structures, and PSI-Coffee to accurately align distantly related sequences using homology extension. The new server benefits from recent improvements of the T-Coffee algorithm, can align up to 150 sequences as long as 10,000 residues, and is available from both http://www.tcoffee.org and its main mirror http://tcoffee.crg.cat.
Abstract:
Alfred Schütz's original contribution to the social sciences is his analysis of the structure of the "life-world". This article aims to invigorate interest in the work of this author, who is little known in the field of health psychology. Key concepts of Schütz's approach will be presented in relation to their potential interest for understanding the experience of illness. In particular, we develop the main characteristics of everyday life and its cognitive style, that is, its finite province of meaning. We propose to adopt this notion to define the experience of chronic or serious illness when the individual is confronted with the medical world. By articulating this analysis with the literature in health psychology, we argue that Schütz's perspective brings useful insight to the field, namely because of its ability to study meaning constructions while overcoming the trap of solipsism by embracing intersubjectivity. The article concludes by outlining both the limitations and the research perspectives brought by this phenomenological analysis of the experiences of health and illness.