28 results for T helper subsets
Abstract:
Study carried out during a research stay at the Institut de Génétique Moléculaire de Montpellier, France, between 2010 and 2012. This project evaluated the advantages of canine adenovirus type 2 (CAV2) vectors for gene transfer to the central nervous system (CNS) in a non-human primate model and in a canine model of Sly syndrome (mucopolysaccharidosis type VII, MPS VII), a monogenic disease that involves neurodegeneration. The first part of the project assessed the biodistribution, efficacy, and duration of transgene expression in a non-human primate model (Microcebus murinus). A first-generation CAV2 expressing green fluorescent protein (CAVGFP) was used as the vector. The results reported here demonstrate that in non-human primates, as in other species previously tested by EJ Kremer's team, intracerebral injection of CAV2 results in extensive transduction of the CNS, with neurons and neuronal precursors being the preferentially transduced cells. Using intracellular vesicles, the canine vectors are transported mostly from the synapses to the neuronal soma; this intracellular transport enables extensive transduction of the CNS from a single intracerebral injection of the viral vectors. The second part of the project evaluated the therapeutic use of CAV2. A helper-dependent vector expressing the β-glucuronidase gene and the green fluorescent protein gene (HD-RIGIE) was injected into the CNS of the canine model of Sly syndrome (MPS VII). Its biodistribution and therapeutic efficacy were evaluated. Enzyme activity levels in affected animals injected with the therapeutic vector reached values similar to those of unaffected animals.
In addition, a reduction in the amount of GAGs accumulated in the cells of affected animals treated with the therapeutic vector was observed, demonstrating the therapeutic potential of CAV2 for diseases that affect the CNS. The results reported in this work allow us to conclude that CAV2 vectors are good therapeutic tools for the treatment of diseases affecting the CNS.
Abstract:
We present a polyhedral framework for establishing general structural properties of optimal solutions of stochastic scheduling problems in which multiple job classes vie for service resources: the existence of an optimal priority policy in a given family, characterized by a greedoid (whose feasible class subsets may receive higher priority), where optimal priorities are determined by class-ranking indices, under restricted linear performance objectives (partial indexability). This framework extends that of Bertsimas and Niño-Mora (1996), which explained the optimality of priority-index policies under all linear objectives (general indexability). We show that, if performance measures satisfy partial conservation laws (with respect to the greedoid), which extend previous generalized conservation laws, then the problem admits a strong LP relaxation over a so-called extended greedoid polytope, which has strong structural and algorithmic properties. We present an adaptive-greedy algorithm (which extends Klimov's) taking as input the linear objective coefficients, which (1) determines whether the optimal LP solution is achievable by a policy in the given family; and (2) if so, computes a set of class-ranking indices that characterize optimal priority policies in the family. In the special case of project scheduling, we show that, under additional conditions, the optimal indices can be computed separately for each project (index decomposition). We further apply the framework to the important restless bandit model (two-action Markov decision chains), obtaining new index policies that extend Whittle's (1988), together with simple sufficient conditions for their validity. These results highlight the power of polyhedral methods (the so-called achievable region approach) in dynamic and stochastic optimization.
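The class-ranking indices described in the abstract generalize classical priority rules. As an illustrative special case (not the paper's general algorithm), the cμ rule, which Klimov's adaptive-greedy algorithm itself generalizes, ranks job classes by holding cost times service rate; the cost and rate values below are hypothetical.

```python
# Illustrative special case (not the abstract's general framework): the
# c-mu rule. Each job class k gets index c_k * mu_k (holding cost times
# service rate); classes are then served in decreasing index order.

def cmu_priorities(costs, rates):
    """Return (class order, index values) for the c-mu priority rule."""
    indices = {k: c * m for k, (c, m) in enumerate(zip(costs, rates))}
    order = sorted(indices, key=indices.get, reverse=True)
    return order, indices

# Three hypothetical job classes:
order, idx = cmu_priorities(costs=[3.0, 1.0, 2.0], rates=[1.0, 4.0, 3.0])
print(order)  # → [2, 1, 0]: indices 6.0, 4.0, 3.0
```

The adaptive-greedy algorithm of the abstract computes richer indices from the LP data, but the output has the same shape: a ranking of classes that defines a priority policy.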
Abstract:
The network choice revenue management problem models customers as choosing from an offer-set, and the firm decides the best subset to offer at any given moment to maximize expected revenue. The resulting dynamic program for the firm is intractable and is approximated by a deterministic linear program called the CDLP, which has an exponential number of columns. However, under the choice-set paradigm, when the segment consideration sets overlap, the CDLP is difficult to solve. Column generation has been proposed, but finding an entering column has been shown to be NP-hard. In this paper, starting with a concave program formulation based on segment-level consideration sets called SDCP, we add a class of constraints called product constraints, which project onto subsets of intersections. In addition, we propose a natural direct tightening of the SDCP called ?SDCP, and compare the performance of both methods on the benchmark data sets in the literature. Both the product constraints and the ?SDCP method are very simple and easy to implement and are applicable to the case of overlapping segment consideration sets. In our computational testing on the benchmark data sets in the literature, SDCP with product constraints achieves the CDLP value at a fraction of the CPU time taken by column generation, and we believe it is a very promising approach for quickly approximating the CDLP when segment consideration sets overlap and the consideration sets themselves are relatively small.
Abstract:
This paper presents a general equilibrium model of money demand where the velocity of money changes in response to endogenous fluctuations in the interest rate. The parameter space can be divided into two subsets: one where velocity is constant and equal to one, as in cash-in-advance models, and another where velocity fluctuates, as in Baumol (1952). Despite its simplicity in terms of parameters to calibrate, the model performs surprisingly well. In particular, it approximates the variability of money velocity observed in the U.S. for the post-war period. The model is then used to analyze the welfare costs of inflation under uncertainty. This application calculates the errors derived from computing the costs of inflation with deterministic models. It turns out that the size of this difference is small, at least for the levels of uncertainty estimated for the U.S. economy.
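As a point of reference for the fluctuating-velocity regime, Baumol's (1952) inventory model implies an optimal cash withdrawal C* = sqrt(2bY/i) and hence a velocity V = Y/(C*/2) that rises with the interest rate. A minimal sketch, with hypothetical income, interest-rate, and transaction-cost values:

```python
from math import sqrt

# Sketch of the Baumol (1952) square-root rule referenced in the abstract;
# the parameter values below are hypothetical.

def baumol_velocity(income, rate, cost):
    """Velocity of money V = Y / M under Baumol's square-root rule."""
    withdrawal = sqrt(2 * cost * income / rate)  # optimal cash withdrawal C*
    avg_money = withdrawal / 2                   # average money holdings M = C*/2
    return income / avg_money

# Velocity rises with the nominal interest rate:
print(baumol_velocity(income=100.0, rate=0.02, cost=1.0))  # → 2.0
print(baumol_velocity(income=100.0, rate=0.08, cost=1.0))  # → 4.0
```

This interest-elasticity of velocity is what distinguishes the Baumol regime from the constant-velocity cash-in-advance regime of the model's other parameter subset.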
Abstract:
The reelin gene encodes an extracellular protein that is crucial for neuronal migration in laminated brain regions. To gain insights into the functions of Reelin, we performed high-resolution in situ hybridization analyses to determine the pattern of reelin expression in the developing forebrain of the mouse. We also performed double-labeling studies with several markers, including calcium-binding proteins, GAD65/67, and neuropeptides, to characterize the neuronal subsets that express reelin transcripts. reelin expression was detected at embryonic day 10 and later in the forebrain, with a distribution that is consistent with the prosomeric model of forebrain regionalization. In the diencephalon, expression was restricted to transverse and longitudinal domains that delineated boundaries between neuromeres. During embryogenesis, reelin was detected in the cerebral cortex in Cajal-Retzius cells but not in the GABAergic neurons of layer I. At prenatal stages, reelin was also expressed in the olfactory bulb and striatum, and in restricted nuclei in the ventral telencephalon, hypothalamus, thalamus, and pretectum. At postnatal stages, reelin transcripts gradually disappeared from Cajal-Retzius cells, at the same time as they appeared in subsets of GABAergic neurons distributed throughout neocortical and hippocampal layers. In other telencephalic and diencephalic regions, reelin expression decreased steadily during the postnatal period. In the adult, there was prominent expression in the olfactory bulb and cerebral cortex, where it was restricted to subsets of GABAergic interneurons that co-expressed calbindin, calretinin, neuropeptide Y, and somatostatin. This complex pattern of cellular and regional expression is consistent with Reelin having multiple roles in brain development and adult brain function.
Abstract:
Helping behavior is any intentional behavior that benefits another living being or group (Hogg & Vaughan, 2010). People tend to underestimate the probability that others will comply with their direct requests for help (Flynn & Lake, 2008). This implies that when they need help, they will assess the probability of getting it (De Paulo, 1982, cited in Flynn & Lake, 2008), and since their estimate tends to be lower than the real chance, they may not even consider it worth asking. Existing explanations for this phenomenon attribute it to a mistaken cost computation by the help seeker, who emphasizes the instrumental cost of saying "yes" while ignoring that the potential helper also has to take into account the social cost of saying "no". And the truth is that, especially in face-to-face interactions, the discomfort caused by refusing to help can be very high. In short, help seekers tend to fail to realize that it might be more costly to refuse a help request than to comply with it. A similar effect has been observed when estimating the trustworthiness of people. Fetchenhauer and Dunning (2010) showed that people also tend to underestimate it. This bias is reduced when, instead of asymmetric feedback (feedback received only when one decides to trust the other person), symmetric feedback (always given) is provided. The same account could apply to help seeking, as people only receive feedback when they actually make their request but not otherwise. Fazio, Shook, and Eiser (2004) studied something that could be reinforcing these outcomes: learning asymmetries. By means of a computer game called BeanFest, they showed that people learn better about negatively valenced objects (beans, in this case) than about positively valenced ones. This learning asymmetry stemmed from "information gain being contingent on approach behavior" (p. 293), which can be identified with what Fetchenhauer and Dunning call 'asymmetric feedback', and hence also with help requests. Fazio et al. also found a generalization asymmetry in favor of negative attitudes over positive ones. They attributed it to a negativity bias that "weights resemblance to a known negative more heavily than resemblance to a positive" (p. 300). Applied to help-seeking scenarios, this means that when facing an unknown situation, people will tend to generalize and infer that a negative outcome is more likely than a positive one; so, in line with the above, they will be more inclined to think that they will get a "no" when requesting help. Denrell and Le Mens (2011) present a different perspective for explaining judgment biases in general. They deviate from the classical account of inappropriate information processing (described, among others, by Fiske & Taylor, 2007, and Tversky & Kahneman, 1974) and explain such biases in terms of 'adaptive sampling'. Adaptive sampling is a sampling mechanism in which the selection of sample items is conditioned by the previously observed values of the variable of interest (Thompson, 2011). Sampling adaptively allows individuals to safeguard themselves from experiences they went through once that turned out to yield negative outcomes. However, it also prevents them from giving those experiences a second chance to produce an updated outcome that could turn out to be positive, more positive, or simply one that regresses to the mean, whatever direction that implies. That, as Denrell and Le Mens (2011) explained, makes sense: if you go to a restaurant and you do not like the food, you do not choose that restaurant again. This is what we think could be happening when asking for help: when we get a "no", we stop asking.
Here, we provide a complementary explanation for the underestimation of the probability that others comply with our direct help requests, based on adaptive sampling. First, we develop and explain a model that represents the theory. We then test it empirically by means of experiments and elaborate on the analysis of the results.
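The adaptive-sampling mechanism described above can be made concrete with a minimal simulation (all parameter values hypothetical): each agent keeps asking for help until the first refusal, so refusals cut sampling short and the agent's final impression of the compliance rate is biased downward, even though every single answer is drawn from the true probability.

```python
import random

# Minimal sketch of the adaptive-sampling account: a "no" stops further
# asking, so early refusals are over-weighted in the final impression.

def mean_impression(p_yes, n_agents, max_asks, seed=0):
    rng = random.Random(seed)
    impressions = []
    for _ in range(n_agents):
        yes = total = 0
        for _ in range(max_asks):
            total += 1
            if rng.random() < p_yes:
                yes += 1
            else:
                break  # adaptive sampling: stop asking after a refusal
        impressions.append(yes / total)
    return sum(impressions) / n_agents

estimate = mean_impression(p_yes=0.8, n_agents=100_000, max_asks=10)
print(estimate)  # noticeably below the true compliance rate of 0.8
```

Removing the `break` (i.e., symmetric feedback, as in Fetchenhauer and Dunning) makes the average impression converge to the true probability, which is exactly the contrast the proposed explanation relies on.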
Abstract:
Forecasting coal resources and reserves is critical for coal mine development. Thickness maps are commonly used for assessing coal resources and reserves; however, they are limited in their ability to capture coal splitting effects in thick and heterogeneous coal zones. As an alternative, three-dimensional geostatistical methods are used to populate facies distribution within a densely drilled heterogeneous coal zone in the As Pontes Basin (NW Spain). Coal distribution in this zone is mainly characterized by coal-dominated areas in the central parts of the basin interfingering with terrigenous-dominated alluvial fan zones at the margins. The three-dimensional models obtained are applied to forecast coal resources and reserves. Predictions using subsets of the entire dataset are also generated to understand the performance of the methods under limited data constraints. Three-dimensional facies interpolation methods tend to overestimate coal resources and reserves due to interpolation smoothing. Facies simulation methods yield resource predictions similar to those of conventional thickness-map approximations. Reserves predicted by facies simulation methods are mainly influenced by: a) the specific coal-proportion threshold used to determine whether a block can be recovered, and b) the capability of the modelling strategy to reproduce areal trends in coal proportions and splitting between coal-dominated and terrigenous-dominated areas of the basin. Reserves predictions differ between the simulation methods, even with dense conditioning datasets. The simulation methods can be ranked according to the correlation of their outputs with predictions from the directly interpolated coal proportion maps: a) with low-density datasets, sequential indicator simulation with trends yields the best correlation; b) with high-density datasets, sequential indicator simulation with post-processing yields the best correlation, because the areal trends are provided implicitly by the dense conditioning data.
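The interpolation-smoothing effect invoked above can be illustrated with a toy example (synthetic data, hypothetical cut-off, not the study's geostatistical workflow): averaging neighbouring samples shrinks variance, so a recoverability cut-off below the local mean classifies too many smoothed blocks as recoverable, mimicking how interpolated models can overestimate reserves.

```python
import random

# Toy illustration of interpolation smoothing on synthetic data: a moving
# average shrinks variance, so thresholding the smoothed field at a cut-off
# below the mean inflates the count of "recoverable" blocks.

random.seed(1)
coal = [random.random() for _ in range(10_000)]                  # "true" block coal proportions
smooth = [sum(coal[i:i + 5]) / 5 for i in range(len(coal) - 4)]  # interpolation-like smoothing

threshold = 0.3  # hypothetical recoverability cut-off (below the 0.5 mean)
frac_true = sum(v > threshold for v in coal) / len(coal)
frac_smooth = sum(v > threshold for v in smooth) / len(smooth)
print(frac_true, frac_smooth)  # the smoothed field exceeds the cut-off more often
```

Simulation methods avoid this bias by reproducing the variance of the data in each realization rather than averaging it away, which is why their reserve predictions behave differently from those of interpolation.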
Abstract:
Research on T cell immunosuppression therapies has attracted most of the attention in clinical transplantation. However, B cells and humoral immune responses are increasingly acknowledged as crucial mediators of chronic allograft rejection. Indeed, humoral immune responses can lead to renal allograft rejection even in patients whose cell-mediated immune responses are well controlled. On the other hand, newly studied B cell subsets with regulatory effects have been linked to tolerance achievement in transplantation. A better understanding of the regulatory and effector B cell responses may therefore lead to new therapeutic approaches. Mesenchymal stem cells (MSC) are arising as a potent therapeutic tool in transplantation due to their regenerative and immunomodulatory properties. Research on MSCs has mainly focused on their effects on T cells, and although data regarding the modulatory effects of MSCs on the alloantigen-specific humoral response in humans are scarce, it has been demonstrated that MSCs significantly affect B cell functioning. In the present review we analyze and discuss the results in this field.
Abstract:
Horizontal acquisition of DNA by bacteria dramatically increases genetic diversity and hence successful bacterial colonization of several niches, including the human host. A relevant issue is how this newly acquired DNA interacts with and integrates into the regulatory networks of the bacterial cell. The global modulator H-NS targets both core genome and HGT genes and silences gene expression in response to external stimuli such as osmolarity and temperature. Here we provide evidence that H-NS discriminates and differentially modulates core and HGT DNA. As an example of this, the plasmid R27-encoded H-NS protein has evolved to selectively silence HGT genes and does not interfere with core genome regulation. In turn, differential regulation of both gene lineages by resident chromosomal H-NS requires a helper protein: the Hha protein. Tight silencing of HGT DNA is accomplished by H-NS-Hha complexes. In contrast, core genes are modulated by H-NS homoligomers. Remarkably, the presence of Hha-like proteins is restricted to the Enterobacteriaceae. In addition, conjugative plasmids encoding H-NS variants have hitherto been isolated only from members of the family. Thus, the H-NS system in enteric bacteria presents unique evolutionary features. The capacity to selectively discriminate between core and HGT DNA may help to maintain horizontally transmitted DNA in silent form and may give these bacteria a competitive advantage in adapting to new environments, including host colonization.
Abstract:
The proposed transdisciplinary field of ‘complexics’ would bring together all contemporary efforts in any specific disciplines or by any researchers specifically devoted to constructing tools, procedures, models and concepts intended for transversal application that are aimed at understanding and explaining the most interwoven and dynamic phenomena of reality. Our aim needs to be, as Morin says, not “to reduce complexity to simplicity, [but] to translate complexity into theory”. New tools for the conception, apprehension and treatment of the data of experience will need to be devised to complement existing ones and to enable us to make headway toward practices that better fit complexic theories. New mathematical and computational contributions have already continued to grow in number, thanks primarily to scholars in statistical physics and computer science, who are now taking an interest in social and economic phenomena. Certainly, these methodological innovations put into question and again make us take note of the excessive separation between the training received by researchers in the ‘sciences’ and in the ‘arts’. Closer collaboration between these two subsets would, in all likelihood, be much more energising and creative than their current mutual distance. Human complexics must be seen as multi-methodological, insofar as it necessarily combines quantitative-computational methodologies and more qualitative methodologies aimed at understanding the mental and emotional world of people. In the final analysis, however, models always have a narrative running behind them that reflects the attempts of a human being to understand the world, and models are always interpreted on that basis.