984 results for "Relaxation oscillators"

Relevance: 10.00%

Abstract:

Despite advances in understanding the basic organizational principles of the human basal ganglia, accurate in vivo assessment of their anatomical properties is essential to improve early diagnosis in disorders with corticosubcortical pathology and to optimize target planning in deep brain stimulation. The main goal of this study was the detailed topological characterization of the limbic, associative, and motor subdivisions of the subthalamic nucleus (STN) in relation to the corresponding corticosubcortical circuits. To this aim, we used magnetic resonance imaging and independently investigated anatomical connectivity via white matter tracts alongside brain tissue properties. On the basis of probabilistic diffusion tractography, we identified STN subregions with predominantly motor, associative, and limbic connectivity. We then computed, for each of the nonoverlapping STN subregions, the covariance between local brain tissue properties and the rest of the brain using high-resolution maps of magnetization transfer (MT) saturation and of the longitudinal (R1) and transverse (R2*) relaxation rates. The demonstrated spatial distribution pattern of covariance between brain tissue properties linked to myelin (R1 and MT) and iron (R2*) content clearly segregates the motor and limbic basal ganglia circuits. We interpret this covariance pattern as evidence for shared tissue properties within a functional circuit, closely linked to its function. Our findings open new possibilities for investigating changes in the established covariance pattern, aiming at accurate diagnosis of basal ganglia disorders and prediction of treatment outcome.
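As a toy illustration of the covariance analysis described above, the sketch below computes, across subjects, the covariance between the mean tissue value (e.g. MT saturation) of one STN subregion and each voxel of a target region. All names and numbers are made up for illustration; they are not taken from the study.

```python
# Toy covariance mapping: for each target voxel, covariance across subjects
# between its tissue value and the mean value of an STN subregion.
def covariance_map(subregion_means, voxel_values):
    """subregion_means: per-subject mean tissue values for one STN subregion.
    voxel_values: one list per voxel, each with one value per subject."""
    n = len(subregion_means)
    mu_r = sum(subregion_means) / n
    cov = []
    for vox in voxel_values:
        mu_v = sum(vox) / n
        c = sum((r - mu_r) * (v - mu_v)
                for r, v in zip(subregion_means, vox)) / (n - 1)
        cov.append(c)
    return cov

# Example: 4 subjects, 2 target voxels (second voxel is constant -> zero cov)
region = [1.0, 1.2, 0.9, 1.1]
voxels = [[2.0, 2.4, 1.8, 2.2], [5.0, 5.0, 5.0, 5.0]]
print(covariance_map(region, voxels))
```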


Background: The ultimate goal of synthetic biology is the conception and construction of genetic circuits that are reliable with respect to their designed function (e.g. oscillators, switches). This task has yet to be attained, owing to the inherent synergy of the biological building blocks and to insufficient feedback between experiments and mathematical models. Nevertheless, progress in these directions has been substantial. Results: It has been emphasized in the literature that the architecture of a genetic oscillator must include positive (activating) and negative (inhibiting) genetic interactions in order to yield robust oscillations. Our results point out that the oscillatory capacity is affected not only by the interaction polarity but also by how it is implemented at the promoter level. For a chosen oscillator architecture, we show by means of numerical simulations that the existence or lack of competition between activator and inhibitor at the promoter level affects the probability of producing oscillations and also leaves characteristic fingerprints on the associated period/amplitude features. Conclusions: In comparison with non-competitive binding at promoters, competition drastically reduces the region of parameter space characterized by oscillatory solutions. Moreover, while competition leads to pulse-like oscillations with long-tailed distributions in period and amplitude for various parameters or noisy conditions, the non-competitive scenario shows a characteristic frequency and confined amplitude values. Our study also situates the competition mechanism in the context of existing genetic oscillators, with emphasis on the Atkinson oscillator.
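The distinction between competitive and non-competitive binding can be sketched with standard quasi-equilibrium regulatory input functions. The toy model below (forward Euler, made-up parameters) only illustrates the two promoter-level implementations; it is not the circuit analysed in the paper and makes no claim about which regime oscillates.

```python
# Quasi-equilibrium promoter activity for an activator A and an inhibitor I.
# Competitive: A and I exclude each other at the promoter (shared site).
# Non-competitive: A and I bind independent sites.
def rate_competitive(A, I, a=1.0, Ka=1.0, Ki=1.0, n=2, m=2):
    x, y = (A / Ka) ** n, (I / Ki) ** m
    return a * x / (1 + x + y)

def rate_noncompetitive(A, I, a=1.0, Ka=1.0, Ki=1.0, n=2, m=2):
    x, y = (A / Ka) ** n, (I / Ki) ** m
    return a * (x / (1 + x)) * (1 / (1 + y))

# Toy activator-inhibitor loop (forward Euler): A drives itself and I;
# I acts on A's promoter via one of the two rate functions above.
def simulate(rate, steps=2000, dt=0.01, dA=1.0, dI=0.5):
    A, I, traj = 0.5, 0.1, []
    for _ in range(steps):
        A += dt * (rate(A, I) - dA * A)
        I += dt * (0.5 * (A ** 2 / (1 + A ** 2)) - dI * I)
        traj.append(A)
    return traj

traj_c = simulate(rate_competitive)
traj_n = simulate(rate_noncompetitive)
```

Note that at I = 0 both forms reduce to the same Hill activation, so any dynamical difference is attributable purely to how the inhibitor enters the promoter function.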


Introduction: Smoking is the most important risk factor in 7 of the 14 leading causes of death among people over 65. Numerous studies have demonstrated the health benefits of quitting smoking even at an advanced age. Despite this, few preventive actions are undertaken in this population. The aim of this work is to analyse the characteristics of smoking and smoking cessation specifically among elderly female smokers, in order to better support their desire to quit. Method: We assessed smoking characteristics within a prospective study of 7,609 women living in Switzerland, aged over 70 and physically independent (the Semof study, on the measurement of osteoporosis by bone ultrasound). A questionnaire on smoking habits was sent to the 486 eligible smokers in the cohort. Their levels of nicotine dependence and motivation were assessed using the Heavy Smoking Index and the Prochaska stages, respectively. Participants who stopped smoking during follow-up were asked about the motivations, reasons, and methods of their cessation. Results: 424 women returned the questionnaire (response rate 87%), of whom 372 answered completely and could be included. The mean age was 74.5 years. Mean consumption was 12 cigarettes per day over an average of 51 years, with a preference for so-called "light" cigarettes. Slightly more than half of the participants smoked between 1 and 10 cigarettes per day, and the large majority (78%) had a low dependence score. The most frequently cited reasons for smoking were relaxation, pleasure, and habit. The main barriers mentioned were the beliefs that quitting at an advanced age brings no benefit, that smoking little or smoking "light" cigarettes has no impact on health, and that smoking does not increase the risk of osteoporosis. The desire to quit was positively associated with a late start of smoking, a rather modest education, and the view that quitting is difficult. During the 3-year follow-up, 57 of the 372 women (15%) quit smoking successfully. Being an occasional smoker (less than 1 cigarette per day) and considering that quitting is not difficult were associated with a higher cessation rate. Only 11% of the women who quit reported having received advice from their physician. Conclusion: These data illustrate the specific smoking behaviour of elderly female smokers (rather low consumption and dependence) and suggest that medical interventions to support smoking cessation should take these characteristics into account. The will to quit is associated with a rather modest level of education. The most frequently mentioned barriers rest on erroneous beliefs about the impact of smoking on health.


High Resolution Magic Angle Spinning (HR-MAS) NMR allows metabolic characterization of biopsies. HR-MAS spectra from the tissues of most organs show strong lipid contributions that overlap metabolite regions and hamper metabolite estimation. Metabolite quantification and analysis would therefore benefit from a separation of lipids and small metabolites. Generally, a relaxation filter is used to reduce lipid contributions; however, the strong relaxation filter required to eliminate most of the lipids also reduces the signals of small metabolites. The aim of our study was therefore to investigate different diffusion editing techniques in order to exploit diffusion differences for separating lipid and small-metabolite contributions in spectra from different organs for unbiased metabonomic analysis. Thus, 1D and 2D diffusion measurements were performed, and pure lipid spectra obtained at strong diffusion weighting (DW) were subtracted from those obtained at low DW, which include both small metabolites and lipids. This subtraction yielded almost lipid-free small-metabolite spectra from muscle tissue. Further improved separation was obtained by combining a 1D diffusion sequence with a T2 filter, with the subtraction method eliminating residual lipids from the spectra. Similar results obtained for biopsies of different organs suggest that this method is applicable to various tissue types. The elimination of lipids from HR-MAS spectra, and the resulting less biased assessment of small metabolites, has the potential to remove ambiguities in the interpretation of metabonomic results. This is demonstrated in a reproducibility study on biopsies from human muscle.
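The subtraction step can be sketched as follows, assuming two 1D spectra sampled on the same frequency grid. The scaling factor, which compensates for lipid attenuation between the two diffusion weightings, is hypothetical here; in practice it would be estimated from a lipid-only spectral region.

```python
# Diffusion-edited lipid suppression by spectral subtraction (illustrative):
# spectrum_low : acquired at low diffusion weighting (metabolites + lipids)
# spectrum_high: acquired at strong diffusion weighting (essentially lipids,
#                small metabolites suppressed by their faster diffusion)
def subtract_lipids(spectrum_low, spectrum_high, scale):
    return [lo - scale * hi for lo, hi in zip(spectrum_low, spectrum_high)]

# Estimate the scale from one point known to contain only lipid signal.
def lipid_scale(spectrum_low, spectrum_high, lipid_idx):
    return spectrum_low[lipid_idx] / spectrum_high[lipid_idx]

# Example: index 0 is a pure lipid peak, index 2 a pure metabolite peak.
low, high = [4.0, 3.0, 3.0], [2.0, 1.0, 0.0]
print(subtract_lipids(low, high, lipid_scale(low, high, 0)))
```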


For a wide range of environmental, hydrological, and engineering applications, there is a fast-growing need for high-resolution imaging. In this context, waveform tomographic imaging of crosshole georadar data is a powerful method able to provide images of pertinent electrical properties in near-surface environments with unprecedented spatial resolution. In contrast, conventional ray-based tomographic methods, which consider only a very limited part of the recorded signal (first-arrival traveltimes and maximum first-cycle amplitudes), suffer from inherent limitations in resolution and may prove inadequate in complex environments. For a typical crosshole georadar survey, the potential improvement in resolution when using waveform-based instead of ray-based approaches is on the order of one order of magnitude. Moreover, the spatial resolution of waveform-based inversions is comparable to that of common logging methods. While waveform tomographic imaging has become well established in exploration seismology over the past two decades, it remains comparatively underdeveloped in the georadar domain despite corresponding needs. Recently, different groups have presented finite-difference time-domain waveform inversion schemes for crosshole georadar data, which are adaptations and extensions of Tarantola's seminal nonlinear generalized least-squares approach developed for the seismic case. First applications of these new crosshole georadar waveform inversion schemes to synthetic and field data have shown promising results. However, little is known about the limits and performance of such schemes in complex environments.
The general motivation of my thesis is therefore the evaluation of the robustness and limitations of waveform inversion algorithms for crosshole georadar data, in order to apply such schemes to a wide range of real-world problems. One crucial issue in making any waveform scheme applicable and effective for real-world crosshole georadar problems is the accurate estimation of the source wavelet, which is unknown in reality. Waveform inversion schemes for crosshole georadar data require forward simulations of the wavefield in order to iteratively solve the inverse problem. Therefore, accurate knowledge of the source wavelet is critically important for successful application of such schemes: relatively small differences in the estimated source wavelet shape can lead to large differences in the resulting tomograms. In the first part of my thesis, I explore the viability and robustness of a relatively simple iterative deconvolution technique that incorporates the estimation of the source wavelet into the waveform inversion procedure, rather than adding additional model parameters to the inversion problem. Extensive tests indicate that this source wavelet estimation technique is simple yet effective, and is able to provide remarkably accurate and robust estimates of the source wavelet in the presence of strong heterogeneity in both the dielectric permittivity and the electrical conductivity, as well as significant ambient noise in the recorded data. Furthermore, our tests also indicate that the approach is insensitive to the phase characteristics of the starting wavelet, which is not the case when directly incorporating the wavelet estimation into the inverse problem. Another critical issue with crosshole georadar waveform inversion schemes that clearly needs to be investigated is the consequence of the common assumption of frequency-independent electromagnetic constitutive parameters.
This is crucial since, in reality, these parameters are known to be frequency dependent and complex, and recorded georadar data may thus show significant dispersive behaviour. In particular, in the presence of water, there is a wide body of evidence showing that the dielectric permittivity can be significantly frequency dependent over the GPR frequency range, due to a variety of relaxation processes. The second part of my thesis is therefore dedicated to evaluating the reconstruction limits of a non-dispersive crosshole georadar waveform inversion scheme in the presence of varying degrees of dielectric dispersion. I show that the inversion algorithm, combined with the iterative deconvolution-based source wavelet estimation procedure, which is partially able to account for the frequency-dependent effects through an "effective" wavelet, performs remarkably well in weakly to moderately dispersive environments and is able to provide adequate tomographic reconstructions.
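A minimal sketch of one deconvolution step for source-wavelet estimation, assuming a single observed trace and a forward-simulated impulse response on the same time grid. The water-level stabilisation and all names are illustrative assumptions; the actual scheme in the thesis is iterative, embedded in the inversion, and works with many transmitter-receiver pairs.

```python
import cmath

def dft(x):
    # Naive discrete Fourier transform (fine for short illustrative traces)
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)).real / N for n in range(N)]

def estimate_wavelet(observed, simulated, eps=1e-3):
    """Water-level deconvolution: divide the observed trace by the simulated
    impulse response in the frequency domain, stabilising small spectral
    amplitudes with a water level."""
    O, S = dft(observed), dft(simulated)
    wl = eps * max(abs(s) for s in S)  # water level relative to peak |S|
    W = [o * s.conjugate() / max(abs(s) ** 2, wl ** 2) for o, s in zip(O, S)]
    return idft(W)
```

With a perfect (delta-like) simulated impulse response, the estimate reproduces the observed trace exactly, which is a convenient sanity check for the implementation.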


This thesis concerns organizing workshops about interaction in various communities served by Helsinki city's social welfare department. There were seventeen workshops altogether, organized in different communities, for example in a children's daycare centre. The aim was to gain experience in planning and organizing these kinds of workshops. The workshops focused on dealing with questions of interaction arising out of the very community taking part in the workshop. These questions were discussed and handled using a technique called forum-statues, which means that problems arising from the community were presented to the group as living pictures. There was also a short theoretical element concerning interaction between people, self-esteem, the different phases in the development of a group, and the effects of conflicts in groups. There was a high degree of interest in the research, and places were soon filled. The workshops consisted of a warming-up part with a lot of playing, a deepening part with questions arising out of the group, and a relaxation and feedback part at the end. The basic structure was similar for all seventeen workshops. The atmosphere in the workshops was open and conducive to discussion, and the participants were encouraged to express their opinions and points of view. Feedback from the participants was very positive: according to their fellow workers, the participants gained new points of view, and the community spirit improved. Shortage of time was, unfortunately, a problem; with more time, it would have been possible to go deeper into the problems of interaction within the community. Certainly, the research showed that there would be great demand for this kind of workshop in the future.


The properties and cosmological importance of a class of non-topological solitons, Q-balls, are studied. Aspects of Q-ball solutions and Q-ball cosmology discussed in the literature are reviewed. Q-balls are particularly considered in the Minimal Supersymmetric Standard Model with supersymmetry broken by a hidden sector mechanism mediated by either gravity or gauge interactions. Q-ball profiles, charge-energy relations and evaporation rates for realistic Q-ball profiles are calculated for general polynomial potentials and for the gravity mediated scenario. In all of the cases, the evaporation rates are found to increase with decreasing charge. Q-ball collisions are studied by numerical means in the two supersymmetry breaking scenarios. It is noted that the collision processes can be divided into three types: fusion, charge transfer and elastic scattering. Cross-sections are calculated for the different types of processes in the different scenarios. The formation of Q-balls from the fragmentation of the Affleck-Dine condensate is studied by numerical and analytical means. The charge distribution is found to depend strongly on the initial energy-charge ratio of the condensate. The final state is typically noted to consist of Q- and anti-Q-balls in a state of maximum entropy. By studying the relaxation of excited Q-balls, the rate at which excess energy can be emitted is calculated in the gravity mediated scenario. The Q-ball is also found to withstand excess energy well without significant charge loss. The possible cosmological consequences of these Q-ball properties are discussed.
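For reference, the standard stationary Q-ball ansatz behind the profile and charge-energy calculations mentioned above can be written as follows (normalisation conventions for the charge vary between references):

```latex
% Stationary Q-ball ansatz for a complex scalar \varphi with a global U(1) charge
\varphi(\mathbf{x},t) = e^{i\omega t}\,\phi(r), \qquad
Q = \omega \int \phi^2(r)\,\mathrm{d}^3x ,
% Energy functional to be minimised at fixed charge Q
E = \int \mathrm{d}^3x \left[ \tfrac{1}{2}\!\left(\frac{\mathrm{d}\phi}{\mathrm{d}r}\right)^{\!2}
    + \tfrac{1}{2}\,\omega^2 \phi^2 + U(\phi) \right] .
```

As discussed in the Q-ball literature, nearly flat (gauge-mediated) potentials lead to the characteristic scaling E ∝ Q^{3/4}, while gravity-mediated potentials give an approximately linear charge-energy relation E ∝ Q.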


We present a polyhedral framework for establishing general structural properties of optimal solutions of stochastic scheduling problems, where multiple job classes vie for service resources: the existence of an optimal priority policy in a given family, characterized by a greedoid (whose feasible class subsets may receive higher priority), where optimal priorities are determined by class-ranking indices, under restricted linear performance objectives (partial indexability). This framework extends that of Bertsimas and Niño-Mora (1996), which explained the optimality of priority-index policies under all linear objectives (general indexability). We show that, if performance measures satisfy partial conservation laws (with respect to the greedoid), which extend previous generalized conservation laws, then the problem admits a strong LP relaxation over a so-called extended greedoid polytope, which has strong structural and algorithmic properties. We present an adaptive-greedy algorithm (which extends Klimov's) taking as input the linear objective coefficients, which (1) determines whether the optimal LP solution is achievable by a policy in the given family; and (2) if so, computes a set of class-ranking indices that characterize optimal priority policies in the family. In the special case of project scheduling, we show that, under additional conditions, the optimal indices can be computed separately for each project (index decomposition). We further apply the framework to the important restless bandit model (two-action Markov decision chains), obtaining new index policies that extend Whittle's (1988), and simple sufficient conditions for their validity. These results highlight the power of polyhedral methods (the so-called achievable region approach) in dynamic and stochastic optimization.
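A priority policy characterized by class-ranking indices can be illustrated with a minimal scheduler: at each decision epoch, serve the non-empty class of highest index. The indices below instantiate the classic cμ rule, a simple special case of an index policy; they are not the output of the adaptive-greedy algorithm described above.

```python
# Priority-index policy (illustrative): serve the non-empty job class with
# the highest index; return None when all queues are empty.
def serve(queue_lengths, indices):
    candidates = [i for i, q in enumerate(queue_lengths) if q > 0]
    return max(candidates, key=lambda i: indices[i]) if candidates else None

# Example: the c*mu rule sets index_i = c_i * mu_i (holding cost * service rate).
costs, rates = [3.0, 1.0, 2.0], [1.0, 4.0, 1.5]
cmu = [c * m for c, m in zip(costs, rates)]
print(serve([2, 5, 0], cmu))  # class 1 has the highest index among non-empty classes
```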


General Introduction This thesis can be divided into two main parts: the first one, corresponding to the first three chapters, studies Rules of Origin (RoOs) in Preferential Trade Agreements (PTAs); the second part -the fourth chapter- is concerned with Anti-Dumping (AD) measures. Despite wide-ranging preferential access granted to developing countries by industrial ones under North-South Trade Agreements -whether reciprocal, like the Europe Agreements (EAs) or NAFTA, or not, such as the GSP, AGOA, or EBA-, it has been claimed that the benefits from improved market access keep falling short of the full potential benefits. RoOs are largely regarded as a primary cause of the under-utilization of the improved market access of PTAs. RoOs are the rules that determine the eligibility of goods for preferential treatment. Their economic justification is to prevent trade deflection, i.e. to prevent non-preferred exporters from using the tariff preferences. However, they are complex, cost-raising and cumbersome, and can be manipulated by organised special interest groups. As a result, RoOs can restrain trade beyond what is needed to prevent trade deflection and hence restrict market access in a statistically significant and quantitatively large proportion. Part I In order to further our understanding of the effects of RoOs in PTAs, the first chapter, written with Pr. Olivier Cadot, Celine Carrère and Pr. Jaime de Melo, describes and evaluates the RoOs governing EU and US PTAs. It draws on utilization-rate data for Mexican exports to the US in 2001 and on similar data for ACP exports to the EU in 2002. The paper makes two contributions. First, we construct an R-index of restrictiveness of RoOs along the lines first proposed by Estevadeordal (2000) for NAFTA, modifying it and extending it for the EU's single-list (SL). This synthetic R-index is then used to compare RoOs under NAFTA and PANEURO. The two main findings of the chapter are as follows.
First, it shows, in the case of PANEURO, that the R-index is useful to summarize how countries are differently affected by the same set of RoOs because of their different export baskets to the EU. Second, it is shown that the R-index is a relatively reliable statistic in the sense that, subject to caveats, after controlling for the extent of tariff preference at the tariff-line level, it accounts for differences in utilization rates at the tariff-line level. Finally, together with utilization rates, the index can be used to estimate total compliance costs of RoOs. The second chapter proposes a reform of preferential RoOs with the aim of making them more transparent and less discriminatory. Such a reform would make preferential blocs more "cross-compatible" and would therefore facilitate cumulation. It would also contribute to moving regionalism toward more openness and hence to making it more compatible with the multilateral trading system. It focuses on NAFTA, one of the most restrictive FTAs (see Estevadeordal and Suominen 2006), and proposes a way forward that is close in spirit to what the EU Commission is considering for the PANEURO system. In a nutshell, the idea is to replace the current array of RoOs by a single instrument: Maximum Foreign Content (MFC). An MFC is a conceptually clear and transparent instrument, like a tariff. Therefore changing all instruments into an MFC would bring improved transparency, pretty much like the "tariffication" of NTBs. The methodology for this exercise is as follows: In step 1, I estimate the relationship between utilization rates, tariff preferences and RoOs. In step 2, I retrieve the estimates and invert the relationship to get a simulated MFC that gives, line by line, the same utilization rate as the old array of RoOs.
In step 3, I calculate the trade-weighted average of the simulated MFC across all lines to get an overall equivalent of the current system and explore the possibility of setting this unique instrument at a uniform rate across lines. This would have two advantages. First, like a uniform tariff, a uniform MFC would make it difficult for lobbies to manipulate the instrument at the margin. This argument is standard in the political-economy literature and has been used time and again in support of reductions in the variance of tariffs (together with standard welfare considerations). Second, uniformity across lines is the only way to eliminate the indirect source of discrimination alluded to earlier. Only if two countries face uniform RoOs and tariff preferences will they face uniform incentives irrespective of their initial export structure. The result of this exercise is striking: the average simulated MFC is 25% of good value, a very low (i.e. restrictive) level, confirming Estevadeordal and Suominen's critical assessment of NAFTA's RoOs. Adopting a uniform MFC would imply a relaxation from the benchmark level for sectors like chemicals or textiles & apparel, and a stiffening for wood products, papers and base metals. Overall, however, the changes are not drastic, suggesting perhaps only moderate resistance to change from special interests. The third chapter of the thesis considers whether the Europe Agreements of the EU, with the current sets of RoOs, could be the potential model for future EU-centered PTAs. First, I have studied and coded, at the six-digit level of the Harmonised System (HS), both the old RoOs, used before 1997, and the "Single List" RoOs, used since 1997. Second, using a Constant Elasticity of Transformation function where CEEC exporters smoothly mix sales between the EU and the rest of the world by comparing producer prices on each market, I have estimated the trade effects of the EU RoOs.
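Steps 2 and 3 of the methodology can be sketched as follows, assuming a hypothetical fitted logistic relationship between utilization, the preference margin, and the MFC. The functional form and all coefficients are made up for illustration; they are not estimates from the thesis.

```python
import math

def logit(p):
    return math.log(p / (1 - p))

# Hypothetical fitted model: utilization = logistic(b0 + b1*pref + g*mfc),
# where a higher Maximum Foreign Content (MFC) makes the RoO easier to meet.
def simulated_mfc(utilization, pref, b0=-2.0, b1=0.3, g=4.0):
    """Invert the fitted relationship line by line: find the MFC that would
    reproduce the observed utilization rate at the given tariff preference."""
    return (logit(utilization) - b0 - b1 * pref) / g

# Step 3: trade-weighted average of the simulated MFC across tariff lines.
def weighted_average_mfc(mfcs, trade_values):
    return sum(m * w for m, w in zip(mfcs, trade_values)) / sum(trade_values)
```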
The estimates suggest that much of the market access conferred by the EAs -outside sensitive sectors- was undone by the cost-raising effects of RoOs. The chapter also contains an analysis of the evolution of the CEECs' trade with the EU from post-communism to accession. Part II The last chapter of the thesis is concerned with anti-dumping, another trade-policy instrument having the effect of reducing market access. In 1995, the Uruguay Round introduced in the Anti-Dumping Agreement (ADA) a mandatory "sunset-review" clause (Article 11.3 ADA) under which anti-dumping measures should be reviewed no later than five years from their imposition and terminated unless there was a serious risk of resumption of injurious dumping. The last chapter, written with Pr. Olivier Cadot and Pr. Jaime de Melo, uses a new database on Anti-Dumping (AD) measures worldwide to assess whether the sunset-review agreement had any effect. The question we address is whether the WTO Agreement succeeded in imposing the discipline of a five-year cycle on AD measures and, ultimately, in curbing their length. Two methods are used: count data analysis and survival analysis. First, using Poisson and Negative Binomial regressions, the count of AD measures' revocations is regressed on (inter alia) the count of "initiations" lagged five years. The analysis yields a coefficient on measures' initiations lagged five years that is larger and more precisely estimated after the agreement than before, suggesting some effect. However the coefficient estimate is nowhere near the value that would give a one-for-one relationship between initiations and revocations after five years. We also find that (i) if the agreement affected EU AD practices, the effect went the wrong way, the five-year cycle being quantitatively weaker after the agreement than before; (ii) the agreement had no visible effect on the United States except for a one-time peak in 2000, suggesting a mopping-up of old cases.
Second, the survival analysis of AD measures around the world suggests a shortening of their expected lifetime after the agreement, and this shortening effect (a downward shift in the survival function post-agreement) was larger and more significant for measures targeted at WTO members than for those targeted at non-members (for which WTO disciplines do not bind), suggesting that compliance was de jure. A difference-in-differences Cox regression confirms this diagnosis: controlling for the countries imposing the measures, for the investigated countries and for the products' sector, we find a larger increase in the hazard rate of AD measures covered by the Agreement than for other measures.
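As a toy version of the survival analysis, the sketch below computes a Kaplan-Meier survival curve for AD-measure durations. The real estimation in the chapter uses Cox regressions with covariates (imposing country, investigated country, sector), which this sketch omits; all numbers are illustrative.

```python
# Kaplan-Meier survival estimate for AD-measure durations (in years).
# Durations marked censored are measures still in force at the sample's end.
def kaplan_meier(durations, censored):
    events = sorted({d for d, c in zip(durations, censored) if not c})
    s, curve = 1.0, []
    for t in events:
        at_risk = sum(1 for d in durations if d >= t)
        deaths = sum(1 for d, c in zip(durations, censored) if d == t and not c)
        s *= 1 - deaths / at_risk
        curve.append((t, s))
    return curve

# Example: five measures; two revoked at year 5, one at year 7, two censored.
print(kaplan_meier([5, 5, 7, 8, 9], [False, False, False, True, True]))
```

Comparing such curves before and after 1995 (or for WTO-member versus non-member targets) is the kind of downward shift in the survival function that the chapter tests for.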


Connexin 40 (Cx40) is expressed by the renin-secreting cells (RSCs) of the kidneys and the endothelial cells of blood vessels. Cx40 null mice (Cx40(-/-)) feature much increased renin synthesis and secretion, which results in chronic hypertension, and also display an altered endothelium-dependent relaxation of the aorta because of reduced eNOS levels and nitric oxide production. To discriminate the effect of Cx40 on renin secretion from that on vascular signaling, we targeted Cx40 to either the RSCs or the endothelial cells of Cx40 null mice. When compared with Cx40(-/-) controls, the animals expressing Cx40 in RSCs were less hypertensive and featured reduced renin levels, but still numerous RSCs outside the wall of the afferent arterioles. In contrast, mice expressing Cx40 in the endothelial cells were as hypertensive as Cx40(-/-) mice, in spite of control levels of Cx37 and eNOS. Our data show that blood pressure is improved by restoration of Cx40 expression in RSCs but not in endothelial cells, stressing the prominent role of renin in the mouse hypertension linked to loss of Cx40.


Summary [résumé français voir ci-dessous] From the beginning of the 20th century the world population has been confronted with the human immune deficiency virus 1 (HIV-1). This virus has the particularity to mutate fast, and could thus evade and adapt to the human host. Our closest evolutionary related organisms, the non-human primates, are less susceptible to HIV-1. In a broader sense, primates are differentially susceptible to various retrovirus. Species specificity may be due to genetic differences among primates. In the present study we applied evolutionary and comparative genetic techniques to characterize the evolutionary pattern of host cellular determinants of HIV-1 pathogenesis. The study of the evolution of genes coding for proteins participating to the restriction or pathogenesis of HIV-1 may help understanding the genetic basis of modern human susceptibility to infection. To perform comparative genetics analysis, we constituted a collection of primate DNA and RNA to allow generation of de novo sequence of gene orthologs. More recently, release to the public domain of two new primate complete genomes (bornean orang-utan and common marmoset) in addition of the three previously available genomes (human, chimpanzee and Rhesus monkey) help scaling up the evolutionary and comparative genome analysis. Sequence analysis used phylogenetic and statistical methods for detecting molecular adaptation. We identified different selective pressures acting on host proteins involved in HIV-1 pathogenesis. Proteins with HIV-1 restriction properties in non-human primates were under strong positive selection, in particular in regions of interaction with viral proteins. These regions carried key residues for the antiviral activity. Proteins of the innate immunity presented an evolutionary pattern of conservation (purifying selection) but with signals of relaxed constrain if we compared them to the average profile of purifying selection of the primate genomes. 
Large-scale analysis resulted in patterns of evolutionary pressure according to molecular function, biological process, and cellular distribution. The data generated by the various analyses served to guide the ancestral reconstruction of TRIM5a, a potent antiviral host factor. The resurrected TRIM5a from the common ancestor of Old World monkeys was effective against HIV-1, whereas the more recently resurrected hominoid variants were more effective against other retroviruses. Thus, as the result of trade-offs in the ability to restrict different retroviruses, humans may have been exposed to HIV-1 at a time when TRIM5a lacked the appropriate specific restriction activity. Evolutionary and comparative genetics tools should be considered for the systematic assessment of host proteins relevant to viral pathogenesis, and to guide biological and functional studies.

Résumé: Since the beginning of the twentieth century, the world population has been confronted with the human immunodeficiency virus 1 (HIV-1). This virus has a particularly high mutation rate, so it can escape from and adapt to its host very efficiently. The organisms evolutionarily closest to humans, the non-human primates, are less susceptible to HIV-1; in general, primates respond differently to retroviruses. This species specificity must reside in the genetic differences between primates. In this study we applied evolutionary and comparative genetics techniques to characterize the evolutionary pattern of the cellular determinants involved in HIV-1 pathogenesis. Studying the evolution of the genes encoding proteins involved in HIV-1 restriction or pathogenesis will help in understanding the genetic basis that recently rendered humans susceptible. For the comparative genetics analyses, we assembled a collection of primate DNA and RNA in order to obtain new orthologous gene sequences.
Two new complete genomes were recently published (the Bornean orangutan and the common marmoset) in addition to the three already available (human, chimpanzee, rhesus macaque), which considerably broadened the scope of the analysis. To detect molecular adaptation, we analyzed the sequences with phylogenetic and statistical methods and identified distinct selective pressures acting on the proteins involved in HIV-1 pathogenesis. Proteins with HIV-1 restriction properties in non-human primates show a particularly high rate of amino acid replacement (positive selection), especially in the regions that interact with viral proteins; these regions include amino acids that are key to restriction activity. Proteins belonging to innate immunity show a conserved evolutionary pattern (purifying selection), but with traces of relaxation compared with the overall profile of purifying selection across the primate genome. A large-scale analysis classified the patterns of evolutionary pressure according to molecular function, biological process, and cellular distribution. The data generated by the various analyses enabled the ancestral reconstruction of TRIM5a, a potent antiretroviral factor. The resurrected TRIM5a corresponding to the common ancestor of the great apes and the catarrhine group is effective against modern HIV-1, while the more recently resurrected variants, corresponding to the ancestors of the great apes, are more effective against other retroviruses. Thus, as a trade-off in the ability to restrict different retroviruses, humans would have been exposed to HIV-1 at a time when TRIM5a lacked specific restriction activity against it.
The application of evolutionary and comparative genetics techniques should be considered for the systematic assessment of proteins involved in viral pathogenesis, and to guide biological and functional studies.
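The positive versus purifying selection patterns described above are conventionally quantified by the dN/dS (omega) ratio. A minimal toy sketch with hypothetical substitution and site counts (the actual analyses rely on phylogenetic codon substitution models, not this simple arithmetic):

```python
# Toy illustration of the dN/dS (omega) ratio used to detect selection.
# All counts below are hypothetical; real analyses fit codon models
# over a phylogeny rather than computing a single ratio.

def dn_ds(nonsyn_subs, syn_subs, nonsyn_sites, syn_sites):
    """Return omega = (nonsynonymous rate) / (synonymous rate)."""
    dn = nonsyn_subs / nonsyn_sites
    ds = syn_subs / syn_sites
    return dn / ds

# omega > 1 suggests positive selection; omega < 1 purifying selection.
omega_restriction = dn_ds(30, 5, 300.0, 100.0)    # 0.1 / 0.05 = 2.0
omega_conserved = dn_ds(3, 20, 300.0, 100.0)      # ~0.05
print(omega_restriction, omega_conserved)
```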

Relevância:

10.00% 10.00%

Publicador:

Resumo:

Models incorporating more realistic representations of customer behavior, such as customers choosing from an offer set, have recently become popular in assortment optimization and revenue management. The dynamic program for these models is intractable and is approximated by a deterministic linear program, called the CDLP, which has an exponential number of columns. However, when the segment consideration sets overlap, the CDLP is difficult to solve: column generation has been proposed, but finding an entering column has been shown to be NP-hard. In this paper we propose a new approach, called SDCP, to solving the CDLP based on segments and their consideration sets. SDCP is a relaxation of the CDLP and hence forms a looser upper bound on the dynamic program, but it coincides with the CDLP in the case of non-overlapping segments. If the number of elements in a segment's consideration set is not very large, SDCP can be applied to any discrete-choice model of consumer behavior. We tighten the SDCP bound (i) by simulation, in the randomized concave programming (RCP) method, and (ii) by adding cuts to a recent compact formulation of the problem for a latent multinomial-choice model of demand (SBLP+). The latter approach turns out to be very effective, essentially attaining the CDLP value and excellent revenue performance in simulations, even for overlapping segments. By formulating the problem as a separation problem, we give insight into why the CDLP is easy for the MNL with non-overlapping consideration sets and why generalizations of the MNL pose difficulties. We perform numerical simulations to determine the revenue performance of all the methods on reference data sets from the literature.
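The exponential column count of the CDLP arises because there is one time-allocation variable per offer set. A minimal sketch, with hypothetical revenues and MNL preference weights (not from the paper), that enumerates the columns and their per-period revenue and sales rates for a single segment:

```python
from itertools import chain, combinations

def mnl_probs(offer_set, weights, v0=1.0):
    """MNL purchase probability of each product j in the offered set S:
    P(j | S) = v_j / (v0 + sum of v_k over k in S)."""
    denom = v0 + sum(weights[j] for j in offer_set)
    return {j: weights[j] / denom for j in offer_set}

def cdlp_columns(revenues, weights):
    """One CDLP column per offer set S: (S, revenue rate R(S), sales rate Q(S))."""
    n = len(revenues)
    all_sets = chain.from_iterable(combinations(range(n), k) for k in range(n + 1))
    cols = []
    for S in all_sets:
        probs = mnl_probs(S, weights)
        R = sum(revenues[j] * p for j, p in probs.items())
        Q = sum(probs.values())
        cols.append((S, R, Q))
    return cols

# Hypothetical 2-product instance: revenues and MNL preference weights.
for S, R, Q in cdlp_columns([100.0, 80.0], [1.0, 1.5]):
    print(S, round(R, 2), round(Q, 3))
# Already 2^10 = 1024 columns for a modest 10 products.
```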

Relevância:

10.00% 10.00%

Publicador:

Resumo:

Models incorporating more realistic representations of customer behavior, such as customers choosing from an offer set, have recently become popular in assortment optimization and revenue management. The dynamic program for these models is intractable and is approximated by a deterministic linear program called the CDLP, which has an exponential number of columns. When some products are considered for purchase by more than one customer segment, the CDLP is difficult to solve, since column generation is known to be NP-hard. However, recent research indicates that a formulation based on segments, with cuts imposing consistency (SDCP+), is tractable and approximates the CDLP value very closely. In this paper we investigate the structure of the consideration sets that makes the two formulations exactly equal. We show that if the segment consideration sets follow a tree structure, then CDLP = SDCP+. We give a counterexample to show that cycles can induce a gap between the CDLP and the SDCP+ relaxation. We derive two classes of valid inequalities, called flow and synchronization inequalities, to further improve SDCP+, based on cycles in the consideration-set structure. We present a numerical study showing the performance of these cycle-based cuts.
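The tree-versus-cycle condition on consideration sets can be checked mechanically. A hedged sketch, assuming (as an illustration only, not the paper's formal definition) that the relevant structure is the graph with one node per segment and an edge whenever two segments' consideration sets share a product; union-find detects whether that graph is a forest:

```python
def overlap_graph_is_forest(consideration_sets):
    """Treat each segment as a node and add an edge for every pair of
    segments whose consideration sets share a product; return False as
    soon as an edge closes a cycle (union-find with path halving)."""
    parent = list(range(len(consideration_sets)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    sets = [set(s) for s in consideration_sets]
    for i in range(len(sets)):
        for j in range(i + 1, len(sets)):
            if sets[i] & sets[j]:          # segments i and j overlap
                ri, rj = find(i), find(j)
                if ri == rj:               # this edge closes a cycle
                    return False
                parent[ri] = rj
    return True

# A chain of overlaps (a tree): the formulations coincide.
print(overlap_graph_is_forest([{1, 2}, {2, 3}, {3, 4}]))  # True
# Cyclic overlaps: a CDLP-SDCP+ gap may appear.
print(overlap_graph_is_forest([{1, 2}, {2, 3}, {3, 1}]))  # False
```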

Relevância:

10.00% 10.00%

Publicador:

Resumo:

Most research on single-machine scheduling has assumed that job holding costs are linear, which is arguably not appropriate in some applications. This motivates our study of a model for scheduling $n$ classes of stochastic jobs on a single machine, with the objective of minimizing the total expected holding cost (discounted or undiscounted). We allow general holding cost rates that are separable, nondecreasing, and convex in the number of jobs in each class. We formulate the problem as a linear program over a certain greedoid polytope, and establish that it is solved optimally by a dynamic (priority) index rule, which extends the classical Smith's rule (1956) for the linear case. Unlike Smith's indices, defined for each class, our new indices are defined for each extended class, consisting of a class and a number of jobs in that class, and yield an optimal dynamic index rule: work at each time on a job whose current extended class has the larger index. We further show that the indices possess a decomposition property, as they are computed separately for each class, and we interpret them in economic terms as marginal expected cost rate reductions per unit of expected processing time. We establish the results by deploying a methodology recently introduced by us [J. Niño-Mora (1999), "Restless bandits, partial conservation laws, and indexability," forthcoming in Advances in Applied Probability, Vol. 33, No. 1, 2001], based on the satisfaction by performance measures of partial conservation laws (PCL), which extend the generalized conservation laws of Bertsimas and Niño-Mora (1996): PCL provide a polyhedral framework for establishing the optimality of index policies with special structure in scheduling problems under admissible objectives, which we apply to the model of concern.
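For the linear-cost special case, the optimal policy reduces to the classical Smith's rule: sequence jobs in nonincreasing order of the ratio c_j/p_j. A small sketch of that baseline with hypothetical job data (the dynamic indices for convex costs generalize these ratios and are not implemented here):

```python
# Smith's rule for linear holding costs: sequencing jobs in nonincreasing
# order of c_j / p_j minimizes the total weighted completion time
# sum_j c_j * C_j.  Job data below are hypothetical.

def smith_sequence(jobs):
    """jobs: list of (name, cost rate c_j, processing time p_j)."""
    return sorted(jobs, key=lambda job: job[1] / job[2], reverse=True)

def total_weighted_completion(seq):
    """Total holding cost sum_j c_j * C_j for a given sequence."""
    t, cost = 0.0, 0.0
    for _, c, p in seq:
        t += p          # completion time C_j of this job
        cost += c * t
    return cost

jobs = [("A", 3.0, 2.0), ("B", 1.0, 1.0), ("C", 4.0, 4.0)]
seq = smith_sequence(jobs)
print([name for name, _, _ in seq], total_weighted_completion(seq))
# prints ['A', 'B', 'C'] 37.0
```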

Relevância:

10.00% 10.00%

Publicador:

Resumo:

The choice network revenue management model incorporates customer purchase behavior as a function of the offered products, and is the appropriate model for airline and hotel network revenue management, dynamic sales of bundles, and dynamic assortment optimization. The optimization problem is a stochastic dynamic program and is intractable. A certainty-equivalence relaxation of the dynamic program, called the choice deterministic linear program (CDLP), is usually used to generate dynamic controls. Recently, a compact linear programming formulation of this linear program was given for the multi-segment multinomial-logit (MNL) model of customer choice with non-overlapping consideration sets. Our objective is to obtain a tighter bound than this formulation while retaining the appealing properties of a compact linear programming representation. To this end, it is natural to consider the affine relaxation of the dynamic program. We first show that the affine relaxation is NP-complete even for a single-segment MNL model. Nevertheless, by analyzing the affine relaxation we derive a new compact linear program that approximates the dynamic programming value function better than the CDLP, provably lying between the CDLP value and the affine relaxation, and often coming close to the latter in our numerical experiments. When the segment consideration sets overlap, we show that some strong equalities called product cuts, developed for the CDLP, remain valid for our new formulation. Finally, we perform extensive numerical comparisons of the various bounds to evaluate their performance.
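One standard way to turn such bounds into dynamic controls is a bid-price rule: offer the set maximizing expected revenue net of the capacity's marginal value. A brute-force sketch under a single-segment MNL model, with a hypothetical bid price and product data (not from the paper):

```python
from itertools import chain, combinations

def mnl_probs(S, weights, v0=1.0):
    """MNL purchase probabilities P(j | S) = v_j / (v0 + sum of v_k, k in S)."""
    denom = v0 + sum(weights[j] for j in S)
    return {j: weights[j] / denom for j in S}

def best_offer_set(revenues, weights, bid_price):
    """Brute-force the nonempty offer set maximizing the expected marginal
    value sum_j P(j|S) * (r_j - bid_price); the empty set (value 0) wins
    if every set is unprofitable.  Exponential in n, fine for small n."""
    n = len(revenues)
    best_S, best_val = (), 0.0
    for S in chain.from_iterable(combinations(range(n), k) for k in range(1, n + 1)):
        probs = mnl_probs(S, weights)
        val = sum(p * (revenues[j] - bid_price) for j, p in probs.items())
        if val > best_val:
            best_S, best_val = S, val
    return best_S, best_val

# With a high bid price, only the expensive product is worth offering.
print(best_offer_set([100.0, 60.0], [1.0, 1.5], bid_price=70.0))
# prints ((0,), 15.0)
```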