934 results for Single-commodity capacitated network design problem
Abstract:
This investigation is the final phase of a three-part study whose overall objectives were to determine if a restraining force is required to prevent inlet uplift failures in corrugated metal pipe (CMP) installations, and to develop a procedure for calculating the required force when restraint is needed. In the initial phase of the study (HR-306), the extent of the uplift problem in Iowa was determined and the forces acting on a CMP were quantified. In the second phase of the study (HR-332), laboratory and field tests were conducted. Laboratory tests measured the longitudinal stiffness of CMP, and a full-scale field test on a 3.05 m (10 ft) diameter CMP with 0.61 m (2 ft) of cover determined the soil-structure interaction in response to uplift forces. Reported herein are the tasks that were completed in the final phase of the study. In this phase, a buried 2.44 m (8 ft) diameter CMP was tested with and without end restraint and with various configurations of soil at the inlet end of the pipe. A total of four different soil configurations were tested; in all tests the soil cover was constant at 0.61 m (2 ft). Data from these tests were used to verify the finite element analysis (FEA) model that was developed in this phase of the research. Both experiments and analyses indicate that the primary soil contribution to uplift resistance occurs in the foreslope and that depth of soil cover does not affect the required tiedown force. Using the FEA model, design charts were developed with which engineers can determine, for a given situation, whether a restraint force is required to prevent an uplift failure. If an engineer determines restraint is needed, the design charts provide the magnitude of the required force. The design charts are applicable to six gages of CMP for four flow conditions and two types of soil.
Abstract:
In the administration, planning, design, and maintenance of road systems, transportation professionals often need to choose between alternatives, justify decisions, evaluate tradeoffs, determine how much to spend, set priorities, assess how well the network meets traveler needs, and communicate the basis for their actions to others. A variety of technical guidelines, tools, and methods have been developed to help with these activities. Such work aids include design criteria guidelines, design exception analysis methods, needs studies, revenue allocation schemes, regional planning guides, designation of minimum standards, sufficiency ratings, management systems, point-based systems to determine eligibility for paving, functional classification, and bridge ratings. While such tools play valuable roles, they also manifest a number of deficiencies and are poorly integrated. Design guides tell what solutions MAY be used; they aren't oriented toward helping find which one SHOULD be used. Design exception methods help justify deviation from design guide requirements but omit consideration of important factors. Resource distribution is too often based on dividing up what's available rather than helping determine how much should be spent. Point systems serve well as procedural tools but are employed primarily to justify decisions that have already been made. In addition, the tools aren't very scalable: a system-level method of analysis seldom works at the project level, and vice versa. In conjunction with the issues cited above, the operation and financing of the road and highway system is often the subject of criticisms that raise fundamental questions: What is the best way to determine how much money should be spent on a city's or county's road network? Is the size and quality of the rural road system appropriate? Is too much or too little money spent on road work? What parts of the system should be upgraded, and in what sequence? Do truckers receive a hidden subsidy from other motorists? Do transportation professionals evaluate road situations from too narrow a perspective? In considering these issues and questions, the author concluded that it would be of value to identify and develop a new method that would overcome the shortcomings of existing methods, be scalable, be capable of being understood by the general public, and utilize a broad viewpoint. After trying out a number of concepts, it appeared that a good approach would be to view the road network as a sub-component of a much larger system that also includes vehicles, people, goods-in-transit, and all the ancillary items needed to make the system function. Highway investment decisions could then be made on the basis of how they affect the total cost of operating the total system. A concept, named the "Total Cost of Transportation" method, was then developed and tested. The concept rests on four key principles: 1) roads are but one sub-system of a much larger 'Road Based Transportation System'; 2) the size and activity level of the overall system are determined by market forces; 3) the sum of everything expended, consumed, given up, or permanently reserved in building the system and generating the activity that results from the market forces represents the total cost of transportation; and 4) the economic purpose of making road improvements is to minimize that total cost. To test the practical value of the theory, a special database and spreadsheet model of Iowa's county road network was developed.
This involved creating a physical model to represent the size, characteristics, activity levels, and the rates at which the activities take place, developing a companion economic cost model, and then using the two in tandem to explore a variety of issues. Ultimately, the theory and model proved capable of being used in full-system, partial-system, single-segment, project, and general design guide levels of analysis. The method appeared capable of remedying many of the defects in existing work methods and of answering society's transportation questions from a new perspective.
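To make the accounting idea concrete, the following minimal Python sketch illustrates the method's core principle: sum the cost of every sub-system of the road-based transportation system and compare alternatives by their effect on the total. The cost categories and dollar figures are hypothetical placeholders, not values from the Iowa database or the study's actual model.

# A minimal sketch of the "Total Cost of Transportation" principle.
# All category names and figures are hypothetical, not from the Iowa model.

def total_cost_of_transportation(components):
    """Sum annualized costs over every sub-system of the road-based system."""
    return sum(components.values())

baseline = {
    "road_construction_and_maintenance": 120e6,  # $/yr, hypothetical
    "vehicle_ownership_and_operation": 310e6,
    "people_and_goods_in_transit_time": 95e6,
    "ancillary_items": 40e6,
}

# A candidate improvement: spend more on roads, lowering user costs elsewhere.
upgrade = dict(baseline,
               road_construction_and_maintenance=135e6,
               vehicle_ownership_and_operation=290e6,
               people_and_goods_in_transit_time=88e6)

# Under principle 4, the improvement is justified if it lowers the total.
if total_cost_of_transportation(upgrade) < total_cost_of_transportation(baseline):
    print("Upgrade reduces the total cost of transportation.")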
Abstract:
Based on the conclusions of IHRB Project TR-444, Demonstration Project Using Railroad Flat Car Bridges for Low Volume Road Bridges, additional research on the use of RRFC bridges was undertaken. This portion of the project investigated the following: (1) different design and rating procedures; (2) additional single-span configurations plus multiple-span configurations; (3) different mechanisms for connecting adjacent RRFCs and the resulting lateral load distribution factors; (4) sheet pile abutments; and (5) the behavior of RRFCs that had been strengthened so that they could be used on existing abutments. A total of eight RRFC bridges were tested (five single-span bridges, two two-span bridges, and one three-span bridge). Based on the results of this study, a simplified design and rating procedure has been developed for this economical replacement bridge alternative. In Volume 1, this volume, the results from the testing of four single-span RRFC bridges are presented, while in Volume 2 the results from the testing of the strengthened single-span bridge plus the three multiple-span bridges are presented.
Abstract:
Trenchless technologies are methods used for the construction and rehabilitation of underground utility pipes. These methods are growing increasingly popular due to their versatility and their potential to lower project costs. However, the use of trenchless technologies in Iowa and their effects on surrounding soil and nearby structures have not been adequately documented. Surveys of and interviews with professionals working in trenchless-related industries in Iowa were conducted, and the results were analyzed and compared to survey results from the United States as a whole. The surveys focused on method familiarity, pavement distress observed, reliability of trenchless methods, and future improvements. Results indicate that pavement distress and other trenchless-related issues are an ongoing problem in the industry, with inadequate soil information and quality control/quality assurance (QC/QA) partially to blame. Fieldwork involving the observation of trenchless construction projects was undertaken with the purpose of documenting current practices and applications of trenchless technology in the United States and Iowa. Field tests were performed in which push-in pressure cells were used to measure the soil stresses induced by trenchless construction methods. A program of laboratory soil testing was carried out in conjunction with the field testing. Soil testing showed that the installations were made in sandy clay or well-graded sand with silt and gravel. Pipes were installed primarily using horizontal directional drilling, with pipe diameters from 3 to 12 inches. Pressure cell monitoring was conducted during the following construction phases: pilot bore, pre-reaming, and combined pipe pulling and reaming. The greatest increase in lateral earth pressure was 5.6 psi, detected 2.1 feet from the centerline of the bore during a pilot hole operation in sandy lean clay; measurements from 1.0 to 2.5 psi were common. Comparisons were made between field measurements and analytical and finite element calculation methods.
Abstract:
Problem solving (including insight and divergent thinking) seems to rely on the right hemisphere (RH). These functions are difficult to assess behaviorally. We propose anagram resolution as a suitable paradigm. University students (n=32) performed three tachistoscopic lateralized visual half-field experiments (stimulus presentation 150 ms). In Experiment 1, participants recalled four-letter strings. Subsequently, participants provided solutions for four-letter anagrams (one solution in Experiment 2; two solutions in Experiment 3). Additionally, participants completed a schizotypy questionnaire (O-LIFE). Results showed a right visual field advantage in Experiments 1 and 2, but no visual field advantage in Experiment 3. In Experiment 1, increasing positive schizotypy was associated with a shift toward RH performance. Problem solving seems to rely increasingly on the RH when facing several solutions rather than one. This result supports previous studies on the RH's role in remote associative, metaphor, and discourse processing. The more complex the language requirements, the less personality traits seem to matter.
Abstract:
Soil consolidation and erosion caused by roadway runoff have exposed the upper portions of steel piles at the abutments of numerous bridges, leaving them susceptible to accelerated corrosion rates due to the abundance of moisture, oxygen, and chlorides at these locations. This problem is compounded by the relative inaccessibility of abutment piles for close-up inspection and repair. The objective of this study was to provide bridge owners with recommendations for effective methods of addressing corrosion of steel abutment piles in existing and future bridges. A review of available literature on the performance and protection of steel piles exposed to a variety of environments was performed. Eight potential coating systems for use in protecting existing and/or new piles were selected and subjected to accelerated corrosion conditions in the laboratory. Two surface preparation methods were evaluated in the field, and three coating systems were installed on three piles at an existing bridge where abutment piles had been exposed by erosion. In addition, a passive cathodic protection (CP) system using sacrificial zinc anodes was tested in the laboratory. Several trial flowable mortar mixes were evaluated for use in conjunction with the CP system. For existing abutment piles, application of a protective coating system is a promising method of mitigating corrosion. Based on its excellent performance in accelerated corrosion conditions in the laboratory on steel test specimens with SSPC-SP3, -SP6, and -SP10 surface preparations, glass flake polyester is recommended for use on existing piles. An alternative is epoxy over an organic zinc-rich primer. Surface preparation of existing piles should include abrasive blast cleaning to SSPC-SP6. Although additional field testing is needed, based on the results of the laboratory testing, a passive CP system could provide an effective means of protecting piles in existing bridges when combined with a pumped mortar used to fill voids between the abutment footing and soil. The addition of a corrosion inhibitor to the mortar appears to be beneficial. For new construction, shop application of thermally sprayed aluminum or glass flake polyester to the upper portion of the piles is recommended.
Abstract:
Portland cement pervious concrete (PCPC) is being used more frequently due to its benefits in reducing the quantity of runoff water, improving water quality, enhancing pavement skid resistance during storm events by rapid drainage of water, and reducing pavement noise. In the United States, PCPC typically has high porosity and low strength, which has resulted in the limited use of pervious concrete, especially in hard wet freeze environments (e.g., the Midwestern and Northeastern United States and other parts of the world). Improving the strength and freeze-thaw durability of pervious concrete will allow an increase in its use in these regions. The objective of this research is to develop a PCPC mix that not only has sufficient porosity for stormwater infiltration, but also desirable strength and freeze-thaw durability. In this research, concrete mixes were designed with various sizes and types of aggregates, binder contents, and admixture amounts. The engineering properties of the aggregates were evaluated. Additionally, the porosity, permeability, strength, and freeze-thaw durability of each of these mixes were measured. Results indicate that PCPC made with single-sized aggregate has high permeability but not adequate strength. Adding a small percentage of sand to the mix improves its strength and freeze-thaw resistance, but lowers its permeability. Although adding sand and latex improved the strength of the mix when compared with single-sized mixes, the strength of mixes where only sand was added was higher. PCPC mixes with a small percentage of sand also showed only 2% mass loss after 300 freeze-thaw cycles. Preliminary results on the effects of compaction energy on PCPC properties show that compaction energy significantly affects the freeze-thaw durability of PCPC and, to a lesser extent, reduces compressive strength and splitting tensile strength and increases permeability.
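The abstract does not state how permeability was measured; a falling-head permeameter is a common choice for pervious concrete, and the short Python sketch below shows that calculation as an illustration only. All dimensions are hypothetical.

import math

def falling_head_permeability(a, A, L, t, h1, h2):
    """k = (a*L)/(A*t) * ln(h1/h2), with consistent units (here cm and s)."""
    return (a * L) / (A * t) * math.log(h1 / h2)

k = falling_head_permeability(
    a=31.7,   # standpipe cross-sectional area, cm^2 (hypothetical)
    A=81.1,   # specimen cross-sectional area, cm^2
    L=15.0,   # specimen length, cm
    t=12.0,   # time for the head to fall from h1 to h2, s
    h1=29.0,  # initial head, cm
    h2=7.0)   # final head, cm
print(f"k = {k:.2f} cm/s")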
Abstract:
Although the genomes of any two human individuals are more than 99.99% identical at the sequence level, some structural variation can be observed. Differences between genomes include single nucleotide polymorphisms (SNPs), inversions, and copy number changes (gain or loss of DNA). The latter can range from submicroscopic events (CNVs, at least 1 kb in size) to complete chromosomal aneuploidies. Small copy number variations often have no (lethal) consequences for the cell, but a few have been associated with disease susceptibility and phenotypic variation. Larger rearrangements (e.g., complete chromosome gain) are frequently associated with more severe consequences for health, such as genomic disorders and cancer. High-throughput technologies like DNA microarrays enable the detection of CNVs in a genome-wide fashion. Since the initial catalogue of CNVs in the human genome in 2006, there has been tremendous interest in CNVs in the context of both population and medical genetics. Understanding CNV patterns within and between human populations is essential to elucidate their possible contribution to disease. But genome analysis is a challenging task; the technology evolves rapidly, creating needs for novel, efficient, and robust analytical tools, which need to be compared with existing ones. Also, while the link between CNVs and disease has been established, the relative CNV contribution is not fully understood, and the predisposition to disease from CNVs of the general population has not yet been investigated. During my PhD thesis, I worked on several aspects related to CNVs. As I will report in chapter 3, I was interested in computational methods to detect CNVs in the general population. I had access to the CoLaus dataset, a population-based study with more than 6,000 participants from the Lausanne area. All these individuals were analysed on SNP arrays, and extensive clinical information was available. My work explored existing CNV detection methods, and I developed a variety of metrics to compare their performance. Since these methods were not producing entirely satisfactory results, I implemented my own method, which outperformed two existing methods. I also devised strategies to combine CNVs from different individuals into CNV regions. I was also interested in the clinical impact of CNVs in common disease (chapter 4). Through an international collaboration led by the Centre Hospitalier Universitaire Vaudois (CHUV) and Imperial College London, I was involved as a main data analyst in the investigation of a rare deletion at chromosome 16p11 detected in obese patients. Specifically, we compared 8,456 obese patients and 11,856 individuals from the general population, and we found that the deletion accounted for 0.7% of the morbid obesity cases and was absent in healthy non-obese controls. This highlights the importance of rare variants with strong impact and provides new insights into the design of clinical studies to identify the missing heritability in common disease. Furthermore, I was interested in the detection of somatic copy number alterations (SCNA) and their consequences in cancer (chapter 5). This project was a collaboration initiated by the Ludwig Institute for Cancer Research and involved other groups from the Swiss Institute of Bioinformatics, the CHUV, and the Universities of Lausanne and Geneva.
The focus of my work was to identify genes with altered expression levels within somatic copy number alterations (SCNA) in seven metastatic melanoma cell lines, using CGH and SNP arrays, RNA-seq, and karyotyping. Very few SCNA genes were shared by even two melanoma samples, making it difficult to draw any conclusions at the individual gene level. To overcome this limitation, I used a network-guided analysis to determine whether any pathways, defined by amplified or deleted genes, were common among the samples. Six of the melanoma samples were potentially altered in four pathways, and five samples harboured copy-number and expression changes in components of six pathways. In total, this approach identified 28 pathways. Validation with two external, large melanoma datasets confirmed all but three of the detected pathways and demonstrated the utility of network-guided approaches for the analysis of both large and small datasets. Résumé: Although the genomes of two individuals are more than 99.99% similar, structural differences can be observed. These differences include single nucleotide polymorphisms, inversions, and copy number changes (gain or loss of DNA). The latter range from small, so-called submicroscopic events (at least 1 kb in size), called CNVs (copy number variants), up to larger events that can affect entire chromosomes. The small variations are generally without consequence for the cell, although some have been implicated in predisposition to certain diseases and in phenotypic variation in the general population. Larger rearrangements (for example, an additional copy of a chromosome, commonly called a trisomy) have more serious repercussions for health, as in certain genomic syndromes and in cancer. High-throughput technologies such as DNA microarrays enable the detection of CNVs at the scale of the human genome. The 2006 mapping of CNVs in the human genome sparked strong interest in population genetics and in medical genetics. The detection of differences within and between several populations is a key element for elucidating the possible contribution of CNVs to disease. However, genome analysis remains a difficult task; the technology evolves very rapidly, creating new needs for the development of tools, the improvement of existing ones, and the comparison of the different methods. Moreover, while the link between CNVs and disease has been established, their precise contribution is not yet understood, and studies of predisposition to disease from CNVs detected in the general population have not yet been carried out. During my doctorate, I concentrated on three main axes relating to CNVs. In chapter 3, I detail my work on methods for the analysis of DNA microarrays. I had access to data from the CoLaus project, a study of the population of Lausanne. In this study, the genomes of more than 6,000 individuals were analysed with SNP arrays, and extensive clinical information was collected. During my work, I used and compared several CNV detection methods. As the results were not completely satisfactory, I implemented my own method, which performs better than two of the three other methods used.
I also looked at strategies for combining the CNVs of different individuals into regions. I was further interested in the clinical impact of CNVs in common genetic diseases (chapter 4). This project was made possible by a close collaboration with the Centre Hospitalier Universitaire Vaudois (CHUV) and Imperial College London. In this project, I was one of the main analysts and worked on the clinical impact of a rare deletion of chromosome 16p11 present in patients with obesity. In this multidisciplinary collaboration, we compared 8,456 obese patients and 11,856 individuals from the general population. We found that the deletion was implicated in 0.7% of morbid obesity cases and was absent in healthy (non-obese) controls. Our study illustrates the importance of rare CNVs that can have a very strong clinical impact. It also suggests an alternative to association studies for improving our understanding of the aetiology of common genetic diseases. I also worked on the detection of somatic copy number alterations (SCNAs) and their consequences in cancer (chapter 5). This project was a collaboration initiated by the Ludwig Institute for Cancer Research and involving the Swiss Institute of Bioinformatics, the CHUV, and the Universities of Lausanne and Geneva. I concentrated on the identification of genes affected by SCNAs and over- or under-expressed in cell lines derived from metastatic melanomas. The data used were generated by DNA microarrays (CGH and SNP) and high-throughput sequencing of the transcriptome. My research showed that few genes recur across melanomas, which makes the results difficult to interpret. To get around this limitation, I used a network analysis to determine whether signalling networks enriched in amplified or lost genes were common to the different samples. Among the 28 networks detected, four are potentially deregulated in six melanomas, and six additional networks are affected in five melanomas. Validation of these results with two large public datasets confirmed all of these networks except three. This demonstrates the utility of this approach for the analysis of both small and large datasets. Résumé grand public: The advent of molecular biology, particularly over the last ten years, has revolutionised research in medical genetics. Thanks to the availability of the human reference genome from 2001, new technologies such as DNA microarrays appeared and made it possible to study the genome as a whole, at a so-called submicroscopic resolution until then unattainable by the traditional techniques of cytogenetics. One of the most important examples is the study of structural variation of the genome, in particular the study of gene copy number. It was established as early as 1959, with the identification of trisomy 21 by Professor Jérôme Lejeune, that the gain of an extra chromosome was the origin of genetic syndromes with serious repercussions for the patient's health. These observations have also been made in oncology, on cancer cells, which frequently accumulate copy number aberrations (such as the loss or gain of one or more chromosomes).
From 2004 onwards, several research groups catalogued copy number changes in individuals from the general population (that is, without visible clinical symptoms). In 2006, Dr. Richard Redon established the first map of copy number variation in the general population. These discoveries demonstrated that variations in the genome are frequent and that most of them are benign, that is, without clinical consequence for the health of the individual. This sparked very strong interest in understanding natural variation between individuals, but also in better grasping genetic predisposition to certain diseases. During my thesis, I developed new computational tools for the analysis of DNA microarrays, with the aim of mapping these variations at the genomic scale. I used these tools to catalogue the variations in the Swiss population, and then devoted myself to the study of factors that may explain predisposition to diseases such as obesity. This study, in collaboration with the Centre Hospitalier Universitaire Vaudois, led to the identification of a deletion on chromosome 16 explaining 0.7% of morbid obesity cases. This study has several repercussions. First of all, it makes diagnosis possible in unborn children, to determine their predisposition to obesity. Next, this locus involves some twenty genes, which makes it possible to formulate new working hypotheses and to direct research towards improving our understanding of the disease, with the hope of discovering a new treatment. Finally, our study provides an alternative to genetic association studies, which until now have had only mixed success. In the last part of my thesis, I turned to the analysis of copy number aberrations in cancer. I chose to study melanoma, a cancer of the skin. Melanoma is a very aggressive tumour; it is responsible for 80% of skin cancer deaths and is often resistant to the treatments used in oncology (chemotherapy, radiotherapy). Within a collaboration between the Ludwig Institute for Cancer Research, the Swiss Institute of Bioinformatics, the CHUV, and the Universities of Lausanne and Geneva, we sequenced the exome (the genes) and the transcriptome (gene expression) of seven metastatic melanomas, and performed copy number analyses with DNA microarrays and karyotyping. My work led to the development of new analysis methods adapted to cancer, established the list of cell signalling networks recurrently affected in melanoma, and identified two potential therapeutic targets hitherto overlooked in skin cancers.
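The thesis mentions combining CNV calls from different individuals into CNV regions without describing the procedure; one common approach is to pool the calls and merge overlapping intervals per chromosome. The Python sketch below illustrates that idea; the data layout and function name are assumptions, not the author's implementation.

from collections import defaultdict

def merge_cnvs_into_regions(calls):
    """calls: (chrom, start, end) CNV calls pooled across individuals.
    Returns merged CNV regions per chromosome."""
    by_chrom = defaultdict(list)
    for chrom, start, end in calls:
        by_chrom[chrom].append((start, end))
    regions = {}
    for chrom, intervals in by_chrom.items():
        intervals.sort()
        merged = [list(intervals[0])]
        for start, end in intervals[1:]:
            if start <= merged[-1][1]:              # overlaps current region
                merged[-1][1] = max(merged[-1][1], end)
            else:                                   # gap: start a new region
                merged.append([start, end])
        regions[chrom] = [tuple(iv) for iv in merged]
    return regions

# Hypothetical calls from three individuals around a 16p11-like locus.
calls = [("chr16", 29_500_000, 30_100_000),
         ("chr16", 29_650_000, 30_200_000),
         ("chr16", 45_000_000, 45_050_000)]
print(merge_cnvs_into_regions(calls))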
Abstract:
Introduction: Building online courses is a highly time-consuming task for the teachers of a single university. Universities working alone create high-quality courses but often cannot cover all pathological fields. Moreover, this often leads to duplication of content among universities, representing a big waste of teacher time and energy. In 2011 we initiated a French university network for building mutualized online teaching pathology cases, and this network was extended in 2012 to Quebec and Switzerland. Method: Twenty French universities, University Laval in Quebec, and the University of Lausanne in Switzerland are associated with this project. One e-learning Moodle platform (http://moodle.sorbonne-paris-cite.fr/) contains texts with URLs pointing toward virtual slides that are decentralized in several universities. Each university is responsible for its own slide scanning, slide storage, and online display with virtual slide viewers. The Moodle website is hosted by PRES Sorbonne Paris Cité, and financial support for hardware has been obtained from UNF3S (http://www.unf3s.org/) and from PRES Sorbonne Paris Cité. Financial support for international fellowships has been obtained from CFQCU (http://www.cfqcu.org/). Results: The Moodle interface has been explained to pathology teachers using web-based conferences with screen sharing. The teachers then added content such as clinical cases, self-evaluations, and other media, organized in several sections by student level and pathological field. Content can be used for online learning or for online preparation of subsequent courses in classrooms. In autumn 2013, one resident from Quebec spent 6 weeks in France and Switzerland and created original content in inflammatory skin pathology. This content is currently being validated by senior teachers and will be opened to pathology residents in spring 2014. All contents of the website can be accessed for free. Most content requires just an anonymous connection, but some specific fields, especially those containing pictures obtained from patients who agreed to a teaching use only, require personal identification of the students. Also, students have to register to access Moodle tests. All content is written in French, but one case has been translated into English to illustrate this communication (http://moodle.sorbonne-pariscite.fr/mod/page/view.php?id=261) (use "login as a guest"). The Moodle test module allows many types of shared questions, making it easy to create personalized tests. Content that is open to students has been validated by an editorial committee composed of colleagues from the participating institutions. Conclusions: Future developments include other international fellowships, the next one being scheduled for one French resident from May to October 2014 in Quebec, with a study program centered on lung and breast pathology. It must be kept in mind that these e-learning programs depend heavily on teachers' time, not only in these early steps but also later, to update the contents. We believe that funding resident fellowships for developing online pathology teaching content is a win-win situation: highly beneficial for the residents, who will improve their knowledge and way of thinking; for the teachers, who will worry less about access rights or image formats; and for the students, who will get courses fully adapted to their practice.
Abstract:
In this paper, a hybrid simulation-based algorithm is proposed for the Stochastic Flow Shop Problem. The main idea of the methodology is to transform the stochastic problem into a deterministic problem and then apply simulation to the latter. In order to achieve this goal, we rely on Monte Carlo simulation and an adapted version of a deterministic heuristic. This approach aims to provide flexibility and simplicity, since it is not constrained by any prior assumptions and relies on well-tested heuristics.
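The abstract names the ingredients (Monte Carlo simulation plus an adapted deterministic heuristic) but not the specifics. The Python sketch below shows the general pattern under stated assumptions: makespan minimization, normally distributed processing times, and a simple sort-based rule standing in for whatever deterministic heuristic the authors adapted.

import random

def makespan(seq, p):
    """Permutation flow shop makespan; p[j][k] = time of job j on machine k."""
    m = len(p[0])
    completion = [0.0] * m
    for j in seq:
        for k in range(m):
            prev = completion[k - 1] if k else 0.0  # job's finish on machine k-1
            completion[k] = max(completion[k], prev) + p[j][k]
    return completion[-1]

def simulation_based_flowshop(mean_times, n_scenarios=200, cv=0.2):
    """Sample deterministic scenarios, solve each heuristically, keep the best."""
    best_seq, best_cost = None, float("inf")
    for _ in range(n_scenarios):
        # 1) Monte Carlo: one deterministic instance of the stochastic problem.
        p = [[max(0.0, random.gauss(t, cv * t)) for t in job] for job in mean_times]
        # 2) Deterministic heuristic (a greedy stand-in for the adapted one).
        seq = sorted(range(len(p)), key=lambda j: -sum(p[j]))
        # 3) Score the candidate sequence on the expected-time instance.
        cost = makespan(seq, mean_times)
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq, best_cost

jobs = [[3.0, 2.0, 4.0], [1.0, 5.0, 2.0], [4.0, 1.0, 3.0]]  # hypothetical means
print(simulation_based_flowshop(jobs))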
Abstract:
3 Summary. 3.1 English. The pharmaceutical industry has been facing several challenges during the last years, and the optimization of its drug discovery pipeline is believed to be the only viable solution. High-throughput techniques do participate actively in this optimization, especially when complemented by computational approaches aiming at rationalizing the enormous amount of information that they can produce. In silico techniques, such as virtual screening or rational drug design, are now routinely used to guide drug discovery. Both heavily rely on the prediction of the molecular interaction (docking) occurring between drug-like molecules and a therapeutically relevant target. Several software packages are available to this end, but despite the very promising picture drawn in most benchmarks, they still hold several hidden weaknesses. As pointed out in several recent reviews, the docking problem is far from being solved, and there is now a need for methods able to identify binding modes with a high accuracy, which is essential to reliably compute the binding free energy of the ligand. This quantity is directly linked to its affinity and can be related to its biological activity. Accurate docking algorithms are thus critical for both the discovery and the rational optimization of new drugs. In this thesis, a new docking software aiming at this goal is presented, EADock. It uses a hybrid evolutionary algorithm with two fitness functions, in combination with a sophisticated management of the diversity. EADock is interfaced with the CHARMM package for energy calculations and coordinate handling. A validation was carried out on 37 crystallized protein-ligand complexes featuring 11 different proteins. The search space was defined as a sphere of 15 Å around the center of mass of the ligand position in the crystal structure, and conversely to other benchmarks, our algorithm was fed with optimized ligand positions up to 10 Å root mean square deviation (RMSD) from the crystal structure. This validation illustrates the efficiency of our sampling heuristic, as correct binding modes, defined by an RMSD to the crystal structure lower than 2 Å, were identified and ranked first for 68% of the complexes. The success rate increases to 78% when considering the five best-ranked clusters, and 92% when all clusters present in the last generation are taken into account. Most failures in this benchmark could be explained by the presence of crystal contacts in the experimental structure. EADock has been used to understand molecular interactions involved in the regulation of the Na,K-ATPase and in the activation of the nuclear hormone peroxisome proliferator-activated receptor α (PPARα). It also helped to understand the action of common pollutants (phthalates) on PPARγ, and the impact of biotransformations of the anticancer drug Imatinib (Gleevec®) on its binding mode to the Bcr-Abl tyrosine kinase. Finally, a fragment-based rational drug design approach using EADock was developed, and led to the successful design of new peptidic ligands for the α5β1 integrin and for the human PPARα. In both cases, the designed peptides presented activities comparable to those of well-established ligands such as the anticancer drug Cilengitide and Wy14,643, respectively. 3.2 French. The recent difficulties of the pharmaceutical industry seem resolvable only through the optimization of its drug development process. This increasingly involves so-called high-throughput techniques, which are particularly effective when coupled with the computational tools needed to manage the mass of data they produce. In silico approaches such as virtual screening or the rational design of new molecules are now in routine use. Both rest on the ability to predict the details of the molecular interaction between a molecule resembling an active compound and a target protein of therapeutic interest. Benchmarks of the software tackling this prediction are flattering, but several problems remain. The recent literature tends to question their reliability, asserting an emerging need for more accurate approaches to the binding mode. This accuracy is essential for computing the binding free energy, which is directly linked to the affinity of the potential drug for the target protein and indirectly linked to its biological activity. An accurate prediction is of particular importance for the discovery and optimization of new active molecules. This thesis presents a new program, EADock, built with such accuracy in mind. This hybrid evolutionary algorithm uses two selection pressures, combined with a sophisticated management of diversity. EADock relies on CHARMM for energy calculations and the handling of atomic coordinates. Its validation was carried out on 37 crystallized protein-ligand complexes involving 11 different proteins. The search space was extended to a sphere of 15 Å radius around the center of mass of the crystallized ligand, and contrary to the usual benchmarks, the algorithm started from optimized solutions presenting an RMSD of up to 10 Å from the crystal structure. This validation demonstrated the efficiency of our search heuristic, as binding modes with an RMSD below 2 Å from the crystal structure were ranked first for 68% of the complexes. When the five best solutions are taken into account, the success rate climbs to 78%, and to 92% when the whole of the last generation is considered. Most prediction errors are attributable to the presence of crystal contacts. Since then, EADock has been used to understand the molecular mechanisms involved in the regulation of the Na,K-ATPase and in the activation of the peroxisome proliferator-activated receptor α (PPARα). It has also served to describe the interaction of commonly encountered pollutants with PPARγ, as well as the influence of the metabolization of Imatinib (an anticancer drug) on its binding to the Bcr-Abl kinase. An approach based on predicting the interactions of molecular fragments with a target protein is also proposed. It led to the discovery of new peptide ligands of PPARα and of the α5β1 integrin. In both cases, the activity of these new peptides is comparable to that of well-established ligands, such as Wy14,643 for the former and Cilengitide (an anticancer drug) for the latter.
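The 2 Å success criterion above is a root mean square deviation between the predicted pose and the crystallographic one. As a minimal sketch, the Python below computes that RMSD, assuming the two poses list the same atoms in the same order and are already in the same reference frame (no superposition step).

import math

def rmsd(pose, reference):
    """RMSD in the coordinates' units (here Å) between two atom lists."""
    assert len(pose) == len(reference), "poses must list the same atoms"
    sq = sum((x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2
             for (x1, y1, z1), (x2, y2, z2) in zip(pose, reference))
    return math.sqrt(sq / len(pose))

# Hypothetical three-atom ligand coordinates, in Å.
crystal = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (2.3, 1.1, 0.4)]
docked  = [(0.2, 0.1, 0.0), (1.6, 0.2, 0.1), (2.5, 1.0, 0.6)]
print("correct binding mode" if rmsd(docked, crystal) < 2.0 else "incorrect")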
Abstract:
We investigate contributions to the provision of public goods on a network when efficient provision requires the formation of a star network. We provide a theoretical analysis and study behavior in a controlled laboratory experiment. In a 2x2 design, we examine the effects of group size and the presence of (social) benefits for incoming links. We find that social benefits are highly important. They facilitate convergence to equilibrium networks and enhance the stability and efficiency of the outcome. Moreover, in large groups social benefits encourage the formation of superstars: star networks in which the core contributes more than expected in the stage-game equilibrium. We show that this result is predicted by a repeated game equilibrium.
Abstract:
PURPOSE: Evidence has accumulated in recent years suggestive of a genetic basis for susceptibility to the development of radiation injury after cancer radiotherapy. The purpose of this study was to assess whether patients with severe radiation-induced sequelae (RIS; i.e., National Cancer Institute/CTC v3.0 grade ≥3) both display a low capacity of radiation-induced CD8 lymphocyte apoptosis (RILA) in vitro and possess certain single nucleotide polymorphisms (SNPs) located in candidate genes associated with the response of cells to radiation. EXPERIMENTAL DESIGN: DNA was isolated from blood samples obtained from patients (n = 399) included in the Swiss prospective study evaluating the predictive effect of in vitro RILA and RIS. SNPs in the ATM, SOD2, XRCC1, XRCC3, TGFB1, and RAD21 genes were screened in patients who experienced severe RIS (group A, n = 16) and control subjects who did not manifest any evidence of RIS (group B, n = 18). RESULTS: Overall, 13 and 21 patients were found to possess a total of <4 and ≥4 SNPs in the candidate genes, respectively. The median (range) RILA in group A was 9.4% (5.3-16.5), and 94% (95% confidence interval, 70-100) of the patients (15 of 16) had ≥4 SNPs. In group B, median (range) RILA was 25.7% (20.2-43.2), and 33% (95% confidence interval, 13-59) of patients (6 of 18) had ≥4 SNPs (P < 0.001). CONCLUSIONS: The results of this study suggest that patients with severe RIS possess four or more SNPs in candidate genes and low radiation-induced CD8 lymphocyte apoptosis in vitro.
Abstract:
Correlative fluorescence and electron microscopy has become an indispensable tool for research in cell biology. The integrated Laser and Electron Microscope (iLEM) combines a Fluorescence Microscope (FM) and a Transmission Electron Microscope (TEM) within one set-up. This unique imaging tool allows for rapid identification of a region of interest with the FM, and subsequent high-resolution TEM imaging of that area. Sample preparation is one of the major challenges in correlative microscopy of a single specimen; it needs to be suitable for both FM and TEM imaging. For iLEM, the performance of the fluorescent probe should not be impaired by the vacuum of the TEM. In this technical note, we have compared the fluorescence intensity of six fluorescent probes in a dry, oxygen-free environment relative to their performance in water. We demonstrate that the intensity of some fluorophores is strongly influenced by their surroundings, which should be taken into account in the design of the experiment. Furthermore, a freeze-substitution and Lowicryl resin embedding protocol is described that yields excellent membrane contrast in the TEM but prevents quenching of the fluorescent immuno-labeling. The embedding protocol results in a single specimen preparation procedure that performs well in both FM and TEM. Such procedures are not only essential for the iLEM, but also of great value to other correlative microscopy approaches.