849 results for "RM extended algorithm"


Relevance: 20.00%

Publisher:

Abstract:

In this paper, a hybrid simulation-based algorithm is proposed for the Stochastic Flow Shop Problem. The main idea of the methodology is to transform the stochastic problem into a deterministic one and then apply simulation to the latter. To achieve this goal, we rely on Monte Carlo simulation and an adapted version of a deterministic heuristic. This approach aims to provide flexibility and simplicity because it is not constrained by any prior assumption and relies on well-tested heuristics.
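The transformation described above — evaluate a fixed job sequence deterministically, then average the deterministic makespan over sampled processing times — can be sketched as follows. This is an illustrative Monte Carlo estimator, not the authors' exact heuristic; all function and parameter names are hypothetical.

```python
import random

def makespan(sequence, proc_times):
    """Deterministic permutation flow shop makespan for a job sequence.

    proc_times[j][m] is the processing time of job j on machine m.
    """
    n_machines = len(proc_times[0])
    completion = [0.0] * n_machines
    for job in sequence:
        for m in range(n_machines):
            # A job starts on machine m when both the machine is free
            # and the job has finished on the previous machine.
            start = max(completion[m], completion[m - 1] if m > 0 else 0.0)
            completion[m] = start + proc_times[job][m]
    return completion[-1]

def expected_makespan(sequence, sample_times, n_samples=1000, seed=0):
    """Monte Carlo estimate of the expected makespan of a sequence:
    sample_times(rng) returns one random realisation of the processing
    time matrix; the deterministic makespan is averaged over samples.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += makespan(sequence, sample_times(rng))
    return total / n_samples
```

A sequence produced by any deterministic heuristic (e.g. NEH) could then be re-ranked by its estimated expected makespan rather than its nominal one.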


3 Summary

3.1 English

The pharmaceutical industry has been facing several challenges during the last years, and the optimization of its drug discovery pipeline is believed to be the only viable solution. High-throughput techniques participate actively in this optimization, especially when complemented by computational approaches aiming at rationalizing the enormous amount of information that they can produce. In silico techniques, such as virtual screening or rational drug design, are now routinely used to guide drug discovery. Both heavily rely on the prediction of the molecular interaction (docking) occurring between drug-like molecules and a therapeutically relevant target. Several software packages are available to this end, but despite the very promising picture drawn in most benchmarks, they still hold several hidden weaknesses. As pointed out in several recent reviews, the docking problem is far from being solved, and there is now a need for methods able to identify binding modes with a high accuracy, which is essential to reliably compute the binding free energy of the ligand. This quantity is directly linked to its affinity and can be related to its biological activity. Accurate docking algorithms are thus critical for both the discovery and the rational optimization of new drugs. In this thesis, a new docking software aiming at this goal is presented, EADock. It uses a hybrid evolutionary algorithm with two fitness functions, in combination with a sophisticated management of the diversity. EADock is interfaced with the CHARMM package for energy calculations and coordinate handling. A validation was carried out on 37 crystallized protein-ligand complexes featuring 11 different proteins. The search space was defined as a sphere of 15 Å around the center of mass of the ligand position in the crystal structure, and contrary to other benchmarks, our algorithm was fed with optimized ligand positions up to 10 Å root mean square deviation (RMSD) from the crystal structure. This validation illustrates the efficiency of our sampling heuristic, as correct binding modes, defined by an RMSD to the crystal structure lower than 2 Å, were identified and ranked first for 68% of the complexes. The success rate increases to 78% when considering the five best-ranked clusters, and 92% when all clusters present in the last generation are taken into account. Most failures in this benchmark could be explained by the presence of crystal contacts in the experimental structure. EADock has been used to understand molecular interactions involved in the regulation of the Na,K-ATPase, and in the activation of the nuclear hormone peroxisome proliferator-activated receptor α (PPARα). It also helped to understand the action of common pollutants (phthalates) on PPARγ, and the impact of biotransformations of the anticancer drug Imatinib (Gleevec®) on its binding mode to the Bcr-Abl tyrosine kinase. Finally, a fragment-based rational drug design approach using EADock was developed, and led to the successful design of new peptidic ligands for the α5β1 integrin and for the human PPARα. In both cases, the designed peptides presented activities comparable to those of well-established ligands such as the anticancer drug Cilengitide and Wy14,643, respectively.

3.2 French

The recent difficulties of the pharmaceutical industry seem solvable only by optimizing its drug development process. This optimization increasingly involves so-called high-throughput techniques, which are particularly effective when coupled with computational tools able to manage the mass of data produced. In silico approaches such as virtual screening or the rational design of new molecules are now used routinely. Both rely on the ability to predict the details of the molecular interaction between a drug-like molecule and a target protein of therapeutic interest. Benchmarks of the software addressing this prediction are flattering, but several problems remain. The recent literature tends to question their reliability, asserting an emerging need for more accurate predictions of the binding mode. This accuracy is essential for computing the binding free energy, which is directly related to the affinity of the potential drug for the target protein, and indirectly related to its biological activity. An accurate prediction is of particular importance for the discovery and optimization of new active molecules. This thesis presents a new software package, EADock, built for such accuracy. This hybrid evolutionary algorithm uses two selection pressures, combined with a sophisticated management of the diversity. EADock relies on CHARMM for energy calculations and the handling of atomic coordinates. It was validated on 37 crystallized protein-ligand complexes, including 11 different proteins. The search space was extended to a sphere of 15 Å radius around the center of mass of the crystallized ligand, and contrary to the usual benchmarks, the algorithm started from optimized solutions with an RMSD of up to 10 Å from the crystal structure. This validation highlighted the efficiency of our search heuristic, as binding modes with an RMSD below 2 Å from the crystal structure were ranked first for 68% of the complexes. When the five best solutions are taken into account, the success rate rises to 78%, and to 92% when the whole last generation is considered. Most prediction errors can be attributed to the presence of crystal contacts. Since then, EADock has been used to understand the molecular mechanisms involved in the regulation of the Na,K-ATPase and in the activation of the peroxisome proliferator-activated receptor α (PPARα). It has also made it possible to describe the interaction of common pollutants with PPARγ, as well as the influence of the metabolization of Imatinib (an anticancer drug) on its binding to the Bcr-Abl kinase. An approach based on predicting the interactions of molecular fragments with a target protein is also proposed. It led to the discovery of new peptidic ligands of PPARα and of the α5β1 integrin. In both cases, the activity of these new peptides is comparable to that of well-established ligands, such as Wy14,643 for the former and Cilengitide (an anticancer drug) for the latter.


Cocaine-induced neuroadaptation of stress-related circuitry and increased access to cocaine each putatively contribute to the transition from cocaine use to cocaine dependence. The present study tested the hypothesis that rats receiving extended versus brief daily access to cocaine would exhibit regional differences in levels of the stress-regulatory neuropeptide corticotropin-releasing factor (CRF). A secondary goal was to explore how CRF levels change in relation to the time since cocaine self-administration. Male Wistar rats acquired operant self-administration of cocaine and were assigned to receive daily long access (6 hours/day, LgA, n = 20) or short access (1 hour/day, ShA, n = 18) to intravenous cocaine self-administration (fixed ratio 1, ∼0.50 mg/kg/infusion). After at least 3 weeks, tissue CRF immunoreactivity was measured at one of three timepoints: pre-session, post-session or 3 hours post-session. LgA, but not ShA, rats showed increased total session and first-hour cocaine intake. CRF immunoreactivity increased within the dorsal raphe (DR) and basolateral, but not central, nucleus of the amygdala (BLA, CeA) of ShA rats from pre-session to 3 hours post-session. In LgA rats, CRF immunoreactivity increased from pre-session to 3 hours post-session within the CeA and DR but tended to decrease in the BLA. LgA rats showed higher CRF levels than ShA rats in the DR and, pre-session, in the BLA. Thus, voluntary cocaine intake engages stress-regulatory CRF systems of the DR and amygdala. Increased availability of cocaine promotes greater tissue CRF levels in these extrahypothalamic brain regions, changes associated here with a model of cocaine dependence.


STUDY DESIGN: Retrospective database query to identify all anterior spinal approaches. OBJECTIVES: To assess all patients with pharyngo-cutaneous fistulas (PCF) after anterior cervical spine surgery (ACSS). SUMMARY OF BACKGROUND DATA: Patients treated at the University of Heidelberg Spine Medical Center, Spinal Cord Injury Unit and Department of Otolaryngology (Germany), between 2005 and 2011, with the diagnosis of pharyngo-cutaneous fistulas. METHODS: We conducted a retrospective study of 5 patients treated between 2005 and 2011 for PCF after ACSS, covering their therapy management and outcome according to radiologic data and patient charts. RESULTS: Upon presentation, 4 patients were paraplegic. Two had PCF arising from one piriform sinus, two from the posterior pharyngeal wall and piriform sinus combined, and one only from the posterior pharyngeal wall. Two had previous unsuccessful surgical repair elsewhere and one had prior radiation therapy. In 3 patients, speech and swallowing could be completely restored; 2 patients died, both of whom were paraplegic. The patients needed an average of 2-3 procedures for complete functional recovery, consisting of primary closure with various vascularised regional flaps and refining laser procedures, supplemented with negative pressure wound therapy where needed. CONCLUSION: Based on our experience, we are able to provide a treatment algorithm indicating that chronic, as opposed to acute, fistulas require primary surgical closure combined with a vascularised flap, accompanied by the immediate application of negative pressure wound therapy. We also conclude that, particularly in paraplegic patients suffering this complication, the risk of a fatal outcome is substantial.


An overview of ocular implants with therapeutic application potential is provided. Various types of implants can be used as slow-release devices, locally delivering the needed drug for an extended period of time. Multiple periocular or intraocular injections of the drug can thus be circumvented and secondary complications minimized. The various compositions of polymers fulfilling specific delivery goals are described. Several of these implants are undergoing clinical trials, while a few are already commercialized. Despite the paramount progress in design, safety and efficacy, the place of these implants in our clinical therapeutic arsenal remains limited. Miniaturization of the implants, allowing their direct injection without the need for complicated surgery, is a necessary development avenue. Particulate systems, which can be engineered to target specific cells or tissues, are another promising alternative. For ocular diseases affecting the choroid and outer retina, transscleral or intrascleral implants are gaining momentum.


Introduction: New evidence from randomized controlled trials and etiology-of-fever studies, the availability of reliable rapid diagnostic tests (RDT) for malaria, and novel technologies call for a revision of the IMCI strategy. We developed a new algorithm based on (i) a systematic review of published studies assessing the safety and appropriateness of RDT and antibiotic prescription, (ii) results from a clinical and microbiological investigation of febrile children aged <5 years, and (iii) international expert IMCI opinions. The aim of this study was to assess the safety of the new algorithm among patients in urban and rural areas of Tanzania.

Materials and Methods: The design was a controlled noninferiority study. Enrolled children aged 2-59 months with any illness were managed either by a study clinician using the new Almanach algorithm (two intervention health facilities) or by clinicians using standard practice, including RDT (two control health facilities). At day 7 and day 14, all patients were reassessed. Patients who were ill in between or not cured at day 14 were followed until recovery or death. The primary outcome was the rate of complications; the secondary outcome was the rate of antibiotic prescriptions.

Results: 1062 children were recruited. The main diagnoses were URTI (26%), pneumonia (19%) and gastroenteritis (9.4%). 98% (531/541) were cured at day 14 in the Almanach arm and 99.6% (519/521) in controls. The rate of secondary hospitalization was 0.2% in each arm. One death occurred in controls. None of the complications was due to withdrawal of antibiotics or antimalarials at day 0. The rate of antibiotic use was 19% in the Almanach arm and 84% in controls.

Conclusion: Evidence suggests that the new algorithm, primarily aimed at the rational use of drugs, is as safe as standard practice and leads to a drastic reduction of antibiotic use. The Almanach is currently being tested for clinician adherence to the proposed procedures when used on paper or on a mobile phone.


A common operation in wireless ad hoc networks is the flooding of broadcast messages to establish network topologies and routing tables. The flooding of broadcast messages is, however, a resource-consuming process, as it might require the retransmission of messages by most network nodes. It is therefore very important to optimize this operation. In this paper, we first analyze the multipoint relaying (MPR) flooding mechanism used by the Optimized Link State Routing (OLSR) protocol to distribute topology control (TC) messages among all the system nodes. We then propose a new flooding method based on the fusion of two key concepts: distance-enabled multipoint relaying and connected dominating set (CDS) flooding. We present experimental simulations that show that our approach improves the performance of previously existing proposals.
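For context, the MPR mechanism analyzed above has each node pick a small subset of its 1-hop neighbours that together cover all of its strict 2-hop neighbours; only those relays retransmit the flooded message. A minimal sketch of the standard greedy heuristic follows — the function names and the simplified coverage rule are assumptions here, and OLSR's actual rules (RFC 3626) also handle willingness and tie-breaking by degree.

```python
def select_mprs(one_hop, two_hop_of):
    """Greedy multipoint relay selection for one node.

    one_hop:    set of 1-hop neighbour ids
    two_hop_of: dict mapping each 1-hop neighbour to the set of nodes
                it can reach (its own neighbours)
    """
    # Strict 2-hop neighbourhood: reachable through a 1-hop neighbour,
    # but not itself a 1-hop neighbour of this node.
    two_hop = set().union(*two_hop_of.values()) - one_hop
    uncovered = set(two_hop)
    mprs = set()
    while uncovered:
        # Pick the neighbour covering the most still-uncovered 2-hop nodes.
        best = max(one_hop - mprs,
                   key=lambda n: len(two_hop_of[n] & uncovered))
        gained = two_hop_of[best] & uncovered
        if not gained:
            break  # remaining 2-hop nodes are unreachable via 1-hop neighbours
        mprs.add(best)
        uncovered -= gained
    return mprs
```

The distance-enabled and CDS-based variants the paper proposes change which candidates are preferred, but the covering structure of the selection stays the same.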


This paper presents a Genetic Algorithm (GA) for the problem of sequencing units on a production line. It considers the possibility of changing the sequence of pieces through stations with access to an intermediate or centralized buffer. Access to the buffer is further restricted by the size of the pieces.

Abstract: This paper presents a Genetic Algorithm (GA) for the problem of sequencing in a mixed-model non-permutation flowshop. Resequencing is permitted where stations have access to intermittent or centralized resequencing buffers. The access to a buffer is restricted by the number of available buffer places and the physical size of the products.
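A minimal GA for sequencing problems of this kind might look as follows. This is a generic sketch with standard operators (tournament selection, order crossover, swap mutation, elitism), not the paper's specific encoding of resequencing buffers; all names and parameter values are illustrative.

```python
import random

def order_crossover(p1, p2, rng):
    """Classic OX: copy a slice from p1, fill the rest in p2's order."""
    n = len(p1)
    a, b = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = [g for g in p2 if g not in child[a:b]]
    holes = [i for i in range(n) if child[i] is None]
    for i, g in zip(holes, fill):
        child[i] = g
    return child

def genetic_sequence(cost, n_jobs, pop_size=30, generations=200, seed=0):
    """Evolve a job sequence minimising cost(sequence)."""
    rng = random.Random(seed)
    pop = [rng.sample(range(n_jobs), n_jobs) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        next_pop = [pop[0]]                       # elitism: keep the best
        while len(next_pop) < pop_size:
            # Tournament selection of two parents (tournament size 3).
            p1 = min(rng.sample(pop, 3), key=cost)
            p2 = min(rng.sample(pop, 3), key=cost)
            child = order_crossover(p1, p2, rng)
            if rng.random() < 0.2:                # swap mutation
                i, j = rng.sample(range(n_jobs), 2)
                child[i], child[j] = child[j], child[i]
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=cost)
```

In the paper's setting, cost would evaluate a candidate sequence subject to the buffer-place and product-size constraints, e.g. by penalising resequencing moves that the buffer cannot physically accommodate.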


BACKGROUND: Lactate protects mice against the ischaemic damage resulting from transient middle cerebral artery occlusion (MCAO) when administered intracerebroventricularly at reperfusion, yielding smaller lesion sizes and a better neurological outcome 48 h after ischaemia. We have now tested whether the beneficial effect of lactate is long-lasting and if lactate can be administered intravenously. METHODS: Male ICR-CD1 mice were subjected to 15-min suture MCAO under xylazine + ketamine anaesthesia. Na L-lactate (2 µl of 100 mmol/l) or vehicle was administered intracerebroventricularly at reperfusion. The neurological deficit was evaluated using a composite deficit score based on the neurological score, the rotarod test and the beam walking test. Mice were sacrificed at 14 days. In a second set of experiments, Na L-lactate (1 µmol/g body weight) was administered intravenously into the tail vein at reperfusion. The neurological deficit and the lesion volume were measured at 48 h. RESULTS: Intracerebroventricularly injected lactate induced sustained neuroprotection shown by smaller neurological deficits at 7 days (median = 0, min = 0, max = 3, n = 7 vs. median = 2, min = 1, max = 4.5, n = 5, p < 0.05) and 14 days after ischaemia (median = 0, min = 0, max = 3, n = 7 vs. median = 3, min = 0.5, max = 3, n = 7, p = 0.05). Reduced tissue damage was demonstrated by attenuated hemispheric atrophy at 14 days (1.3 ± 4.0 mm(3), n = 7 vs. 12.1 ± 3.8 mm(3), n = 5, p < 0.05) in lactate-treated animals. Systemic intravenous lactate administration was also neuroprotective and attenuated the deficit (median = 1, min = 0, max = 2.5, n = 12) compared to vehicle treatment (median = 1.5, min = 1, max = 8, n = 12, p < 0.05) as well as the lesion volume at 48 h (13.7 ± 12.2 mm(3), n = 12 vs. 29.6 ± 25.4 mm(3), n = 12, p < 0.05). 
CONCLUSIONS: The beneficial effect of lactate is long-lasting: lactate protects the mouse brain against ischaemic damage when supplied intracerebroventricularly during reperfusion with behavioural and histological benefits persisting 2 weeks after ischaemia. Importantly, lactate also protects after systemic intravenous administration, a more suitable route of administration in a clinical emergency setting. These findings provide further steps to bring this physiological, commonly available and inexpensive neuroprotectant closer to clinical translation for stroke.


This thesis focuses on increasing the rationality of investment appraisal processes when evaluating strategic investments in duopoly/oligopoly markets. Its main objective is to examine how a real-option-based investment appraisal method extended with game theory, the extended real options framework, could improve the accuracy of such analyses. The study approaches the problem through investment timing and the interdependencies of the investment's actual value attributes. The extended real options framework is an investment analysis and management tool that provides a partially bounded (currently covering only parametric and game-theoretic uncertainty) optimal value range for the investment's true value. In the framework, real options analysis (ROA) maps the potential strategic benefits by identifying the options and uncertainties associated with the investment, while game theory highlights the pressures the competitive environment creates for managing investment-related uncertainty. The extended real options framework yields a more rational estimate of a strategic investment's value because it links option exercise, and thus the time value of the options, more consistently to the firm's actual constrained (constrained by the actions of other market players) path-dependent capabilities.


The objective of this study was to examine the boundaries of the firm from the perspective of extended transaction cost theory. It was an empirical study covering five industries, with the goal of comparing the paper industry to the steel, chemical, ICT and energy industries. The material for the empirical part was collected through semi-structured theme interviews. The study showed that extended transaction cost theory is well suited to defining the boundaries of the firm. The explanatory power of static transaction cost theory is not sufficient, so the dynamic extension is necessary. The study revealed that, compared to the other industries, the paper industry faces the greatest challenges in determining its efficient boundaries.


Abstract: The main objective of this work is to show how the choice of the temporal dimension and of the spatial structure of the population influences an artificial evolutionary process. In the field of Artificial Evolution we can observe a common trend of synchronously evolving panmictic populations, i.e., populations in which any individual can be recombined with any other individual. Already in the '90s, the works of Spiessens and Manderick, Sarma and De Jong, and Gorges-Schleuter pointed out that, if a population is structured according to a mono- or bi-dimensional regular lattice, the evolutionary process shows a different dynamic with respect to the panmictic case. In particular, Sarma and De Jong studied the selection pressure (i.e., the diffusion of a best individual when the only active operator is selection) induced by a regular bi-dimensional structure of the population, proposing a logistic modeling of the selection pressure curves. This model supposes that the diffusion of a best individual in a population follows an exponential law. We show that such a model is inadequate to describe the process, since the growth speed must be quadratic or sub-quadratic in the case of a bi-dimensional regular lattice. New linear and sub-quadratic models are proposed for modeling the selection pressure curves in, respectively, mono- and bi-dimensional regular structures. These models are extended to describe the process when asynchronous evolutions are employed. Different dynamics of the populations imply different search strategies of the resulting algorithm when the evolutionary process is used to solve optimisation problems. A benchmark of both discrete and continuous test problems is used to study the search characteristics of the different topologies and updates of the populations.

In the last decade, the pioneering studies of Watts and Strogatz have shown that most real networks, both in the biological and sociological worlds as well as in man-made structures, have mathematical properties that set them apart from regular and random structures. In particular, they introduced the concept of small-world graphs, and they showed that this new family of structures has interesting computing capabilities. Populations structured according to these new topologies are proposed, and their evolutionary dynamics are studied and modeled. We also propose asynchronous evolutions for these structures, and the resulting evolutionary behaviors are investigated. Many man-made networks have grown, and are still growing, incrementally, and explanations have been proposed for their actual shape, such as Albert and Barabási's preferential attachment growth rule. However, many actual networks seem to have undergone some kind of Darwinian variation and selection. Thus, how these networks might have come to be selected is an interesting yet unanswered question. In the last part of this work, we show how a simple evolutionary algorithm can enable the emergence of these kinds of structures for two prototypical problems of the automata networks world, the majority classification and the synchronisation problems.

Synopsis: The main objective of this work is to show the influence of the choice of the temporal dimension and of the spatial structure of a population on an artificial evolutionary process. In the field of Artificial Evolution, one can observe a tendency to evolve panmictic populations synchronously, where each individual can be recombined with any other individual in the population. Already in the '90s, Spiessens and Manderick, Sarma and De Jong, and Gorges-Schleuter observed that, if a population has a regular mono- or bi-dimensional structure, the evolutionary process shows a dynamic different from that of a panmictic population. In particular, Sarma and De Jong studied the selection pressure (i.e., the diffusion of an optimal individual when only the selection operator is active) induced by a regular bi-dimensional structure of the population, proposing a logistic modeling of the selection pressure curves. This model supposes that the diffusion of an optimal individual follows an exponential law. We show that this model is inadequate to describe the phenomenon, given that the growth speed must obey a quadratic or sub-quadratic law in the case of a regular bi-dimensional structure. New linear and sub-quadratic models are proposed for mono- and bi-dimensional structures. These models are extended to describe asynchronous evolutionary processes. Different population dynamics imply different search strategies of the resulting algorithm when the evolutionary process is used to solve optimisation problems. A set of discrete and continuous problems is used to study the search characteristics of the different topologies and population updates. In recent years, the studies of Watts and Strogatz have shown that many networks, in the biological and sociological worlds as well as in man-made structures, have mathematical properties that set them apart from both regular and random structures. In particular, they introduced the notion of small-world graphs and showed that this new family of structures possesses interesting dynamical properties. Populations with these new topologies are proposed, and their evolutionary dynamics are studied and modeled. For populations with these structures, asynchronous evolution methods are proposed, and the resulting dynamics are studied. Many man-made networks have formed incrementally, and explanations for their current shape have been proposed, such as Albert and Barabási's preferential attachment. However, many existing networks must be the product of a process of Darwinian variation and selection. Thus, how these structures may have been selected is an interesting question that remains unanswered. In the last part of this work, we show how a simple artificial evolutionary process allows this type of topology to emerge in the case of two prototypical problems of automata networks, the density and synchronisation tasks.
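The quadratic (rather than exponential) takeover growth on a bi-dimensional lattice argued above can be checked with a small simulation: under selection only, a single best individual on a torus spreads as a diamond whose area grows quadratically in time. This is a sketch under simplifying assumptions not taken from the thesis — deterministic best-neighbour replacement and a von Neumann neighbourhood.

```python
def takeover_curve(side, steps):
    """Synchronous takeover on a side x side torus: each cell copies the
    best fitness in its von Neumann neighbourhood (selection only, no
    variation). Returns the number of copies of the best individual
    after each step, starting from a single copy in the centre.
    """
    grid = [[0] * side for _ in range(side)]
    grid[side // 2][side // 2] = 1          # the single best individual
    counts = [1]
    for _ in range(steps):
        new = [[0] * side for _ in range(side)]
        for i in range(side):
            for j in range(side):
                neigh = [grid[i][j],
                         grid[(i - 1) % side][j], grid[(i + 1) % side][j],
                         grid[i][(j - 1) % side], grid[i][(j + 1) % side]]
                new[i][j] = max(neigh)      # deterministic best-neighbour copy
        grid = new
        counts.append(sum(map(sum, grid)))  # copies of the best individual
    return counts
```

Far from the boundary the counts follow 2t² + 2t + 1 (1, 5, 13, 25, ...), a quadratic law: consistent with the claim that logistic models, which assume initial exponential growth, cannot fit these curves.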


An adaptation of Kumar's algorithm for solving systems of equations with Toeplitz matrices over the reals to finite fields, running in O(n log n) time.