902 results for Dynamic search fireworks algorithm with covariance mutation
Abstract:
Multiple osteochondromas is an autosomal dominant skeletal disorder characterized by the formation of multiple cartilage-capped tumours. Two causal genes have been identified, EXT1 and EXT2, which account for 65% and 30% of cases, respectively. We have undertaken a mutation analysis of the EXT1 and EXT2 genes in 39 unrelated Spanish patients, most of them with a moderate phenotype, and looked for genotype-phenotype correlations. We found the mutant allele in 37 patients: 29 in EXT1 and 8 in EXT2. Five of the EXT1 mutations were deletions identified by MLPA. Two cases of mosaicism were documented. We detected a lower number of exostoses in patients with missense mutations than in patients with other types of mutations. In conclusion, we found a mutation in EXT1 or in EXT2 in 95% of the Spanish patients. Eighteen of the mutations were novel.
Abstract:
One of the most important issues in molecular biology is to understand the regulatory mechanisms that control gene expression. Gene expression is often regulated by proteins, called transcription factors, which bind to short (5 to 20 base pairs), degenerate segments of DNA. Experimental efforts towards understanding the sequence specificity of transcription factors are laborious and expensive, but can be substantially accelerated with the use of computational predictions. This thesis describes the use of algorithms and resources for transcription factor binding site analysis in addressing quantitative modelling, where probabilistic models are built to represent the binding properties of a transcription factor and can be used to find new functional binding sites in genomes. Initially, an open-access database (HTPSELEX) was created, holding high-quality binding sequences for two eukaryotic families of transcription factors, namely CTF/NF1 and LEF1/TCF. The binding sequences were elucidated using a recently described experimental procedure called HTP-SELEX, which allows the generation of a large number (>1000) of binding sites using mass sequencing technology. For each HTP-SELEX experiment we also provide accurate primary experimental information about the protein material used, details of the wet-lab protocol, an archive of sequencing trace files, and assembled clone sequences of binding sequences. The database also offers reasonably large SELEX libraries obtained with conventional low-throughput protocols. The database is available at http://wwwisrec.isb-sib.ch/htpselex/ and ftp://ftp.isrec.isb-sib.ch/pub/databases/htpselex. The Expectation-Maximisation (EM) algorithm is one of the frequently used methods to estimate probabilistic models representing the sequence specificity of transcription factors. We present computer simulations in order to estimate the precision of EM-estimated models as a function of data set parameters (such as the length of the initial sequences, the number of initial sequences, and the percentage of non-binding sequences). We observed a remarkable robustness of the EM algorithm with regard to the length of the training sequences and the degree of contamination. The HTPSELEX database and the benchmark results of the EM algorithm formed part of the foundation for the subsequent project, in which a statistical framework based on hidden Markov models was developed to represent the sequence specificity of the transcription factors CTF/NF1 and LEF1/TCF using the HTP-SELEX experiment data. The hidden Markov model framework is capable of both predicting and classifying CTF/NF1 and LEF1/TCF binding sites. A covariance analysis of the binding sites revealed non-independent base preferences at different nucleotide positions, providing insight into the binding mechanism. We then tested the LEF1/TCF model by computing binding scores for a set of LEF1/TCF binding sequences for which relative affinities were determined experimentally using non-linear regression. The predicted and experimentally determined binding affinities correlated well.
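To make the notion of a probabilistic model of binding specificity concrete, here is a minimal sketch that builds a simple position weight matrix (log-odds against a uniform background) from aligned binding sites and scores a candidate sequence. This is not the thesis's EM or hidden-Markov-model machinery, and the position-independence assumption of a PWM is exactly what the covariance analysis mentioned above relaxes; the function names and example sites are illustrative only.

```python
import numpy as np

BASES = "ACGT"

def position_weight_matrix(sites, pseudocount=1.0):
    """Build a log-odds position weight matrix (vs. a uniform background)
    from aligned binding-site sequences of equal length."""
    L = len(sites[0])
    counts = np.full((4, L), pseudocount)
    for s in sites:
        for pos, base in enumerate(s):
            counts[BASES.index(base), pos] += 1
    probs = counts / counts.sum(axis=0)
    return np.log2(probs / 0.25)

def score(pwm, seq):
    """Additive PWM score of a candidate site; higher means closer to the model."""
    return sum(pwm[BASES.index(b), i] for i, b in enumerate(seq))

pwm = position_weight_matrix(["TTGGC", "TTGGT", "TTGGC", "CTGGC"])
print(score(pwm, "TTGGC"), score(pwm, "AAAAA"))
```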
Abstract:
In recent years, evolutionary algorithms have proven to be effective methods for solving global optimization problems. Their particular strengths are general applicability and the ability to find the global solution without getting stuck in local optima of the objective function. The aim of this work is to develop a new mutation operation, based on the normal distribution, for the differential evolution algorithm, one of the newest evolution-based optimization algorithms. The new method is expected to further reduce both the risk of premature convergence of the population and the risk of the algorithm's states stagnating, and it can be shown theoretically to converge. This does not hold for the original differential evolution, since it has been shown that its state transitions can, with a small probability, get stuck. The behaviour of the new method is examined experimentally using multi-constrained problems as test problems. Constraint functions are handled with a method developed by Jouni Lampinen, based on the principle of Pareto optimality, which at the same time yields additional experimental evidence on the performance of that method. All test problems used could be solved both with the original differential evolution and with the version using the new mutation operation. However, the new method proved more reliable in cases where the original algorithm had difficulties. In addition, most problems could be solved reliably with a smaller population size than when using the original differential evolution. The new method also makes it easier to use control parameter settings that render the search rotationally invariant. Computationally, the new method is slightly heavier than the original differential evolution, and it requires one additional control parameter. However, values as generally applicable as possible were determined for the new control parameters, making it possible to solve a large set of different problems with them.
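The abstract does not specify the exact operator, so the following is only a minimal sketch of how a DE/rand/1 mutation might be augmented with a normally distributed term; the names gaussian_de_mutation, F and sigma are illustrative. The Gaussian component keeps the mutation non-zero even when the population's difference vectors collapse, which is the stagnation issue mentioned above.

```python
import numpy as np

def gaussian_de_mutation(pop, F=0.5, sigma=0.1, rng=None):
    """DE/rand/1 mutation augmented with a normally distributed term.

    pop   : (NP, D) array holding the current population (NP >= 4)
    F     : differential weight of classic DE
    sigma : scale of the extra Gaussian term (the additional control parameter)
    """
    rng = np.random.default_rng() if rng is None else rng
    NP, D = pop.shape
    mutants = np.empty_like(pop)
    for i in range(NP):
        # three distinct indices, all different from the target index i
        r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], size=3, replace=False)
        # classic difference-vector term plus a Gaussian perturbation; the
        # latter stays non-zero even if the difference vectors collapse
        mutants[i] = pop[r1] + F * (pop[r2] - pop[r3]) + rng.normal(0.0, sigma, D)
    return mutants
```

In a full differential evolution loop this step would be followed by crossover and greedy one-to-one selection against the parent population.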
Abstract:
OBJECTIVE: To develop disease-specific recommendations for the diagnosis and management of eosinophilic granulomatosis with polyangiitis (Churg-Strauss syndrome) (EGPA). METHODS: The EGPA Consensus Task Force experts comprised 8 pulmonologists, 6 internists, 4 rheumatologists, 3 nephrologists, 1 pathologist and 1 allergist from 5 European countries and the USA. Using a modified Delphi process, a list of 40 questions was elaborated by 2 members and sent to all participants prior to the meeting. Concurrently, an extensive literature search was undertaken with publications assigned with a level of evidence according to accepted criteria. Drafts of the recommendations were circulated for review to all members until final consensus was reached. RESULTS: Twenty-two recommendations concerning the diagnosis, initial evaluation, treatment and monitoring of EGPA patients were established. The relevant published information on EGPA, antineutrophil-cytoplasm antibody-associated vasculitides, hypereosinophilic syndromes and eosinophilic asthma supporting these recommendations was also reviewed. DISCUSSION: These recommendations aim to give physicians tools for effective and individual management of EGPA patients, and to provide guidance for further targeted research.
Abstract:
PURPOSE: The MOSAIC (Multicenter International Study of Oxaliplatin/Fluorouracil/Leucovorin in the Adjuvant Treatment of Colon Cancer) study has demonstrated 3-year disease-free survival (DFS) and 6-year overall survival (OS) benefit of adjuvant oxaliplatin in stage II to III resected colon cancer. This update presents 10-year OS, as well as OS and DFS by mismatch repair (MMR) status and BRAF mutation. METHODS: Survival actualization after 10-year follow-up was performed in 2,246 patients with resected stage II to III colon cancer. We assessed MMR status and BRAF mutation in 1,008 formalin-fixed paraffin-embedded specimens. RESULTS: After a median follow-up of 9.5 years, 10-year OS rates in the bolus/infusional fluorouracil plus leucovorin (LV5FU2) and LV5FU2 plus oxaliplatin (FOLFOX4) arms were 67.1% versus 71.7% (hazard ratio [HR], 0.85; P = .043) in the whole population, 79.5% versus 78.4% for stage II (HR, 1.00; P = .980), and 59.0% versus 67.1% for stage III (HR, 0.80; P = .016) disease. Ninety-five patients (9.4%) had MMR-deficient (dMMR) tumors, and 94 (10.4%) had BRAF mutation. BRAF mutation was not prognostic for OS (P = .965), but dMMR was an independent prognostic factor (HR, 2.02; 95% CI, 1.15 to 3.55; P = .014). HRs for DFS and OS benefit in the FOLFOX4 arm were 0.48 (95% CI, 0.20 to 1.12) and 0.41 (95% CI, 0.16 to 1.07), respectively, in patients with stage II to III dMMR and 0.50 (95% CI, 0.25 to 1.00) and 0.66 (95% CI, 0.31 to 1.42), respectively, in those with BRAF mutation. CONCLUSION: The OS benefit of oxaliplatin-based adjuvant chemotherapy, increasing over time and with disease severity, was confirmed at 10 years in patients with stage II to III colon cancer. These updated results support the use of FOLFOX in patients with stage III disease, including those with dMMR or BRAF mutation.
Abstract:
Since the beginnings of computers as programmable machines, people have tried to endow them with a certain intelligence so that they think or reason as similarly as possible to humans. One of these attempts has been to make the machine able to think in such a way that it studies moves and wins chess games. Nowadays, with current multitasking, object-oriented systems with direct memory access, and thanks to the powerful hardware available, there is a great variety of programs dedicated to playing chess. And there are not only small programs; there are even entire machines dedicated to computing and studying moves in order to beat the best players in the world. The objective of my work is to carry out a study and an implementation of one of these programs, and it is therefore divided into two parts. The theoretical, or study, part consists of a survey of artificial intelligence systems dedicated to playing chess, the study of and search for a valid evaluation function, and a study of search algorithms. The practical part of the work is based on the implementation of an intelligent system capable of playing chess with some logic. This implementation is carried out with the help of the SDL libraries, using the minimax algorithm with alpha-beta pruning and C++ code. As a conclusion of the project, I would like to point out that the study showed me that creating a chess game was not as easy as I had thought, but it gave me the satisfaction of applying everything I learned during my degree and of discovering many other new things.
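The thesis implementation is in C++ with SDL; purely as an illustration of the search technique it names, here is a minimal, game-agnostic sketch of minimax with alpha-beta pruning. The callbacks evaluate, moves and apply_move are assumed to be supplied by the engine's board representation and are not part of the original work.

```python
def alphabeta(state, depth, alpha, beta, maximizing, evaluate, moves, apply_move):
    """Minimax search with alpha-beta pruning.

    evaluate(state)      -> heuristic score from the maximizer's point of view
    moves(state)         -> iterable of legal moves (empty when the game is over)
    apply_move(state, m) -> new state after playing move m
    """
    legal = list(moves(state))
    if depth == 0 or not legal:
        return evaluate(state)
    if maximizing:
        best = float("-inf")
        for m in legal:
            best = max(best, alphabeta(apply_move(state, m), depth - 1,
                                       alpha, beta, False, evaluate, moves, apply_move))
            alpha = max(alpha, best)
            if alpha >= beta:      # beta cut-off: the opponent will avoid this branch
                break
        return best
    else:
        best = float("inf")
        for m in legal:
            best = min(best, alphabeta(apply_move(state, m), depth - 1,
                                       alpha, beta, True, evaluate, moves, apply_move))
            beta = min(beta, best)
            if beta <= alpha:      # alpha cut-off
                break
        return best
```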
Abstract:
Parameter estimation still remains a challenge in many important applications. There is a need to develop methods that utilize the achievements of modern computational systems with growing capabilities. Owing to this fact, different kinds of Evolutionary Algorithms are becoming an especially promising field of research. The main aim of this thesis is to explore theoretical aspects of a specific type of Evolutionary Algorithm, the Differential Evolution (DE) method, and to implement this algorithm as code capable of solving a large range of problems. Matlab, a numerical computing environment provided by MathWorks Inc., has been utilized for this purpose. Our implementation empirically demonstrates the benefits of stochastic optimizers over deterministic optimizers in the case of stochastic and chaotic problems. Furthermore, the advanced features of Differential Evolution are discussed as well as taken into account in the Matlab realization. Test "toy case" examples are presented in order to show the advantages and disadvantages introduced by the additional aspects involved in extensions of the basic algorithm. Another aim of this thesis is to apply the DE approach to the parameter estimation problem of a system exhibiting chaotic behavior, where the well-known Lorenz system with a specific set of parameter values is taken as an example. Finally, the DE approach for estimation of chaotic dynamics is compared to the Ensemble prediction and parameter estimation system (EPPES) approach, which was recently proposed as a possible solution for similar problems.
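The thesis code is in Matlab; as a rough, hypothetical analogue of the Lorenz parameter-estimation experiment, the sketch below uses SciPy's stock differential_evolution and solve_ivp. The time window, parameter bounds and noise-free "observations" are chosen only for illustration and are not taken from the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

def lorenz(t, xyz, sigma, rho, beta):
    x, y, z = xyz
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# Noise-free "observations" generated with the classic parameter values.
true_params = (10.0, 28.0, 8.0 / 3.0)
t_eval = np.linspace(0.0, 2.0, 201)      # short window: chaos makes long fits ill-posed
x0 = [1.0, 1.0, 1.0]
observed = solve_ivp(lorenz, (0.0, 2.0), x0, t_eval=t_eval, args=true_params).y

def cost(params):
    """Sum of squared deviations between the simulated and observed trajectories."""
    sim = solve_ivp(lorenz, (0.0, 2.0), x0, t_eval=t_eval, args=tuple(params))
    if not sim.success or sim.y.shape != observed.shape:
        return 1e12                       # penalize failed integrations
    return float(np.sum((sim.y - observed) ** 2))

# bounds for (sigma, rho, beta); chosen only so the search stays in a sane region
result = differential_evolution(cost, bounds=[(5, 15), (20, 40), (1, 4)],
                                seed=0, maxiter=30)
print(result.x)                           # should land near (10, 28, 8/3)
```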
Abstract:
Traditionally real estate has been seen as a good diversification tool for a stock portfolio due to the lower return and volatility characteristics of real estate investments. However, the diversification benefits of a multi-asset portfolio depend on how the different asset classes co-move in the short- and long-run. As the asset classes are affected by the same macroeconomic factors, interrelationships limiting the diversification benefits could exist. This master’s thesis aims to identify such dynamic linkages in the Finnish real estate and stock markets. The results are beneficial for portfolio optimization tasks as well as for policy-making. The real estate industry can be divided into direct and securitized markets. In this thesis the direct market is depicted by the Finnish housing market index. The securitized market is proxied by the Finnish all-sectors securitized real estate index and by a European residential Real Estate Investment Trust index. The stock market is depicted by OMX Helsinki Cap index. Several macroeconomic variables are incorporated as well. The methodology of this thesis is based on the Vector Autoregressive (VAR) models. The long-run dynamic linkages are studied with Johansen’s cointegration tests and the short-run interrelationships are examined with Granger-causality tests. In addition, impulse response functions and forecast error variance decomposition analyses are used for robustness checks. The results show that long-run co-movement, or cointegration, did not exist between the housing and stock markets during the sample period. This indicates diversification benefits in the long-run. However, cointegration between the stock and securitized real estate markets was identified. This indicates limited diversification benefits and shows that the listed real estate market in Finland is not matured enough to be considered a separate market from the general stock market. Moreover, while securitized real estate was shown to cointegrate with the housing market in the long-run, the two markets are still too different in their characteristics to be used as substitutes in a multi-asset portfolio. This implies that the capital intensiveness of housing investments cannot be circumvented by investing in securitized real estate.
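For readers unfamiliar with the methodology, here is a hypothetical sketch of the two core tests (Johansen cointegration and Granger causality) using statsmodels on simulated series. The column names and data are placeholders, not the thesis's Finnish index data, and the deterministic-term and lag choices are purely illustrative.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen
from statsmodels.tsa.stattools import grangercausalitytests

# Simulated (log) index levels standing in for housing, securitized real
# estate and stock series; the first two share a stochastic trend.
rng = np.random.default_rng(0)
n = 120
trend = np.cumsum(rng.normal(size=n))
data = pd.DataFrame({
    "housing": trend + rng.normal(scale=0.5, size=n),
    "reit":    trend + rng.normal(scale=0.5, size=n),
    "stocks":  np.cumsum(rng.normal(size=n)),   # independent random walk
})

# Johansen trace test on the levels (long-run cointegration).
joh = coint_johansen(data.values, det_order=0, k_ar_diff=1)
print("trace statistics:   ", joh.lr1)
print("95% critical values:", joh.cvt[:, 1])

# Granger causality on first differences (short-run linkages): does the
# second column ("stocks") help predict the first ("housing")?
diffs = data.diff().dropna()
granger = grangercausalitytests(diffs[["housing", "stocks"]].values, maxlag=4)
```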
Abstract:
The incidence of superficial or deep-seated infections due to Candida glabrata has increased markedly, probably because of the low intrinsic susceptibility of this microorganism to azole antifungals and its relatively high propensity to acquire azole resistance. To determine changes in the C. glabrata proteome associated with petite mutations, cytosolic extracts from an azole-resistant petite mutant of C. glabrata induced by exposure to ethidium bromide, and from its azole-susceptible parent isolate were compared by two-dimensional polyacrylamide gel electrophoresis. Proteins of interest were identified by peptide mass fingerprinting or sequence tagging using a matrix-assisted laser desorption/ionization tandem time-of-flight mass spectrometer. Tryptic peptides from a total of 160 Coomassie-positive spots were analyzed for each strain. Sixty-five different proteins were identified in the cytosolic extracts of the parent strain and 58 in the petite mutant. Among the proteins identified, 10 were higher in the mutant strain, whereas 23 were lower compared to the parent strain. The results revealed a significant decrease in the enzymes associated with the metabolic rate of mutant cells such as aconitase, transaldolase, and pyruvate kinase, and changes in the levels of specific heat shock proteins. Moreover, transketolase, aconitase and catalase activity measurements decreased significantly in the ethidium bromide-induced petite mutant. These data may be useful for designing experiments to obtain a better understanding of the nuclear response to impairment of mitochondrial function associated with this mutation in C. glabrata.
Abstract:
Our objective is to develop a diffusion Monte Carlo (DMC) algorithm to estimate the exact expectation values, ⟨Ψ₀|Â|Ψ₀⟩, of multiplicative operators Â, such as polarizabilities and high-order hyperpolarizabilities, for isolated atoms and molecules. The existing forward-walking pure diffusion Monte Carlo (FW-PDMC) algorithm which attempts this has a serious bias. On the other hand, the DMC algorithm with minimal stochastic reconfiguration provides unbiased estimates of the energies, but the expectation values ⟨Ψ₀|Â|Ψ⟩ are contaminated by Ψ, a user-specified, approximate wave function, when Â does not commute with the Hamiltonian. We modified the latter algorithm to obtain the exact expectation values for these operators, while at the same time eliminating the bias. To compare the efficiency of FW-PDMC and the modified DMC algorithms we calculated simple properties of the H atom, such as various functions of coordinates and polarizabilities. Using three non-exact wave functions, one of moderate quality and the others very crude, in each case the results are within statistical error of the exact values.
Abstract:
Controlled choice over public schools attempts to give options to parents while maintaining diversity, often enforced by setting feasibility constraints with hard upper and lower bounds for each student type. We demonstrate that there might not exist assignments that satisfy standard fairness and non-wastefulness properties, whereas constrained non-wasteful assignments which are fair for same-type students always exist. We introduce a "controlled" version of the deferred acceptance algorithm with an improvement stage (CDAAI) that finds a Pareto optimal assignment among such assignments. To achieve fair (across all types) and non-wasteful assignments, we propose that the control constraints be interpreted as soft bounds: flexible limits that regulate school priorities. In this setting, a modified version of the deferred acceptance algorithm (DAASB) finds an assignment that is Pareto optimal among fair assignments while eliciting true preferences. CDAAI and DAASB provide two alternative practical solutions depending on the interpretation of the control constraints. JEL C78, D61, D78, I20.
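The paper's CDAAI and DAASB mechanisms add type constraints and an improvement stage on top of deferred acceptance; as background only, here is a minimal sketch of the standard student-proposing deferred acceptance algorithm without any controlled-choice constraints. All names and the example instance are illustrative.

```python
def deferred_acceptance(student_prefs, school_prefs, capacities):
    """Student-proposing deferred acceptance (Gale-Shapley) for school choice.

    student_prefs : dict student -> ordered list of acceptable schools
    school_prefs  : dict school  -> ordered list of students (priority order)
    capacities    : dict school  -> number of seats
    Returns a dict school -> set of students held when the algorithm stops.
    """
    rank = {s: {st: i for i, st in enumerate(p)} for s, p in school_prefs.items()}
    next_choice = {st: 0 for st in student_prefs}   # next school each student proposes to
    free = set(student_prefs)                       # students still proposing
    held = {s: set() for s in school_prefs}

    while free:
        st = free.pop()
        prefs = student_prefs[st]
        if next_choice[st] >= len(prefs):
            continue                                # list exhausted: student stays unmatched
        school = prefs[next_choice[st]]
        next_choice[st] += 1
        held[school].add(st)
        if len(held[school]) > capacities[school]:
            # reject the lowest-priority student currently held at this school
            rejected = max(held[school], key=lambda x: rank[school][x])
            held[school].remove(rejected)
            free.add(rejected)
    return held

# One seat at s1, two at s2; s1's priorities resolve the conflict.
print(deferred_acceptance(
    {"ann": ["s1", "s2"], "bob": ["s1"], "cal": ["s1", "s2"]},
    {"s1": ["bob", "ann", "cal"], "s2": ["ann", "cal", "bob"]},
    {"s1": 1, "s2": 2},
))   # -> {'s1': {'bob'}, 's2': {'ann', 'cal'}}
```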
Abstract:
We study markets with indivisible goods where monetary compensations are not possible. Each individual is endowed with an object and a preference relation over all objects. When preferences are strict, Gale's top trading cycle algorithm finds the unique core allocation. When preferences are not necessarily strict, we use an exogenous profile of tie-breakers to resolve any ties in individuals' preferences and apply Gale's top trading cycle algorithm to the resulting profile of strict preferences. We provide a foundation for these simple extensions of Gale's top trading cycle algorithm from strict preferences to weak preferences. We show that Gale's top trading cycle algorithm with fixed tie-breaking is characterized by individual rationality, strategy-proofness, weak efficiency, non-bossiness, and consistency. Our result supports the common practice in applications of breaking ties in weak preferences using some fixed exogenous criteria and then using a 'good and simple' rule for the resulting strict preferences. This reinforces the market-based approach even in the presence of indifferences, because competitive allocations are always chosen.
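As a sketch of the underlying procedure (not of the paper's characterization results), the following implements Gale's top trading cycle for strict preferences; under weak preferences one would first break ties with a fixed exogenous ordering and then run the same routine. It assumes each agent owns exactly one object and ranks all objects; names and the example are illustrative.

```python
def top_trading_cycles(preferences, endowment):
    """Gale's top trading cycle for strict preferences.

    preferences : dict agent -> strict ordered list of all objects
                  (under weak preferences, apply a fixed tie-breaker first)
    endowment   : dict agent -> object initially owned (one object each)
    Returns a dict agent -> object received.
    """
    owner = {obj: ag for ag, obj in endowment.items()}
    remaining = set(preferences)
    assignment = {}

    while remaining:
        # each remaining agent points at the owner of their best remaining object
        points_to = {}
        for ag in remaining:
            best = next(o for o in preferences[ag] if owner[o] in remaining)
            points_to[ag] = owner[best]
        # follow the pointers from an arbitrary agent until a cycle closes
        seen, ag = [], next(iter(remaining))
        while ag not in seen:
            seen.append(ag)
            ag = points_to[ag]
        cycle = seen[seen.index(ag):]
        # trade along the cycle, then remove its members from the market
        for ag in cycle:
            assignment[ag] = next(o for o in preferences[ag] if owner[o] in remaining)
        remaining -= set(cycle)
    return assignment

print(top_trading_cycles(
    {1: ["b", "a", "c"], 2: ["a", "b", "c"], 3: ["a", "c", "b"]},
    {1: "a", 2: "b", 3: "c"},
))   # -> {1: 'b', 2: 'a', 3: 'c'}
```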
Abstract:
This thesis comprises three essays in applied microeconomics. Using models of learning and network externalities, it studies the behaviour of economic agents in different situations. The first essay addresses the question of natural resource use under uncertainty and learning. Several authors have dealt with the subject, but here we study a learning model in which the agents who consume the resource do not hold the same prior beliefs. The second essay addresses the generic problem faced, for example, by a research fund wishing to select the best among several researchers of different generations and different levels of experience. The third essay studies a particular form of business organization known as multi-level marketing. The first chapter is entitled "Renewable Resource Consumption in a Learning Environment with Heterogeneous Beliefs". We use a learning model with heterogeneous beliefs to study the exploitation of a natural resource under uncertainty. Two types of learning must be distinguished here: adaptive learning and learning proper; these two terms are borrowed from Koulovatianos et al. (2009). We show that, compared with adaptive learning, learning has a negative impact on total consumption by all exploiters of the resource. Individually, however, some exploiters may consume more of the resource under learning than under adaptive learning. Indeed, under learning, consumers face two types of incentives not to consume the resource (and therefore to invest): the own incentive, which always has a negative effect on resource consumption, and the heterogeneous incentive, whose effect can be positive or negative. The overall effect of learning on individual consumption therefore depends on the sign and magnitude of the heterogeneous incentive. Moreover, an analysis of the absolute and relative variations in consumption following a change in beliefs shows that the exploiters tend to converge towards a common decision. The second chapter is entitled "A Perpetual Search for Talent across Overlapping Generations". With a dynamic overlapping-generations model, we study how a research fund should proceed to select the best researchers to finance. The researchers do not all have the same seniority in research activity. For an optimal decision, the research fund must rely both on the seniority and on the past work of the researchers who have applied for a research grant, and it must be more lenient towards young researchers in the requirements they must meet to be funded. This work is also a contribution to the analysis of bandit problems: here, instead of attempting to compute an index, we propose to rank and progressively eliminate researchers by comparing them two by two. The third chapter is entitled "Paradox about the Multi-Level Marketing (MLM)". Over the last few decades, a particular form of business has become increasingly common, in which the product is marketed through distributors. Each distributor can sell the product and/or recruit other distributors for the company. Distributors earn profits on their own sales and also receive commissions on the sales of the distributors they have recruited. This is multi-level marketing (MLM). The structure of these businesses is often described by critics as a pyramid scheme, a scam, and therefore unsustainable. But the promoters of multi-level marketing reject these allegations, arguing that the purpose of MLMs is to sell and not to recruit: the payoffs and the rules of the game are such that distributors have a greater incentive to sell the product than to recruit. However, if this argument by MLM promoters is valid, a paradox appears. Why would a distributor who really wants to sell the product and make a profit recruit other individuals who will come to operate in the same market? How can we understand the fact that an agent may recruit people who could become their competitors, when it is well established that every entrepreneur avoids and even fights competition? This is the type of question this chapter addresses. To explain this paradox, we use the intrinsic structure of MLM organizations. In reality, to be able to sell well, the distributor has to recruit: the commissions received from recruiting confer selling power, in the sense that they allow the recruiter to offer a competitive price for the product they wish to sell. Moreover, MLMs have a structure similar to that of multi-sided markets in the sense of Rochet and Tirole (2003, 2006) and Weyl (2010). Recruitment has an external effect on sales and sales have an external effect on recruitment, and all of this is managed by the promoter of the organization. Thus, if the promoter does not take these externalities into account when setting the various commissions, agents may turn more or less towards recruitment.
Abstract:
A fast simulated annealing algorithm is developed for automatic object recognition. The normalized correlation coefficient is used as a measure of the match between a hypothesized object and an image. Templates are generated on-line during the search by transforming model images. Simulated annealing reduces the search time by orders of magnitude with respect to an exhaustive search. The algorithm is applied to the problem of how landmarks, for example, traffic signs, can be recognized by an autonomous vehicle or a navigating robot. The algorithm works well in noisy, real-world images of complicated scenes for model images with high information content.
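A toy sketch of the idea, assuming a grayscale image larger than the template: simulated annealing over integer translations, with the normalized correlation coefficient as the match score. The paper's algorithm additionally generates templates on-line by transforming model images (e.g. scale and rotation), which is omitted here; all names and parameter values are illustrative.

```python
import numpy as np

def normalized_correlation(template, patch):
    """Normalized correlation coefficient between a template and an image patch."""
    t = template - template.mean()
    p = patch - patch.mean()
    denom = np.sqrt((t * t).sum() * (p * p).sum())
    return float((t * p).sum() / denom) if denom > 0 else 0.0

def anneal_match(image, template, n_steps=5000, t0=1.0, cooling=0.999, rng=None):
    """Search for the template position that maximizes the match score,
    using simulated annealing over (row, col) offsets only."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = template.shape
    H, W = image.shape
    pos = np.array([(H - h) // 2, (W - w) // 2])           # start in the centre

    def score(p):
        r, c = p
        return normalized_correlation(template, image[r:r + h, c:c + w])

    best = cur = score(pos)
    best_pos = pos.copy()
    temp = t0
    for _ in range(n_steps):
        cand = pos + rng.integers(-5, 6, size=2)           # small random move
        cand[0] = np.clip(cand[0], 0, H - h)
        cand[1] = np.clip(cand[1], 0, W - w)
        s = score(cand)
        # always accept improvements; accept worse moves with Boltzmann probability
        if s > cur or rng.random() < np.exp((s - cur) / temp):
            pos, cur = cand, s
            if s > best:
                best, best_pos = s, cand.copy()
        temp *= cooling
    return best_pos, best
```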
Abstract:
In this paper we present an error analysis for a Monte Carlo algorithm for evaluating bilinear forms of matrix powers. An Almost Optimal Monte Carlo (MAO) algorithm for solving this problem is formulated. Results on the structure of the probability error are presented, and the construction of robust and interpolation Monte Carlo algorithms is discussed. Results are presented comparing the performance of the Monte Carlo algorithm with that of a corresponding deterministic algorithm. The two algorithms are tested on a well-balanced matrix, and then the effects of perturbing this matrix, by small and large amounts, are studied.
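As an illustration of the estimator being analysed (without the paper's error analysis or its robust/interpolation variants), the sketch below estimates the bilinear form (v, A^k h) with random walks whose initial and transition densities are proportional to |v_i| and |a_ij|, the usual "almost optimal" choice. It assumes v is not identically zero and A has no all-zero rows; the test matrix is arbitrary.

```python
import numpy as np

def mc_bilinear_form(v, A, h, k, n_samples=100_000, rng=None):
    """Monte Carlo estimate of the bilinear form (v, A^k h).

    Random walks of length k are sampled with initial and transition densities
    proportional to |v_i| and |a_ij| ("almost optimal" densities)."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(v)
    p0 = np.abs(v) / np.abs(v).sum()                 # initial density
    P = np.abs(A) / np.abs(A).sum(axis=1)[:, None]   # transition densities
    total = 0.0
    for _ in range(n_samples):
        i = rng.choice(n, p=p0)
        weight = v[i] / p0[i]
        for _ in range(k):
            j = rng.choice(n, p=P[i])
            weight *= A[i, j] / P[i, j]
            i = j
        total += weight * h[i]
    return total / n_samples

# quick check against the deterministic value on a small random matrix
rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5)) / 5
v, h = rng.normal(size=5), rng.normal(size=5)
print(v @ np.linalg.matrix_power(A, 3) @ h, mc_bilinear_form(v, A, h, 3))
```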