923 results for Search Engine Optimization Methods
Abstract:
Acknowledgments. Financial support: HERU and HSRU receive a core grant from the Chief Scientist's Office of the Scottish Government Health and Social Care Directorates, and the Centre for Clinical Epidemiology & Evaluation is funded by the Vancouver Coastal Health Authority. The model used for the illustrative case study in this paper was developed as part of an NHS Technology Assessment Review, funded by the National Institute for Health Research (NIHR) Health Technology Assessment Programme (project number 09/146/01). The views and opinions expressed in this paper are those of the authors and do not necessarily reflect those of the Scottish Government, the NHS, Vancouver Coastal Health, the NIHR HTA Programme, or the Department of Health. The authors wish to thank Kathleen Boyd and members of the audience at the UK Health Economists' Study Group for comments received on an earlier version of this paper. We also wish to thank Cynthia Fraser (University of Aberdeen) for literature searches undertaken to inform the manuscript, and Mohsen Sadatsafavi (University of British Columbia) for comments on an earlier draft.
Abstract:
Systemic lupus erythematosus (SLE) is an autoimmune multisystem inflammatory disease characterized by the production of pathogenic autoantibodies. Previous genetic studies have suggested associations with HLA Class II alleles, complement gene deficiencies, and Fc receptor polymorphisms; however, it is likely that other genes contribute to SLE susceptibility and pathogenesis. Here, we report the results of a genome-wide microsatellite marker screen in 105 SLE sib-pair families. Using multipoint nonparametric methods, we found the strongest evidence for linkage near the HLA locus (6p11-p21) [D6S257, logarithm of odds (lod) = 3.90, P = 0.000011] and at three additional regions: 16q13 (D16S415, lod = 3.64, P = 0.000022), 14q21–23 (D14S276, lod = 2.81, P = 0.00016), and 20p12 (D20S186, lod = 2.62, P = 0.00025). Another nine regions (1p36, 1p13, 1q42, 2p15, 2q21–33, 3cent-q11, 4q28, 11p15, and 15q26) were identified with lod scores ≥ 1.00. These data support the hypothesis that multiple genes, including one in the HLA region, influence susceptibility to human SLE.
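The lod/P-value pairs quoted above are related by the standard large-sample result that 2 ln(10) × lod is approximately χ²-distributed with one degree of freedom, so the pointwise one-sided P-value is the upper normal tail at the equivalent deviate. A minimal sketch of the conversion, which reproduces the quoted pairs to within rounding:

```python
from math import erfc, log, sqrt

def lod_to_pvalue(lod: float) -> float:
    """Approximate pointwise one-sided P-value for a lod score,
    using the large-sample result 2*ln(10)*lod ~ chi-square(1)."""
    z = sqrt(2.0 * log(10.0) * lod)   # equivalent normal deviate
    return 0.5 * erfc(z / sqrt(2.0))  # upper tail of the standard normal

for lod in (3.90, 3.64, 2.81, 2.62):
    print(f"lod = {lod:.2f} -> P ~ {lod_to_pvalue(lod):.6f}")
    # e.g. lod = 3.90 -> P ~ 0.000011, as reported in the abstract
```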
Abstract:
To initiate homologous recombination, sequence similarity between two DNA molecules must be searched for and homology recognized. How the search for and recognition of homology occurs remains unproven. We have examined the influences of DNA topology and the polarity of RecA–single-stranded (ss)DNA filaments on the formation of synaptic complexes promoted by RecA. Using two complementary methods and various ssDNA and duplex DNA molecules as substrates, we demonstrate that topological constraints on a small circular RecA–ssDNA filament prevent it from interwinding with its duplex DNA target at the homologous region. We were unable to detect homologous pairing between a circular RecA–ssDNA filament and its relaxed or supercoiled circular duplex DNA targets. However, the formation of synaptic complexes between an invading linear RecA–ssDNA filament and covalently closed circular duplex DNAs is promoted by supercoiling of the duplex DNA. The results imply that a triplex structure formed by non-Watson–Crick hydrogen bonding is unlikely to be an intermediate in homology searching promoted by RecA. Rather, a model in which RecA-mediated homology searching requires unwinding of the duplex DNA coupled with local strand exchange is the likely mechanism. Furthermore, we show that polarity of the invading RecA–ssDNA does not affect its ability to pair and interwind with its circular target duplex DNA.
Abstract:
Recent improvements of a hierarchical ab initio or de novo approach for predicting both α and β structures of proteins are described. The united-residue energy function used in this procedure includes multibody interactions from a cumulant expansion of the free energy of polypeptide chains, with their relative weights determined by Z-score optimization. The critical initial stage of the hierarchical procedure involves a search of conformational space by the conformational space annealing (CSA) method, followed by optimization of an all-atom model. The procedure was assessed in a recent blind test of protein structure prediction (CASP4). The resulting lowest-energy structures of the target proteins (ranging in size from 70 to 244 residues) agreed with the experimental structures in many respects. The entire experimental structure of a cyclic α-helical protein of 70 residues was predicted to within 4.3 Å α-carbon (Cα) rms deviation (rmsd), whereas, for other α-helical proteins, fragments of roughly 60 residues were predicted to within 6.0 Å Cα rmsd. Whereas β structures can now be predicted with the new procedure, the success rate for α/β- and β-proteins is lower than that for α-proteins at present. For the β portions of α/β structures, the Cα rmsds are less than 6.0 Å for contiguous fragments of 30–40 residues; for one target, three fragments (of length 10, 23, and 28 residues, respectively) formed a compact part of the tertiary structure with a Cα rmsd less than 6.0 Å. Overall, these results constitute an important step toward the ab initio prediction of protein structure solely from the amino acid sequence.
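The Cα rmsd figures quoted above are computed after optimal rigid-body superposition of predicted and experimental coordinates. A minimal, self-contained sketch of that computation using the standard Kabsch construction (not the authors' own code):

```python
import numpy as np

def ca_rmsd(P: np.ndarray, Q: np.ndarray) -> float:
    """RMSD between two (N, 3) C-alpha coordinate arrays after optimal
    superposition (Kabsch algorithm)."""
    P = P - P.mean(axis=0)                  # center both structures
    Q = Q - Q.mean(axis=0)
    # Optimal rotation from the SVD of the 3x3 covariance matrix.
    V, S, Wt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(V @ Wt))      # guard against a reflection
    R = V @ np.diag([1.0, 1.0, d]) @ Wt
    diff = P @ R - Q                        # residual after superposition
    return float(np.sqrt((diff ** 2).sum() / len(P)))
```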
Abstract:
Heuristics for stochastic and dynamic vehicle routing problems are often kept relatively simple, in part because of the high computational burden of considering stochastic information in some form. In this work, three existing heuristics are extended with three local search variations: a first-improvement descent using stochastic information, a tabu search using stochastic information when updating the incumbent solution, and a tabu search using stochastic information when selecting moves from a list of moves determined through a proxy evaluation. In particular, the three local search variations are designed to utilize stochastic information in the form of sampled scenarios. The results indicate that adding local search using stochastic information to the existing heuristics can further reduce operating costs for shipping companies by 0.5–2%. While the existing heuristics could produce structurally different solutions even when using similar stochastic information in the search, the added local search methods appear to make the final solutions more similar in structure.
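As a rough illustration of the first variation, the sketch below scores each candidate move by its average cost over a fixed set of sampled scenarios and accepts the first improvement found. The 2-opt neighborhood and the `cost_fn`/`scenarios` interfaces are assumptions for the sketch, not the paper's implementation:

```python
from typing import Callable, List, Sequence

def sampled_cost(route: Sequence[int], scenarios: List[dict],
                 cost_fn: Callable[[Sequence[int], dict], float]) -> float:
    """Expected route cost estimated as the mean over sampled scenarios."""
    return sum(cost_fn(route, s) for s in scenarios) / len(scenarios)

def first_improvement_descent(route: List[int], scenarios: List[dict],
                              cost_fn) -> List[int]:
    """Accept the first 2-opt move that lowers the scenario-averaged
    cost; repeat until no improving move exists."""
    best = sampled_cost(route, scenarios, cost_fn)
    improved = True
    while improved:
        improved = False
        for i in range(1, len(route) - 1):
            for j in range(i + 1, len(route)):
                cand = route[:i] + route[i:j][::-1] + route[j:]  # 2-opt
                c = sampled_cost(cand, scenarios, cost_fn)
                if c < best:
                    route, best, improved = cand, c, True
                    break
            if improved:
                break
    return route
```

Using a common, fixed scenario set for all candidate evaluations keeps the comparisons consistent; the sampling noise then affects every move estimate in the same way.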
Abstract:
Irregular strip packing is a family of cutting-and-packing problems with applications in the textile, furniture, and shipbuilding industries. The problem consists of arranging irregular items so that the length of the rectangular container enclosing the layout is minimized. A solution must be valid: items may not overlap one another or extend beyond the container walls. For practical reasons, up to four orientations are allowed for each item. The volume of wasted material is directly related to the quality of the layout, so an efficient solution brings an economic advantage and a smaller environmental impact. The goal of this work is the automatic generation of layouts with compaction levels and processing times compatible with other solutions in the literature. Two approaches are proposed. The first places items sequentially so as to maximize the occurrence of fitting positions, which are related to the restriction of an item's movement within the layout; in broad terms, several placement sequences are explored in search of the most compact solution. The second approach, the main proposal of this work, applies raster methods to move items over a placement grid while temporarily admitting overlap; it is based on an overlap-minimization strategy whose objective is to eliminate all overlap within a closed container. Both algorithms were tested on the same set of benchmark problems from the literature. The first strategy did not obtain satisfactory solutions, although it provided useful information about the properties of fitting positions. The second approach, by contrast, obtained competitive results, and its performance remained compatible with other solutions even on large instances. As future work, the algorithm could be extended to accept items of arbitrary geometry, which could become the proposal's main differentiator.
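A minimal sketch of the raster idea behind the second approach: items become 0/1 masks on a placement grid, and each candidate position is scored by the number of overlapped cells. The names below are illustrative; the thesis's overlap-minimization method uses more sophisticated moves and penalties:

```python
import numpy as np

def best_position(board: np.ndarray, item: np.ndarray):
    """Scan the raster grid for the placement of `item` (a 0/1 mask) that
    minimizes overlap with material already on `board`.
    Returns ((row, col), overlap) for the best top-left corner."""
    H, W = board.shape
    h, w = item.shape
    best_pos, best_ov = None, np.inf
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            ov = np.sum(board[r:r+h, c:c+w] * item)  # overlapped cells
            if ov < best_ov:
                best_pos, best_ov = (r, c), ov
    return best_pos, float(best_ov)

def place(board: np.ndarray, item: np.ndarray, pos) -> None:
    """Stamp the item onto the board at the chosen position."""
    r, c = pos
    board[r:r+item.shape[0], c:c+item.shape[1]] += item
```

Repeatedly relocating items to their minimum-overlap positions inside a container of fixed length, until total overlap reaches zero, is the essence of the overlap-minimization strategy; the container length is then shortened and the process repeated.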
Abstract:
Observations of cosmic-ray arrival directions made with the Pierre Auger Observatory have previously provided evidence of anisotropy at the 99% CL, using the correlation of ultra-high-energy cosmic rays (UHECRs) with objects drawn from the Véron-Cetty and Véron catalog. In this paper we report on the use of three catalog-independent methods to search for anisotropy. The 2pt-L, 2pt+, and 3pt methods, each giving a different measure of self-clustering in arrival directions, were tested on mock cosmic-ray data sets to study the impact of sample size and magnetic smearing on their results, accounting for both angular and energy resolution. If the sources of UHECRs follow the same large-scale structure as ordinary galaxies in the local Universe, and if UHECRs are deflected by no more than a few degrees, a study of mock maps suggests that these three methods can efficiently detect the resulting anisotropy, with a P-value of 1.0% or smaller, in data sets of as few as 100 events. Using data taken from January 1, 2004 to July 31, 2010, we examined the 20, 30, ..., 110 highest-energy events, with a corresponding minimum energy threshold of about 49.3 EeV. The minimum P-values found were 13.5% using the 2pt-L method, 1.0% using the 2pt+ method, and 1.1% using the 3pt method, for the 100 highest-energy events. In view of the multiple (correlated) scans performed on the data set, these catalog-independent methods do not yield strong evidence of anisotropy in the highest-energy cosmic rays.
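At their core, 2pt-type statistics reduce to counting event pairs within a given angular separation. A minimal sketch of that counting step (the actual 2pt-L, 2pt+, and 3pt statistics are more elaborate):

```python
import numpy as np

def to_unit_vectors(ra_deg, dec_deg):
    """Convert equatorial coordinates (degrees) to unit vectors."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return np.stack([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)], axis=1)

def pair_counts(ra_deg, dec_deg, theta_deg):
    """Number of event pairs separated by less than theta_deg:
    a simple two-point autocorrelation statistic."""
    v = to_unit_vectors(np.asarray(ra_deg), np.asarray(dec_deg))
    cosd = np.clip(v @ v.T, -1.0, 1.0)        # pairwise cos(separation)
    sep = np.degrees(np.arccos(cosd))
    iu = np.triu_indices(len(v), k=1)         # unique pairs only
    return int(np.sum(sep[iu] < theta_deg))
```

A P-value would then be estimated as the fraction of isotropic mock data sets, drawn through the detector exposure, whose pair count is at least as large as that of the real data.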
Abstract:
Increasing economic competition drives industry to adopt tools that improve process efficiency. Process automation is one such tool, and Real-Time Optimization (RTO) is an automation methodology that takes economic aspects into account, updating the process control in accordance with market prices and disturbances. Basically, RTO uses a steady-state phenomenological model to predict process behavior and then optimizes an economic objective function subject to that model. Although widely implemented in industry, there is no general agreement about the benefits of RTO, owing to limitations discussed in the present work: structural plant/model mismatch, identifiability issues, and low frequency of set-point updates.

Alternative RTO approaches have been proposed in the literature to handle structural plant/model mismatch, but no thorough comparison of their scope and limitations under different aspects exists. For this reason, the classical two-step method is compared with more recent derivative-based methods (Modifier Adaptation; Integrated System Optimization and Parameter Estimation; and Sufficient Conditions of Feasibility and Optimality) using a Monte Carlo methodology. The results of this comparison show that the classical RTO method is consistent, provided that the model is flexible enough to represent the process topology, the parameter estimation method suits the measurement noise characteristics, and a method to improve the quality of the sample information is available. At each iteration, the RTO methodology updates key parameters of the model; here, identifiability issues caused by lack of measurements and by measurement noise can arise, resulting in poor prediction ability. Therefore, four parameter estimation approaches (Rotational Discrimination; Automatic Selection and Parameter Estimation; Reparametrization via Differential Geometry; and classical nonlinear Least Squares) are evaluated with respect to prediction accuracy, robustness, and speed. The results show that the Rotational Discrimination method is the most suitable for an RTO framework: it requires less a priori information, it is simple to implement, and it avoids the overfitting produced by the Least Squares method.

The third RTO drawback discussed in this thesis is the low frequency of set-point updates, which lengthens the periods in which the process operates at suboptimal conditions. An alternative is proposed here that integrates classical RTO and Self-Optimizing Control (SOC) through a new Model Predictive Control strategy; the new approach shows that the low update frequency can be mitigated, improving economic performance. Finally, the practical aspects of an RTO implementation are examined in an industrial case study, a Vapor Recompression Distillation (VRD) process located at Petrobras's Paulínia refinery. The conclusions of this study suggest that the model parameters are successfully estimated by the Rotational Discrimination method; that RTO can improve the process profit by about 3%, equivalent to 2 million dollars per year; and that the integration of SOC and RTO may be an attractive control alternative for the VRD process.
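A minimal sketch of the classical two-step RTO loop described above, with hypothetical `measure`/`model`/`profit` interfaces; plain nonlinear least squares stands in for the estimation step here, whereas the thesis itself favors the Rotational Discrimination estimator:

```python
import numpy as np
from scipy.optimize import least_squares, minimize

def two_step_rto(u0, theta0, measure, model, profit, n_iter=10):
    """Classical two-step RTO: at each iteration, re-estimate model
    parameters from plant measurements, then optimize the economic
    objective subject to the updated model.

    measure(u)      -> measured plant outputs at inputs u
    model(u, theta) -> model-predicted outputs
    profit(u, y)    -> economic objective (to be maximized)
    """
    u, theta = np.asarray(u0, float), np.asarray(theta0, float)
    for _ in range(n_iter):
        y_meas = measure(u)
        # Step 1: parameter estimation (here, nonlinear least squares).
        theta = least_squares(lambda th: model(u, th) - y_meas, theta).x
        # Step 2: economic optimization on the updated model.
        u = minimize(lambda v: -profit(v, model(v, theta)), u).x
    return u, theta
```

Structural plant/model mismatch enters exactly where this loop is weakest: if no value of `theta` makes the model match the plant, the fixed point of the iteration need not be the plant optimum, which is what motivates the derivative-based alternatives compared in the thesis.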