924 results for Computational methods
Abstract:
Accurate estimates for the fall speed of natural hydrometeors are vital if their evolution in clouds is to be understood quantitatively. In this study, laboratory measurements of the terminal velocity vt for a variety of ice particle models settling in viscous fluids, along with wind-tunnel and field measurements of ice particles settling in air, have been analyzed and compared to common methods of computing vt from the literature. It is observed that while these methods work well for a number of particle types, they fail for particles with open geometries, specifically those for which the area ratio Ar is small (Ar is defined as the area of the particle projected normal to the flow divided by the area of a circumscribing disc). In particular, the fall speeds of stellar and dendritic crystals, needles, open bullet rosettes, and low-density aggregates are all overestimated. These particle types are important in many cloud types: aggregates in particular often dominate snow precipitation at the ground and vertically pointing Doppler radar measurements. Based on the laboratory data, a simple area-ratio-based modification to previous computational methods is proposed. This new method collapses the available drag data onto an approximately universal curve, and the resulting errors in the computed fall speeds relative to the tank data are less than 25% in all cases. Comparison with the (much more scattered) measurements of ice particles falling in air shows strong support for this new method, with the area ratio bias apparently eliminated.
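A minimal sketch of the kind of calculation involved: computing vt from a Best-number/Reynolds-number relation with an area-ratio correction. The boundary-layer constants (delta0, C0) and the square-root weighting of Ar are assumptions drawn from the general literature on this approach, not values quoted in the abstract.

```python
import numpy as np

def fall_speed(mass, D, Ar, rho_air=1.0, mu=1.7e-5, g=9.81,
               delta0=8.0, C0=0.35):
    """Terminal velocity (m/s) of an ice particle.

    mass : particle mass (kg)
    D    : maximum dimension (m)
    Ar   : area ratio (projected area / circumscribing disc area)
    """
    # Modified Best number: the usual 1/Ar weighting softened to 1/sqrt(Ar),
    # the kind of area-ratio adjustment the abstract describes (assumed form).
    X = 8.0 * mass * g * rho_air / (np.pi * mu**2 * np.sqrt(Ar))
    # Boundary-layer fit mapping Best number X to Reynolds number Re.
    Re = (delta0**2 / 4.0) * (np.sqrt(1.0 + 4.0 * np.sqrt(X)
                                      / (delta0**2 * np.sqrt(C0))) - 1.0)**2
    return mu * Re / (rho_air * D)

# Example: a 1 mm dendrite-like crystal with a small area ratio.
print(fall_speed(mass=3e-9, D=1e-3, Ar=0.2))  # ~0.2 m/s
```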
Abstract:
The accurate prediction of the biochemical function of a protein is becoming increasingly important, given the unprecedented growth of both structural and sequence databanks. Consequently, computational methods are required to analyse such data in an automated manner to ensure genomes are annotated accurately. Protein structure prediction methods, for example, are capable of generating approximate structural models on a genome-wide scale. However, the detection of functionally important regions in such crude models, as well as in structural genomics targets, remains an extremely important problem. The method described in the current study, MetSite, represents a fully automatic approach for the detection of metal-binding residue clusters applicable to protein models of moderate quality. The method involves using sequence profile information in combination with approximate structural data. Several neural network classifiers are shown to be able to distinguish metal sites from non-sites with a mean accuracy of 94.5%. The method was demonstrated to identify metal-binding sites correctly in LiveBench targets where no obvious metal-binding sequence motifs were detectable using InterPro. Accurate detection of metal sites was shown to be feasible for low-resolution predicted structures generated using mGenTHREADER, where no side-chain information was available. High-scoring predictions were observed for a recently solved hypothetical protein from Haemophilus influenzae, indicating a putative metal-binding site.
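As a hedged illustration of the general approach (not MetSite's actual features, architecture, or training data), a small neural-network classifier separating sites from non-sites on combined profile and structural features might look like this:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic placeholder data: each row combines sequence-profile columns
# with a few coarse geometry terms, standing in for the kind of inputs
# the abstract describes. The feature layout is illustrative only.
rng = np.random.default_rng(0)
n_sites, n_features = 200, 25        # e.g. 20 profile columns + 5 geometry terms
X_pos = rng.normal(0.5, 1.0, (n_sites, n_features))   # metal-site examples
X_neg = rng.normal(0.0, 1.0, (n_sites, n_features))   # non-site examples
X = np.vstack([X_pos, X_neg])
y = np.array([1] * n_sites + [0] * n_sites)

# A small feed-forward network, trained to separate sites from non-sites.
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```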
Abstract:
With the prospect of exascale computing, computational methods requiring only local data become especially attractive. Consequently, the typical domain decomposition of atmospheric models means horizontally-explicit vertically-implicit (HEVI) time-stepping schemes warrant further attention. In this analysis, Runge-Kutta implicit-explicit schemes from the literature are analysed for their stability and accuracy using a von Neumann stability analysis of two linear systems. Attention is paid to the numerical phase to indicate the behaviour of phase and group velocities. Where the analysis is tractable, analytically derived expressions are considered. For more complicated cases, amplification factors have been numerically generated and the associated amplitudes and phases diagnosed. Analysis of a system describing acoustic waves has necessitated attributing the three resultant eigenvalues to the three physical modes of the system. To do so, a series of algorithms has been devised to track the eigenvalues across the frequency space. The result enables analysis of whether the schemes exactly preserve the non-divergent mode, and whether there is evidence of spurious reversal in the direction of group velocities or asymmetry in the damping for the pair of acoustic modes. Frequency ranges spanning from next-generation high-resolution weather models to coarse-resolution climate models are considered, and a comparison is made of the errors accumulated from multiple stability-constrained shorter time-steps of the HEVI scheme against a single integration of a fully implicit scheme over the same time interval. Two schemes, “Trap2(2,3,2)” and “UJ3(1,3,2)”, both already used in atmospheric models, are identified as offering consistently good stability and representation of phase across all the analyses. Furthermore, according to a simple measure of computational cost, “Trap2(2,3,2)” is the least expensive.
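A minimal sketch of the procedure being described, assuming the simplest possible IMEX scheme (first-order IMEX Euler) applied to a linear oscillation equation du/dt = i(omega_s + omega_f)u, with the fast term treated implicitly. The multi-stage schemes analysed in the paper (e.g. Trap2(2,3,2), UJ3(1,3,2)) have more involved amplification factors; this shows only the skeleton of a von Neumann analysis.

```python
import numpy as np

def amplification(dt, omega_s, omega_f):
    # One step: u_new = u + dt*(i*omega_s*u + i*omega_f*u_new)
    # => amplification factor A = u_new / u
    return (1.0 + 1j * dt * omega_s) / (1.0 - 1j * dt * omega_f)

dt = 10.0                          # time step (s)
omega_s, omega_f = 1e-4, 1e-2      # slow (explicit) and fast (implicit) rates
A = amplification(dt, omega_s, omega_f)

exact_phase = dt * (omega_s + omega_f)
print("amplitude |A| =", abs(A))                       # >1 would mean instability
print("relative phase error =", np.angle(A) / exact_phase - 1.0)
```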
Abstract:
Variational data assimilation is commonly used in environmental forecasting to estimate the current state of the system from a model forecast and observational data. The assimilation problem can be written simply in the form of a nonlinear least squares optimization problem. However, the practical solution of the problem in large systems requires many careful choices to be made in the implementation. In this article we present the theory of variational data assimilation and then discuss in detail how it is implemented in practice. Current solutions and open questions are discussed.
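The least squares problem referred to is the standard variational (3D-Var) cost function J(x) = ½(x−xb)ᵀB⁻¹(x−xb) + ½(y−H(x))ᵀR⁻¹(y−H(x)), minimised over the state x given background xb and observations y. A toy sketch, with illustrative dimensions and a linear observation operator:

```python
import numpy as np
from scipy.optimize import minimize

n, m = 4, 2
xb = np.zeros(n)                       # background (prior model forecast)
B = np.eye(n)                          # background-error covariance
R = 0.25 * np.eye(m)                   # observation-error covariance
H = np.array([[1., 0., 0., 0.],
              [0., 0., 1., 0.]])       # observation operator
y = np.array([1.0, -0.5])              # observations

Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)

def cost(x):
    db = x - xb                        # departure from the background
    do = y - H @ x                     # departure from the observations
    return 0.5 * db @ Binv @ db + 0.5 * do @ Rinv @ do

xa = minimize(cost, xb).x              # the analysis: the minimiser of J
print("analysis state:", xa)
```

In operational systems the state is far too large for explicit covariance matrices; the "careful choices" the abstract mentions (preconditioning, incremental formulations, adjoint models) replace this direct minimisation.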
Abstract:
This paper, the second in a series of three concerned with the statistical aspects of interim analyses in clinical trials, addresses stopping rules in phase II clinical trials. Phase II trials are generally small-scale studies and may include one or more experimental treatments, with or without a control. A common feature is that the results primarily determine the course of further clinical evaluation of a treatment rather than providing definitive evidence of treatment efficacy. This means that there is more flexibility available in the design and analysis of such studies than in phase III trials, which has led to a range of different approaches being taken to the statistical design of stopping rules for such trials. This paper briefly describes and compares the different approaches. In most cases the stopping rules can be described and implemented easily without knowledge of the detailed statistical and computational methods used to obtain the rules.
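As an illustration of how simply such rules can be stated and evaluated, here is a generic two-stage binomial design (in the style of Simon's phase II designs; the design parameters are illustrative and not taken from this paper):

```python
from scipy.stats import binom

n1, r1 = 10, 1     # stage 1: stop for futility if <= r1 responses in n1 patients
n, r = 29, 5       # overall: declare ineffective if <= r responses in n patients

def operating_characteristics(p):
    """P(early stop) and P(declaring the treatment ineffective) at response rate p."""
    early_stop = binom.cdf(r1, n1, p)
    # Continue past stage 1 with k responses, then fall short overall.
    reject = early_stop + sum(
        binom.pmf(k, n1, p) * binom.cdf(r - k, n - n1, p)
        for k in range(r1 + 1, min(n1, r) + 1))
    return early_stop, reject

print(operating_characteristics(0.10))   # uninteresting response rate p0
print(operating_characteristics(0.30))   # promising response rate p1
```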
Abstract:
To evaluate the checkerboard DNA-DNA hybridization method for detection and quantitation of bacteria from the internal parts of dental implants and to compare bacterial leakage from implants connected either to cast or to pre-machined abutments. Nine plastic abutments cast in a Ni-Cr alloy and nine pre-machined Co-Cr alloy abutments with plastic sleeves cast in Ni-Cr were connected to Brånemark-compatible implants. A group of nine implants was used as control. The implants were inoculated with 3 μl of a solution containing 10^8 cells/ml of Streptococcus sobrinus. Bacterial samples were immediately collected from the control implants, while the assemblies were completely immersed in 5 ml of sterile Tryptic Soy Broth (TSB) medium. After 14 days of anaerobic incubation, occurrence of leakage at the implant-abutment interface was evaluated by assessing contamination of the TSB medium. Internal contamination of the implants was evaluated with the checkerboard DNA-DNA hybridization method. DNA-DNA hybridization was sensitive enough to detect and quantify the microorganism from the internal parts of the implants. No differences in leakage or in internal contamination were found between cast and pre-machined abutments. Bacterial scores in the control group were significantly higher than in the other groups (P < 0.05). Bacterial leakage through the implant-abutment interface does not differ significantly when cast or pre-machined abutments are used. The checkerboard DNA-DNA hybridization technique is suitable for the evaluation of the internal contamination of dental implants, although further studies are necessary to validate the use of computational methods for the improvement of the test accuracy. To cite this article: do Nascimento C, Barbosa RES, Issa JPM, Watanabe E, Ito IY, Albuquerque Junior RF. Use of checkerboard DNA-DNA hybridization to evaluate the internal contamination of dental implants and comparison of bacterial leakage with cast or pre-machined abutments. Clin. Oral Impl. Res. 20, 2009; 571-577. doi: 10.1111/j.1600-0501.2008.01663.x.
Abstract:
We investigate the possibility of interpreting the degeneracy of the genetic code, i.e., the feature that different codons (base triplets) of DNA are transcribed into the same amino acid, as the result of a symmetry breaking process, in the context of finite groups. In the first part of this paper, we give the complete list of all codon representations (64-dimensional irreducible representations) of simple finite groups and their satellites (central extensions and extensions by outer automorphisms). In the second part, we analyze the branching rules for the codon representations found in the first part by computational methods, using a software package for computational group theory. The final result is a complete classification of the possible schemes, based on finite simple groups, that reproduce the multiplet structure of the genetic code.
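The "multiplet structure" to be reproduced is the pattern of synonym-class sizes among the 64 codons. A short sketch of how to tabulate it from the standard genetic code (assuming Biopython is installed; the group-theoretic branching computations themselves are beyond this illustration):

```python
from collections import Counter
from Bio.Data import CodonTable  # Biopython, assumed installed

table = CodonTable.unambiguous_dna_by_id[1]      # standard genetic code
classes = Counter(table.forward_table.values())  # amino acid -> codon count
classes["Stop"] = len(table.stop_codons)

multiplets = Counter(classes.values())           # class size -> number of classes
print(sorted(classes.items()))
print("multiplet structure:", dict(sorted(multiplets.items())))
# 2 singlets (Met, Trp), 9 doublets, 2 triplets (Ile, Stop),
# 5 quartets, 3 sextets (Leu, Ser, Arg): 2+18+6+20+18 = 64 codons.
```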
Abstract:
We simplify the results of Bremner and Hentzel [J. Algebra 231 (2000) 387-405] on polynomial identities of degree 9 in two variables satisfied by the ternary cyclic sum [a, b, c] abc + bca + cab in every totally associative ternary algebra. We also obtain new identities of degree 9 in three variables which do not follow from the identities in two variables. Our results depend on (i) the LLL algorithm for lattice basis reduction, and (ii) linearization operators in the group algebra of the symmetric group which permit efficient computation of the representation matrices for a non-linear identity. Our computational methods can be applied to polynomial identities for other algebraic structures.
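A small numerical illustration of the operation studied (not of the degree-9 computations themselves, which rely on LLL lattice reduction and symmetric-group representation matrices): the ternary cyclic sum in a totally associative ternary algebra of matrices, checked for its cyclic symmetry on random inputs.

```python
import numpy as np

rng = np.random.default_rng(1)

def cyc(a, b, c):
    # Ternary cyclic sum [a, b, c] = abc + bca + cab.
    return a @ b @ c + b @ c @ a + c @ a @ b

a, b, c = (rng.standard_normal((3, 3)) for _ in range(3))
# The operation is invariant under cyclic permutation of its arguments;
# random matrices give a quick sanity check of any candidate identity.
print(np.allclose(cyc(a, b, c), cyc(b, c, a)))   # True
```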
Abstract:
Vegetation growing on railway trackbeds and embankments presents potential problems. The presence of vegetation threatens the safety of personnel inspecting the railway infrastructure. In addition, vegetation growth clogs the ballast and results in inadequate track drainage, which in turn could lead to the collapse of the railway embankment. Assessing vegetation within the realm of railway maintenance is mainly carried out manually by making visual inspections along the track. This is done either on-site or by watching videos recorded by maintenance vehicles mainly operated by the national railway administrative body. A need for the automated detection and characterisation of vegetation on railways (a subset of vegetation control/management) has been identified in collaboration with local railway maintenance subcontractors and Trafikverket, the Swedish Transport Administration (STA). The latter is responsible for the long-term planning of the transport system for all types of traffic, as well as for the building, operation and maintenance of public roads and railways.

The purpose of this research project was to investigate how vegetation can be measured and quantified by human raters and how machine vision can automate the same process. Data were acquired at railway trackbeds and embankments during field measurement experiments. All field data (such as images) in this thesis work were acquired on operational, lightly trafficked railway tracks, mostly carrying goods trains. Data were also generated by letting (human) raters conduct visual estimates of plant cover and/or count the number of plants, either on-site or in-house by making visual estimates of the images acquired from the field experiments. The reliability of the (human) raters' visual estimates was then investigated and compared against machine vision algorithms. The overall results of the investigations involving human raters showed inconsistency in their estimates, which are therefore unreliable.

As a result of the exploration of machine vision, computational methods and algorithms enabling the automatic detection and characterisation of vegetation along railways were developed. The results achieved in the current work have shown that the use of image data for detecting vegetation is indeed possible and that such results could form the base for decisions regarding vegetation control. The machine vision algorithm which quantifies the vegetation cover was able to process 98% of the image data. Investigations of classifying plants from images were conducted in order to recognise the species; the classification accuracy was 95%. Objective measurements such as the ones proposed in this thesis offer easy access to the measurements for all the involved parties and make the subcontracting process easier, i.e., both the subcontractors and the national railway administration are given the same reference framework concerning vegetation before signing a contract, which can then be cross-checked post-maintenance.

A very important issue which comes with an increasing ability to recognise species is the maintenance of biological diversity. Biological diversity along the trackbeds and embankments can be mapped, and maintained, through better and more robust monitoring procedures. Continuously monitoring the state of vegetation along railways is highly recommended in order to identify the need for maintenance actions and, in addition, to keep track of biodiversity.

The computational methods and algorithms developed form the foundation of an automatic inspection system capable of objectively supporting, or replacing, manual inspections.
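A hedged sketch of one common way to quantify vegetation cover in an RGB image: thresholding the excess-green index ExG = 2g − r − b on chromatic coordinates. This is a standard technique from the machine vision literature, not necessarily the algorithm developed in the thesis.

```python
import numpy as np

def vegetation_cover(rgb, threshold=0.05):
    """Fraction of pixels classified as vegetation in an (H, W, 3) RGB array."""
    rgb = rgb.astype(float)
    total = rgb.sum(axis=2) + 1e-9           # avoid division by zero
    r, g, b = (rgb[..., i] / total for i in range(3))
    exg = 2.0 * g - r - b                    # excess-green index
    return float((exg > threshold).mean())

# Example on a synthetic image: left half greenish, right half grey ballast.
img = np.zeros((100, 100, 3), dtype=np.uint8)
img[:, :50] = (60, 140, 50)                  # vegetation-like pixels
img[:, 50:] = (120, 120, 120)                # ballast-like pixels
print(vegetation_cover(img))                 # ~0.5
```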
Abstract:
Leverage in hedge funds has worried investors and scholars in recent years. Recent examples of such strategies proved advantageous in periods of low economic uncertainty, but disastrous in times of crisis. In quantitative finance, researchers have sought the level of leverage that optimises the return on an investment given the risk taken. In the literature, studies have been more qualitative than quantitative, and computational methods have rarely been used to find a solution. One way to assess whether one leverage strategy earns higher gains than another is to define an objective function relating risk and return for each strategy, identify the constraints of the problem, and solve it numerically by means of Monte Carlo simulations. This dissertation adopted this approach to study investment in a long-short strategy in an equity investment fund under different scenarios: different forms of leverage, stock price dynamics, and levels of correlation between those prices. Simulations were run of the dynamics of the invested capital as a function of changes in stock prices over time. Credit guarantee (margin) criteria were considered, as well as the possibility of buying and selling stocks during the investment period and the investor's risk profile. Finally, the distribution of investment returns was studied for different leverage levels, and it was possible to quantify which of these levels is most advantageous for the investment strategy given the risk constraints.
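A hedged sketch of the kind of Monte Carlo experiment described: simulate two correlated stock prices as geometric Brownian motions, hold a leveraged long-short position, and inspect the distribution of final returns for a given leverage level. All parameters are illustrative, not taken from the dissertation, and margin calls and rebalancing are omitted.

```python
import numpy as np

rng = np.random.default_rng(42)
n_paths, n_steps, dt = 10_000, 252, 1.0 / 252
mu, sigma, corr = 0.08, 0.25, 0.6
leverage = 2.0                      # gross exposure per unit of capital, per leg

# Correlated daily log-returns for the long leg and the short leg.
cov = (sigma**2 * dt) * np.array([[1.0, corr], [corr, 1.0]])
z = rng.multivariate_normal([(mu - 0.5 * sigma**2) * dt] * 2, cov,
                            size=(n_paths, n_steps))
prices = np.exp(z.cumsum(axis=1))   # normalised price paths, S0 = 1

# Long-short P&L: long asset 0, short asset 1, each with `leverage` notional.
pnl = leverage * ((prices[:, -1, 0] - 1.0) - (prices[:, -1, 1] - 1.0))

print("mean return: %.3f  std: %.3f  5%% quantile: %.3f"
      % (pnl.mean(), pnl.std(), np.quantile(pnl, 0.05)))
```

Sweeping `leverage` over a grid and recording a risk-adjusted objective for each level is then a direct way to compare leverage strategies numerically.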
Abstract:
Paper presented at the Congresso Nacional de Matemática Aplicada à Indústria (National Congress on Applied Mathematics in Industry), 18-21 November 2014, Caldas Novas, Goiás.