954 results for computer algorithm
Abstract:
The extrinsic tensile strength of glass can be determined explicitly if the characteristics of the critical surface flaw are known, or stochastically if the critical flaw characteristics are unknown. This paper makes contributions to both these approaches. Firstly, it presents a unified model for determining the strength of glass explicitly, by accounting for both the inert strength limit and the sub-critical crack growth threshold. Secondly, it describes and illustrates the use of a numerical algorithm, based on the stochastic approach, that computes the characteristic tensile strength of float glass by piecewise summation of the surface stresses. The experimental validation and sensitivity analysis reported in this paper show that the proposed computer algorithm provides an accurate and efficient means of determining the characteristic strength of float glass. The algorithm is particularly useful for annealed and thermally treated float glass used in the construction industry. © 2012 Elsevier Ltd.
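The paper's algorithm is not reproduced in the abstract; as a rough illustration of the stochastic approach it describes, the sketch below computes a failure probability by piecewise Weibull summation over a meshed glass surface (the function name, Weibull parameters and reference area are illustrative assumptions, not the paper's values):

```python
import numpy as np

def failure_probability(stresses, areas, m=7.0, theta=45.0, a_ref=1.0):
    """Piecewise summation of surface stresses under an assumed
    two-parameter Weibull surface-flaw model.

    stresses : max principal tensile stress per surface element [MPa]
    areas    : corresponding element areas [m^2]
    m, theta : assumed Weibull modulus and scale parameter [MPa]
    a_ref    : reference area [m^2]
    """
    s = np.clip(np.asarray(stresses, dtype=float), 0.0, None)   # compression cannot fail glass
    risk = np.sum((s / theta) ** m * np.asarray(areas) / a_ref)  # risk integral B
    return 1.0 - np.exp(-risk)                                   # P_f = 1 - exp(-B)
```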
Dating the Siple Dome (Antarctica) Ice Core By Manual and Computer Interpretation of Annual Layering
Abstract:
The Holocene portion of the Siple Dome (Antarctica) ice core was dated by interpreting the electrical, visual and chemical properties of the core. The data were interpreted manually and with a computer algorithm. The algorithm interpretation was adjusted to be consistent with atmospheric methane stratigraphic ties to the GISP2 (Greenland Ice Sheet Project 2) ice core, ¹⁰Be stratigraphic ties to the dendrochronological ¹⁴C record, and the dated volcanic stratigraphy. The algorithm interpretation is more consistent and better quantified than the tedious and subjective manual interpretation.
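The abstract gives no algorithmic detail; one common automated approach to annual-layer counting is seasonal peak detection in a depth-resolved chemistry or conductivity record, sketched below (the separation and prominence thresholds are illustrative assumptions):

```python
import numpy as np
from scipy.signal import find_peaks

def count_annual_layers(depth, signal, min_layer_thickness):
    """Count annual layers as seasonal maxima in a depth-resolved record
    (e.g., electrical conductivity). min_layer_thickness, in the same
    units as depth, suppresses sub-annual noise peaks."""
    depth = np.asarray(depth, dtype=float)
    signal = np.asarray(signal, dtype=float)
    step = np.median(np.diff(depth))                    # sampling interval
    min_sep = max(1, int(min_layer_thickness / step))   # at most one peak per layer
    peaks, _ = find_peaks(signal, distance=min_sep,
                          prominence=0.5 * np.std(signal))  # illustrative threshold
    return len(peaks), depth[peaks]
```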
Abstract:
Fast calculation of quantities such as in-cylinder volume and indicated power is important in internal combustion engine research. Multiple channels of data including crank angle and pressure were collected for this purpose using a fully instrumented diesel engine research facility. Currently, existing methods use software to post-process the data, first calculating volume from crank angle, then calculating the indicated work and indicated power from the area enclosed by the pressure-volume indicator diagram. Instead, this work investigates the feasibility of achieving real-time calculation of volume and power via hardware implementation on Field Programmable Gate Arrays (FPGAs). Alternative hardware implementations were investigated using lookup tables, Taylor series methods or the CORDIC (CoOrdinate Rotation DIgital Computer) algorithm to compute the trigonometric operations in the crank angle to volume calculation, and the CORDIC algorithm was found to use the least amount of resources. Simulation of the hardware-based implementation showed that the error in the volume and indicated power is less than 0.1%.
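CORDIC itself is a standard technique; a minimal floating-point sketch of its rotation mode is given below to show why it suits FPGAs: each iteration needs only a shift, additions and a small angle table (a hardware version would use fixed-point arithmetic; floating point is used here for clarity):

```python
import math

def cordic_sin_cos(theta, iterations=16):
    """Rotation-mode CORDIC for |theta| <= pi/2: multiplication by 2**-i
    is a bit shift in hardware, so each iteration reduces to shifts, adds
    and one table lookup."""
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    gain = 1.0
    for i in range(iterations):
        gain *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))  # cumulative CORDIC gain
    x, y, z = gain, 0.0, theta   # pre-scale so the result needs no correction
    for i, a in enumerate(angles):
        d = 1.0 if z >= 0.0 else -1.0                   # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * a
    return y, x   # (sin(theta), cos(theta))
```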
Abstract:
INTRODUCTION There is a large range in the reported prevalence of end plate lesions (EPLs), sometimes referred to as Schmorl's nodes, in the general population (3.8-76%). One possible reason for this large range is the difference in the definitions used by authors. Previous research has suggested that EPLs may be a primary disturbance of growth plates that leads to the onset of scoliosis. The aim of this study was to develop a technique to measure the size, prevalence and location of EPLs on Computed Tomography (CT) images of scoliosis patients in a consistent manner. METHODS A detection algorithm was developed and applied to measure EPLs for five adolescent females with idiopathic scoliosis (average age 15.1 years, average major Cobb angle 60°). In this algorithm, the EPL definition was based on the lesion depth, the distance from the edge of the vertebral body and the gradient of the lesion edge. Existing low-dose CT scans of the patients' spines were segmented semi-automatically to extract 3D vertebral endplate morphology. Manual sectioning of any attachments between posterior elements of adjacent vertebrae and, if necessary, endplates was carried out before the automatic algorithm was used to determine the presence and position of EPLs. RESULTS EPLs were identified in 15 of the 170 (8.8%) endplates analysed, with an average depth of 3.1 mm. 73% of the EPLs were seen in the lumbar spine (11/15). A sensitivity study demonstrated that the algorithm was most sensitive to changes in the minimum gradient required at the lesion edge. CONCLUSION An imaging analysis technique for consistent measurement of the prevalence, location and size of EPLs on CT images has been developed. Although the technique was tested on scoliosis patients, it can be used to analyse other populations without observer errors in EPL definitions.
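The abstract names the algorithm's three classification criteria but not their thresholds; a minimal sketch of such a decision rule might look as follows (threshold values are placeholders, not the study's calibration):

```python
def is_end_plate_lesion(depth_mm, edge_distance_mm, edge_gradient,
                        min_depth_mm=1.0, min_edge_distance_mm=2.0,
                        min_gradient=0.5):
    """Classify a candidate depression on a segmented vertebral endplate
    using the three criteria named in the abstract: lesion depth, distance
    from the edge of the vertebral body, and gradient at the lesion edge."""
    return (depth_mm >= min_depth_mm
            and edge_distance_mm >= min_edge_distance_mm
            and edge_gradient >= min_gradient)
```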
Abstract:
Colorectal cancer (CRC) is one of the most frequent malignancies in Western countries. Inherited factors have been suggested to be involved in 35% of CRCs. The hereditary CRC syndromes explain only ~6% of all CRCs, indicating that a large proportion of the inherited susceptibility is still unexplained. Much of the remaining genetic predisposition for CRC is probably due to undiscovered low-penetrance variants. This study was conducted to identify germline and somatic changes that contribute to CRC predisposition and tumorigenesis. MLH1 and MSH2, which underlie hereditary non-polyposis colorectal cancer (HNPCC), are considered to be tumor suppressor genes; the first hit is inherited in the germline and somatic inactivation of the wild-type allele is required for tumor initiation. In a recent study, frequent loss of the mutant allele in HNPCC tumors was detected, and a new model, arguing against the two-hit hypothesis, was proposed for somatic HNPCC tumorigenesis. We tested this hypothesis by conducting LOH analysis on 25 colorectal HNPCC tumors with a known germline mutation in the MLH1 or MSH2 genes. LOH was detected in 56% of the tumors. All the losses targeted the wild-type allele, supporting the classical two-hit model for HNPCC tumorigenesis. The variants 3020insC, R702W and G908R in NOD2 predispose to Crohn's disease. The contribution of NOD2 to CRC predisposition has been examined in several case-control series, with conflicting results. We have previously shown that 3020insC does not predispose to CRC in Finnish CRC patients. To expand our previous study, the variants R702W and G908R were genotyped in a population-based series of 1042 Finnish CRC patients and 508 healthy controls. Association analyses did not show significant evidence for association of the variants with CRC. Single nucleotide polymorphism (SNP) rs6983267 at chromosome 8q24 was the first CRC susceptibility variant identified through genome-wide association studies. To characterize the role of rs6983267 in CRC predisposition in the Finnish population, we genotyped the SNP in the case-control material of 1042 cases and 1012 controls and showed that the G allele of rs6983267 is associated with increased risk of CRC (OR 1.22; P=0.0018). Examination of allelic imbalance in the tumors heterozygous for rs6983267 revealed that copy number increase affected 22% of the tumors and, interestingly, favored the G allele. By utilizing a computer algorithm, Enhancer Element Locator (EEL), an evolutionarily conserved regulatory motif containing rs6983267 was identified. The SNP affected the binding site of TCF4, a transcription factor that mediates Wnt signaling in cells and has proven to be crucial in colorectal neoplasia. The preferential binding of TCF4 to the risk allele G was shown in vitro and in vivo. The element drove lacZ marker gene expression in mouse embryos in a pattern consistent with genes regulated by the Wnt signaling pathway. These results suggest that rs6983267 at 8q24 exerts its effect on CRC predisposition by regulating gene expression. The most obvious target gene for the enhancer element is MYC, residing ~335 kb downstream; however, further studies are required to establish the transcriptional target(s) of the predicted enhancer element.
Abstract:
Life is the result of the execution of molecular programs: like how an embryo is fated to become a human or a whale, or how a person’s appearance is inherited from their parents, many biological phenomena are governed by genetic programs written in DNA molecules. At the core of such programs is the highly reliable base pairing interaction between nucleic acids. DNA nanotechnology exploits the programming power of DNA to build artificial nanostructures, molecular computers, and nanomachines. In particular, DNA origami—which is a simple yet versatile technique that allows one to create various nanoscale shapes and patterns—is at the heart of the technology. In this thesis, I describe the development of programmable self-assembly and reconfiguration of DNA origami nanostructures based on a unique strategy: rather than relying on Watson-Crick base pairing, we developed programmable bonds via the geometric arrangement of stacking interactions, which we termed stacking bonds. We further demonstrated that such bonds can be dynamically reconfigurable.
The first part of this thesis describes the design and implementation of stacking bonds. Our work addresses the fundamental question of whether one can create diverse bond types out of a single kind of attractive interaction—a question first posed implicitly by Francis Crick while seeking a deeper understanding of the origin of life and primitive genetic code. For the creation of multiple specific bonds, we used two different approaches: binary coding and shape coding of geometric arrangement of stacking interaction units, which are called blunt ends. To construct a bond space for each approach, we performed a systematic search using a computer algorithm. We used orthogonal bonds to experimentally implement the connection of five distinct DNA origami nanostructures. We also programmed the bonds to control cis/trans configuration between asymmetric nanostructures.
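The systematic search itself is not spelled out in the abstract; as a toy illustration of the binary-coding approach, the sketch below greedily enumerates blunt-end patterns whose self-binding is strong while cross-talk with every previously kept pattern stays weak (the energy proxy and thresholds are assumptions, not the thesis's model):

```python
from itertools import product

def binding(a, b):
    """Toy stacking-energy score: the number of blunt-end positions that
    line up when edge a faces edge b (b is traversed in reverse because
    the two edges meet head-on)."""
    return sum(x & y for x, y in zip(a, b[::-1]))

def search_orthogonal_bonds(n_sites=8, min_self=4, max_cross=1):
    """Greedy enumeration of mutually orthogonal binary blunt-end codes:
    keep a pattern only if it binds a copy of itself strongly and binds
    every previously kept pattern weakly."""
    kept = []
    for code in product((0, 1), repeat=n_sites):
        if binding(code, code) < min_self:
            continue                       # too weak even homophilically
        if all(binding(code, other) <= max_cross for other in kept):
            kept.append(code)
    return kept
```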
The second part of this thesis describes the large-scale self-assembly of DNA origami into two-dimensional checkerboard-pattern crystals via surface diffusion. We developed a protocol where the diffusion of DNA origami occurs on a substrate and is dynamically controlled by changing the cationic condition of the system. We used stacking interactions to mediate connections between the origami, because of their potential for reconfiguring during the assembly process. Assembling DNA nanostructures directly on substrate surfaces can benefit nano/microfabrication processes by eliminating a pattern transfer step. At the same time, the use of DNA origami allows high complexity and unique addressability with six-nanometer resolution within each structural unit.
The third part of this thesis describes the use of stacking bonds as dynamically breakable bonds. To break the bonds, we used biological machinery called the ParMRC system extracted from bacteria. The system ensures that, when a cell divides, each daughter cell gets one copy of the cell’s DNA by actively pushing each copy to the opposite poles of the cell. We demonstrate dynamically expandable nanostructures, which makes stacking bonds a promising candidate for reconfigurable connectors for nanoscale machine parts.
Abstract:
Imaging using computed tomography has revolutionized disease diagnosis in medicine and is widely used in different areas of scientific research. As part of the process of obtaining three-dimensional tomographic images, a set of radiographs is processed by a computer algorithm, the most widely used today being the Feldkamp, Davis and Kress (FDK) algorithm. The use of parallel processing to accelerate computation, using the different technologies available on the market, has proven useful for reducing processing times. This work presents the parallelization of the FDK three-dimensional image reconstruction algorithm using graphics processing units (GPUs) and the CUDA-C language. GPUs are presented as a viable option for parallel computing, and the introductory concepts associated with computed tomography, GPUs, CUDA-C and parallel processing are covered. The parallel version of the FDK algorithm running on the GPU is compared with a serial version of the same algorithm, showing higher processing speed. Performance tests were carried out on two GPUs of different capabilities: an NVIDIA GeForce 9400GT card (16 cores) and an NVIDIA Quadro 2000 card (192 cores).
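As a rough illustration of why FDK parallelizes so well, the sketch below backprojects one filtered cone-beam view with every voxel updated independently, which is exactly what maps onto one CUDA thread per voxel; the geometry conventions and nearest-neighbor interpolation are simplifying assumptions, not the thesis's implementation:

```python
import numpy as np

def fdk_backproject_one_view(volume, proj, beta, dso, dsd, du, dv, vox):
    """Schematic FDK backprojection of one filtered cone-beam projection
    (flat detector, circular orbit, volume assumed well inside the scan
    radius so U > 0).

    proj : filtered 2D projection (nv x nu); beta : gantry angle [rad];
    dso/dsd : source-origin and source-detector distances;
    du/dv : detector pixel pitch; vox : isotropic voxel size."""
    nx, ny, nz = volume.shape
    nv, nu = proj.shape
    x = (np.arange(nx) - (nx - 1) / 2) * vox
    y = (np.arange(ny) - (ny - 1) / 2) * vox
    z = (np.arange(nz) - (nz - 1) / 2) * vox
    X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
    # distance from the source plane to each voxel along the central ray
    U = dso + X * np.cos(beta) + Y * np.sin(beta)
    # detector coordinates of each voxel's cone-beam projection
    u = (dsd / U) * (-X * np.sin(beta) + Y * np.cos(beta))
    v = (dsd / U) * Z
    iu = np.clip(np.round(u / du + (nu - 1) / 2).astype(int), 0, nu - 1)
    iv = np.clip(np.round(v / dv + (nv - 1) / 2).astype(int), 0, nv - 1)
    volume += (dso / U) ** 2 * proj[iv, iu]   # FDK distance weighting
    return volume
```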
Abstract:
Computer-assisted topology predictions are widely used to build low-resolution structural models of integral membrane proteins (IMPs). Experimental validation of these models by traditional methods is labor intensive and requires modifications that might alter the IMP native conformation. This work employs oxidative labeling coupled with mass spectrometry (MS) as a validation tool for computer-generated topology models. ·OH exposure introduces oxidative modifications in solvent-accessible regions, whereas buried segments (e.g., transmembrane helices) are non-oxidizable. The Escherichia coli protein WaaL (O-antigen ligase) is predicted to have 12 transmembrane helices and a large extramembrane domain (Pérez et al., Mol. Microbiol. 2008, 70, 1424). Tryptic digestion and LC-MS/MS were used to map the oxidative labeling behavior of WaaL. Met and Cys exhibit high intrinsic reactivities with ·OH, making them sensitive probes for solvent accessibility assays. Overall, the oxidation pattern of these residues is consistent with the originally proposed WaaL topology. One residue (M151), however, undergoes partial oxidation despite being predicted to reside within a transmembrane helix. Using an improved computer algorithm, a slightly modified topology model was generated that places M151 closer to the membrane interface. On the basis of the labeling data, it is concluded that the refined model more accurately reflects the actual topology of WaaL. We propose that the combination of oxidative labeling and MS represents a useful strategy for assessing the accuracy of IMP topology predictions, supplementing data obtained in traditional biochemical assays. In the future, it might be possible to incorporate oxidative labeling data directly as constraints in topology prediction algorithms.
Abstract:
BACKGROUND: This study describes the prevalence, associated anomalies, and demographic characteristics of cases of multiple congenital anomalies (MCA) in 19 population-based European registries (EUROCAT) covering 959,446 births between 2004 and 2010. METHODS: EUROCAT implemented a computer algorithm for classification of congenital anomaly cases, followed by manual review of potential MCA cases by geneticists. MCA cases are defined as cases with two or more major anomalies of different organ systems, excluding sequences, chromosomal and monogenic syndromes. RESULTS: The combination of an epidemiological and clinical approach for classification of cases has improved the quality and accuracy of the MCA data. The total prevalence of MCA cases was 15.8 per 10,000 births. Fetal deaths and terminations of pregnancy were significantly more frequent in MCA cases than in isolated cases (p < 0.001), and MCA cases were more frequently prenatally diagnosed (p < 0.001). Live born infants with MCA were more often born preterm (p < 0.01) and with birth weight < 2500 grams (p < 0.01). Respiratory and ear, face, and neck anomalies were the most likely to occur with other anomalies (34% and 32%), and congenital heart defects and limb anomalies were the least likely to occur with other anomalies (13%) (p < 0.01). However, due to their high prevalence, congenital heart defects were present in half of all MCA cases. Among males with MCA, the frequency of genital anomalies was significantly greater than among females with MCA (p < 0.001). CONCLUSION: Although rare, MCA cases are an important public health issue because of their severity. The EUROCAT database of MCA cases will allow future investigation of the epidemiology of these conditions and related clinical and diagnostic problems.
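The published algorithm is not reproduced in the abstract; a minimal sketch of the stated case definition, with illustrative field names, might look like this (flagged cases would still go to manual review by geneticists):

```python
def is_mca_case(case):
    """Flag a registry case as a potential multiple congenital anomalies
    (MCA) case per the definition in the abstract: two or more major
    anomalies of different organ systems, excluding sequences, chromosomal
    and monogenic syndromes."""
    if case.get("sequence") or case.get("chromosomal") or case.get("monogenic"):
        return False
    systems = {a["organ_system"] for a in case["anomalies"] if a["major"]}
    return len(systems) >= 2
```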
Abstract:
Adapting the methodology of Osler and Chang (1995), this work empirically evaluates the profitability of investment strategies based on identifying the head-and-shoulders technical-analysis chart pattern in the Brazilian stock market. To this end, several investment strategies conditional on the identification of head-and-shoulders patterns (in both their standard and inverted forms) by a computer algorithm were defined over daily price series of 47 stocks from January 1994 to August 2006. To test the predictive power of each strategy, confidence intervals were constructed using the bootstrap sampling-inference technique, consistent with the null hypothesis that strategies with positive returns cannot be created from historical data alone. More specifically, the average returns obtained by each strategy on the stock price series were compared with those obtained by the same strategies applied to 1,000 artificial price series per stock, generated according to two widely used stock-price models: random walk and E-GARCH. Overall, the results show that it is possible to create strategies conditional on the occurrence of head-and-shoulders patterns with positive returns, indicating that these patterns capture signals in historical stock price series about future price movements that are explained neither by a random walk nor by an E-GARCH model. However, once fees and transaction costs are taken into account, depending on their magnitudes, these conclusions hold only for the pattern in its inverted form.
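The thesis's detection rules follow Osler and Chang and are not given in the abstract; the sketch below shows a simplified screen for the standard pattern only (the extrema window and shoulder tolerance are assumptions, and the neckline and timing rules are omitted):

```python
import numpy as np
from scipy.signal import argrelextrema

def head_and_shoulders(prices, order=5, shoulder_tol=0.03):
    """Screen a daily price series for the standard head-and-shoulders
    pattern: three successive local maxima with the middle peak (head)
    above two shoulders of roughly equal height."""
    p = np.asarray(prices, dtype=float)
    peaks = argrelextrema(p, np.greater, order=order)[0]  # local maxima
    matches = []
    for i in range(len(peaks) - 2):
        left, head, right = p[peaks[i]], p[peaks[i + 1]], p[peaks[i + 2]]
        if head > left and head > right and \
           abs(left - right) / max(left, right) <= shoulder_tol:
            matches.append(tuple(peaks[i:i + 3]))
    return matches
```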
Abstract:
The pumping of fluids in pipelines is the most economical and safe way of transporting fluids, which explains why in 1999 Europe had about 30,000 km [7] of pipelines of various diameters, transporting millions of cubic meters of crude oil and refined products, operated by members of CONCAWE (the European oil companies' association for environment, health and safety, which brings together several petroleum companies). In Brazil there are about 18,000 km of pipelines transporting millions of cubic meters of liquids and gases. In 1999, nine accidents were reported to CONCAWE, one of them with a fatal victim. The oil lost amounted to 171 m³, equivalent to 0.2 parts per million of the total transported volume. Even so, the costs involved in an accident can be high: a major accident can bring loss of human life, severe environmental damage, loss of the spilled product, loss of profit, damage to the company's image, and high recovery costs. Accordingly, and in some cases to meet legal requirements, companies are increasingly investing in pipeline leak detection systems based on computer algorithms that operate in real time, seeking to minimize spilled volumes still further and thereby reduce environmental impact and costs. In general, all software-based systems produce some type of false alarm, and a compromise exists between the sensitivity of the system and the number of false alarms. This work reviews the existing methods and concentrates on the analysis of one specific system, based on hydraulic noise: Pressure Point Analysis (PPA). We show the most important aspects to be considered in the implementation of a Leak Detection System (LDS), from the initial risk analysis through the design bases, detailed design, and choice of the field instrumentation required by the various LDS types, to implementation and testing. We analyze events (noises) originating in the flow system that can generate false alarms, and we present a computer algorithm that suppresses those noises automatically.
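PPA implementations are proprietary; the sketch below captures only the core statistical idea, comparing a short recent pressure window against a longer reference window, with the factor k expressing the sensitivity/false-alarm compromise discussed above (window sizes and k are illustrative assumptions):

```python
import numpy as np

def ppa_leak_alarm(pressure, n_ref=600, n_recent=60, k=4.0):
    """Raise a leak alarm when the mean of the most recent pressure samples
    drops significantly below the mean of a longer reference window.
    Raising k reduces false alarms at the cost of sensitivity."""
    p = np.asarray(pressure, dtype=float)
    reference = p[-(n_ref + n_recent):-n_recent]
    recent = p[-n_recent:]
    margin = k * reference.std() / np.sqrt(n_recent)   # noise-scaled threshold
    return recent.mean() < reference.mean() - margin
```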
Abstract:
To study the role played by acetate metabolism during high-cell-density growth of Escherichia coli cells, we constructed isogenic null mutants of strain W3100 deficient for several genes involved either in acetate metabolism or in the transition to stationary phase. We grew these strains under identical fed-batch conditions to the highest cell densities achievable in 8 h using a predictive-plus-feedback-controlled computer algorithm that maintained glucose at a set-point of 0.5 g/l, as previously described. Wild-type strains, as well as mutants lacking the sigma(s) subunit of RNA polymerase (rpoS), grew reproducibly to high cell densities (44-50 g/l dry cell weight, DCW). In contrast, a strain lacking acetate kinase (ackA) failed to reach densities greater than 8 g/l. Strains lacking other acetate metabolism genes (pta, acs, poxB, iclR, and fadR) achieved only medium cell densities (15-21 g/l DCW). Complementation of either the acs or the ackA mutant restored wild-type high-cell-density growth. On a dry-weight basis, poxB and fadR strains produced approximately threefold more acetate than did the wild-type strain. In contrast, the pta, acs, and rpoS strains produced significantly less acetate per cell dry weight than did the wild-type strain. Our results show that acetate metabolism plays a critical role during growth of E. coli cultures to high cell densities. They also demonstrate that cells do not require the sigma(s) regulon to grow to high cell densities, at least not under the conditions tested.
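The cited predictive-plus-feedback controller is not detailed in the abstract; a schematic sketch of such a feed law, holding glucose at the 0.5 g/l set-point, might look as follows (the structure and gain are assumptions, not the published controller):

```python
def glucose_feed_rate(glucose_meas, rate_prev, mu_est, dt,
                      set_point=0.5, kp=0.05):
    """Predictive-plus-feedback feed law: the predictive term grows the
    feed exponentially with the estimated growth rate mu_est (glucose
    demand tracks biomass), and a proportional feedback term corrects
    deviations of the measured glucose [g/l] from the set-point."""
    predictive = rate_prev * (1.0 + mu_est * dt)       # track exponential demand
    feedback = kp * (set_point - glucose_meas) * rate_prev
    return max(0.0, predictive + feedback)
```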