929 results for Source Code Analysis


Relevance:

30.00%

Publisher:

Abstract:

The catalytic properties of enzymes are usually evaluated by measuring and analyzing reaction rates. However, analyzing the complete time course can be advantageous because it contains additional information about the properties of the enzyme. Moreover, for systems that are not at steady state, the analysis of time courses is the preferred method. One of the major barriers to the wide application of time courses is that it may be computationally more difficult to extract information from these experiments. Here the basic approach to analyzing time courses is described, together with some examples of the essential computer code needed to implement these analyses. A general method that can be applied to both steady-state and non-steady-state systems is recommended. (C) 2001 Academic Press.
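As an illustration of fitting a complete time course rather than initial rates, the sketch below (Python) numerically integrates a simple irreversible Michaelis-Menten rate law and fits Vmax and Km to a simulated progress curve by non-linear least squares. The mechanism, parameter values and "observed" data are hypothetical examples, not material from the paper.

# Illustrative sketch: fitting an enzyme progress curve (time course) by
# numerically integrating the Michaelis-Menten rate law.  Mechanism, parameter
# values and the "observed" data are hypothetical, not taken from the paper.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

def progress_curve(t, Vmax, Km, S0=100.0):
    """Substrate concentration S(t) for dS/dt = -Vmax*S/(Km + S)."""
    sol = solve_ivp(lambda _t, S: [-Vmax * S[0] / (Km + S[0])],
                    (t[0], t[-1]), [S0], t_eval=t, rtol=1e-8)
    return sol.y[0]

# Simulated "observed" time course with noise (stand-in for experimental data).
t_obs = np.linspace(0.0, 60.0, 31)
rng = np.random.default_rng(0)
S_obs = progress_curve(t_obs, Vmax=5.0, Km=20.0) + rng.normal(0.0, 1.0, t_obs.size)

# Fit Vmax and Km directly to the complete time course.
(Vmax_fit, Km_fit), _ = curve_fit(progress_curve, t_obs, S_obs,
                                  p0=[1.0, 10.0], bounds=(0.0, np.inf))
print(f"fitted Vmax = {Vmax_fit:.2f}, Km = {Km_fit:.2f}")

Fitting the integrated curve in this way uses all of the data and extends to non-steady-state mechanisms by replacing the rate law inside the integrator.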

Relevance:

30.00%

Publisher:

Abstract:

Recent studies have demonstrated the occurrence of elevated levels of higher chlorinated PCDDs in the coastal environment of Queensland, Australia. This study presents new data for OCDD contamination and full PCDD/F profile analysis in the environment of Queensland. Marine sediments, irrigation drain sediments and topsoil were collected from sites that were expected to be influenced by specific land-use types. High OCDD concentrations were associated mainly with sediments collected near the mouths of rivers that drain large catchments in the tropical and subtropical regions. Further, sediments from irrigation drains could be clearly differentiated on the basis of OCDD contamination, with high concentrations in samples from sugarcane drains collected in coastal regions and low concentrations in drain sediments from drier inland cotton-growing areas. PCDD/F congener-specific analysis demonstrated almost identical congener profiles in all samples collected along the coastline, indicating that the source is widespread. Profiles were dominated by higher chlorinated PCDDs, in particular OCDD, whereas 2,3,7,8-substituted PCDFs were below the limit of quantification in the majority of samples. The full PCDD/F profiles of the samples strongly resemble those reported for lake sediments from Mississippi and kaolinite samples from Germany. Strong similarities to these samples with respect to congener profiles and isomer patterns may indicate the presence of a similar source and/or formation process that is as yet unidentified. (C) 2001 Elsevier Science Ltd. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

The development of the new TOGA (titration and off-gas analysis) sensor for the detailed study of biological processes in wastewater treatment systems is outlined. The main innovation of the sensor is the amalgamation of titrimetric and off-gas measurement techniques. The resulting measured signals are: hydrogen ion production rate (HPR), oxygen transfer rate (OTR), nitrogen transfer rate (NTR) and carbon dioxide transfer rate (CTR). While OTR and NTR are applicable to aerobic and anoxic conditions, respectively, HPR and CTR are useful signals under all of the conditions found in biological wastewater treatment systems, namely aerobic, anoxic and anaerobic. The sensor is therefore a powerful tool for studying the key biological processes under all these conditions. A major benefit of integrating the titrimetric and off-gas analysis methods is that the acid/base buffering systems, in particular the bicarbonate system, are properly accounted for. Experimental data from the TOGA sensor under aerobic, anoxic and anaerobic conditions demonstrate the strength of the new sensor. In the aerobic environment, carbon oxidation (using acetate as an example carbon source) and nitrification were studied. Both the carbon and ammonia removal rates measured by the sensor compare very well with those obtained from off-line chemical analysis. Further, the aerobic acetate removal process was examined at a fundamental level using the metabolic pathway and stoichiometry established in the literature, whereby the rate of formation of storage products was identified. Under anoxic conditions, the denitrification process was monitored and, again, the measured rate of nitrogen gas transfer (NTR) matched well with the removal of the oxidised nitrogen compounds (measured chemically). In the anaerobic environment, the enhanced biological phosphorus removal process was investigated. In this case, the measured sensor signals (HPR and CTR) resulting from acetate uptake were used to determine the ratio of the rates of carbon dioxide production by competing groups of microorganisms, which consequently is a measure of the activity of these organisms. The sensor involves the use of expensive equipment such as a mass spectrometer and requires special gases to operate, thus incurring significant capital and operational costs. This makes the sensor more an advanced laboratory tool than an on-line sensor. (C) 2003 Wiley Periodicals, Inc.

Relevance:

30.00%

Publisher:

Abstract:

Observations of an insect's movement lead to theory on the insect's flight behaviour and the role of movement in the species' population dynamics. This theory leads to predictions of the way the population changes in time under different conditions. If a hypothesis on movement predicts a specific change in the population, then the hypothesis can be tested against observations of population change. Routine pest monitoring of agricultural crops provides a convenient source of data for studying movement into a region and among fields within a region. Examples of the use of statistical and computational methods for testing hypotheses with such data are presented. The types of questions that can be addressed with these methods and the limitations of pest monitoring data when used for this purpose are discussed. (C) 2002 Elsevier Science B.V. All rights reserved.
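As a minimal illustration of such a test, the sketch below compares the field counts predicted by a movement hypothesis with the counts observed at a later survey using a chi-square goodness-of-fit test; the five fields, the counts and the 20%-growth hypothesis are invented for the example and are not data from the paper.

# Illustrative sketch: testing a movement hypothesis against pest monitoring
# counts with a chi-square goodness-of-fit test.  All numbers are hypothetical.
import numpy as np
from scipy.stats import chisquare

counts_t0 = np.array([40, 55, 30, 70, 25])      # counts in five fields, survey 1

# Hypothesis: immigration increases every field's count by 20% between surveys.
predicted_t1 = counts_t0 * 1.2

observed_t1 = np.array([46, 60, 52, 80, 28])    # counts in the same fields, survey 2

# Rescale predictions so totals match, as chisquare expects equal sums.
predicted_t1 = predicted_t1 * observed_t1.sum() / predicted_t1.sum()

stat, p_value = chisquare(observed_t1, predicted_t1)
print(f"chi-square = {stat:.2f}, p = {p_value:.3f}")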

Relevance:

30.00%

Publisher:

Abstract:

The majority of the world's population now resides in urban environments, and information on the internal composition and dynamics of these environments is essential to enable preservation of certain standards of living. Remotely sensed data, especially the global coverage of moderate-spatial-resolution satellites such as Landsat, the Indian Resource Satellite and Système Pour l'Observation de la Terre (SPOT), offer a highly useful data source for mapping the composition of these cities and examining their changes over time. The utility and range of applications for remotely sensed data in urban environments could be improved with a more appropriate conceptual model relating urban environments to the sampling resolutions of imaging sensors and processing routines. Hence, the aim of this work was to take the Vegetation-Impervious surface-Soil (VIS) model of urban composition and match it with the most appropriate image-processing methodology to deliver information on VIS composition for urban environments. Several approaches were evaluated for mapping the urban composition of Brisbane city (south-east Queensland, Australia) using Landsat 5 Thematic Mapper data and 1:5000 aerial photographs. The methods evaluated were: image classification; interpretation of aerial photographs; and constrained linear mixture analysis. Over 900 reference sample points on four transects were extracted from the aerial photographs and used as a basis to check the output of the classification and mixture analysis. Distinctive zonations of VIS related to urban composition were found in the per-pixel classification and aggregated air-photo interpretation; however, significant spectral confusion also resulted between classes. In contrast, the VIS fraction images produced from the mixture analysis enabled distinctive densities of commercial, industrial and residential zones within the city to be clearly defined, based on their relative amount of vegetation cover. The soil fraction image served as an index for areas being (re)developed. The logical match of a low (L)-resolution spectral mixture analysis approach with the moderate-spatial-resolution image data ensured that the processing model matched the spectrally heterogeneous nature of urban environments at the scale of Landsat Thematic Mapper data.
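A minimal sketch of the constrained linear mixture analysis step is given below; the endmember reflectances and the example pixel are hypothetical placeholders rather than values derived from the Brisbane imagery, and the sum-to-one constraint is imposed softly through a heavily weighted extra equation.

# Illustrative sketch of constrained linear spectral unmixing: non-negative
# vegetation, impervious-surface and soil fractions that sum to one.  The
# endmember spectra and the example pixel are hypothetical placeholders.
import numpy as np
from scipy.optimize import nnls

# Endmember matrix: columns = vegetation, impervious surface, soil;
# rows = six reflective Landsat TM bands (placeholder reflectances).
E = np.array([[0.04, 0.09, 0.12],
              [0.06, 0.11, 0.16],
              [0.05, 0.13, 0.22],
              [0.45, 0.18, 0.30],
              [0.22, 0.20, 0.38],
              [0.10, 0.17, 0.32]])

def unmix(pixel, endmembers, weight=1e3):
    """Solve min ||E f - pixel|| subject to f >= 0 and sum(f) = 1 (soft)."""
    A = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
    b = np.append(pixel, weight * 1.0)
    fractions, _ = nnls(A, b)
    return fractions

pixel = np.array([0.18, 0.14, 0.15, 0.30, 0.27, 0.22])   # example mixed spectrum
f_veg, f_imp, f_soil = unmix(pixel, E)
print(f"vegetation={f_veg:.2f}, impervious={f_imp:.2f}, soil={f_soil:.2f}")

Applying unmix to every pixel yields the vegetation, impervious-surface and soil fraction images referred to above.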

Relevance:

30.00%

Publisher:

Abstract:

Lucerne (Medicago sativa L.) is autotetraploid and predominantly allogamous. This complex breeding structure maximises the genetic diversity within lucerne populations, making it difficult to genetically discriminate between populations. The objective of this study was to evaluate the level of random genetic diversity within and between a selection of Australian-grown lucerne cultivars, with tetraploid M. falcata included as a possible divergent control source. This diversity was evaluated using random amplified polymorphic DNA (RAPD) markers. Nineteen plants from each of 10 cultivars were analysed. Using 11 RAPD primers, 96 polymorphic bands were scored as present or absent across the 190 individuals. Genetic similarity estimates (GSEs) for all pair-wise comparisons were calculated from these data. Mean GSEs within cultivars ranged from 0.43 to 0.51: cultivar Venus (0.43) had the highest level of intra-population genetic diversity and cultivar Sequel HR (0.51) the lowest. Mean GSEs between cultivars ranged from 0.31 to 0.49, which overlapped with the values obtained within cultivars, thus not allowing separation of the cultivars. The high level of intra- and inter-population diversity that was detected is most likely due to the breeding of synthetic cultivars using parents derived from a number of diverse sources. Cultivar-specific polymorphisms were only identified in the M. falcata source, which, like M. sativa, is outcrossing and autotetraploid. From a cluster analysis and a principal components analysis, it was clear that M. falcata was distinct from the other cultivars. The results indicate that the M. falcata accession tested has not been widely used in Australian lucerne breeding programs and offers a means of introducing new genetic diversity into the lucerne gene pool. This provides a means of maximising heterozygosity, which is essential to maximising productivity in lucerne.
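The pair-wise similarity calculation can be sketched as follows. The abstract does not state which similarity coefficient was used, so the Dice (Nei and Li) coefficient, commonly applied to RAPD presence/absence data, is assumed here, and the band matrix is randomly generated purely for illustration.

# Illustrative sketch: genetic similarity estimates (GSEs) from a 0/1 RAPD band
# matrix using the Dice (Nei & Li) coefficient (an assumption; the abstract does
# not name the coefficient).  The band matrix is random, for illustration only.
import numpy as np

def dice_similarity(a, b):
    """2 * shared bands / (bands in a + bands in b) for 0/1 band vectors."""
    shared = np.sum((a == 1) & (b == 1))
    return 2.0 * shared / (a.sum() + b.sum())

rng = np.random.default_rng(1)
bands = rng.integers(0, 2, size=(19, 96))    # 19 plants x 96 polymorphic bands

n = bands.shape[0]
gse = np.array([[dice_similarity(bands[i], bands[j]) for j in range(n)]
                for i in range(n)])

iu = np.triu_indices(n, k=1)                 # all distinct pairs
print(f"mean within-cultivar GSE = {gse[iu].mean():.2f}")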

Relevance:

30.00%

Publisher:

Abstract:

A general overview of the protein sequence set for the mouse transcriptome produced during the FANTOM2 sequencing project is presented here. We applied different algorithms to characterize protein sequences derived from a nonredundant representative protein set (RPS) and a variant protein set (VPS) of the mouse transcriptome. The functional characterization and assignment of Gene Ontology terms was done by analysis of the proteome using InterPro. The Superfamily database analyses gave a detailed structural classification according to SCOP and provided additional evidence for the functional characterization of the proteome data. The MDS database analysis revealed new domains that are not present in existing protein domain databases. The transcriptome thus gives us a unique source of data for the detection of new functional groups. The data obtained for the RPS and VPS sets facilitated the comparison of different patterns of protein expression. A comparison with other existing mouse and human protein sequence sets (e.g., the International Protein Index) demonstrates the common patterns in mammalian proteomes. The analysis of the membrane organization within the transcriptomes of multiple eukaryotes provides valuable statistics about the distribution of secretory and transmembrane proteins.

Relevance:

30.00%

Publisher:

Abstract:

In Part 1 of this paper a methodology for back-to-back testing of simulation software was described. Residuals with error-dependent geometric properties were generated. A set of potential coding errors was enumerated, along with a corresponding set of feature matrices, which describe the geometric properties imposed on the residuals by each of the errors. In this part of the paper, an algorithm is developed to isolate the coding errors present by analysing the residuals. A set of errors is isolated when the subspace spanned by their combined feature matrices corresponds to that of the residuals. Individual feature matrices are compared to the residuals and classified as 'definite', 'possible' or 'impossible'. The status of 'possible' errors is resolved using a dynamic subset testing algorithm. To demonstrate and validate the testing methodology presented in Part 1 and the isolation algorithm presented in Part 2, a case study is presented using a model for biological wastewater treatment. Both single and simultaneous errors that are deliberately introduced into the simulation code are correctly detected and isolated. Copyright (C) 2003 John Wiley & Sons, Ltd.
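One rank-based reading of the subspace comparison can be sketched as follows; the feature matrices, residuals and the exact classification rule are assumptions made for illustration and should not be read as the paper's published algorithm.

# Illustrative sketch of a rank-based subspace test for classifying candidate
# coding errors against residuals.  Matrices and the classification rule are
# illustrative assumptions, not the published algorithm.
import numpy as np

def contained(sub, space, tol=1e-8):
    """True if the column space of `sub` lies within the column space of `space`."""
    return (np.linalg.matrix_rank(np.hstack([space, sub]), tol)
            == np.linalg.matrix_rank(space, tol))

def classify(features, residuals, tol=1e-8):
    """Label each candidate error as 'definite', 'possible' or 'impossible'."""
    labels = {}
    for name, F in features.items():
        if not contained(F, residuals, tol):
            labels[name] = "impossible"      # residuals lack these directions
            continue
        others = [G for n, G in features.items()
                  if n != name and contained(G, residuals, tol)]
        rest = np.hstack(others) if others else np.zeros_like(residuals)
        # 'definite' if the remaining candidates alone cannot span the residuals.
        labels[name] = ("definite" if not contained(residuals, rest, tol)
                        else "possible")
    return labels

# Hypothetical example: two candidate errors with 2-D feature subspaces in a
# 6-D residual space; the residuals are generated by error A alone.
rng = np.random.default_rng(2)
features = {"err_A": rng.normal(size=(6, 2)), "err_B": rng.normal(size=(6, 2))}
residuals = features["err_A"] @ rng.normal(size=(2, 3))
print(classify(features, residuals))         # err_A -> definite, err_B -> impossible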

Relevance:

30.00%

Publisher:

Abstract:

This work presents the concept of translation in a broad sense, breaking with the notion of transforming a message from one linguistic code into another in which only literal equivalence is sought. Drawing on postcolonial and deconstructionist theorists such as Bhabha (2010), Hall (2006), Derrida (2006), Ottoni (2005), Orlandi (2008) and Niranjana (2011), it discusses the possibility of repositioning translation, understanding it as a tool capable of deconstructing dominant paradigms and thus retelling histories, constituting plastic bridges. Translation is therefore considered in its plasticity, fostering bridges that manifest the crossings between languages and reveal the human being as a crossing of multiple subjects. As the empirical object of analysis, three Wikipedia entries involving traits of Brazilian identity are used, and the nuances of the translation processes that can contribute to how Brazil is represented by both local and foreign communities, leading to the formation of stereotypes, are investigated through stereoscopic reading and the Relevance Theory proposed by Sperber & Wilson (2001). The research proposed here is relevant because it studies a corpus little explored academically but socially prominent, since Wikipedia is currently among the ten most visited sites, with more than 300 million unique visits, in addition to being a source of collective construction of thought that reinvents the concept of the encyclopedia. The research is also relevant from the perspective of translation studies, since it discusses the importance of the translator as a tool in the formation of national, cultural and social identities, and proposes the hybrid concept of translation-review as a tendency in the contemporary multilingual world.

Relevance:

30.00%

Publisher:

Abstract:

The objective of this work is the development of a dynamic radio resource simulation tool for the LTE downlink, built on the OMNeT++ framework. The tool supports base station planning, simulation and analysis of results. The main aspects of the radio access technology are described, namely the network architecture, coding, the definition of the radio resources, the transmission rates supported at the channel level and the admission control mechanism. A radio resource usage scenario was defined, including traffic models and packet- and circuit-oriented services. A reference scenario was also considered for the verification and validation of the simulation model. The simulation is performed at the system level, supported by a dynamic, stochastic, discrete-event model so as to capture the different mechanisms that characterize OFDMA technology. The results obtained allow the performance of services, base stations and the system to be analysed in terms of mean network throughput, mean throughput per eNodeB and mean throughput per mobile terminal, as well as the contribution of other parameters, namely bandwidth, coverage radius, service profile and modulation scheme, among others. From the results it was possible to verify that, for a scenario with base stations with a coverage radius of 100 m, an end-user throughput of 4.69494 Mbps was obtained, i.e. 7 times higher than with base stations with a coverage radius of 200 m.
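The event-driven character of such a system-level simulator can be illustrated with the toy Python sketch below: a single eNodeB, Poisson packet arrivals and per-TTI resource-block scheduling. All parameter values are hypothetical and the sketch is not part of the OMNeT++ tool described above.

# Toy discrete-event downlink sketch: Poisson arrivals and per-TTI scheduling of
# a fixed pool of resource blocks.  TTI length, resource blocks, bits per block
# and arrival rate are hypothetical illustration values.
import heapq
import random

TTI = 0.001            # scheduling interval [s]
N_RB = 50              # resource blocks available per TTI
BITS_PER_RB = 1000     # bits carried by one resource block (fixed MCS assumed)
SIM_TIME = 10.0        # simulated time [s]

random.seed(0)
events = [(0.0, "tti"), (random.expovariate(2000.0), "arrival")]
heapq.heapify(events)

backlog_bits = 0
delivered_bits = 0

while events:
    t, kind = heapq.heappop(events)
    if t > SIM_TIME:
        break
    if kind == "arrival":                    # 1500-byte packets, Poisson arrivals
        backlog_bits += 1500 * 8
        heapq.heappush(events, (t + random.expovariate(2000.0), "arrival"))
    else:                                    # serve the backlog with this TTI's blocks
        served = min(backlog_bits, N_RB * BITS_PER_RB)
        backlog_bits -= served
        delivered_bits += served
        heapq.heappush(events, (t + TTI, "tti"))

print(f"mean downlink throughput = {delivered_bits / SIM_TIME / 1e6:.2f} Mbit/s")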

Relevance:

30.00%

Publisher:

Abstract:

Reclaimed water from small wastewater treatment facilities in the rural areas of the Beira Interior region (Portugal) may constitute an alternative water source for aquifer recharge. A 21-month monitoring period in a constructed wetland treatment system has shown that 21,500 m³ year⁻¹ of treated wastewater (reclaimed water) could be used for aquifer recharge. A GIS-based multi-criteria analysis was performed, combining ten thematic maps and economic, environmental and technical criteria, in order to produce a suitability map for the location of sites for reclaimed water infiltration. The areas chosen for aquifer recharge with infiltration basins are mainly composed of anthrosol more than 1 m deep with a fine sand texture, which allows an average infiltration velocity of up to 1 m d⁻¹. These characteristics will provide a final polishing treatment of the reclaimed water after infiltration (soil aquifer treatment, SAT), suitable for the removal of the residual load (trace organics, nutrients, heavy metals and pathogens). The risk of groundwater contamination is low since the water table in the anthrosol areas ranges from 10 m to 50 m deep. On the other hand, these depths guarantee an unsaturated zone suitable for SAT. An area of 13,944 ha was selected for study, but only 1607 ha are suitable for reclaimed water infiltration. Approximately 1280 m² were considered sufficient to set up four infiltration basins operating in flooding and drying cycles.
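The multi-criteria overlay itself can be sketched as a weighted linear combination of normalized criterion rasters; the layers, weights, suitability threshold and cell size below are hypothetical stand-ins for the ten thematic maps and the criteria actually used in the study.

# Illustrative sketch of a GIS-style multi-criteria suitability overlay: weighted
# sum of normalized criterion rasters, exclusion mask and threshold.  Layers,
# weights, threshold and cell size are hypothetical illustration values.
import numpy as np

rng = np.random.default_rng(3)
shape = (100, 100)                           # raster grid (rows, cols)

# Criterion rasters already normalized to [0, 1], where 1 = most suitable.
criteria = {
    "soil_depth":        rng.random(shape),
    "infiltration_rate": rng.random(shape),
    "water_table_depth": rng.random(shape),
    "distance_to_plant": rng.random(shape),
}
weights = {"soil_depth": 0.30, "infiltration_rate": 0.30,
           "water_table_depth": 0.25, "distance_to_plant": 0.15}

suitability = sum(weights[k] * criteria[k] for k in criteria)

excluded = rng.random(shape) < 0.1           # e.g. protected or built-up cells
suitable = (suitability > 0.7) & ~excluded

cell_area_ha = 0.25                          # hypothetical 50 m x 50 m cells
print(f"suitable area = {suitable.sum() * cell_area_ha:.0f} ha")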

Relevance:

30.00%

Publisher:

Abstract:

The development of accurate mass spectrometry, enabling the identification of all the ions extracted from the ion source of a high-current implanter, is described. The spectrometry system uses two signals (an x-y graphic): one proportional to the magnetic field (x-axis), taken from the high-voltage potential with an optical fiber system, and the other proportional to the beam current intensity (y-axis), taken from a beam-stop. A mass spectrum is thus registered of all the elements magnetically analyzed with the same radius and defined by a pair of analyzing slits, as a function of their beam intensity. The system uses a PC to display the mass spectrum of the extracted beam and to record all acquired data for later analysis. The operator uses LabVIEW code that interfaces an I/O board with the ion implanter. Experimental results from an ion implantation experiment are shown. (C) 2011 Elsevier B.V. All rights reserved.
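The conversion of the two acquired signals into a mass spectrum can be sketched with the standard magnetic-sector relation m = q B^2 r^2 / (2 V); the bending radius, extraction voltage and the synthetic peaks below are assumed illustration values, not parameters of the implanter described.

# Illustrative sketch: converting (magnetic field, beam current) pairs into a
# mass spectrum via m = q * B^2 * r^2 / (2 * V).  Radius, extraction voltage and
# the synthetic peaks are hypothetical illustration values.
import numpy as np

E_CHARGE = 1.602e-19       # elementary charge [C]
AMU = 1.66054e-27          # atomic mass unit [kg]
RADIUS = 0.5               # analysing magnet bending radius [m] (assumed)
V_EXT = 30e3               # extraction voltage [V] (assumed)

def b_field_to_mass(b, charge_state=1):
    """Mass (in amu) selected by field b [T] for an ion of the given charge state."""
    return charge_state * E_CHARGE * b**2 * RADIUS**2 / (2.0 * V_EXT) / AMU

# Synthetic stand-in for the acquired (B, beam current) scan.
b_scan = np.linspace(0.05, 0.45, 2000)
mass = b_field_to_mass(b_scan)
current = sum(a * np.exp(-0.5 * ((mass - m0) / 0.15) ** 2)
              for m0, a in [(11.0, 2.0), (31.0, 5.0), (49.0, 1.0)])  # e.g. B+, P+, BF2+

print(f"dominant peak near {mass[np.argmax(current)]:.1f} amu")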

Relevance:

30.00%

Publisher:

Abstract:

This paper analyzes DNA information using entropy and phase plane concepts. First, the DNA code is converted into a numerical format by means of histograms that capture DNA sequences of length ranging from one up to ten bases. This strategy measures dynamical evolutions over 4 up to 4^10 signal states. The resulting histograms are analyzed using three distinct entropy formulations, namely the Shannon, Rényi and Tsallis definitions. Charts of entropy versus sequence length are applied to a set of twenty-four species, characterizing 486 chromosomes. The information is synthesized and visualized by adapting phase plane concepts, leading to a categorical representation of chromosomes and species.
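The core of the entropy analysis can be sketched in a few lines: build the k-mer histogram for each sequence length and evaluate the three entropy definitions on it. The random example sequence and the entropic index q = 2 are hypothetical illustration choices.

# Illustrative sketch: k-mer histograms (k = 1..10 spans 4 to 4**10 states) and
# their Shannon, Renyi and Tsallis entropies.  Example sequence and q = 2 are
# hypothetical illustration choices.
import math
import random
from collections import Counter

def kmer_probs(seq, k):
    """Relative frequencies of the k-mers observed in seq."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return [c / total for c in counts.values()]

def shannon(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def renyi(p, q):
    return math.log(sum(pi ** q for pi in p)) / (1.0 - q)

def tsallis(p, q):
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

random.seed(0)
seq = "".join(random.choice("ACGT") for _ in range(10000))   # stand-in sequence

for k in range(1, 6):        # data for an entropy-versus-length chart
    p = kmer_probs(seq, k)
    print(k, round(shannon(p), 3), round(renyi(p, 2.0), 3), round(tsallis(p, 2.0), 3))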

Relevance:

30.00%

Publisher:

Abstract:

This paper studies the human DNA from the perspective of signal processing. Six wavelets are tested for analyzing the information content of the human DNA. By adopting the real Shannon wavelet, several fundamental properties of the code are revealed. A quantitative comparison of the chromosomes and a visualization through multidimensional scaling and dendrograms are developed.
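A sketch of this wavelet view is given below: the bases are mapped to a numeric signal and continuous-wavelet coefficients are computed with the real Shannon wavelet psi(t) = sinc(t/2) cos(3*pi*t/2). The base-to-number mapping, the scales and the random example sequence are hypothetical illustration choices.

# Illustrative sketch: continuous wavelet coefficients of a DNA-derived signal
# using the real Shannon wavelet.  The numeric mapping of the bases, the scales
# and the example sequence are hypothetical illustration choices.
import numpy as np

def real_shannon(t):
    """Real Shannon mother wavelet psi(t) = sinc(t/2) * cos(3*pi*t/2)."""
    return np.sinc(t / 2.0) * np.cos(1.5 * np.pi * t)

def cwt(signal, scales, support=8):
    """Wavelet coefficients at the given scales (kernel sampled at unit spacing)."""
    coeffs = []
    for a in scales:
        t = np.arange(-support * a, support * a + 1)
        kernel = real_shannon(t / a) / np.sqrt(a)
        coeffs.append(np.convolve(signal, kernel, mode="same"))
    return np.array(coeffs)

mapping = {"A": -1.5, "C": -0.5, "G": 0.5, "T": 1.5}    # arbitrary numeric mapping
rng = np.random.default_rng(0)
seq = rng.choice(list("ACGT"), size=2048)               # stand-in sequence
signal = np.array([mapping[b] for b in seq])

coeffs = cwt(signal, scales=[2, 4, 8, 16])
energy = (coeffs ** 2).mean(axis=1)                     # one simple per-scale descriptor
print(np.round(energy, 3))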

Relevance:

30.00%

Publisher:

Abstract:

This paper analyzes the DNA code of several species from the perspective of information content. For that purpose, several concepts and mathematical tools are selected towards establishing a quantitative method that does not a priori distort the alphabet represented by the sequence of DNA bases. The synergies of associating Gray code, histogram characterization and multidimensional scaling visualization lead to a collection of plots with a categorical representation of species and chromosomes.
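A compact sketch of the pipeline is shown below; the base-to-Gray-code assignment, k-mer length, distance measure and synthetic sequences are hypothetical illustration choices rather than the paper's exact settings.

# Illustrative sketch: Gray-code base encoding, k-mer histograms and
# multidimensional scaling of the resulting histogram distances.  All settings
# and sequences are hypothetical illustration choices.
import numpy as np
from sklearn.manifold import MDS

GRAY = {"A": "00", "C": "01", "G": "11", "T": "10"}     # 2-bit Gray codes (assumed)

def kmer_histogram(seq, k=3):
    """Normalized histogram over the 4**k k-mers, indexed via the Gray encoding."""
    hist = np.zeros(4 ** k)
    for i in range(len(seq) - k + 1):
        hist[int("".join(GRAY[b] for b in seq[i:i + k]), 2)] += 1
    return hist / hist.sum()

rng = np.random.default_rng(4)
def random_seq(p, n=20000):
    return "".join(rng.choice(list("ACGT"), size=n, p=p))

# Synthetic stand-ins for the sequences of three "species".
sequences = {"sp1": random_seq([0.30, 0.20, 0.20, 0.30]),
             "sp2": random_seq([0.25, 0.25, 0.25, 0.25]),
             "sp3": random_seq([0.20, 0.30, 0.30, 0.20])}

hists = np.array([kmer_histogram(s) for s in sequences.values()])
dists = np.linalg.norm(hists[:, None, :] - hists[None, :, :], axis=-1)

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dists)
for name, xy in zip(sequences, coords):
    print(name, np.round(xy, 3))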