62 results for Chain sequences


Relevance:

20.00%

Publisher:

Abstract:

As stated in Aitchison (1986), a proper study of relative variation in a compositional data set should be based on logratios, and dealing with logratios excludes dealing with zeros. Nevertheless, it is clear that zero observations might be present in real data sets, either because the corresponding part is completely absent (essential zeros) or because it is below the detection limit (rounded zeros). Because the second kind of zeros is usually understood as "a trace too small to measure", it seems reasonable to replace them by a suitable small value, and this has been the traditional approach. As stated, e.g., by Tauber (1999) and by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000), the principal problem in compositional data analysis is related to rounded zeros. One should be careful to use a replacement strategy that does not seriously distort the general structure of the data. In particular, the covariance structure of the involved parts (and thus the metric properties) should be preserved, as otherwise further analysis on subpopulations could be misleading. Following this point of view, a non-parametric imputation method is introduced in Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000). This method is analyzed in depth by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2003), where it is shown that the theoretical drawbacks of the additive zero replacement method proposed in Aitchison (1986) can be overcome using a new multiplicative approach on the non-zero parts of a composition. The new approach has reasonable properties from a compositional point of view. In particular, it is "natural" in the sense that it recovers the "true" composition if replacement values are identical to the missing values, and it is coherent with the basic operations on the simplex. This coherence implies that the covariance structure of subcompositions with no zeros is preserved. As a generalization of the multiplicative replacement, in the same paper a substitution method for missing values on compositional data sets is introduced.
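To make the multiplicative replacement concrete, here is a minimal Python sketch (not code from the cited papers) that imputes a common small value delta for the rounded zeros of a closed composition and rescales the non-zero parts multiplicatively so that the total remains 1; the function name and the single shared delta are illustrative assumptions.

```python
import numpy as np

def multiplicative_replacement(x, delta=0.005):
    """Impute rounded zeros in a composition that sums to 1.

    Zeros become delta; non-zero parts are shrunk multiplicatively by the
    total imputed mass, so ratios among non-zero parts (and hence the
    covariance structure of zero-free subcompositions) are preserved."""
    x = np.asarray(x, dtype=float)
    zeros = x == 0
    imputed_mass = delta * zeros.sum()
    return np.where(zeros, delta, x * (1.0 - imputed_mass))

composition = np.array([0.6, 0.3, 0.1, 0.0])
print(multiplicative_replacement(composition))  # still sums to 1
```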

Relevance:

20.00%

Publisher:

Abstract:

Realistic rendering of animations is known to be an expensive processing task when physically-based global illumination methods are used to improve illumination details. This paper presents an acceleration technique to compute animations in radiosity environments. The technique is based on an interpolation approach that exploits temporal coherence in radiosity. A fast global Monte Carlo pre-processing step is introduced into the computation of the animated sequence to select important frames. These are fully computed and used as a base for the interpolation of the whole sequence. The approach is completely view-independent: once the illumination is computed, it can be visualized by any animated camera. Results show significant speed-ups, indicating that the technique could be an interesting alternative to deterministic methods for computing non-interactive radiosity animations for moderately complex scenarios.
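The abstract does not spell out the interpolation itself; as a hedged sketch of the basic idea (our own minimal formulation, with hypothetical names), the following Python fragment linearly interpolates per-patch radiosity values between two fully computed key frames.

```python
import numpy as np

def interpolate_radiosity(key_a, key_b, t):
    """Linearly blend per-patch radiosity between two key frames.

    key_a, key_b: arrays with one radiosity value per patch at the two
    fully computed key frames; t in [0, 1] locates the in-between frame."""
    return (1.0 - t) * key_a + t * key_b

key0 = np.array([0.80, 0.20, 0.50])   # three patches at key frame 0
key10 = np.array([0.60, 0.40, 0.50])  # the same patches at key frame 10
frame3 = interpolate_radiosity(key0, key10, t=3 / 10)
```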

Relevance:

20.00%

Publisher:

Abstract:

A cultivation-independent approach based on polymerase chain reaction (PCR)-amplified partial small subunit rRNA genes was used to characterize bacterial populations in the surface soil of a commercial pear orchard consisting of different pear cultivars during two consecutive growing seasons. Pyrus communis L. cvs Blanquilla, Conference, and Williams are among the most widely cultivated cultivars in Europe and account for the majority of pear production in Northeastern Spain. To assess the heterogeneity of the community structure in response to environmental variables and tree phenology, bacterial populations were examined using PCR-denaturing gradient gel electrophoresis (DGGE), followed by cluster analysis of the 16S ribosomal DNA profiles by means of the unweighted pair group method with arithmetic mean (UPGMA). Similarity analysis of the band patterns failed to identify characteristic fingerprints associated with the pear cultivars. Both environmentally and biologically based principal-component analyses showed that the microbial communities changed significantly throughout the year depending on temperature and, to a lesser extent, on tree phenology and rainfall. Prominent DGGE bands were excised and sequenced to gain insight into the identities of the predominant bacterial populations. Most DGGE band sequences were related to bacterial phyla, such as Bacteroidetes, Cyanobacteria, Acidobacteria, Proteobacteria, Nitrospirae, and Gemmatimonadetes, previously associated with typical agronomic crop environments.
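UPGMA is the same procedure as average-linkage hierarchical clustering, so the cluster-analysis step can be reproduced with standard tools. The following Python sketch uses hypothetical data (band patterns encoded as presence/absence vectors) and the Jaccard dissimilarity, which is one common choice rather than necessarily the one used in the study, to build a UPGMA tree with SciPy.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, dendrogram

# Rows = DGGE profiles (samples), columns = band positions (1 = present).
profiles = np.array([
    [1, 0, 1, 1, 0],
    [1, 0, 1, 0, 0],
    [0, 1, 1, 1, 1],
])

distances = pdist(profiles, metric="jaccard")  # pairwise dissimilarity
tree = linkage(distances, method="average")    # average linkage = UPGMA
layout = dendrogram(tree, no_plot=True)        # tree structure, ready to plot
```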

Relevance:

20.00%

Publisher:

Abstract:

One of the first useful products from the human genome will be a set of predicted genes. Besides its intrinsic scientific interest, the accuracy and completeness of this data set is of considerable importance for human health and medicine. Though progress has been made on computational gene identification in terms of both methods and accuracy evaluation measures, most of the sequence sets on which the programs are tested are short genomic sequences, and there is concern that these accuracy measures may not extrapolate well to larger, more challenging data sets. Given the absence of experimentally verified large genomic data sets, we constructed a semiartificial test set comprising a number of short single-gene genomic sequences with randomly generated intergenic regions. This test set, which should still present an easier problem than real human genomic sequence, mimics the approximately 200 kb long BACs being sequenced. In our experiments with these longer genomic sequences, the accuracy of GENSCAN, one of the most accurate ab initio gene prediction programs, dropped significantly, although its sensitivity remained high. Conversely, the accuracy of similarity-based programs, such as GENEWISE, PROCRUSTES, and BLASTX, was not affected significantly by the presence of random intergenic sequence, but depended on the strength of the similarity to the protein homolog. As expected, the accuracy dropped if the models were built using more distant homologs, and we were able to quantitatively estimate this decline. However, the specificities of these techniques are still rather good even when the similarity is weak, which is a desirable characteristic for driving expensive follow-up experiments. Our experiments suggest that though gene prediction will improve with every new protein that is discovered and through improvements in the current set of tools, we still have a long way to go before we can decipher the precise exonic structure of every gene in the human genome using purely computational methodology.
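As an illustration of how such a semiartificial test set can be assembled (the exact base composition and spacer lengths are not given in the abstract, so the values below are assumptions), the following Python sketch concatenates single-gene genomic sequences separated and flanked by randomly generated intergenic DNA.

```python
import random

def random_intergenic(length, gc=0.41):
    """Random intergenic spacer with a roughly human-like GC content."""
    weights = [(1 - gc) / 2, gc / 2, gc / 2, (1 - gc) / 2]  # A, C, G, T
    return "".join(random.choices("ACGT", weights=weights, k=length))

def build_test_sequence(gene_seqs, spacer_len=20_000):
    """Join single-gene sequences with random spacers, mimicking a long BAC."""
    parts = [random_intergenic(spacer_len)]
    for gene in gene_seqs:
        parts.append(gene)
        parts.append(random_intergenic(spacer_len))
    return "".join(parts)
```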

Relevance:

20.00%

Publisher:

Abstract:

The goals of the human genome project did not include sequencing of the heterochromatic regions. We describe here an initial sequence of 1.1 Mb of the short arm of human chromosome 21 (HSA21p), estimated to be 10% of 21p. This region contains extensive euchromatic-like sequence and includes on average one transcript every 100 kb. These transcripts show multiple inter- and intrachromosomal copies, and extensive copy number and sequence variability. The sequencing of the "heterochromatic" regions of the human genome is likely to reveal many additional functional elements and provide important evolutionary information.

Relevance:

20.00%

Publisher:

Abstract:

The construction of metagenomic libraries has permitted the study of microorganisms resistant to isolation, and the analysis of 16S rDNA sequences has been used for over two decades to examine bacterial biodiversity. Here, we show that the analysis of random sequence reads (RSRs) instead of 16S is a suitable shortcut to estimate the biodiversity of a bacterial community from metagenomic libraries. We generated 10,010 RSRs from a metagenomic library of microorganisms found in human faecal samples. We then searched them with the program BLASTN against a prokaryotic sequence database to assign a taxon to each RSR. The results were compared with those obtained by screening and analysing the clones containing 16S rDNA sequences in the whole library. We found that the biodiversity observed by RSR analysis is consistent with that obtained by 16S rDNA. We also show that RSRs are suitable for comparing the biodiversity between different metagenomic libraries. RSRs can thus provide a good estimate of the biodiversity of a metagenomic library and, as an alternative to 16S, this approach is both faster and cheaper.
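As a sketch of the taxon-assignment step (the identity threshold and the subject-ID format below are illustrative assumptions, not details from the paper), the following Python fragment tallies the best BLASTN hit per read from tabular output (-outfmt 6, whose first three columns are query ID, subject ID, and percent identity).

```python
from collections import Counter

def taxon_counts(blast_tabular_path, min_identity=90.0):
    """Count reads per taxon from BLASTN tabular output.

    BLAST lists hits for each query in decreasing score order, so the
    first qualifying line seen for a query is taken as its assignment."""
    best_hit = {}
    with open(blast_tabular_path) as handle:
        for line in handle:
            query, subject, identity = line.split("\t")[:3]
            if query not in best_hit and float(identity) >= min_identity:
                best_hit[query] = subject.split("|")[-1]  # taxon label assumed here
    return Counter(best_hit.values())
```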

Relevance:

20.00%

Publisher:

Abstract:

The vast majority of the biology of a newly sequenced genome is inferred from the set of encoded proteins. Predicting this set is therefore invariably the first step after the completion of the genome DNA sequence. Here we review the main computational pipelines used to generate the human reference protein-coding gene sets.

Relevance:

20.00%

Publisher:

Abstract:

Background: Despite the continuous production of genome sequence for a number of organisms, reliable, comprehensive, and cost-effective gene prediction remains problematic. This is particularly true for genomes for which there is not a large collection of known gene sequences, such as the recently published chicken genome. We used the chicken sequence to test comparative and homology-based gene-finding methods followed by experimental validation as an effective genome annotation method.

Results: We performed experimental evaluation by RT-PCR of three different computational gene finders, Ensembl, SGP2 and TWINSCAN, applied to the chicken genome. A Venn diagram was computed and each component of it was evaluated. The results showed that de novo comparative methods can identify up to about 700 chicken genes with no previous evidence of expression, and can correctly extend about 40% of homology-based predictions at the 5' end.

Conclusions: De novo comparative gene prediction followed by experimental verification is effective at enhancing the annotation of newly sequenced genomes provided by standard homology-based methods.
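A minimal Python sketch of the Venn-diagram partition used in the evaluation (assuming predictions have already been reduced to comparable identifiers, e.g. by collapsing overlapping genomic coordinates; the function and variable names are ours):

```python
def venn_components(ensembl, sgp2, twinscan):
    """Partition predictions into the seven regions of a three-set Venn
    diagram, keyed by the tuple of gene finders that agree on each one."""
    sets = {"Ensembl": set(ensembl), "SGP2": set(sgp2), "TWINSCAN": set(twinscan)}
    components = {}
    for prediction in set.union(*sets.values()):
        key = tuple(sorted(name for name, s in sets.items() if prediction in s))
        components.setdefault(key, []).append(prediction)
    return components  # e.g. components[("SGP2", "TWINSCAN")] = de novo only
```

Each component of the partition can then be sampled for RT-PCR verification, which is how the abstract describes the evaluation proceeding.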

Relevance:

20.00%

Publisher:

Abstract:

Sequential randomized prediction of an arbitrary binary sequence is investigated. No assumption is made on the mechanism of generating the bit sequence. The goal of the predictor is to minimize its relative loss, i.e., to make (almost) as few mistakes as the best "expert" in a fixed, possibly infinite, set of experts. We point out a surprising connection between this prediction problem and empirical process theory. First, in the special case of static (memoryless) experts, we completely characterize the minimax relative loss in terms of the maximum of an associated Rademacher process. Then we show general upper and lower bounds on the minimax relative loss in terms of the geometry of the class of experts. As main examples, we determine the exact order of magnitude of the minimax relative loss for the class of autoregressive linear predictors and for the class of Markov experts.
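In the notation standard for this literature (the abstract gives no formulas, so the symbols below are our assumptions), the relative loss and the Rademacher quantity that characterizes it for static experts can be written as:

```latex
% Relative loss (regret) of a randomized predictor \hat{y}_t on bits y_t,
% measured against the best expert in a class \mathcal{E}:
R_n \;=\; \sum_{t=1}^{n} \mathbf{1}\{\hat{y}_t \neq y_t\}
      \;-\; \min_{E \in \mathcal{E}} \sum_{t=1}^{n} \mathbf{1}\{E_t \neq y_t\}.

% For static (memoryless) experts E = (E_1, \dots, E_n), the minimax
% expected relative loss is characterized (up to a normalization we do
% not state here) by the maximum of the associated Rademacher process,
% with \sigma_1, \dots, \sigma_n i.i.d. uniform on \{-1, +1\}:
\mathbb{E}\Big[\sup_{E \in \mathcal{E}} \sum_{t=1}^{n} \sigma_t E_t\Big].
```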

Relevance:

20.00%

Publisher:

Abstract:

This paper explores the integration process that firms follow to implement Supply Chain Management (SCM) and the main barriers and benefits related to this strategy. This study has been inspired by the SCM literature, especially by the logistics integration model of Stevens [1]. Due to the exploratory nature of this paper and the need to obtain an in-depth knowledge of SCM development in the Spanish grocery sector, we used the case study methodology. A multiple case study analysis based on interviews with leading manufacturers and retailers was conducted. The results of this analysis suggest that firms seem to follow the integration process proposed by Stevens, integrating internally first and then extending this integration to other supply chain members. The cases also show that Spanish manufacturers, in general, seem to have a higher level of SCM development than Spanish retailers. Regarding the benefits that SCM can bring, most of the companies identify the general objectives of cost and stock reductions and service improvements. However, with respect to the barriers found in its implementation, retailers and manufacturers do not coincide: manufacturers tend to see barriers in aspects related to the other party, such as distrust and the lack of a culture of sharing information, while retailers identify as the main barriers the need for know-how, the company culture, and established history and habits.

Relevance:

20.00%

Publisher:

Abstract:

In today's highly competitive and global marketplace, the pressure on organizations to find new ways to create and deliver value to customers grows ever stronger. In the last two decades, logistics and the supply chain have moved to center stage. There has been a growing recognition that it is through effective management of the logistics function and the supply chain that the goals of cost reduction and service enhancement can be achieved. The key to success in Supply Chain Management (SCM) requires heavy emphasis on the integration of activities, cooperation, coordination, and information sharing throughout the entire supply chain, from suppliers to customers. To respond to the challenge of integration, sophisticated decision support systems are needed, based on powerful mathematical models and solution techniques, together with advances in information and communication technologies. Industry and academia have become increasingly interested in SCM as a way to respond to the problems and issues posed by the changes in logistics and the supply chain. We present a brief discussion of the important issues in SCM. We then argue that metaheuristics can play an important role in solving the complex supply chain problems that derive from designing and managing the entire supply chain as a single entity. We will focus especially on Iterated Local Search, Tabu Search, and Scatter Search as methods with, though not exclusively, great potential for solving SCM-related problems, and we briefly present some successful applications.
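Of the metaheuristics mentioned, Iterated Local Search has the simplest structure; the following generic Python skeleton (ours, not tied to any particular SCM formulation in the paper) shows the perturb/re-optimize/accept loop into which problem-specific neighborhoods and cost functions are plugged.

```python
def iterated_local_search(initial, local_search, perturb, cost, iterations=100):
    """Generic Iterated Local Search: repeatedly perturb the incumbent
    local optimum, re-run local search, and keep the better solution."""
    best = local_search(initial)
    for _ in range(iterations):
        candidate = local_search(perturb(best))
        if cost(candidate) < cost(best):
            best = candidate
    return best
```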

Relevance:

20.00%

Publisher:

Abstract:

This paper analyses the interaction of two topics: Supply Chain Management (SCM) and the Internet. Merging these two fields is a key area of concern for contemporary managers and researchers. They have realized that the Internet can enhance SCM by making real-time information available and enabling collaboration between trading partners. The aim of this paper is to define e-SCM, analyze how research in this area evolved during the period 1995-2003, and identify some lines of further research. To do so, a literature review of prestigious academic journals in Operations Management and Logistics has been conducted.

Relevance:

20.00%

Publisher:

Abstract:

Adversarial relationships have long dominated business relationships, but Supply Chain Management (SCM) entails a new perspective. SCM requires a movement away from arm's-length relationships toward partnership-style relations. SCM involves integration, co-ordination and collaboration across organisations and throughout the supply chain. This means that SCM requires internal (intraorganisational) and external (interorganisational) integration. This paper analyses the relationship between internal and external integration processes, their effect on firms' performance, and their contribution to the achievement of a competitive advantage. Performance improvements are analysed through cost, stock-out, and lead time reductions, and the achievement of a better competitive position is measured by comparing the firm's performance with its competitors' performance. To analyse this, an empirical study has been conducted in the Spanish grocery sector.

Relevance:

20.00%

Publisher:

Abstract:

We consider adaptive sequential lossy coding of bounded individual sequences when the performance is measured by the sequentially accumulated mean squared distortion. The encoder and the decoder are connected via a noiseless channel of capacity $R$, and both are assumed to have zero delay. No probabilistic assumptions are made on how the sequence to be encoded is generated. For any bounded sequence of length $n$, the distortion redundancy is defined as the normalized cumulative distortion of the sequential scheme minus the normalized cumulative distortion of the best scalar quantizer of rate $R$ which is matched to this particular sequence. We demonstrate the existence of a zero-delay sequential scheme which uses common randomization in the encoder and the decoder such that the normalized maximum distortion redundancy converges to zero at a rate $n^{-1/5}\log n$ as the length of the encoded sequence $n$ increases without bound.
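In symbols (the notation is ours; the abstract defines these quantities only in words), the distortion redundancy and the stated convergence rate read:

```latex
% Distortion redundancy of a zero-delay scheme producing reconstructions
% \hat{x}_t, relative to the best scalar quantizer Q of rate R matched to
% the particular bounded sequence x_1, \dots, x_n:
D_n \;=\; \frac{1}{n}\sum_{t=1}^{n}\big(x_t-\hat{x}_t\big)^2
      \;-\; \min_{Q \in \mathcal{Q}_R}\frac{1}{n}\sum_{t=1}^{n}\big(x_t-Q(x_t)\big)^2.

% The randomized scheme achieves, uniformly over bounded sequences,
\max_{x_1^n}\; \mathbb{E}\big[D_n\big] \;=\; O\!\big(n^{-1/5}\log n\big).
```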