823 results for Pipeline Failure
Abstract:
Virtual-build-to-order (VBTO) is a form of order fulfilment system in which the producer has the ability to search across the entire pipeline of finished stock, products in production and those in the production plan, in order to find the best product for a customer. It is a system design that is attractive to Mass Customizers, such as those in the automotive sector, whose manufacturing lead time exceeds their customers' tolerable waiting times, and for whom the holding of partly-finished stocks at a fixed decoupling point is unattractive or unworkable. This paper describes and develops the operational concepts that underpin VBTO, in particular the concepts of reconfiguration flexibility and customer aversion to waiting. Reconfiguration is the process of changing a product's specification at any point along the order fulfilment pipeline. The extent to which an order fulfilment system is flexible or inflexible reveals itself in the reconfiguration cost curve, of which there are four basic types. The operational features of the generic VBTO system are described and simulation is used to study its behaviour and performance. The concepts of reconfiguration flexibility and floating decoupling point are introduced and discussed.
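Concretely, the pipeline search described in the first sentence reduces to a cost minimisation over every unsold unit. A minimal sketch, assuming purely illustrative costs and a hypothetical linear reconfiguration cost curve (the paper identifies four basic curve types, which are not reproduced here):

```python
# Hypothetical sketch of a VBTO fulfilment decision; the costs and the linear
# reconfiguration cost curve are illustrative assumptions, not the paper's data.

def reconfiguration_cost(stage, n_stages, base=1.0):
    """Assumed linear cost curve: changing a product's specification gets
    more expensive the further it has progressed along the pipeline."""
    return base * stage / n_stages

def fulfil(spec, pipeline, bto_cost=2.0):
    """pipeline: list of (stage, spec) pairs for unsold units; stage 0 = plan,
    higher stages = closer to finished stock. Returns the cheapest option."""
    options = [("BTO", bto_cost)]
    for stage, unit_spec in pipeline:
        if unit_spec == spec:
            options.append(("allocate", 0.0))     # exact match, no rework
        else:
            options.append(("reconfigure",
                            reconfiguration_cost(stage, n_stages=10)))
    return min(options, key=lambda o: o[1])

print(fulfil("red", [(9, "blue"), (2, "blue"), (5, "red")]))  # ('allocate', 0.0)
```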
Abstract:
Virtual-Build-to-Order (VBTO) is an emerging order fulfilment system within the automotive sector that is intended to improve fulfilment performance by taking advantage of integrated information systems. The primary innovation in VBTO systems is the ability to make all unsold products in the production pipeline available to all customers. In a conventional system the pipeline is inaccessible and a customer can be fulfilled by a product from stock or by having a product Built-to-Order (BTO), whereas in a VBTO system a customer can be fulfilled by a product from stock, by being allocated a product in the pipeline, or by a build-to-order product. Simulation is used to investigate and profile the fundamental behaviour of the basic VBTO system and to compare it to a conventional system. A predictive relationship is identified between the proportions of customers fulfilled through each mechanism and the ratio of product variety to pipeline length. The simulations reveal that a VBTO system exhibits inherent behaviour that alters the stock mix and levels, leading to stock levels that are higher than in an equivalent conventional system at certain variety/pipeline ratios. The results have implications for the design and management of order fulfilment systems in sectors, such as automotive, where VBTO is a viable operational model.
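The fulfilment-mix profiling described here can be approximated with a toy Monte Carlo model. The sketch below rests on simplifying assumptions that are not the paper's model (uniform random demand, a FIFO pipeline, one forecast-driven launch per period, BTO built off-line), but it shows how the split between stock, pipeline allocation and BTO shifts with the variety/pipeline-length ratio:

```python
import random
from collections import deque, Counter

def simulate(variety, pipeline_len, customers=20_000, seed=1):
    """Toy VBTO profile under assumed rules, not the paper's model."""
    rng = random.Random(seed)
    # pipeline entries: [spec, sold?]; index 0 is the newest launch
    pipeline = deque([rng.randrange(variety), False]
                     for _ in range(pipeline_len))
    stock, tally = Counter(), Counter()
    for _ in range(customers):
        want = rng.randrange(variety)
        if stock[want] > 0:
            stock[want] -= 1; tally["stock"] += 1
        else:
            unsold = next((u for u in pipeline
                           if u[0] == want and not u[1]), None)
            if unsold is not None:
                unsold[1] = True          # allocate a unit in the pipeline
                tally["pipeline"] += 1
            else:
                tally["BTO"] += 1         # assumed built outside the pipeline
        spec, sold = pipeline.pop()       # one unit completes per period
        if not sold:
            stock[spec] += 1              # unsold completions become stock
        pipeline.appendleft([rng.randrange(variety), False])
    return {k: round(v / customers, 3) for k, v in tally.items()}

for v in (5, 20, 80):                     # sweep the variety/pipeline ratio
    print(f"V/P = {v / 20:>4}:", simulate(variety=v, pipeline_len=20))
```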
Abstract:
Background: Expressed Sequence Tags (ESTs) are generally used to gain a first insight into gene activities in a species of interest. Subsequently, and typically based on a combination of EST and genome sequences, microarray-based expression analyses are performed for a variety of conditions. In some cases, a multitude of EST and microarray experiments are conducted for one species, covering different tissues, cell states, and cell types. Under these circumstances, the challenge arises to combine results derived from the different expression profiling strategies, with the goal of uncovering novel information on the basis of the integrated datasets. Findings: Using our new analysis tool, MediPlEx (MEDIcago truncatula multiPLe EXpression analysis), expression data from EST experiments, oligonucleotide microarrays and Affymetrix GeneChips® can be combined and analyzed, leading to a novel approach to integrated transcriptome analysis. We have validated our tool via the identification of a set of well-characterized arbuscular mycorrhiza (AM)-specific and AM-induced marker genes, identified by MediPlEx on the basis of in silico and experimental gene expression profiles from roots colonized with AM fungi. Conclusions: MediPlEx offers an integrated analysis pipeline for different sets of expression data generated for the model legume Medicago truncatula. As expected, in silico and experimental gene expression data that cover the same biological condition correlate well. The collection of differentially expressed genes identified via MediPlEx provides a starting point for functional studies in plant mutants.
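The integration step can be pictured as testing, gene by gene, whether the in silico (EST-derived) and experimental (microarray) expression profiles agree. A hedged sketch with made-up profiles; MediPlEx's actual data formats and interfaces are not reproduced here:

```python
from scipy.stats import spearmanr

# Illustrative only: gene -> expression vector over the same conditions,
# one derived from EST counts in silico, one from microarray signal.
est_profiles = {"MtPT4": [0, 1, 12, 15], "MtGluS": [5, 5, 6, 5]}
chip_profiles = {"MtPT4": [0.1, 0.4, 7.9, 9.2], "MtGluS": [3.1, 3.0, 3.3, 2.9]}

def concordant_genes(est, chip, rho_min=0.9):
    """Keep genes whose in silico and experimental profiles agree."""
    hits = []
    for gene in est.keys() & chip.keys():
        rho, _ = spearmanr(est[gene], chip[gene])
        if rho >= rho_min:
            hits.append((gene, round(rho, 2)))
    return hits

print(concordant_genes(est_profiles, chip_profiles))  # MtPT4 rises in both
```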
Abstract:
This thesis proposes a generic visual perception architecture for robotic clothes perception and manipulation. The proposed architecture is fully integrated with a stereo vision system and a dual-arm robot and is able to perform a number of autonomous laundering tasks. Clothes perception and manipulation is a novel research topic in robotics that has experienced rapid development in recent years. Compared to the task of perceiving and manipulating rigid objects, clothes perception and manipulation poses a greater challenge for two reasons: firstly, deformable clothing requires precise (high-acuity) visual perception and dexterous manipulation; secondly, as clothing approximates a non-rigid 2-manifold in 3-space that can adopt a quasi-infinite configuration space, the potential variability in the appearance of clothing items makes them difficult for a machine to understand, identify uniquely, and interact with. From an applications perspective, and as part of the EU CloPeMa project, the integrated visual perception architecture refines a pre-existing clothing manipulation pipeline by completing pre-wash clothes (category) sorting (using single-shot or interactive perception for garment categorisation and manipulation) and post-wash dual-arm flattening. To the best of the author's knowledge, the autonomous clothing perception and manipulation solutions presented here were first proposed and reported by the author. All of the reported robot demonstrations in this work follow a perception-manipulation methodology where visual and tactile feedback (in the form of surface wrinkledness captured by the high-accuracy depth sensor, i.e. the CloPeMa stereo head, or the predictive confidence modelled by Gaussian Processes) serve as the halting criteria in the flattening and sorting tasks, respectively. From a scientific perspective, the proposed visual perception architecture addresses the above challenges by parsing and grouping 3D clothing configurations hierarchically from low-level curvatures, through mid-level surface shape representations (providing topological descriptions and 3D texture representations), to high-level semantic structures and statistical descriptions. A range of visual features such as Shape Index, Surface Topologies Analysis and Local Binary Patterns have been adapted within this work to parse clothing surfaces and textures, and several novel features have been devised, including B-Spline Patches with Locality-Constrained Linear coding, and Topology Spatial Distance to describe and quantify generic landmarks (wrinkles and folds). The essence of the proposed architecture is 3D generic surface parsing and interpretation, which is critical to underpinning a number of laundering tasks and has the potential to be extended to other rigid and non-rigid object perception and manipulation tasks.
The experimental results presented in this thesis demonstrate that: firstly, the proposed grasping approach achieves 84.7% accuracy on average; secondly, the proposed flattening approach is able to flatten towels, t-shirts and pants (shorts) within 9 iterations on average; thirdly, the proposed clothes recognition pipeline can recognise clothes categories from highly wrinkled configurations and advances the state of the art by 36% in terms of classification accuracy, achieving an 83.2% true-positive classification rate when discriminating between five categories of clothes; finally, the Gaussian Process based interactive perception approach exhibits a substantial improvement over single-shot perception. Accordingly, this thesis has advanced the state of the art of robot clothes perception and manipulation.
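Of the features listed above, the Shape Index is a standard differential-geometry descriptor with a closed form; a minimal sketch of computing it from the two principal curvatures of a depth map (the thesis's full surface-parsing hierarchy is not reproduced here):

```python
import numpy as np

def shape_index(k1, k2):
    """Koenderink's shape index from principal curvatures (k1 >= k2):
    -1 = cup, -0.5 = rut, 0 = saddle, +0.5 = ridge, +1 = cap.
    The sign convention depends on the surface normal orientation."""
    k1, k2 = np.maximum(k1, k2), np.minimum(k1, k2)
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

# A wrinkle on fabric is locally ridge-like: k1 > 0 along the crest, k2 ~ 0.
print(shape_index(np.array([0.5]), np.array([0.0])))   # ~ [0.5]
```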
Abstract:
The primary goal of systems biology is to integrate complex omics data and data obtained from traditional experimental studies in order to provide a holistic understanding of organismal function. One way of achieving this aim is to generate genome-scale metabolic models (GEMs), which contain information on all metabolites, enzyme-coding genes, and biochemical reactions in a biological system. A Drosophila melanogaster GEM has not been reconstructed to date. A constraint-free genome-wide metabolic model of the fruit fly has been reconstructed in our lab, identifying gaps where no enzyme was identified and metabolites were either only produced or only consumed. The main focus of the work presented in this thesis was to develop a pipeline for efficient gap filling using metabolomics approaches combined with standard reverse genetics methods, using 5-hydroxyisourate hydrolase (5-HIUH) as an example. 5-HIUH plays a role in the urate degradation pathway. Inability to degrade urate can lead to inborn errors of metabolism (IEMs) in humans, including hyperuricemia. Based on sequence analysis, the Drosophila CG30016 gene was hypothesised to encode 5-HIUH. CG30016 knockout flies were examined to identify a Malpighian tubule phenotype; their shortened lifespan might reflect the kidney disorders seen in human hyperuricemia. Moreover, LC-MS analysis of mutant tubules revealed that CG30016 is involved in purine metabolism, and specifically the urate degradation pathway. However, the exact role of the gene has not been identified, and the complete method for gap filling has not been developed. Nevertheless, thanks to the work presented here, we are a step closer to the development of a gap-filling pipeline for the Drosophila melanogaster GEM. Importantly, the areas that require further optimisation were identified and are the focus of future research. Moreover, LC-MS analysis confirmed that tubules, rather than the whole fly, were more suitable for metabolomics analysis of purine metabolism. Previously, the Dow/Davies lab generated the most complete tissue-specific transcriptomic atlas for Drosophila – FlyAtlas.org – which provides data on gene expression across multiple tissues of the adult fly and larva. FlyAtlas revealed that transcripts of many genes are enriched in specific Drosophila tissues, and that it is possible to deduce the functions of individual tissues within the fly. Based on FlyAtlas data, it has become clear that the fly (like other metazoan species) must be considered as a set of tissues, each with its own distinct transcriptional and functional profile. Moreover, it revealed that for about 30% of the genome, reverse genetic methods (i.e. mutation of an unknown gene followed by observation of phenotype) are only useful if specific tissues are investigated. Based on the FlyAtlas findings, we aimed to build a primary tissue-specific metabolome of the fruit fly, in order to establish whether different Drosophila tissues have different metabolomes and whether they correspond to the tissue-specific transcriptome of the fruit fly (FlyAtlas.org). Different fly tissues were dissected and their metabolomes elucidated using LC-MS. The results confirmed that tissue metabolomes differ significantly from each other and from the whole fly, and that some of these differences can be correlated with tissue function. The results illustrate the need to study individual tissues as well as the whole organism. It is clear that some metabolites that play an important role in a given tissue might not be detected in a whole-fly sample because their abundance is much lower than that of other metabolites present in all tissues, which prevents detection of the tissue-specific compound.
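The gaps described above, metabolites that are only produced or only consumed, can be found mechanically from a model's reaction list. A minimal sketch over a toy fragment of the urate degradation pathway (stoichiometry simplified; not the lab's actual model):

```python
from collections import defaultdict

# Toy network sketch; each reaction is (substrates, products).
reactions = {
    "R1": ({"urate"}, {"5-hydroxyisourate"}),
    "R2": ({"5-hydroxyisourate"}, {"OHCU"}),     # candidate 5-HIUH step
    "R3": ({"OHCU"}, {"allantoin"}),
}

def dead_end_metabolites(reactions):
    """Gaps: metabolites only produced or only consumed across the model."""
    consumed, produced = defaultdict(int), defaultdict(int)
    for subs, prods in reactions.values():
        for m in subs:
            consumed[m] += 1
        for m in prods:
            produced[m] += 1
    mets = set(consumed) | set(produced)
    return sorted(m for m in mets if not (consumed[m] and produced[m]))

print(dead_end_metabolites(reactions))  # ['allantoin', 'urate']
```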
Abstract:
This thesis investigates how web search evaluation can be improved using historical interaction data. Modern search engines combine offline and online evaluation approaches in a sequence of steps that a tested change needs to pass through to be accepted as an improvement and subsequently deployed. We refer to such a sequence of steps as an evaluation pipeline. In this thesis, we consider the evaluation pipeline to contain three sequential steps: an offline evaluation step, an online evaluation scheduling step, and an online evaluation step. We show that historical user interaction data can aid in improving the accuracy or efficiency of each step of the web search evaluation pipeline and, as a result, the overall efficiency of the entire pipeline. Firstly, we investigate how user interaction data can be used to build accurate offline evaluation methods for query auto-completion mechanisms. We propose a family of offline evaluation metrics for query auto-completion that represent the effort the user has to spend in order to submit their query. The parameters of our proposed metrics are trained against a set of user interactions recorded in the search engine's query logs. From our experimental study, we observe that our proposed metrics are significantly more correlated with an online user satisfaction indicator than the metrics proposed in the existing literature. Hence, fewer changes will pass the offline evaluation step only to be rejected after the online evaluation step, which allows a higher efficiency of the entire evaluation pipeline to be achieved. Secondly, we state the problem of the optimised scheduling of online experiments. We tackle this problem by considering a greedy scheduler that prioritises the evaluation queue according to the predicted likelihood of success of a particular experiment. This predictor is trained on a set of online experiments and uses a diverse set of features to represent an online experiment. Our study demonstrates that a higher number of successful experiments per unit of time can be achieved by deploying such a scheduler at the second step of the evaluation pipeline. Consequently, we argue that the efficiency of the evaluation pipeline can be increased. Next, to improve the efficiency of the online evaluation step, we propose the Generalised Team Draft interleaving framework. Generalised Team Draft considers both the interleaving policy (how often a particular combination of results is shown) and click scoring (how important each click is) as parameters in a data-driven optimisation of the interleaving sensitivity. Further, Generalised Team Draft is applicable beyond domains with a list-based representation of results, e.g. in domains with a grid-based representation, such as image search. Our study, using datasets of interleaving experiments performed in both document and image search domains, demonstrates that Generalised Team Draft achieves the highest sensitivity. A higher sensitivity indicates that the interleaving experiments can be deployed for a shorter period of time or use a smaller sample of users. Importantly, Generalised Team Draft optimises the interleaving parameters w.r.t. historical interaction data recorded in the interleaving experiments. Finally, we propose applying sequential testing methods to reduce the mean deployment time of interleaving experiments. We adapt two sequential tests for interleaving experimentation. We demonstrate that one can achieve a significant decrease in experiment duration by using such sequential testing methods. The highest efficiency is achieved by the sequential tests that adjust their stopping thresholds using historical interaction data recorded in diagnostic experiments. A further experimental study demonstrates that cumulative gains in online experimentation efficiency can be achieved by combining the interleaving sensitivity optimisation approaches, including Generalised Team Draft, with the sequential testing approaches. Overall, the central contributions of this thesis are the proposed approaches for improving the accuracy or efficiency of the steps of the evaluation pipeline: offline evaluation frameworks for query auto-completion, an approach for the optimised scheduling of online experiments, a general framework for efficient online interleaving evaluation, and a sequential testing approach for online search evaluation. The experiments in this thesis are based on massive real-life datasets obtained from Yandex, a leading commercial search engine, and demonstrate the potential of the proposed approaches to improve the efficiency of the evaluation pipeline.
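For reference, the classic Team Draft scheme that Generalised Team Draft builds on can be sketched in a few lines; the thesis's learned interleaving policies and click-scoring weights are not reproduced here:

```python
import random

def team_draft(ranking_a, ranking_b, depth=6, rng=random.Random(0)):
    """Classic Team Draft: the two rankers alternately pick their best
    unpicked result, in a randomised order per pair, remembering their team."""
    interleaved, teams = [], {}
    while len(interleaved) < depth:
        for ranker, ranking in rng.sample([("A", ranking_a), ("B", ranking_b)], 2):
            doc = next(d for d in ranking if d not in teams)
            teams[doc] = ranker
            interleaved.append(doc)
            if len(interleaved) == depth:
                break
    return interleaved, teams

def score(clicked_docs, teams):
    """Credit one point per click to the team that contributed the result."""
    credit = {"A": 0, "B": 0}
    for d in clicked_docs:
        credit[teams[d]] += 1
    return credit

shown, teams = team_draft(list("abcdef"), list("bcadef"))
print(shown, score(["b", "c"], teams))
```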
Abstract:
Faculty of Biology: Institute of Molecular Biology and Biotechnology
Abstract:
Optimisation of real-world Variable Data Printing (VDP) documents is a difficult problem because the interdependencies between layout functions may drastically reduce the number of invariant blocks that can be factored out for pre-rasterisation. This paper examines how speculative evaluation at an early stage in a document-preparation pipeline provides a generic and effective method of optimising VDP documents that contain such interdependencies. Speculative evaluation will be at its most effective in speeding up print runs if sets of layout invariances can either be discovered automatically or designed into the document at an early stage. In either case the expertise of the layout designer needs to be supplemented by expertise in exploiting potential invariances and in predicting the effects of speculative evaluation on the caches used at various stages in the print production pipeline.
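The interplay between factored-out invariant blocks and caching can be illustrated with a toy memoised rasteriser (all names are illustrative, not those of a real print pipeline): blocks that resolve to the same content for every record are rendered once, which is exactly the saving that interdependent layout functions threaten and speculative evaluation tries to recover:

```python
import hashlib
from functools import lru_cache

@lru_cache(maxsize=1024)
def rasterise(block: str) -> bytes:
    """Stand-in for an expensive RIP step; memoised on resolved content,
    so blocks that are invariant across records are rendered only once."""
    return hashlib.sha256(block.encode()).digest()      # fake raster output

def render(blocks, record):
    # Each block is a layout function of the record; invariant blocks
    # resolve to the same string for every record and hit the cache.
    return [rasterise(b(record)) for b in blocks]

blocks = [lambda r: "Company letterhead",               # invariant
          lambda r: f"Dear {r['name']},",               # variable
          lambda r: "Standard terms and conditions."]   # invariant

for rec in ({"name": "Jones"}, {"name": "Singh"}, {"name": "Wu"}):
    render(blocks, rec)
print(rasterise.cache_info())   # invariant blocks hit the cache on later records
```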
Abstract:
Providing high levels of product variety and product customization is challenging for many companies. This paper presents a new classification of the production and order fulfillment approaches available to manufacturing companies that offer high variety and/or product customization. Six categories of approaches are identified and described. An important emerging approach for high-variety manufacturing environments, open pipeline planning, is highlighted. It allows a customer order to be fulfilled from anywhere in the system, enabling greater responsiveness in Build-to-Forecast systems. The links between the open pipeline approach, decoupling concepts and postponement strategies are discussed, and the relevance of the approach to the volume automotive sector is highlighted. Results from a simulation study are presented, illustrating the potential benefits when products can be reconfigured in an open pipeline system. The application of open pipeline concepts to different manufacturing domains is discussed and the operating characteristics of most relevance are highlighted. In addition to the automotive sector, sectors such as machinery and instrumentation, computer servers, telecommunications and electronic equipment may benefit from an open pipeline planning approach. When properly designed, these systems can significantly enhance order fulfillment performance.
Abstract:
Master's dissertation—Universidade de Brasília, Faculdade UnB Gama, Faculdade de Tecnologia, Graduate Program in Engineering Materials Integrity, 2016.
Dinoflagellate Genomic Organization and Phylogenetic Marker Discovery Utilizing Deep Sequencing Data
Abstract:
Dinoflagellates possess large genomes in which most genes are present in many copies. This has made studies of their genomic organization and phylogenetics challenging. Recent advances in sequencing technology have made deep sequencing of dinoflagellate transcriptomes feasible. This dissertation investigates the genomic organization of dinoflagellates to better understand the challenges of assembling dinoflagellate transcriptomic and genomic data from short-read sequencing methods, and develops new techniques that utilize deep sequencing data to identify orthologous genes across a diverse set of taxa. To better understand the genomic organization of dinoflagellates, a genomic cosmid clone of the tandemly repeated gene Alcohol Dehydrogenase (AHD) was sequenced and analyzed. The organization of this clone was found to be counter to prevailing hypotheses of genomic organization in dinoflagellates. Further, a new non-canonical splicing motif was described that could greatly improve the automated modeling and annotation of genomic data. A custom phylogenetic marker discovery pipeline, incorporating methods that leverage the statistical power of large data sets, was written. A case study on Stramenopiles was undertaken to test its utility in resolving relationships between known groups as well as the phylogenetic affinity of seven unknown taxa. The pipeline generated a set of 373 genes useful as phylogenetic markers that successfully resolved relationships among the major groups of Stramenopiles and placed all unknown taxa on the tree with strong bootstrap support. This pipeline was then used to discover 668 genes useful as phylogenetic markers in dinoflagellates. Phylogenetic analysis of 58 dinoflagellates, using this set of markers, produced a phylogeny with good support on all branches. The Suessiales were found to be sister to the Peridiniales. The Prorocentrales formed a monophyletic group with the Dinophysiales that was sister to the Gonyaulacales. The Gymnodiniales were found to be paraphyletic, forming three monophyletic groups. While this pipeline was used to find phylogenetic markers, it will likely also be useful for finding orthologs of interest for other purposes, for the discovery of horizontally transferred genes, and for the separation of sequences in metagenomic data sets.
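The marker-discovery step can be pictured as filtering ortholog clusters for single-copy presence across enough taxa. A simplified stand-in for the dissertation's pipeline, with made-up cluster data:

```python
from collections import Counter

# Toy ortholog clusters: cluster id -> taxon of origin of each member gene.
# Real input would come from all-vs-all clustering of transcriptome assemblies.
clusters = {
    "OG1": ["taxonA", "taxonB", "taxonC", "taxonD"],
    "OG2": ["taxonA", "taxonA", "taxonB", "taxonC"],   # paralogs in taxonA
    "OG3": ["taxonA", "taxonB"],                        # too few taxa
}

def candidate_markers(clusters, min_taxa=3):
    """Keep clusters that are single-copy in every taxon and span enough
    taxa, the usual criteria for a phylogenetic marker gene."""
    good = []
    for og, taxa in clusters.items():
        counts = Counter(taxa)
        if len(counts) >= min_taxa and all(c == 1 for c in counts.values()):
            good.append(og)
    return good

print(candidate_markers(clusters))   # ['OG1']
```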
Abstract:
Reconfigurable platforms are a promising technology that offers an interesting trade-off between flexibility and performance, which many recent embedded system applications demand, especially in fields such as multimedia processing. These applications typically involve multiple ad-hoc tasks for hardware acceleration, which are usually represented using formalisms such as Data Flow Diagrams (DFDs), Data Flow Graphs (DFGs), Control and Data Flow Graphs (CDFGs) or Petri Nets. However, none of these models is able to capture at the same time the pipeline behavior between tasks (which can therefore coexist in order to minimize the application execution time), their communication patterns, and their data dependencies. This paper shows that knowledge of all this information can be effectively exploited to reduce the resource requirements and improve the timing performance of modern reconfigurable systems, where a set of hardware accelerators is used to support the computation. For this purpose, this paper proposes a novel task representation model, named Temporal Constrained Data Flow Diagram (TCDFD), which includes all this information. This paper also presents a mapping-scheduling algorithm that is able to take advantage of the new TCDFD model. It aims at minimizing the dynamic reconfiguration overhead while meeting the communication requirements among the tasks. Experimental results show that the presented approach achieves up to 75% resource savings and up to 89% reconfiguration overhead reduction with respect to other state-of-the-art techniques for reconfigurable platforms.
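A hedged sketch of the kind of information a TCDFD-style task record might carry (field names are guesses from the description above, not the paper's formalism), together with a toy cost model showing why exploiting it matters: reconfiguration overhead is paid whenever the fabric must load a different accelerator.

```python
from dataclasses import dataclass, field

# Field names are illustrative guesses based on the description of TCDFD,
# not the paper's actual formalism.
@dataclass
class Task:
    name: str
    accelerator: str                    # hardware module that runs the task
    area: int                           # reconfigurable-fabric slices needed
    deps: list = field(default_factory=list)            # data dependencies
    pipelined_with: list = field(default_factory=list)  # tasks it may overlap
    comm_bytes: dict = field(default_factory=dict)      # traffic per successor

def count_reconfigurations(schedule_order):
    """Toy cost model: a reconfiguration is paid whenever the accelerator
    needed by the next task is not the one currently loaded."""
    loaded, cost = None, 0
    for task in schedule_order:
        if task.accelerator != loaded:
            cost += 1
            loaded = task.accelerator
    return cost

dct1 = Task("dct_frame0", "DCT", 10)
dct2 = Task("dct_frame1", "DCT", 10, pipelined_with=["quant_frame0"])
quant = Task("quant_frame0", "QUANT", 8, deps=["dct_frame0"])

# Grouping DCT invocations back-to-back cuts the reconfiguration count:
print(count_reconfigurations([dct1, quant, dct2]))   # 3 reloads
print(count_reconfigurations([dct1, dct2, quant]))   # 2 reloads
```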
Abstract:
Gastric cancer (GC) and breast cancer (BrC) are two of the most common and deadly tumours. Different lines of evidence suggest a possible causative role of viral infections in both GC and BrC. Whole-genome sequencing (WGS) technologies allow searching for viral agents in the tissues of patients with cancer. These technologies have already contributed to establishing virus-cancer associations as well as to discovering new tumour viruses. The objective of this study was to document possible associations of viral infection with GC and BrC in Mexican patients. In order to gain an idea of cost-effective conditions for experimental sequencing, we first carried out an in silico simulation of WGS. The next-generation platform Illumina GAIIx was then used to sequence GC and BrC tumour samples. While we did not find viral sequences in tissues from BrC patients, multiple reads matching Epstein-Barr virus (EBV) sequences were found in GC tissues. End-point polymerase chain reaction confirmed an enrichment of EBV sequences in one of the GC samples sequenced, validating the next-generation sequencing bioinformatics pipeline.
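The detection step can be approximated by exact k-mer matching of reads against a viral reference. A minimal sketch with made-up sequences; a production pipeline would subtract human reads and use a full aligner:

```python
def kmers(seq, k=21):
    """All k-length substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

# Made-up reference fragment and reads; real pipelines align against the
# complete EBV genome (~172 kb) after subtracting human sequence.
ebv_ref = "ATTCGGGCATAGCTTCACGTAGGCATTCGGAACTGGTCA" * 3
ref_index = kmers(ebv_ref)

reads = [
    ebv_ref[10:45],                         # simulated EBV-derived read
    "TTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTT",  # non-matching read
]

def viral_hits(reads, ref_index, k=21, min_shared=5):
    """Flag reads sharing enough k-mers with the viral reference."""
    return [r for r in reads if len(kmers(r, k) & ref_index) >= min_shared]

print(len(viral_hits(reads, ref_index)))  # 1
```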
Abstract:
Localised cutaneous leishmaniasis (LCL) is the most common form of cutaneous leishmaniasis, characterised by single or multiple painless chronic ulcers that commonly present with secondary bacterial infection. Previous culture-based studies have found staphylococci, streptococci, and opportunistic pathogenic bacteria in LCL lesions, but there have been no comparisons to normal skin. In addition, this approach has a strong bias when determining bacterial composition. The present study tested the hypothesis that bacterial communities in LCL lesions differ from those found on healthy skin (HS). Using a high-throughput amplicon sequencing approach, which allows for better population-level evaluation due to greater depth of coverage, and the Quantitative Insights Into Microbial Ecology (QIIME) pipeline, we compared the microbiological signature of LCL lesions with that of contralateral HS from the same individuals. Streptococcus, Staphylococcus, Fusobacterium and other strict or facultative anaerobic bacteria composed the LCL microbiome. Aerobic and facultative anaerobic bacteria found in HS, including environmental bacteria, were significantly decreased in LCL lesions (p < 0.01). This paper presents the first comprehensive microbiome identification from LCL lesions using next-generation sequencing methodology and shows a marked reduction of bacterial diversity in the lesions.
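The reported collapse in diversity can be quantified with a standard alpha-diversity index; a sketch using invented genus counts, not the study's data:

```python
import math

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over taxon abundances."""
    total = sum(counts.values())
    return -sum((n / total) * math.log(n / total)
                for n in counts.values() if n > 0)

# Illustrative abundances only, not the study's data.
healthy_skin = {"Staphylococcus": 40, "Corynebacterium": 30,
                "Micrococcus": 15, "Pseudomonas": 10, "Bacillus": 5}
lcl_lesion = {"Streptococcus": 55, "Staphylococcus": 30, "Fusobacterium": 15}

print(f"HS  H' = {shannon(healthy_skin):.2f}")   # higher: richer, more even
print(f"LCL H' = {shannon(lcl_lesion):.2f}")     # lower: diversity collapses
```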
Abstract:
Fluid transport through pipes is widely used in the oil industry, and pipelines are an important link in the fluid logistics chain. However, pipeline walls deteriorate due to several factors, which may cause fluid loss to the environment, justifying investment in leak detection techniques and methods that minimise fluid loss and environmental damage. This work presents the development of a supervisory module intended to inform the operator of a leak in the monitored pipeline in the shortest possible time, so that the operator can initiate the procedures that bring the leak to an end. This module is a component of a system designed to detect leaks in oil pipelines using sonic technology, wavelets and neural networks. The plant used in the development and testing of the module was the LAMP tank system, with its LAN as the monitoring network. The proposal consists of two stages. The first assesses the performance of the communication infrastructure of the supervisory module. The second simulates leaks so that the DSP sends information to the supervisory module, which calculates the leak location and indicates which sensor the leak is closest to; using the LAMP tank system, pressure in the monitored pipeline is captured by piezoresistive sensors, processed by the DSP, and sent to the supervisory module for real-time presentation to the user.
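The leak-location calculation mentioned above typically reduces to a time-of-arrival difference: the pressure transient from a leak reaches the sensors at the two ends of the monitored stretch at different times. A minimal sketch under assumed values; in the system described, the arrival times would come from the wavelet analysis of the piezoresistive sensor signals:

```python
def leak_position(L, v, dt):
    """Distance of the leak from sensor A.
    L: sensor spacing (m); v: pressure-wave speed in the fluid (m/s);
    dt: arrival time at A minus arrival time at B (s).
    From t_A = x/v and t_B = (L - x)/v:  x = (L + v*dt) / 2."""
    return (L + v * dt) / 2

# Assumed values for illustration: 2 km sensor spacing, ~1200 m/s wave
# speed in oil, transient seen at A 0.5 s before B -> leak is nearer to A.
x = leak_position(L=2000.0, v=1200.0, dt=-0.5)
print(f"leak ~{x:.0f} m from sensor A")          # ~700 m
print("closest sensor:", "A" if x < 2000.0 / 2 else "B")
```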