42 results for Scheduling algorithms and analysis
Abstract:
Data mining is the process of identifying valid, implicit, previously unknown, potentially useful and understandable information in large databases. It is an important step in the process of knowledge discovery in databases (Olaru & Wehenkel, 1999). In a data mining process, the input data can be structured, semi-structured, or unstructured, and may consist of text, categorical, or numerical values. One of the important characteristics of data mining is its ability to deal with data that is large in volume, distributed, time-variant, noisy, and high-dimensional. A large number of data mining algorithms have been developed for different applications. For example, association rule mining is useful for market basket problems, clustering algorithms can discover trends in unsupervised learning problems, classification algorithms can be applied to decision-making problems, and sequential and time series mining algorithms can be used for predicting events, fault detection, and other supervised learning problems (Vapnik, 1999). Classification is among the most important tasks in data mining, particularly for data mining applications in engineering fields. Together with regression, classification is used mainly for predictive modelling. A number of classification algorithms are now in practical use. According to Sebastiani (2002), the main classification algorithms can be categorized as: decision tree and rule-based approaches such as C4.5 (Quinlan, 1996); probabilistic methods such as the Bayesian classifier (Lewis, 1998); on-line methods such as Winnow (Littlestone, 1988) and CVFDT (Hulten, 2001); neural network methods (Rumelhart, Hinton & Williams, 1986); example-based methods such as k-nearest neighbors (Duda & Hart, 1973); and SVM (Cortes & Vapnik, 1995). Other important techniques for classification tasks include Associative Classification (Liu et al., 1998) and Ensemble Classification (Tumer, 1996).
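Of the example-based methods the abstract mentions, k-nearest neighbors is the simplest to illustrate. The following is a minimal sketch, not any specific system from the cited works; the training points, the value of k, and the use of Euclidean distance are all illustrative assumptions.

```python
# Minimal k-nearest-neighbors classifier, an example-based method of the
# kind cited above (Duda & Hart, 1973). Data and k are invented for
# demonstration only.
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """train: list of (feature_vector, label) pairs; query: feature vector."""
    # Rank training points by Euclidean distance to the query.
    by_dist = sorted(train, key=lambda point: math.dist(point[0], query))
    # Majority vote among the labels of the k closest points.
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

train = [((1.0, 1.0), "A"), ((1.2, 0.9), "A"),
         ((5.0, 5.0), "B"), ((5.1, 4.8), "B")]
print(knn_classify(train, (1.1, 1.0)))  # → A
```

The classifier stores all training examples and defers computation to query time, which is why such methods are called "example-based" (or lazy) in contrast to eager learners like decision trees.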
Abstract:
This paper presents two case studies that suggest, in different but complementary ways, that the critical tool of frame analysis (Entman, 2002) has a place not only in the analytical environments of critical media research and media studies classes, where it is commonly found, but also in the media-production-oriented environments of skills-based journalism training and even the newsroom. The expectations and constraints of both of the latter environments, however, necessitate forms of frame analysis that are quick and simple. While commercial pressures mean newsrooms and skills-based journalism-training environments are likely to allow only an oversimplified approach to frame analysis, we argue that even a simple understanding and analysis at the production end could help to shift framing in ways that not only improve the quality and depth of Australasian newspapers' news coverage, but also increase reader satisfaction with media output.
Abstract:
Mammalian promoters can be separated into two classes: conserved TATA box-enriched promoters, which initiate at a well-defined site, and more plastic, broad and evolvable CpG-rich promoters. We have sequenced tags corresponding to several hundred thousand transcription start sites (TSSs) in the mouse and human genomes, allowing precise analysis of the sequence architecture and evolution of distinct promoter classes. Different tissues and families of genes differentially use distinct types of promoters. Our tagging methods allow quantitative analysis of promoter usage in different tissues and show that differentially regulated alternative TSSs are a common feature in protein-coding genes and commonly generate alternative N termini. Among the TSSs, we identified new start sites associated with the majority of exons and with 3' UTRs. These data permit genome-scale identification of tissue-specific promoters and analysis of the cis-acting elements associated with them.
Abstract:
Conotoxins are small conformationally constrained peptides found in the venom of marine snails of the genus Conus. They are usually cysteine-rich and frequently contain a high degree of post-translational modifications such as C-terminal amidation, hydroxylation, carboxylation, bromination, epimerisation and glycosylation. Here we review the role of NMR in determining the three-dimensional structures of conotoxins and also provide a compilation and analysis of H-1 and C-13 chemical shifts of post-translationally modified amino acids, comparing them with data from common amino acids. This analysis provides a reference source for chemical shifts of post-translationally modified amino acids. Copyright (C) 2006 John Wiley & Sons, Ltd.
Abstract:
This paper presents a new multi-depot combined vehicle and crew scheduling algorithm, and uses it, in conjunction with a heuristic vehicle routing algorithm, to solve the intra-city mail distribution problem faced by Australia Post. First we describe the Australia Post mail distribution problem and outline the heuristic vehicle routing algorithm used to find vehicle routes. We then present a new multi-depot combined vehicle and crew scheduling algorithm based on set covering with column generation. The paper concludes with a computational investigation, using real-life data for Australia Post distribution networks, that examines the effect of different types of vehicle routing solutions on the vehicle and crew scheduling solution, compares the different levels of integration made possible by the new vehicle and crew scheduling algorithm, and compares the results of sequential versus simultaneous vehicle and crew scheduling.
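The set-covering structure underlying crew scheduling can be sketched in a few lines: each "column" is a candidate crew duty that covers a set of vehicle tasks at some cost, and the goal is to cover every task. The sketch below uses a greedy cost-per-coverage heuristic as a stand-in for the paper's column-generation approach, which would instead price new columns against LP dual values; the duty names, costs, and tasks are invented.

```python
# Toy set-covering illustration of crew scheduling: choose crew duties
# ("columns") so every vehicle task is covered. A greedy heuristic is used
# here in place of column generation; all data is invented.
def greedy_set_cover(tasks, duties):
    """duties: dict mapping duty name -> (cost, set of tasks covered)."""
    uncovered, chosen = set(tasks), []
    while uncovered:
        # Pick the duty with the lowest cost per newly covered task.
        name, (cost, cover) = min(
            ((n, d) for n, d in duties.items() if d[1] & uncovered),
            key=lambda item: item[1][0] / len(item[1][1] & uncovered))
        chosen.append(name)
        uncovered -= cover
    return chosen

tasks = {"t1", "t2", "t3", "t4"}
duties = {"d1": (3.0, {"t1", "t2"}),
          "d2": (4.0, {"t2", "t3", "t4"}),
          "d3": (2.0, {"t4"})}
print(greedy_set_cover(tasks, duties))  # → ['d2', 'd1']
```

In a full column-generation scheme the duty set would not be enumerated up front; a pricing subproblem would generate promising duties on demand, which is what makes the approach tractable for real networks.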
Abstract:
Formal methods have significant benefits for developing safety-critical systems, in that they allow for correctness proofs, model checking of safety and liveness properties, deadlock checking, etc. However, formal methods do not scale very well and demand specialist skills when applied to real-world systems. For these reasons, development and analysis of large-scale safety-critical systems will require effective integration of formal and informal methods. In this paper, we use such an integrative approach to automate Failure Modes and Effects Analysis (FMEA), a widely used system safety analysis technique, using a high-level graphical modelling notation (Behavior Trees) and model checking. We inject component failure modes into the Behavior Trees and translate the resulting Behavior Trees to SAL code. This enables us to model check whether the system, in the presence of these faults, satisfies its safety properties, specified as temporal logic formulas. The benefit of this process is tool support that automates the tedious and error-prone aspects of FMEA.
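The core idea of the abstract's approach, injecting a failure mode into a system model and checking a safety property over all reachable states, can be sketched without Behavior Trees or SAL. The tank-and-pump model below, the "stuck on" failure mode, and the use of plain exhaustive state exploration in place of a real model checker are all invented for illustration.

```python
# Sketch of automated FMEA by model checking: inject a "pump stuck on"
# failure mode into a toy tank model, then exhaustively check the safety
# invariant "the tank never overflows" over all reachable states.
def reachable_states(initial, step):
    """Exhaustively explore all states reachable from `initial`."""
    seen, frontier = {initial}, [initial]
    while frontier:
        for nxt in step(frontier.pop()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

def make_step(pump_stuck_on):
    """Tank level ranges 0..4, where 4 models overflow. The controller runs
    the pump only while the level is below 2; the injected failure mode
    keeps the pump on regardless of the controller."""
    def step(level):
        succ = set()
        pump_choices = [True] if pump_stuck_on else [level < 2]
        for pump_on in pump_choices:
            for drain_open in (True, False):  # drain is nondeterministic
                nxt = level + (1 if pump_on else 0) - (1 if drain_open else 0)
                succ.add(min(4, max(0, nxt)))
        return succ
    return step

def never_overflows(states):  # safety property, an invariant on levels
    return all(level < 4 for level in states)

nominal = reachable_states(1, make_step(pump_stuck_on=False))
faulty = reachable_states(1, make_step(pump_stuck_on=True))
print(never_overflows(nominal))  # True: the controller prevents overflow
print(never_overflows(faulty))   # False: the injected fault violates safety
```

A counterexample trace from a real model checker such as SAL would additionally show *how* the fault leads to the violation, which is the information an FMEA report needs for each failure mode.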