986 results for depth-first


Relevance: 100.00%

Abstract:

Classification methods are usually used to categorize text documents; examples include the Rocchio method, Naïve Bayes-based methods, and SVM-based text classification. These methods learn from labeled text documents and then construct classifiers. The generated classifiers can predict which category a new incoming text document belongs to. The keywords in a document are often used to form rules for categorizing text documents; for example, "kw = computer" can be a rule for the IT documents category. However, the number of keywords is very large, and selecting keywords from such a large set is challenging. Recently, a rule generation method based on enumeration of all possible keyword combinations has been proposed [2]. In this method there remains a crucial problem: how to prune irrelevant combinations at the early stages of the rule generation procedure. In this paper, we propose a method that can effectively prune irrelevant keyword combinations at an early stage.
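As a rough illustration of the idea (not the authors' [2] algorithm), the sketch below enumerates keyword combinations depth-first and prunes a branch as soon as its combination matches fewer documents than a support threshold; the function names and the threshold are illustrative assumptions.

```python
from typing import FrozenSet, List, Set

# Hypothetical sketch: depth-first enumeration of keyword combinations,
# pruning any branch whose combination already matches too few documents
# (support can only shrink as keywords are added, so pruning is safe).

def enumerate_rules(docs: List[Set[str]], keywords: List[str],
                    min_support: int) -> List[FrozenSet[str]]:
    results: List[FrozenSet[str]] = []

    def dfs(start: int, combo: Set[str], matched: List[Set[str]]) -> None:
        for i in range(start, len(keywords)):
            kw = keywords[i]
            # Keep only documents that also contain the new keyword.
            narrowed = [d for d in matched if kw in d]
            # Early pruning: an unsupported combination need not be extended.
            if len(narrowed) < min_support:
                continue
            combo.add(kw)
            results.append(frozenset(combo))
            dfs(i + 1, combo, narrowed)
            combo.remove(kw)

    dfs(0, set(), docs)
    return results

docs = [{"computer", "software"}, {"computer", "hardware"}, {"finance"}]
print(enumerate_rules(docs, ["computer", "software", "hardware"], 2))
```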

Relevance: 60.00%

Abstract:

BACKGROUND: Field studies of diuron and its metabolites 3-(3,4-dichlorophenyl)-1-methylurea (DCPMU), 3,4-dichlorophenylurea (DCPU) and 3,4-dichloroaniline (DCA) were conducted in a farm soil and in stream sediments in coastal Queensland, Australia. RESULTS: During a 38 week period after a 1.6 kg ha^-1 diuron application, 70-100% of the detected compounds remained within the top 0-15 cm of the farm soil, and 3-10% reached the 30-45 cm depth. First-order degradation half-lives (t1/2) averaged 49 ± 0.9 days for the 0-15, 0-30 and 0-45 cm soil depths. Farm runoff was collected in the first 13-50 min of episodes lasting 55-90 min. Average concentrations of diuron, DCPU and DCPMU in runoff were 93, 30 and 83-825 µg L^-1 respectively. Their total loading in all runoff was >0.6% of the applied diuron. Diuron and DCPMU concentrations in stream sediments were 3-22 and 4-31 µg kg^-1 soil respectively. The DCPMU/diuron sediment ratio was >1. CONCLUSION: Retention of diuron and its metabolites in the farm topsoil indicated negligible potential for groundwater contamination. Minimal amounts of diuron and DCPMU escaped in farm runoff; even so, this may entail a significant loading into the wider environment at annual application rates. The concentrations and ratio of diuron and DCPMU in stream sediments indicated prolonged residence times and a potential for accumulation in sediments. The higher ecotoxicity of DCPMU compared with diuron, and the combined presence of both compounds in stream sediments, suggest that together they would have a greater impact on sensitive aquatic species than is currently apportioned by assessments based upon diuron alone.

Relevance: 60.00%

Abstract:

Bottleneck operations in a complex coal rail system cost mining companies millions of dollars. To address this issue, this paper investigates a real-world coal rail system and aims to optimise coal railing operations under constraints of limited resources (e.g., a limited number of locomotives and wagons). In the literature, most studies have considered the train scheduling problem on a single-track railway network to be strongly NP-hard and have therefore developed metaheuristics as the main solution methods. In this paper, a new mathematical programming model is formulated and implemented in an optimization programming language based on a constraint programming (CP) approach. A new depth-first-search technique is developed and embedded inside the CP model to obtain the optimised coal railing timetable efficiently. Computational experiments demonstrate that high-quality solutions are obtainable in industry-scale applications. To provide insightful decisions, sensitivity analysis is conducted for different scenarios and specific criteria.

Keywords: Train scheduling · Rail transportation · Coal mining · Constraint programming
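The embedded depth-first search is not specified in the abstract; the following is a minimal generic sketch, assuming a toy model in which trains are assigned departure slots under a locomotive-capacity constraint, with backtracking on dead ends.

```python
from typing import Dict, List, Optional

# Hypothetical sketch of depth-first search in the spirit of embedding DFS
# in a CP model: assign each train a departure slot, prune branches that
# would exceed the locomotive capacity of any slot, and backtrack.

def schedule(trains: List[str], slots: List[int],
             capacity: int) -> Optional[Dict[str, int]]:
    assignment: Dict[str, int] = {}
    load = {s: 0 for s in slots}

    def dfs(i: int) -> bool:
        if i == len(trains):
            return True
        for s in slots:
            if load[s] < capacity:       # prune overloaded slots early
                assignment[trains[i]] = s
                load[s] += 1
                if dfs(i + 1):
                    return True
                load[s] -= 1             # backtrack
                del assignment[trains[i]]
        return False

    return assignment if dfs(0) else None

print(schedule(["T1", "T2", "T3"], slots=[8, 9], capacity=2))
```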

Relevance: 60.00%

Abstract:

The report assesses the impact of climate change on the wintertime freezing of soil in Finland on the basis of temperature sums. The calculations describe frost depth specifically in snow-free areas, for example on roads from which fallen snow is ploughed away. In nature, under a heat-insulating snow cover, the frost layer is thinner than in such snow-free areas; on the other hand, in a natural environment local differences are accentuated owing to, among other things, soil types and vegetation. Frost depths were first calculated for the climate conditions of the baseline period 1971–2000, using wintertime temperatures derived from weather observations. The calculations were then repeated for three future periods (2010–2039, 2040–2069 and 2070–2099) by raising the temperatures in the manner projected by climate models. The calculations were based on the temperature change simulated, on average, by 19 climate models under the A1B emissions scenario. To assess the sensitivity of the results, some calculations were also performed using clearly weaker and clearly stronger warming estimates. If temperatures rise under the A1B scenario as the current model results suggest, the frost layer will thin over the coming hundred years by 30–40% in northern Finland and by 50–70% in large parts of central and southern Finland. Already in the coming decades, frost is projected to thin by 10–30%, and by more in the archipelago. If warming were to follow the strongest alternative considered, frost depth would decrease even more. The year-to-year variation of frost depth, and how it will change in the future, were also assessed. In mild winters the frost layer thins more than in normal or severe winters. However, the data produced by the weather generator used to simulate daily weather variability contained too few very low and very high temperatures; consequently, the frost depths calculated from these temperature data evidently also vary too little from year to year. Road break-up (kelirikko) conditions can also occur in the middle of the frost season, if several days of thaw together with heavy rain melt the ground. Such weather situations during the frost season appear to become more common in the coming decades. Towards the end of the century, however, they will again become less frequent in the southern parts of the country, because the frost season will shorten substantially. Besides climate change projections for the coming decades, frost and the occurrence of road break-up can in principle also be forecast using near-term weather forecasts. Long forecasts, extending over weeks or months, are admittedly not yet particularly reliable, but shorter forecasts could also be useful, for example in planning road maintenance.
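The report's degree-day approach can be illustrated with a Stefan-type relation, in which frost depth grows with the square root of the accumulated freezing degree-days; the coefficient below is a placeholder assumption, not the report's calibrated value.

```python
import math

# Illustrative Stefan-type relation: frost depth is proportional to the
# square root of the freezing degree-day sum (degree-days below 0 °C).
# The coefficient is a placeholder, not the report's calibrated value.

def frost_depth_cm(daily_mean_temps_c, coefficient=5.0):
    freezing_degree_days = sum(-t for t in daily_mean_temps_c if t < 0.0)
    return coefficient * math.sqrt(freezing_degree_days)

# A mild winter vs. a severe one: the severe winter freezes deeper.
mild = [-2.0] * 60 + [1.0] * 30
severe = [-10.0] * 90
print(f"{frost_depth_cm(mild):.0f} cm, {frost_depth_cm(severe):.0f} cm")
```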

Relevance: 60.00%

Abstract:

This paper looks at the complexity of four different incremental problems: (1) interval partitioning of a flow graph, (2) breadth-first search (BFS) of a directed graph, (3) lexicographic depth-first search (DFS) of a directed graph, and (4) constructing the postorder listing of the nodes of a binary tree. The last problem arises from the need to incrementally compute the Sethi-Ullman (SU) ordering [1] of the subtrees of a tree after it has undergone changes of a given type. These problems are among those that claimed our attention while we were designing algorithmic techniques for incremental code generation. BFS and DFS certainly have numerous other applications, but as far as our work is concerned, incremental code generation is the common thread linking these problems. The complexity of these problems is studied from two different perspectives. In [2] the theory of incremental relative lower bounds (IRLB) is given; we use this theory to derive the IRLBs of the first three problems. We then use the notion of a bounded incremental algorithm [4] to prove the unboundedness of the fourth problem with respect to the locally persistent model of computation. Possibly the most interesting result is the lower bound for lexicographic DFS. In [5] the author considers lexicographic DFS to be a problem whose incremental version may require recomputation of the entire solution from scratch. In that sense, our IRLB result provides further evidence for this possibility, with the proviso that the incremental DFS algorithms considered be ones that do not require too much preprocessing.
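For reference, a from-scratch (non-incremental) lexicographic DFS is straightforward; the incremental version whose lower bound the paper studies is the hard part. A minimal sketch:

```python
# Illustrative from-scratch lexicographic DFS: at each node, unvisited
# successors are explored in lexicographic order, so the resulting visit
# order is uniquely determined by the graph.

def lex_dfs(graph, start):
    order, visited = [], set()

    def visit(node):
        visited.add(node)
        order.append(node)
        for nxt in sorted(graph.get(node, [])):  # lexicographic tie-break
            if nxt not in visited:
                visit(nxt)

    visit(start)
    return order

graph = {"a": ["c", "b"], "b": ["d"], "c": ["d"], "d": []}
print(lex_dfs(graph, "a"))  # ['a', 'b', 'd', 'c']
```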

Relevance: 60.00%

Abstract:

Frequent episode discovery is a popular framework for pattern discovery from sequential data. It has found many applications in domains such as alarm management in telecommunication networks, fault analysis in manufacturing plants, and predicting user behaviour in web click streams. In this paper, we address the discovery of serial episodes. In the episodes context, there have been multiple ways to quantify the frequency of an episode. Most of the current algorithms for episode discovery under the various frequencies are apriori-based level-wise methods, which essentially perform a breadth-first search of the pattern space. However, there are currently no depth-first methods of pattern discovery in the frequent episode framework under many of the frequency definitions. In this paper we try to bridge this gap. We provide new depth-first algorithms for serial episode discovery under the non-overlapped and total frequencies. Under non-overlapped frequency, we present algorithms that can handle both span and gap constraints on episode occurrences. Under total frequency, we present an algorithm that can handle a span constraint. We provide proofs of correctness for the proposed algorithms and demonstrate their effectiveness through extensive simulations. We also give detailed run-time comparisons with the existing apriori-based methods and illustrate scenarios under which the proposed pattern-growth algorithms outperform their apriori counterparts. (C) 2013 Elsevier B.V. All rights reserved.
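As a sketch of one of the frequency definitions involved, the following counts non-overlapped occurrences of a serial episode by greedily scanning for earliest occurrences; it illustrates the counting semantics only, not the paper's pattern-growth algorithms.

```python
# Illustrative non-overlapped frequency count for a serial episode
# (e.g., A -> B -> C): greedily scan for the earliest occurrence, then
# restart after it, so counted occurrences never share events.

def non_overlapped_count(sequence, episode):
    count, i = 0, 0
    while True:
        pos = 0  # index of the next episode event we need to see
        for j in range(i, len(sequence)):
            if sequence[j] == episode[pos]:
                pos += 1
                if pos == len(episode):
                    count += 1
                    i = j + 1  # restart scan after this occurrence
                    break
        else:
            return count  # scan ended without completing an occurrence

print(non_overlapped_count(list("ABCABCAB"), list("ABC")))  # 2
```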

Relevance: 60.00%

Abstract:

The construction of high-rate Space-Time Block Codes (STBCs) with low decoding complexity has been studied widely using techniques such as sphere decoding and non-Maximum-Likelihood (ML) decoders such as the QR-decomposition decoder with M paths (QRDM decoder). Recently, Ren et al. presented a new class of STBCs known as block orthogonal STBCs (BOSTBCs), which QRDM decoders can exploit to achieve a significant reduction in decoding complexity without performance loss. The block orthogonal property of the constructed codes was, however, only shown via simulations. In this paper, we give analytical proofs of the block orthogonal structure of various existing codes in the literature, including the codes constructed by Ren et al. We show that codes formed as the sum of Clifford Unitary Weight Designs (CUWDs) or Coordinate Interleaved Orthogonal Designs (CIODs) exhibit block orthogonal structure. We also provide new constructions of block orthogonal codes from Cyclic Division Algebras (CDAs) and Crossed-Product Algebras (CPAs). In addition, we show how the block orthogonal property of the STBCs can be exploited to reduce the decoding complexity of a sphere decoder using a depth-first search approach. Simulation results of the decoding complexity show a 30% reduction in the number of floating-point operations (FLOPS) of BOSTBCs compared with STBCs without the block orthogonal structure.
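A minimal depth-first sphere decoder over a toy real-valued model y = Hx + n illustrates the kind of pruning such decoders exploit; the constellation, channel, and radius handling here are illustrative assumptions, and block orthogonality is not modeled.

```python
import numpy as np

# Illustrative depth-first sphere decoder for y = Hx + n over a small real
# constellation. After H = QR, symbols are searched from the last level up,
# and a branch is pruned once its partial distance exceeds the best found.

def sphere_decode(y, H, constellation):
    Q, R = np.linalg.qr(H)
    z = Q.T @ y
    n = H.shape[1]
    best = {"dist": np.inf, "x": None}

    def dfs(level, x, dist):
        if dist >= best["dist"]:          # prune: cannot beat incumbent
            return
        if level < 0:
            best["dist"], best["x"] = dist, x.copy()
            return
        for s in constellation:
            x[level] = s
            residual = z[level] - R[level, level:] @ x[level:]
            dfs(level - 1, x, dist + residual**2)

    dfs(n - 1, np.zeros(n), 0.0)
    return best["x"]

H = np.array([[1.0, 0.4], [0.3, 1.0]])
x_true = np.array([1.0, -1.0])
y = H @ x_true + 0.05 * np.array([0.1, -0.2])
print(sphere_decode(y, H, [-1.0, 1.0]))   # expect approx [ 1. -1.]
```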

Relevance: 60.00%

Abstract:

An expert system for the elucidation of the structures of organic compounds, ESESOC-II, has been designed. It is composed of three parts: spectroscopic data analysis, a structure generator, and evaluation of the candidate structures. The heart of ESESOC is the structure generator, which accepts specific types of information, e.g. molecular formulae and substructure constraints, and produces an exhaustive and irredundant list of candidate structures. The scheme for structure generation is given, in which a depth-first search strategy is used to fill the bonding adjacency matrix (BAM), and a new method is introduced to remove duplicates.
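A toy sketch of the generator idea, assuming list-valued free valences and brute-force permutation-based duplicate removal (ESESOC-II's actual duplicate-removal method is not described in the abstract); connectivity checking is omitted for brevity.

```python
from itertools import permutations

# Toy sketch: depth-first filling of the upper triangle of a bonding
# adjacency matrix (BAM) under free-valence constraints, with duplicates
# removed by a brute-force canonical form over atom permutations.

def generate(atoms, valences):
    n = len(atoms)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    left = list(valences)          # free valences still to be used
    bam, seen, results = {}, set(), []

    def canonical():
        # Smallest edge list over element-preserving atom permutations.
        return min(
            tuple(sorted((min(p[i], p[j]), max(p[i], p[j]), o)
                         for (i, j), o in bam.items() if o))
            for p in permutations(range(n))
            if all(atoms[i] == atoms[p[i]] for i in range(n))
        )

    def dfs(k):
        if k == len(pairs):
            if all(v == 0 for v in left):
                key = canonical()
                if key not in seen:
                    seen.add(key)
                    results.append(dict(bam))
            return
        i, j = pairs[k]
        for order in range(min(left[i], left[j], 3) + 1):  # up to triple bond
            left[i] -= order; left[j] -= order
            bam[(i, j)] = order
            dfs(k + 1)
            left[i] += order; left[j] += order

    dfs(0)
    return results

# One hydrogen distribution for C2H6O: free valences after attaching H.
print(generate(["C", "C", "O"], [1, 2, 1]))
```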

Relevance: 60.00%

Abstract:

The problem of discovering frequent arrangements of temporal intervals is studied. It is assumed that the database consists of sequences of events, where each event occurs during a time interval. The goal is to mine temporal arrangements of event intervals that appear frequently in the database. The motivation for this work is the observation that in practice most events are not instantaneous but occur over a period of time, and different events may occur concurrently. Thus, there are many practical applications that require mining such temporal correlations between intervals, including the linguistic analysis of annotated data from American Sign Language as well as network and biological data. Two efficient methods for finding frequent arrangements of temporal intervals are described: the first is tree-based and uses depth-first search to mine the set of frequent arrangements, whereas the second is prefix-based. Both methods apply efficient pruning techniques, including a set of constraints consisting of regular expressions and gap constraints, that add user-controlled focus to the mining process. Moreover, based on the extracted patterns, a standard method for mining association rules is employed that applies different interestingness measures to evaluate the significance of the discovered patterns and rules. The performance of the proposed algorithms is evaluated and compared with other approaches on real (American Sign Language annotations and network data) and large synthetic datasets.
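A small building block for such mining is classifying how two event intervals relate; the sketch below implements a simplified subset of Allen's interval relations (the paper's exact arrangement representation may differ).

```python
from typing import NamedTuple

# Illustrative building block: classify the temporal relation between two
# event intervals (a simplified subset of Allen's interval relations),
# which arrangement mining uses to describe how intervals co-occur.

class Interval(NamedTuple):
    label: str
    start: float
    end: float

def relation(a: Interval, b: Interval) -> str:
    if a.end < b.start:
        return "before"
    if a.end == b.start:
        return "meets"
    if a.start == b.start and a.end == b.end:
        return "equals"
    if a.start <= b.start and b.end <= a.end:
        return "contains"
    return "overlaps"

a = Interval("hand-raised", 0.0, 5.0)
b = Interval("eyebrow-raise", 2.0, 4.0)
print(relation(a, b))  # contains
```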

Relevance: 60.00%

Abstract:

Incidence calculus is a mechanism for probabilistic reasoning in which sets of possible worlds, called incidences, are associated with axioms, and probabilities are then associated with these sets. Inference rules are used to deduce bounds on the incidence of formulae which are not axioms, and bounds for the probability of such a formula can then be obtained. In practice, an assignment of probabilities directly to axioms may be given, and it is then necessary to find an assignment of incidences which will reproduce these probabilities. We show that this task of assigning incidences can be viewed as a tree-searching problem, and two techniques for performing this search are discussed. One of these is a new proposal involving a depth-first search, while the other incorporates a random element. A Prolog implementation of these methods has been developed. The two approaches are compared for efficiency, and the significance of their results is discussed. Finally, we discuss a new proposal for applying techniques from linear programming to incidence calculus.
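The incidence-assignment task can be pictured as a subset search: find a set of possible worlds whose total probability reproduces an axiom's probability. A minimal depth-first sketch (not the paper's Prolog implementation):

```python
# Illustrative sketch of the incidence-assignment search: given possible
# worlds with probabilities, depth-first search for a set of worlds
# (an incidence) whose total probability matches an axiom's probability.

def find_incidence(world_probs, target, tol=1e-9):
    worlds = list(world_probs.items())

    def dfs(i, chosen, total):
        if abs(total - target) <= tol:
            return chosen
        if i == len(worlds) or total > target + tol:
            return None  # prune: overshoot, or no worlds left
        w, p = worlds[i]
        return dfs(i + 1, chosen + [w], total + p) or dfs(i + 1, chosen, total)

    return dfs(0, [], 0.0)

worlds = {"w1": 0.25, "w2": 0.25, "w3": 0.5}
print(find_incidence(worlds, 0.75))  # ['w1', 'w3']
```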

Relevance: 60.00%

Abstract:

The increased construction and reconstruction of smart substations has exposed a problem with version management of Substation Configuration description Language (SCL) files, which change frequently. This paper proposes a comparative approach for differentiating smart substation SCL configuration files. The method builds a comparison model for SCL configuration files based on the SCL structure and the abstract model defined by IEC 61850. The proposed approach adopts algorithms of depth-first traversal, sorting, and cross comparison in order to rapidly identify the differences between changed SCL configuration files. The approach can also be used to detect malicious tampering with, or illegal manipulation of, SCL files. SCL comparison software was developed on the Qt platform to validate the feasibility and effectiveness of the proposed approach.
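A minimal sketch of the traversal idea, assuming SCL's XML representation: both trees are walked depth-first with children sorted by tag and "name" attribute, so reordering alone is not flagged as a difference. The element and attribute names below are illustrative.

```python
import xml.etree.ElementTree as ET

# Illustrative sketch: depth-first traversal of two XML trees (SCL is
# XML-based), sorting children by tag and "name" attribute so reordered
# but equivalent elements do not show up as differences.

def diff(a, b, path="", out=None):
    out = [] if out is None else out
    if a.attrib != b.attrib:
        out.append(f"{path}/{a.tag}: attributes {a.attrib} != {b.attrib}")
    key = lambda e: (e.tag, e.get("name", ""))
    ka, kb = sorted(a, key=key), sorted(b, key=key)
    for ca, cb in zip(ka, kb):
        diff(ca, cb, f"{path}/{a.tag}", out)
    for extra in (ka[len(kb):] or kb[len(ka):]):  # unmatched leftovers
        out.append(f"{path}/{a.tag}: unmatched element <{extra.tag}>")
    return out

old = ET.fromstring('<IED name="P1"><LDevice inst="1"/></IED>')
new = ET.fromstring('<IED name="P1" desc="v2"><LDevice inst="1"/></IED>')
print(diff(old, new))
```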