791 results for Rule-Based Classification
Abstract:
Of the approximately 380 families of angiosperms, representatives of only 10 are known to form symbiotic associations with nitrogen-fixing bacteria in root nodules. The morphologically based classification schemes proposed by taxonomists suggest that many of these 10 families of plants are only distantly related, engendering the hypothesis that the capacity to fix nitrogen evolved independently several, if not many, times. This has in turn influenced attitudes toward the likelihood of transferring genes responsible for symbiotic nitrogen fixation to crop species lacking this ability. Phylogenetic analysis of DNA sequences for the chloroplast gene rbcL indicates, however, that representatives of all 10 families with nitrogen-fixing symbioses occur together, with several families lacking this association, in a single clade. This study therefore indicates that only one lineage of closely related taxa achieved the underlying genetic architecture necessary for symbiotic nitrogen fixation in root nodules.
Abstract:
This paper presents the automatic extension to other languages of TERSEO, a knowledge-based system for the recognition and normalization of temporal expressions originally developed for Spanish. TERSEO was first extended to English through the automatic translation of the temporal expressions. Then, an improved porting process was applied to Italian, where the automatic translation of the temporal expressions from English and from Spanish was combined with the extraction of new expressions from an Italian annotated corpus. Experimental results demonstrate how, while still adhering to the rule-based paradigm, the development of automatic rule translation procedures allowed us to minimize the effort required for porting to new languages. Relying on such procedures, and without any manual effort or previous knowledge of the target language, TERSEO recognizes and normalizes temporal expressions in Italian with good results (72% precision and 83% recall for recognition).
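TERSEO's porting strategy pairs language-specific trigger patterns with language-independent normalization logic. The sketch below is a minimal illustration of that idea, not TERSEO's actual rule format: the rule tables, trigger words, and resolver functions are assumptions made for the example.

```python
import re
from datetime import date, timedelta

# Hypothetical rule table: each entry maps a regex trigger to a
# normalization function, in the spirit of TERSEO's rule-based approach.
RULES_EN = [
    (re.compile(r"\btoday\b", re.I), lambda ref: ref),
    (re.compile(r"\byesterday\b", re.I), lambda ref: ref - timedelta(days=1)),
    (re.compile(r"\btomorrow\b", re.I), lambda ref: ref + timedelta(days=1)),
]

# Porting by translating only the trigger words (illustrative):
# the Italian rules reuse the English normalization functions unchanged.
RULES_IT = [
    (re.compile(r"\boggi\b", re.I), RULES_EN[0][1]),
    (re.compile(r"\bieri\b", re.I), RULES_EN[1][1]),
    (re.compile(r"\bdomani\b", re.I), RULES_EN[2][1]),
]

def normalize(text, rules, ref):
    """Return (matched span, ISO date) pairs for temporal expressions."""
    hits = []
    for pattern, resolver in rules:
        for m in pattern.finditer(text):
            hits.append((m.group(0), resolver(ref).isoformat()))
    return hits
```

For example, `normalize("ci vediamo domani", RULES_IT, date(2006, 5, 1))` resolves "domani" to "2006-05-02"; only the trigger lexicon had to be translated, which is what keeps the porting effort low.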
Abstract:
The ongoing consultation process on the European Union Global Strategy (EUGS) presents an occasion for the European Union (EU) to redress the European Security Strategy's (ESS) shortcomings and update its stance on multilateralism. As rule-based multilateralism remains deeply entrenched in the Union's DNA, the EUGS is unlikely to represent a ground-breaking innovation in how the EU should act in international affairs. The key challenge for the EU's multilateralism is twofold: first, setting out clear priorities for the EU's multilateral action to be pursued collectively by the member states; second, determining the form of multilateralism that would best suit the promotion of the priorities concerned. In this collection of six essays, policy analysts and academics are presented with the question: Over a five-year horizon, what should be the focus of the EU's multilateral agenda? The answers dwell on the EU playing a proactive role in relation to emerging powers, especially China and Latin America as a whole; furthering the EU's soft power through 'science diplomacy'; and EU leadership in building a global energy and climate community and in counter-terrorism measures.
Abstract:
The research work presented in the thesis describes a new methodology for the automated near real-time detection of pipe bursts in Water Distribution Systems (WDSs). The methodology analyses the pressure/flow data gathered by means of SCADA systems in order to extract useful information that goes beyond simple monitoring activities and/or regulatory reporting, enabling the water company to proactively manage WDS sections. The work has an interdisciplinary nature, covering AI techniques and WDS management processes such as data collection, manipulation and analysis for event detection. Indeed, the methodology makes use of (i) an Artificial Neural Network (ANN) for the short-term forecasting of future pressure/flow signal values and (ii) a Rule-based Model for burst detection at sensor and district level. The results of applying the new methodology to a District Metered Area in the Emilia-Romagna region, Italy, are also reported in the thesis. The results illustrate that the methodology is capable of detecting the aforementioned failure events in a fast and reliable manner. The methodology enables water companies to save water, energy and money, and therefore helps them achieve higher levels of operational efficiency, compliance with current regulations and, last but not least, an improvement in customer service.
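The detection step described above compares observed pressure/flow values against the ANN forecast and raises an alarm when rules over the residual fire. The sketch below stands in for that rule-based module; the relative-error threshold, the persistence rule, and the function name are assumptions for illustration, not the thesis's actual rules.

```python
def detect_burst(observed, predicted, threshold=0.15, persistence=3):
    """Flag a burst when the observed flow exceeds the forecast by more
    than `threshold` (relative error) for `persistence` consecutive
    samples -- a simplified stand-in for a rule-based detection module.
    Returns the sample indices at which the alarm condition holds."""
    run = 0
    alarms = []
    for t, (obs, pred) in enumerate(zip(observed, predicted)):
        residual = (obs - pred) / pred
        # Rule 1: the relative residual must exceed the threshold.
        run = run + 1 if residual > threshold else 0
        # Rule 2: the exceedance must persist to filter out noise spikes.
        if run >= persistence:
            alarms.append(t)
    return alarms
```

The persistence rule is the design choice that distinguishes a genuine burst from a transient demand spike: a single large residual is ignored, while a sustained deviation triggers the alarm.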
Abstract:
Three experiments are reported that examined the process by which trainees learn decision-making skills during a critical incident training program. Formal theories of category learning were used to identify two processes that may be responsible for the acquisition of decision-making skills: rule learning and exemplar learning. Experiments 1 and 2 used the process dissociation procedure (L. L. Jacoby, 1998) to evaluate the contribution of these processes to performance. The results suggest that trainees used a mixture of rule and exemplar learning. Furthermore, these learning processes were influenced by different aspects of training structure and design. The goal of Experiment 3 was to develop training techniques that enable trainees to use a rule adaptively. Trainees were tested on cases that represented exceptions to the rule. Unexpectedly, the results suggest that providing general instruction regarding the kinds of conditions in which a decision rule does not apply caused trainees to fixate on the specific conditions mentioned and impaired their ability to identify other conditions in which the rule might not apply. The theoretical, methodological, and practical implications of the results are discussed.
Abstract:
An approach and strategy for the automatic detection of buildings from aerial images using combined image analysis and interpretation techniques is described in this paper. It is undertaken in several steps. A dense DSM is obtained by stereo image matching, and then the results of multi-band classification, the DSM, and the Normalized Difference Vegetation Index (NDVI) are used to reveal preliminary building interest areas. From these areas, a shape modeling algorithm is used to precisely delineate their boundaries. The Dempster-Shafer data fusion technique is then applied to detect buildings from the combination of the three data sources by a statistically based classification. A number of test areas, which include buildings of different sizes, shapes, and roof colors, have been investigated. The test results are encouraging and demonstrate that all processes in this system are important for effective building detection.
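The fusion step above rests on Dempster's rule of combination, which merges evidence from independent sources (here: classification, DSM, NDVI) into a single belief assignment. The sketch below implements the standard combination rule; the two-hypothesis frame and the example mass values are assumptions for illustration, not the paper's data.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass) with
    Dempster's rule: multiply masses of intersecting focal elements
    and renormalize by 1 - K, where K is the total conflict."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("sources are in total conflict")
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}
```

For a frame {building, other}, combining a source with m({building}) = 0.6, m(Θ) = 0.4 and a second with m({building}) = 0.7, m(Θ) = 0.3 yields m({building}) = 0.88: two moderately confident, agreeing sources reinforce each other, which is exactly why fusing three cues outperforms any single one.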
Abstract:
It is often debated whether migraine with aura (MA) and migraine without aura (MO) are etiologically distinct disorders. A previous study using latent class analysis (LCA) in Australian twins showed no evidence for separate subtypes of MO and MA. The aim of the present study was to replicate these results in a population of Dutch twins and their parents, siblings and partners (N = 10,144). Latent class analysis of International Headache Society (IHS)-based migraine symptoms resulted in the identification of 4 classes: a class of unaffected subjects (class 0), a mild form of nonmigrainous headache (class 1), a moderately severe type of migraine (class 2), typically without neurological symptoms or aura (8% reporting aura symptoms), and a severe type of migraine (class 3), typically with neurological symptoms, and aura symptoms in approximately half of the cases. Given the overlap of neurological symptoms and nonmutual exclusivity of aura symptoms, these results do not support the MO and MA subtypes as being etiologically distinct. The heritability in female twins of migraine based on LCA classification was estimated at .50 (95% confidence interval [CI] .27-.59), similar to IHS-based migraine diagnosis (h(2) = .49, 95% CI .19-.57). However, using a dichotomous classification (affected-unaffected) decreased heritability for the IHS-based classification (h(2) = .33, 95% CI .00-.60), but not the LCA-based classification (h(2) = .51, 95% CI .23-.61). Importantly, use of the LCA-based classification increased the number of subjects classified as affected. The heritability of the screening question was similar to the more detailed LCA and IHS classifications, suggesting that the screening procedure is an important determining factor in genetic studies of migraine.
Abstract:
Music plays an enormous role in today's computer games; it serves to elicit emotion, generate interest and convey important information. Traditional gaming music is fixed at the event level, where tracks loop until a state change is triggered. This behaviour, however, does not musically reflect the in-game state between these events. We propose a dynamic music environment, where music tracks adjust in real time to the emotion of the in-game state. We aim to improve the affective response to symbolic music through the modification of structural and performative characteristics via rule-based techniques. In this paper we take a multidisciplinary approach and present a series of primary music-emotion structural rules for implementation. The validity of these rules was tested in a small study involving eleven participants, each listening to six permutations of two musical works. Preliminary results indicate that the environment was generally successful in influencing the emotion of the musical works for three of the four intended directions (happier, sadder & content/dreamier). Our secondary aim, establishing that music-emotion rules sourced predominantly from Western classical music could be applied with comparable results to modern computer gaming music, was also largely successful.
Abstract:
Music is an immensely powerful affective medium that pervades our everyday life. With ever advancing technology, the reproduction and application of music for emotive and information transfer purposes has never been more prevalent. In this paper we introduce a rule-based engine for influencing the perceived emotions of music. Based on empirical music psychology, we attempt to formalise the relationship between musical elements and their perceived emotion. We examine the modification to structural aspects of music to allow for a graduated transition between perceived emotive states. This engine is intended to provide music reproduction systems with a finer grained control over this affective medium; where perceived musical emotion can be influenced with intent. This intent comes from both an external application and the audience. Using a series of affective computing technologies, an audience’s response metrics and attitudes can be incorporated to model this intent. A generative feedback loop is set up between the external application, the influencing process and the audience’s response to this, which together shape the modification of musical structure. The effectiveness of our rule system for influencing perceived musical emotion was examined in earlier work, with a small test study providing generally encouraging results.
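A rule engine of the kind described in these two abstracts maps a target emotional direction onto changes in structural and performative parameters of the music. The sketch below illustrates that mapping; the parameter directions loosely follow common findings in empirical music psychology, but the specific rule table, values, and names are assumptions for the example, not the authors' rule set.

```python
# Illustrative rule table: each target emotion adjusts tempo, mode and
# articulation. Directions (e.g. faster/major for "happier") are broadly
# consistent with music-psychology findings; exact values are assumed.
EMOTION_RULES = {
    "happier":  {"tempo_pct": +15, "mode": "major", "articulation": "staccato"},
    "sadder":   {"tempo_pct": -20, "mode": "minor", "articulation": "legato"},
    "dreamier": {"tempo_pct": -10, "mode": "major", "articulation": "legato"},
}

def apply_rules(performance, target):
    """Return a modified copy of a performance description, shifted
    toward the target emotion by the rule table above."""
    rules = EMOTION_RULES[target]
    out = dict(performance)
    out["tempo_bpm"] = round(performance["tempo_bpm"] * (1 + rules["tempo_pct"] / 100))
    out["mode"] = rules["mode"]
    out["articulation"] = rules["articulation"]
    return out
```

Because the rules act on symbolic parameters rather than audio, transitions can be graduated: interpolating `tempo_pct` over several bars gives the smooth emotive shift the paper's engine aims for.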
Abstract:
Model transformations are an integral part of model-driven development. Incremental updates are a key execution scenario for transformations in model-based systems, and are especially important for the evolution of such systems. This paper presents a strategy for the incremental maintenance of declarative, rule-based transformation executions. The strategy involves recording dependencies of the transformation execution on information from source models and from the transformation definition. Changes to the source models or the transformation itself can then be directly mapped to their effects on transformation execution, allowing changes to target models to be computed efficiently. This particular approach has many benefits. It supports changes to both source models and transformation definitions, it can be applied to incomplete transformation executions, and a priori knowledge of volatility can be used to further increase the efficiency of change propagation.
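The core of the strategy above is recording, per rule application, which source-model elements were read, so that a source change maps directly to the rule applications it affects. The sketch below is a minimal, assumed illustration of that bookkeeping (dict-based models, one target fragment per rule); it is not the paper's actual engine and omits transformation-definition changes and volatility hints.

```python
class TrackingDict:
    """Wraps a source model (a dict) and logs every key that is read."""
    def __init__(self, data, log):
        self.data, self.log = data, log
    def __getitem__(self, key):
        self.log.add(key)
        return self.data[key]

class IncrementalEngine:
    def __init__(self, rules):
        self.rules = rules        # rule name -> fn(source) -> target fragment
        self.deps = {}            # source key -> set of dependent rule names
        self.target = {}          # rule name -> produced target fragment

    def run(self, source):
        """Full execution: apply every rule, recording its read set."""
        for name, fn in self.rules.items():
            read = set()
            self.target[name] = fn(TrackingDict(source, read))
            for key in read:
                self.deps.setdefault(key, set()).add(name)
        return dict(self.target)

    def update(self, source, changed_key):
        """Incremental update: re-execute only the rules whose recorded
        dependencies include the changed source element."""
        for name in self.deps.get(changed_key, ()):
            self.target[name] = self.rules[name](TrackingDict(source, set()))
        return dict(self.target)
```

After a full `run`, changing one source element and calling `update` recomputes only the dependent fragments, which is the efficiency gain the paper's change-propagation strategy targets.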
Abstract:
This paper presents a framework for compositional verification of Object-Z specifications. Its key feature is a proof rule based on the decomposition of hierarchical Object-Z models. For each component in the hierarchy, local properties are proven in a single proof step. However, we do not consider components in isolation. Instead, components are envisaged in the context of the referencing super-component, and proof steps involve assumptions on properties of the sub-components. The framework is defined for Linear Temporal Logic (LTL).
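A decomposition rule of the kind described lets a global LTL property of a super-component follow from local proof obligations on its sub-components. The following is a schematic assume-guarantee rule for two sub-components, written here only to illustrate the shape of such rules; it is not the paper's exact formulation.

```latex
% Schematic decomposition rule (illustrative): C_1 establishes its local
% property outright, C_2 establishes its own under the assumption that
% its sibling's property holds, and the local properties jointly entail
% the goal \varphi of the composed system.
\[
\frac{\;C_1 \models \psi_1
      \qquad C_2 \models \psi_1 \rightarrow \psi_2
      \qquad \models (\psi_1 \wedge \psi_2) \rightarrow \varphi\;}
     {\;C_1 \parallel C_2 \models \varphi\;}
\]
```

Each premise is one of the "single proof steps" the abstract mentions: local, but stated relative to assumptions about the sibling component rather than in isolation.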
Abstract:
Reducing fossil fuel consumption and developing energy-saving technologies are matters of central importance to both industry and research, owing to the drastic effects that anthropogenic pollutant emissions are having on the environment. While a growing number of standards and regulations are being issued to address these problems, the need to develop low-emission technologies is driving research in many industrial sectors. Although the deployment of renewable energy sources is seen as the most promising long-term solution, a complete and effective integration of such technologies is currently impractical, owing both to technical constraints and to the sheer share of energy production, currently met by fossil sources, that alternative technologies would have to cover. Optimizing energy production and management, combined with developing technologies that reduce energy consumption, therefore represents a suitable solution to the problem, and one that can be deployed over shorter time horizons. The objective of this thesis is to investigate, develop and apply a set of numerical tools for optimizing the design and management of energy processes, to be used to reduce fuel consumption and maximize energy efficiency. The methodology developed rests on a numerical modelling approach that exploits the predictive capabilities deriving from a mathematical representation of the processes to devise optimization strategies for them under realistic operating conditions.
In developing these procedures, particular emphasis is placed on deriving correct management strategies that account for the dynamics of the plants under analysis, so as to obtain the best performance during actual operation. Throughout the thesis, the energy optimization problem is addressed with reference to three different technological applications. The first is a multi-source plant serving the energy demand of a commercial building. Since this system uses multiple technologies to produce the thermal and electrical energy required by the users, the correct load-allocation strategy must be identified to guarantee the plant's maximum energy efficiency. Based on a simplified model of the plant, the problem was solved with a deterministic Dynamic Programming algorithm, and the results were compared with those obtained from a simpler rule-based strategy, thereby demonstrating the advantages of adopting an optimal control strategy. The second application concerns the design of a hybrid solution for energy recovery from a hydraulic excavator. Since several technological layouts can be conceived to implement this solution, and the additional components require correct sizing, a methodology is needed to evaluate the maximum performance achievable by each alternative. The comparison between layouts was therefore carried out on the basis of the machine's energy performance over a standardized digging cycle, estimated with the aid of a detailed plant model.
Because adding energy-recovery devices introduces additional degrees of freedom into the system, the optimal control strategy for those devices also had to be determined in order to assess the maximum performance achievable by each layout. This problem was again solved with a Dynamic Programming algorithm operating on a simplified system model devised for the purpose. Once the optimal performance of each design solution had been determined, a fair comparison between the alternatives could be made. The third and final application is an organic Rankine cycle (ORC) plant for recovering waste heat from passenger-car exhaust gases. Although ORC plants can potentially deliver significant improvements in vehicle fuel savings, their correct operation requires complex control strategies able to cope with the variability of the process heat source; moreover, while fuel savings are maximized, the system must be kept within safe operating conditions. To this end, a robust and effective plant model based on the Moving Boundary Methodology was built to simulate the phase-change dynamics of the organic fluid and estimate plant performance. This model was then used to design a model predictive controller (MPC) able to estimate the optimal control parameters for managing the system during transient operation. To solve the corresponding nonlinear dynamic optimization problem, an algorithm based on Particle Swarm Optimization was developed.
The results obtained with this controller were compared with those achievable with a classic proportional-integral (PI) controller, again showing the energy-related advantages of adopting an optimal control strategy.
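The deterministic Dynamic Programming approach used for the load-allocation and excavator problems works backward over time stages and a discretized state (e.g. a storage level), keeping the minimal cost-to-go for each state. The sketch below is a generic illustration of that scheme under assumed simplifications (one storage state, a single production unit with a convex cost, no losses); it is not the thesis's plant model.

```python
def dp_dispatch(demand, levels, cost_fn):
    """Backward deterministic DP over a discretized storage state.
    demand[t]: energy required at stage t; at each stage the plant can
    draw from storage (moving from level s to level u) and produce the
    remainder at cost cost_fn(production). Returns the optimal cost-to-go
    per initial storage level and the per-stage policy tables."""
    INF = float("inf")
    value = {s: 0.0 for s in levels}     # terminal: leftover storage worth 0
    policy = []
    for t in reversed(range(len(demand))):
        new_value, new_policy = {}, {}
        for s in levels:
            best, best_u = INF, None
            for u in levels:             # u = storage level at the next stage
                produce = demand[t] - (s - u)   # production covers the rest
                if produce < 0:
                    continue             # cannot dump energy
                c = cost_fn(produce) + value[u]
                if c < best:
                    best, best_u = c, u
            new_value[s], new_policy[s] = best, best_u
        value, policy = new_value, [new_policy] + policy
    return value, policy
```

With a convex cost such as `lambda p: p ** 2`, the optimal policy smooths production across stages by cycling the storage, which is exactly the kind of advantage over a myopic rule-based dispatch that the thesis reports.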
Abstract:
Substantial behavioural and neuropsychological evidence has been amassed to support the dual-route model of morphological processing, which distinguishes between a rule-based system for regular items (walk–walked, call–called) and an associative system for irregular items (go–went). Some neural-network models attempt to explain the neuropsychological and brain-mapping dissociations in terms of single-system associative processing. We show that there are problems with the accounts given by homogeneous networks in the light of recent brain-mapping evidence of systematic double dissociation. We also examine the superior capabilities of more internally differentiated connectionist models, which, under certain conditions, display systematic double dissociations. It appears that the more internal differentiation the models show, the more easily they account for dissociation patterns, yet without implementing symbolic computations.
Abstract:
A major application of computers has been to control physical processes, in which the computer is embedded within some large physical process and is required to control concurrent physical processes. The main difficulty with these systems is their event-driven characteristics, which complicate their modelling and analysis. Although a number of researchers in the process systems community have approached the problems of modelling and analysing such systems, there is still a lack of standardised software development formalisms for system (controller) development, particularly at the early stages of the system design cycle. This research forms part of a larger research programme concerned with the development of real-time process-control systems in which software is used to control concurrent physical processes. The general objective of the research in this thesis is to investigate the use of formal techniques in the analysis of such systems at the early stages of their development, with a particular bias towards application to high-speed machinery. Specifically, the research aims to generate a standardised software development formalism for real-time process-control systems, particularly for software controller synthesis. In this research, a graphical modelling formalism called Sequential Function Chart (SFC), a variant of Grafcet, is examined. SFC, which is defined in the international standard IEC 1131 as a graphical description language, has been used widely in industry and has achieved an acceptable level of maturity and acceptance. A comparative study between SFC and Petri nets is presented in this thesis. To overcome identified inaccuracies in SFC, a formal definition of the firing rules for SFC is given. To provide a framework in which SFC models can be analysed formally, an extended time-related Petri net model for SFC is proposed and the transformation method is defined.
The SFC notation lacks a systematic way of synthesising system models from real-world systems. Thus a standardised approach to the development of real-time process-control systems is required, such that the system (software) functional requirements can be identified, captured and analysed. A rule-based approach and a method called the system behaviour driven method (SBDM) are proposed as a development formalism for real-time process-control systems.
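The Petri net firing rule that underpins the thesis's SFC formalization can be stated compactly: a transition is enabled when every input place holds a token, and firing consumes one token from each input place and deposits one in each output place. The sketch below is a minimal, assumed illustration of that untimed rule (the thesis's model is time-extended, which this omits).

```python
class PetriNet:
    """Minimal place/transition net with the standard firing rule."""
    def __init__(self, marking):
        self.marking = dict(marking)     # place name -> token count
        self.transitions = {}            # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (list(inputs), list(outputs))

    def enabled(self, name):
        # Enabled iff every input place carries at least one token.
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        # Consume one token per input place, produce one per output place.
        if not self.enabled(name):
            raise RuntimeError(f"transition {name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1
```

Mapping an SFC step to a place and an SFC transition to a net transition along these lines is what allows SFC models to inherit the formal analysis techniques (reachability, deadlock detection) available for Petri nets.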