787 results for expert system, fuzzy logic, pan stage models, supervisory control
Abstract:
The Department of Forest Resource Management at the University of Helsinki carried out the SIMO project in 2004–2007 to develop a new-generation planning system for forest management. The project partners are the organisations that carry out most Finnish forest planning in state, industry and privately owned forests. The aim of this study was to identify the needs and requirements for the new forest planning system and to clarify how the parties see the targets and processes of present-day forest planning. The representatives responsible for forest planning in each organisation were interviewed one by one. According to the study, the stand-based system for managing and treating forests will continue in the future. Because of variable data acquisition methods with differing accuracy and sources, and the development of single-tree interpretation, more and more forest data is collected without field work. The benefits of using more specific forest data also call for information units smaller than the tree stand. In Finland the traditional way to arrange forest planning computation is divided into two elements. After the forest data has been updated to the present situation, the growth of every stand unit is simulated under different alternative treatment schedules. After simulation, optimisation selects one treatment schedule for every stand so that the management programme satisfies the owner's goals as well as possible. This arrangement will be maintained in the future system. The parties' requirements to add multi-criteria problem solving, group decision support methods, and heuristic and spatial optimisation to the system make the programming work more challenging. In general, the new system is expected to be adjustable and transparent; strict documentation and open source code help to meet these expectations. Growth models and treatment schedules that vary in source information, accuracy, methods and processing speed are expected to work together easily in the system. Possibilities to calibrate the models regionally and to set local, time-varying parameters are also required. In the future the forest planning system will be integrated into comprehensive data management systems together with geographic, economic and work supervision information. This requires a modular implementation of the system and the use of a simple data transmission interface between the modules and with other systems. No major differences in the parties' views of the system's requirements were noticed in this study; rather, the interviews completed the overall picture from slightly different angles. Within the organisations, forest planning is considered quite inflexible and it only draws the strategic lines; it does not yet have a role in operative activity, although the need for and benefits of team-level forest planning are acknowledged. The demands and opportunities of variable forest data, new planning goals and the development of information technology are recognised, and the party organisations want to keep pace with this development. One example is their engagement in the extensive SIMO project, which connects the whole field of forest planning in Finland.
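The simulate-then-optimise arrangement described above can be illustrated with a minimal sketch. The stand data, growth model and utility function below are invented placeholders, not part of SIMO; the sketch only shows the two-stage structure in which each stand's alternative treatment schedules are simulated first and one schedule per stand is then selected to best satisfy the owner's goals.

```python
# Minimal sketch of the two-stage simulate-then-optimise arrangement.
# All names, models and numbers below are illustrative placeholders.

def simulate(stand, schedule, horizon=30):
    """Project a stand forward under one treatment schedule and
    return predicted outcomes (here: net present value and end volume)."""
    volume, npv = stand["volume"], 0.0
    for year in range(horizon):
        volume *= 1.0 + stand["growth_rate"]          # toy growth model
        if year in schedule:                          # thinning/harvest year
            harvested = 0.3 * volume
            volume -= harvested
            npv += harvested * stand["price"] / (1.03 ** year)
    return {"npv": npv, "end_volume": volume}

def owner_utility(outcome):
    """Toy owner objective: mostly income, with some weight on standing volume."""
    return outcome["npv"] + 0.2 * outcome["end_volume"]

def plan(stands, alternative_schedules):
    """Stage 1: simulate every schedule for every stand.
    Stage 2: pick, for each stand, the schedule with the best utility."""
    selected = {}
    for stand in stands:
        outcomes = {tuple(s): simulate(stand, s) for s in alternative_schedules}
        best = max(outcomes, key=lambda s: owner_utility(outcomes[s]))
        selected[stand["id"]] = best
    return selected

stands = [{"id": 1, "volume": 120.0, "growth_rate": 0.04, "price": 55.0},
          {"id": 2, "volume": 80.0, "growth_rate": 0.05, "price": 55.0}]
print(plan(stands, alternative_schedules=[[], [10], [5, 20]]))
```

A real planning system would replace the per-stand maximisation with a forest-level optimisation (for example linear programming or the heuristic and spatial methods mentioned above) so that goals spanning several stands can be handled.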
Abstract:
The aim of this thesis is to develop a fully automatic lameness detection system that operates in a milking robot. The instrumentation, measurement software, algorithms for data analysis and a neural network model for lameness detection were developed. Automatic milking has become a common practice in dairy husbandry, and in the year 2006 about 4000 farms worldwide used over 6000 milking robots. There is a worldwide movement with the objective of fully automating every process from feeding to milking. The increase in automation is a consequence of increasing farm sizes, the demand for more efficient production and the growth of labour costs. As the level of automation increases, the time that the cattle keeper uses for monitoring animals often decreases. This has created a need for systems that automatically monitor the health of farm animals. The popularity of milking robots also offers a new and unique possibility to monitor animals in a single confined space up to four times daily. Lameness is a crucial welfare issue in the modern dairy industry. Limb disorders cause serious welfare, health and economic problems, especially in loose housing of cattle. Lameness causes losses in milk production and leads to early culling of animals. These costs could be reduced with early identification and treatment. At present, only a few methods for automatically detecting lameness have been developed, and the most common methods used for lameness detection and assessment are various visual locomotion scoring systems. The problem with locomotion scoring is that it needs experience to be conducted properly, it is labour-intensive as an on-farm method and the results are subjective. A four-balance system for measuring the leg load distribution of dairy cows during milking in order to detect lameness was developed and set up at the University of Helsinki research farm Suitia. The leg weights of 73 cows were successfully recorded during almost 10,000 robotic milkings over a period of 5 months. The cows were locomotion scored weekly, and the lame cows were inspected clinically for hoof lesions. Unsuccessful measurements, caused by cows standing outside the balances, were removed from the data with a special algorithm, and the mean leg loads and the number of kicks during milking were calculated. In order to develop an expert system to automatically detect lameness cases, a model was needed. A probabilistic neural network (PNN) classifier model was chosen for the task. The data was divided into two parts and 5,074 measurements from 37 cows were used to train the model. The model was evaluated for its ability to detect lameness in the validation dataset, which had 4,868 measurements from 36 cows. The model was able to classify 96% of the measurements correctly as sound or lame cows, and 100% of the lameness cases in the validation data were identified. The number of measurements causing false alarms was 1.1%. The developed model has the potential to be used for on-farm decision support and can be used in a real-time lameness monitoring system.
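As context for the classifier mentioned above, a probabilistic neural network is essentially a Gaussian kernel density estimate per class combined with a Bayes-style decision rule. The sketch below is a generic PNN, not the thesis's trained model; the features, smoothing parameter and toy data are placeholders.

```python
import numpy as np

# Generic probabilistic neural network (PNN) classifier sketch.
# Each class's likelihood is a sum of Gaussian kernels centred on its
# training samples; the predicted class maximises that likelihood.

def pnn_predict(X_train, y_train, x, sigma=1.0):
    scores = {}
    for label in np.unique(y_train):
        samples = X_train[y_train == label]
        sq_dist = np.sum((samples - x) ** 2, axis=1)
        kernels = np.exp(-sq_dist / (2.0 * sigma ** 2))
        scores[label] = kernels.mean()          # class-conditional density estimate
    return max(scores, key=scores.get)

# Toy example: features could be e.g. normalised leg-load shares and a kick count.
X_train = np.array([[0.25, 0.25, 0.25, 0.25, 0],    # even load, no kicks -> sound
                    [0.40, 0.30, 0.20, 0.10, 3],    # uneven load, kicks  -> lame
                    [0.26, 0.24, 0.26, 0.24, 1],
                    [0.45, 0.28, 0.17, 0.10, 4]])
y_train = np.array(["sound", "lame", "sound", "lame"])
print(pnn_predict(X_train, y_train, np.array([0.42, 0.30, 0.18, 0.10, 2]), sigma=0.5))
```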
Abstract:
A new two-stage state feedback control design approach has been developed to monitor the voltage supplied to magnetorheological (MR) dampers for semi-active vibration control of the benchmark highway bridge. The first stage contains a primary controller, which provides the force required to obtain a desired closed-loop response of the system. In the second stage, an optimal dynamic inversion (ODI) approach has been developed to obtain the amount of voltage to be supplied to each of the MR dampers so that it provides the force prescribed by the primary controller. ODI is formulated by combining optimization with dynamic inversion, such that an optimal voltage is supplied to each damper in a set. The proposed control design has been simulated for both the phase-I and phase-II studies of the recently developed benchmark highway bridge problem. The efficiency of the proposed controller is analyzed in terms of the performance indices defined in the benchmark problem definition. Simulation results demonstrate that the proposed approach generally reduces peak response quantities below those obtained from the sample semi-active controller, although some response quantities are seen to increase. Overall, the proposed control approach is quite competitive with the sample semi-active control approach.
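To convey the second-stage idea of inverting a damper model to find a command voltage, the sketch below uses a deliberately simplified, voltage-proportional damping model with a clipping step. It is not the ODI formulation or the MR damper model of the paper, and all parameter values are invented.

```python
import numpy as np

# Simplified illustration of mapping a desired control force to a damper voltage.
# Assumed model: F_damper = -(c_min + (c_max - c_min) * v / V_MAX) * velocity,
# i.e. damping grows linearly with applied voltage (NOT the paper's MR model).

C_MIN, C_MAX, V_MAX = 1.0e4, 1.0e5, 10.0   # illustrative values only

def command_voltage(desired_force, velocity):
    if abs(velocity) < 1e-9:
        return 0.0                          # a damper cannot produce force without motion
    c_needed = -desired_force / velocity    # damping coefficient that would give F_desired
    if c_needed <= C_MIN:
        return 0.0                          # even the passive (zero-voltage) damper is too strong
    v = V_MAX * (c_needed - C_MIN) / (C_MAX - C_MIN)
    return float(np.clip(v, 0.0, V_MAX))    # saturate to the admissible voltage range

# The primary controller (e.g. state feedback) would supply desired_force;
# here we evaluate a single sample point.
print(command_voltage(desired_force=-2.0e4, velocity=0.5))
```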
Abstract:
Video surveillance infrastructure has been widely installed in public places for security purposes. However, live video feeds are typically monitored by human staff, which makes it difficult to detect important events as they occur. As such, an expert system that can automatically detect events of interest in surveillance footage is highly desirable. Although a number of approaches have been proposed, they have significant limitations: supervised approaches, which can detect a specific event, ideally require a large number of samples with the event spatially and temporally localised, while unsupervised approaches, which do not require this demanding annotation, can only detect whether an event is abnormal and not specific event types. To overcome these problems, we formulate a weakly supervised approach using Kullback-Leibler (KL) divergence to detect rare events. The proposed approach leverages the sparse nature of the target events to its advantage, and we show that this data imbalance guarantees the existence of a decision boundary separating samples that contain the target event from those that do not. This trait, combined with the coarse annotation used by weakly supervised learning (which only indicates approximately when an event occurs), greatly reduces the annotation burden while retaining the ability to detect specific events. Furthermore, the proposed classifier requires only a decision threshold, simplifying its use compared to other weakly supervised approaches. We show that the proposed approach outperforms state-of-the-art methods on a popular real-world traffic surveillance dataset, while preserving real-time performance.
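The core quantity in the approach above is the Kullback-Leibler divergence between feature distributions. The sketch below only illustrates KL-based scoring of a clip against a reference ("normal") distribution with a decision threshold; it does not reproduce the paper's weakly supervised training procedure, and the histograms and threshold are placeholders.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-10):
    """KL(p || q) for two discrete distributions (e.g. histograms of
    motion/appearance features pooled over a video clip)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Reference distribution estimated from clips without the target event,
# and a decision threshold tuned on coarsely annotated training clips.
reference = np.array([40, 35, 15, 8, 2])
threshold = 0.2                                 # illustrative value

def contains_rare_event(clip_histogram):
    return kl_divergence(clip_histogram, reference) > threshold

print(contains_rare_event([38, 36, 16, 8, 2]))   # close to normal  -> False
print(contains_rare_event([5, 10, 15, 30, 40]))  # very different   -> True
```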
Abstract:
In logic, a model is an abstraction covering many kinds of mathematical objects; for example, graphs, groups and metric spaces are models. Finite model theory is the branch of logic that studies the expressive power of logics, that is, of formal languages, on models whose number of elements is finite. Restricting attention to finite models makes the results applicable in theoretical computer science, from whose viewpoint logical formulas can be thought of as programs and finite models as their inputs. Locality means the inability of a logic to distinguish between models whose local features correspond to each other. The dissertation examines several forms of locality and how they are preserved when logics are combined. Using the tools developed, it is shown that between the variants known as Gaifman locality and Hanf locality there lies a hierarchy of locality notions whose levels can be separated from one another in grids of increasing dimension. On the other hand, it is shown that the locality notions do not differ from one another when attention is restricted to finite trees. Order-invariant logics are languages that have access to a built-in order relation, but the relation must be used in such a way that what the formulas express does not depend on the order chosen. The definition can be motivated from the viewpoint of computing: even if the order of the data in a program's input is irrelevant to the expected result, the input is always stored in the computer's memory in some order, which the program may exploit in its computation. The dissertation studies which forms of locality order-invariant extensions of first-order predicate logic with unary quantifiers can satisfy. The results are applied by examining when a built-in order increases the expressive power of a logic on finite trees.
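For readers unfamiliar with the two locality notions named above, the following is a standard textbook-style statement of Gaifman and Hanf locality, given here only for context; the exact formulations used in the dissertation may differ in detail.

```latex
% N_r^{\mathfrak{A}}(\bar a) denotes the substructure induced by all elements
% within Gaifman-graph distance r of the tuple \bar a.

% Gaifman locality: a k-ary query Q is Gaifman-local if there is r >= 0 such that,
% for all finite structures \mathfrak{A} and tuples \bar a, \bar b \in A^k,
\[
  N_r^{\mathfrak{A}}(\bar a) \cong N_r^{\mathfrak{A}}(\bar b)
  \;\Longrightarrow\;
  \bigl(\bar a \in Q(\mathfrak{A}) \iff \bar b \in Q(\mathfrak{A})\bigr).
\]

% Hanf locality: a Boolean query Q is Hanf-local if there is r >= 0 such that,
% whenever a bijection f : A \to B satisfies
% N_r^{\mathfrak{A}}(a) \cong N_r^{\mathfrak{B}}(f(a)) for every a \in A, then
\[
  Q(\mathfrak{A}) = Q(\mathfrak{B}).
\]
```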
Abstract:
Network data packet capture and replay capabilities are basic requirements for forensic analysis of faults and security-related anomalies, as well as for testing and development. Cyber-physical networks, in which data packets are used to monitor and control physical devices, must operate within strict timing constraints in order to match the hardware devices' characteristics. Standard network monitoring tools are unsuitable for such systems because they cannot guarantee to capture all data packets, may introduce their own traffic into the network, and cannot reliably reproduce the original timing of data packets. Here we present a high-speed network forensics tool specifically designed for capturing and replaying data traffic in Supervisory Control and Data Acquisition (SCADA) systems. Unlike general-purpose "packet capture" tools, it does not affect the observed network's data traffic and it guarantees that the original packet ordering is preserved. Most importantly, it allows replay of network traffic precisely matching its original timing. The tool was implemented by developing novel user interface and back-end software for a special-purpose network interface card. Experimental results show a clear improvement in data capture and replay capabilities over standard network monitoring methods and general-purpose forensics solutions.
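For contrast with the hardware-assisted tool described above, a purely software-based replay of captured traffic can be sketched as below (using the third-party scapy library, which is an assumption of this sketch, not the tool in the abstract). Such a sketch preserves inter-packet gaps only to the precision of the operating system's scheduler, which is exactly the limitation that motivates a special-purpose capture/replay card.

```python
import time
from scapy.all import rdpcap, sendp   # third-party library; pip install scapy

def replay(pcap_path, iface):
    """Best-effort replay of a capture file, preserving packet order and
    approximating the original inter-packet timing with sleep()."""
    packets = rdpcap(pcap_path)
    if not packets:
        return
    start_capture = float(packets[0].time)
    start_replay = time.monotonic()
    for pkt in packets:
        target = float(pkt.time) - start_capture          # offset within the capture
        delay = target - (time.monotonic() - start_replay)
        if delay > 0:
            time.sleep(delay)                             # millisecond-level accuracy at best
        sendp(pkt, iface=iface, verbose=False)

# Example (requires root privileges and a suitable interface name):
# replay("scada_trace.pcap", iface="eth0")
```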
Abstract:
Whereas it has been widely assumed by the public that the Soviet music policy system had a “top-down” structure of control and command that directly affected musical creativity, my research shows that in fact the relations between the different levels of the music policy system were vague, and the viewpoints of its representatives differed from each other. Because the representatives of the party and government organs controlling operas could not define which kind of music represented Socialist Realism, the system as it developed during the 1930s and 1940s did not function effectively enough to create such centralised control of Soviet music; still less could Soviet operas fulfil the highly ambiguous aesthetics of Socialist Realism. I show that musical discussions developed into ritualistic bureaucratic arenas, where it became more important to reveal the heretical composers, make scapegoats of them and require them to perform self-criticism than to give directions on how to reach the artistic goals of Socialist Realism. When one opera was found to be unacceptable, this led to a strengthening of control by the party leadership, which in turn led to more operas, one after another, being revealed as failures. I have studied the control of the composition, staging and reception of the opera case studies, which remain obscure in the West despite a growing scholarly interest in them, and have created a detailed picture of the foundation and development of the Soviet music control system in 1932-1950. My detailed discussion of such case studies as Ivan Dzerzhinskii’s The Quiet Don, Dmitrii Shostakovich’s Lady Macbeth of the Mtsensk District, Vano Muradeli’s The Great Friendship, Sergei Prokofiev’s Story of a Real Man, Tikhon Khrennikov’s Frol Skobeev and Evgenii Zhukovskii’s From All One’s Heart backs with documentary precision the historically revisionist model of the development of Soviet music. In February 1948, composers belonging to the elite of the Union of Soviet Composers, e.g. Dmitri Shostakovich and Sergei Prokofiev, were accused in a Central Committee Resolution of formalism, that is, of being under the influence of Western modernism. The accusations of formalism were connected to criticism of the considerable financial, material and social privileges these composers enjoyed in the leadership of the Union. With my new archival findings I give a more detailed picture of the financial background of the 1948 campaign. The independent position of the music funding organisation of the Union of Soviet Composers (Muzfond) in deciding on its finances was an exceptional phenomenon in the Soviet Union and contradicted the strivings to strengthen the control of Soviet music. The financial audits of the Union of Soviet Composers did not, however, change the elite status of some of its composers, except perhaps for a short period in some cases. At the same time the independence of the significant financial authorities of Soviet theatres was restricted. The cuts in the governmental funding allocated to Soviet theatres contradicted the intensified ideological demands for Soviet operas.
Abstract:
In this paper, the pattern classification problem in tool wear monitoring is solved using nature-inspired techniques, namely Genetic Programming (GP) and Ant-Miner (AM). The main advantage of GP and AM is their ability to learn the underlying data relationships and express them in the form of a mathematical equation or simple rules. The knowledge extracted from the training data set by GP and AM takes the form of a Genetic Programming Classifier Expression (GPCE) and of rules, respectively. The GPCE and the AM-extracted rules are then applied to the data in the testing/validation set to obtain the classification accuracy. A major attraction of GP-evolved GPCEs and AM-based classification is the possibility of obtaining expert-system-like rules that can subsequently be applied directly by the user in his or her application. The performance of the data classification using GP and AM is as good as the classification accuracy obtained in the earlier study.
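To make the idea of a classifier expression concrete, the sketch below applies a hand-written, GPCE-style arithmetic expression to feature vectors and thresholds it at zero. The expression, feature names and data are invented stand-ins for whatever GP would actually evolve from tool wear signals.

```python
# A GP-evolved classifier is simply an arithmetic expression over the input
# features whose sign (or comparison against a threshold) gives the class.
# The expression below is a hand-written stand-in, not an actual evolved GPCE.

def gpce(force, vibration, acoustic_emission):
    """Returns a positive value for a worn tool, negative for a sharp tool."""
    return 0.8 * force + 1.5 * vibration ** 2 - 0.3 * acoustic_emission - 2.0

def classify(sample):
    return "worn" if gpce(*sample) > 0.0 else "sharp"

validation_set = [(1.2, 0.4, 2.5), (2.8, 1.1, 1.0), (0.9, 0.3, 3.0)]
print([classify(s) for s in validation_set])
```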
Abstract:
Dissolved Gas Analysis (DGA), a non-destructive test procedure, has long been in use for assessing the condition of power and related transformers in service. DGA has been seen to reveal, to a reasonable degree of accuracy, an early indication of internal faults that may exist in transformers. The data acquisition and subsequent analysis need an expert in the area concerned to assess the condition of the equipment accurately. Since the presence of such an expert is not always guaranteed, it is incumbent on the power utilities to commission a well-planned and reliable artificial expert system to replace, at least in part, a human expert. This paper presents the application of an Ordered Ant Miner (OAM) classifier for the prediction of the fault involved. Secondly, the paper also attempts to estimate the remaining life of the power transformer as an extension of the elapsed-life estimation method suggested in the literature.
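As background, rule-based DGA screening is commonly expressed in terms of ratios of dissolved gas concentrations, in the spirit of the IEC/Rogers ratio methods. The sketch below shows that style of rule with deliberately simplified, illustrative thresholds; it is not the Ordered Ant Miner classifier of the paper.

```python
# Toy ratio-based DGA screening rules (illustrative thresholds only; real
# schemes such as IEC 60599 or the Rogers ratios use calibrated ranges).

def dga_screen(h2, ch4, c2h6, c2h4, c2h2):
    """Gas concentrations in ppm; returns a coarse fault indication."""
    eps = 1e-9
    r_arc = c2h2 / (c2h4 + eps)      # acetylene/ethylene: high -> arcing
    r_pd = ch4 / (h2 + eps)          # methane/hydrogen: very low -> partial discharge
    r_thermal = c2h4 / (c2h6 + eps)  # ethylene/ethane: high -> thermal fault

    if r_arc > 1.0:
        return "arcing suspected"
    if r_pd < 0.1:
        return "partial discharge suspected"
    if r_thermal > 3.0:
        return "thermal fault suspected"
    return "normal ageing"

print(dga_screen(h2=100, ch4=120, c2h6=60, c2h4=50, c2h2=2))
```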
Abstract:
We develop several hardware and software simulation blocks for the TinyOS-2 simulator (TOSSIM-T2). The simulated hardware platform chosen is the popular MICA2 mote. The hardware simulation elements comprise the radio and the external flash memory, while the software blocks include an environment noise model, a packet delivery model and an energy estimator block for the complete system. The hardware radio block uses the software environment noise model to sample the noise floor. The packet delivery model is built by establishing the SNR-PRR curve for the MICA2 system. The energy estimator block models the energy consumption of the microcontroller unit (MCU), radio, LEDs and external flash memory. Using the manufacturer's data sheets we provide an estimate of the energy consumed by the hardware during transmission and reception, and we also track several of the MCU's states with their associated energy consumption. To study the effectiveness of this work, we take as a case study a paper presented in [1]. We obtain three sets of results for energy consumption: through mathematical analysis, through simulation using the blocks built into PowerTossim-T2, and finally through laboratory measurements. Since there is a significant match between these result sets, we propose our blocks for the T2 community to effectively test their applications' energy requirements and node lifetimes.
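State-based energy estimation of the kind described above boils down to accumulating E = V · I · t per hardware state, with the currents taken from the component data sheets. The sketch below shows that bookkeeping with invented placeholder currents; they are not the MICA2 data sheet values used in the paper.

```python
# State-based energy bookkeeping: E = V * I * t accumulated per hardware state.
# Current draws below are illustrative placeholders, not MICA2 data sheet values.

SUPPLY_V = 3.0                      # volts
CURRENT_MA = {                      # milliamps per state (placeholders)
    "mcu_active": 8.0,
    "mcu_sleep": 0.02,
    "radio_rx": 10.0,
    "radio_tx": 25.0,
    "led_on": 2.2,
    "flash_write": 15.0,
}

class EnergyEstimator:
    def __init__(self):
        self.energy_mj = 0.0        # accumulated energy in millijoules

    def log(self, state, duration_s):
        """Record 'duration_s' seconds spent in 'state'."""
        self.energy_mj += SUPPLY_V * CURRENT_MA[state] * duration_s

est = EnergyEstimator()
est.log("mcu_active", 0.010)        # 10 ms of processing
est.log("radio_tx", 0.004)          # one packet transmission
est.log("radio_rx", 0.020)          # listening window
est.log("mcu_sleep", 0.966)         # rest of a 1 s duty cycle
print(f"energy per cycle: {est.energy_mj:.3f} mJ")
```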
Abstract:
This paper considers the dynamic modelling and motion control of a Surface Effect Ship (SES) for safer transfer of personnel and equipment between a vessel and an offshore wind turbine. The control system designed is referred to as the Boarding Control System (BCS). The performance of this system is investigated for a specific wind-farm service vessel, the Wave Craft. On an SES, the pressurized air cushion supports the majority of the weight of the vessel. The control problem considered relates to the actuation of the cushion pressure such that wave-induced vessel motions are minimized. Results are given through simulation and through model- and full-scale experimental testing.
Abstract:
Control centers (CC) play a very important role in power system operation. An overall view of the system, with information about all existing resources and needs, is implemented through SCADA (supervisory control and data acquisition) and an EMS (energy management system). As advanced technologies have made their way into the utility environment, operators are flooded with a huge amount of data. The last decade has seen extensive application of AI techniques, knowledge-based systems and artificial neural networks in this area. This paper focuses on the need to develop an intelligent decision support system to assist the operator in making proper decisions. The requirements for the realization of such a system are identified for the effective operation and energy management of the southern grid in India. The application of Petri nets leading to a decision support system is illustrated using a 24-bus system that is part of the southern grid.
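Since the paper builds its decision support around Petri nets, the minimal sketch below shows the basic token-marking and firing mechanics of a place/transition net. The places, transitions and the tiny "breaker trip triggers a load-shedding recommendation" scenario are invented for illustration and are not the 24-bus model of the paper.

```python
# Minimal place/transition Petri net: a transition is enabled when every
# input place holds at least one token; firing moves tokens to output places.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)               # place -> token count
        self.transitions = {}                      # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Toy operator-support scenario (invented): a breaker trip plus an overload
# alarm enables a "recommend load shedding" decision.
net = PetriNet({"breaker_tripped": 1, "overload_alarm": 1})
net.add_transition("recommend_load_shedding",
                   inputs=["breaker_tripped", "overload_alarm"],
                   outputs=["advice_issued"])
if net.enabled("recommend_load_shedding"):
    net.fire("recommend_load_shedding")
print(net.marking)
```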
Abstract:
In this paper we show the applicability of Ant Colony Optimisation (ACO) techniques to the pattern classification problem that arises in tool wear monitoring. In an earlier study, artificial neural networks and genetic programming were successfully applied to the tool wear monitoring problem. ACO is a recent addition to evolutionary computation techniques that has gained attention for its ability to extract the underlying data relationships and express them in the form of simple rules. Rules are extracted for data classification using a training set of data points. These rules are then applied to the data in the testing/validation set to obtain the classification accuracy. A major attraction of ACO-based classification is the possibility of obtaining expert-system-like rules that can subsequently be applied directly by the user in his or her application. The classification accuracy obtained with the ACO-based approach is as good as that obtained with other biologically inspired techniques.
Abstract:
The Intelligent Decision Support System (IDSS), also called an expert system, is explained and then applied to choose the right composition and firing temperature of a ZnO-based varistor. 17 refs.
Abstract:
Production scheduling in a flexible manufacturing system (FMS) is a real-time combinatorial optimization problem that has been proved to be NP-complete. Solving this problem needs on-line monitoring of plan execution and requires real-time decision-making in selecting alternative routings, assigning the required resources, and rescheduling when failures occur in the system. Expert systems provide a natural framework for solving this kind of NP-complete problem. In this paper an expert system with a novel parallel heuristic approach is implemented for automatic short-term dynamic scheduling of an FMS. The principal features of the expert system presented in this paper include easy rescheduling, on-line plan execution, load balancing, an on-line garbage collection process, and the use of advanced knowledge representation schemes. Its effectiveness is demonstrated with two examples.
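A rule-based rescheduler of the kind described above can be sketched as a set of condition-action rules evaluated against the current shop state. The rules, machine names and jobs below are invented for illustration and do not reproduce the paper's parallel heuristic approach.

```python
# Minimal rule-based dynamic (re)scheduling sketch: when a machine fails,
# condition-action rules reassign its queued jobs to the least-loaded
# alternative routing. Machines, jobs and rules are illustrative only.

state = {
    "machines": {"M1": {"up": False, "queue": ["J3"]},
                 "M2": {"up": True, "queue": ["J1"]},
                 "M3": {"up": True, "queue": []}},
    "alternative_routings": {"J3": ["M2", "M3"]},
}

def machine_failed(state):
    return any(not m["up"] and m["queue"] for m in state["machines"].values())

def reroute_jobs(state):
    """Action: move jobs off failed machines to the least-loaded alternative."""
    for name, m in state["machines"].items():
        if m["up"] or not m["queue"]:
            continue
        for job in list(m["queue"]):
            options = [alt for alt in state["alternative_routings"].get(job, [])
                       if state["machines"][alt]["up"]]
            if options:
                target = min(options, key=lambda a: len(state["machines"][a]["queue"]))
                m["queue"].remove(job)
                state["machines"][target]["queue"].append(job)

RULES = [(machine_failed, reroute_jobs)]      # condition -> action pairs

def scheduler_step(state):
    for condition, action in RULES:
        if condition(state):
            action(state)

scheduler_step(state)
print(state["machines"])
```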