881 results for Automatic Generation Control
Abstract:
Over the past decade, several experienced Operational Researchers have advanced the view that the theoretical aspects of model building have raced ahead of the ability of people to use them. Consequently, the impact of Operational Research on commercial organisations and the public sector is limited, and many systems fail to achieve their anticipated benefits in full. The primary objective of this study is to examine a complex interactive Stock Control system, and identify the reasons for the differences between the theoretical expectations and the operational performance. The methodology used is to hypothesise all the possible factors which could cause a divergence between theory and practice, and to evaluate numerically the effect each of these factors has on two main control indices - Service Level and Average Stock Value. Both analytical and empirical methods are used, and simulation is employed extensively. The factors are divided into two main categories for analysis - theoretical imperfections in the model, and the usage of the system by Buyers. No evidence could be found in the literature of any previous attempts to place the differences between theory and practice in a system in quantitative perspective nor, more specifically, to study the effects of Buyer/computer interaction in a Stock Control system. The study reveals that, in general, the human factors influencing performance are of a much higher order of magnitude than the theoretical factors, thus providing objective evidence to support the original premise. The most important finding is that, by judicious intervention into an automatic stock control algorithm, it is possible for Buyers to produce results which not only attain but surpass the algorithmic predictions. However, the complexity and behavioural recalcitrance of these systems are such that an innately numerate, enquiring type of Buyer needs to be inducted to realise the performance potential of the overall man/computer system.
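The two control indices the study measures can be made concrete with a toy simulation. The sketch below is purely illustrative (the reorder point, order quantity, demand model and unit cost are all invented, not taken from the thesis); it runs a simple reorder-point policy and reports a Service Level and an Average Stock Value:

```python
import random

def simulate_stock(demand_mean=10, reorder_point=40, order_qty=80,
                   lead_time=3, unit_cost=2.0, days=365, seed=1):
    """Toy reorder-point stock control simulation reporting the two
    indices used in the study: Service Level (fraction of demand met
    from stock) and Average Stock Value."""
    rng = random.Random(seed)
    stock = order_qty
    pipeline = []                      # (arrival_day, quantity) pairs
    served = demanded = 0
    stock_value_sum = 0.0
    for day in range(days):
        # receive any replenishment due today
        stock += sum(q for d, q in pipeline if d == day)
        pipeline = [(d, q) for d, q in pipeline if d > day]
        demand = rng.randint(0, 2 * demand_mean)   # crude uniform demand
        served += min(demand, stock)
        demanded += demand
        stock = max(stock - demand, 0)
        # reorder when stock position falls to the reorder point
        on_order = sum(q for _, q in pipeline)
        if stock + on_order <= reorder_point:
            pipeline.append((day + lead_time, order_qty))
        stock_value_sum += stock * unit_cost
    return served / demanded, stock_value_sum / days
```

In the study's terms, a Buyer's intervention amounts to overriding parameters such as the reorder point or order quantity by hand and observing how the two indices move.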
Abstract:
Modelling architectural information is particularly important because of the acknowledged crucial role of software architecture in raising the level of abstraction during development. In the MDE area, the level of abstraction of models has frequently been related to low-level design concepts. However, model-driven techniques can be further exploited to model software artefacts that take into account the architecture of the system and its changes according to variations of the environment. In this paper, we propose model-driven techniques and dynamic variability as concepts useful for modelling the dynamic fluctuation of the environment and its impact on the architecture. Using the mappings from the models to implementation, generative techniques allow the (semi-)automatic generation of artefacts, making the process more efficient and promoting software reuse. The automatic generation of configurations and reconfigurations from models provides the basis for safer execution. The architectural perspective offered by the models shifts focus away from implementation details to a view of the whole system and its runtime changes, promoting high-level analysis. © 2009 Springer Berlin Heidelberg.
Abstract:
The article presents a new method for the automatic generation of help in software. Help generation is realised within the framework of a tool for the development and automatic generation of user interfaces based on ontologies. The principal features of the approach are support for context-sensitive help, automatic generation of help using a task project, and an expandable help-generation system.
Abstract:
Bio energy is a renewable energy and a solution to depleting fossil fuels. Bio energy such as heat, power and bio fuel is generated by conversion technologies using biomass, for example domestic waste, root crops, forest residue and animal slurry. Pyrolysis, anaerobic digestion and the combined heat and power engine are some examples of these technologies. Depending on its nature, a biomass can be treated with various technologies yielding products which can be further treated with other technologies and eventually converted into the final products as bio energy. The path followed by the biomass, technologies, intermediate products and bio energy in the conversion process is referred to as a bio energy pathway. Identification of appropriate pathways optimises the conversion process. Although there are various approaches to creating or generating the pathways, there is still a need for a semantic approach to generating them, one which allows the consistency of the knowledge to be checked and the knowledge to be shared and extended efficiently. This paper presents an ontology-based approach to the automatic generation of pathways for biomass-to-bio-energy conversion, which exploits the definitions and hierarchical structure of the biomass and technologies, their relationships and associated properties, and infers appropriate pathways. A case study has been carried out in a real-life scenario, the bio energy project for the North West of Europe (Bioen NW), which showed promising results.
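The pathway-generation idea can be sketched as a search over a conversion knowledge base. The technologies, feedstocks and products below are illustrative stand-ins (the real system infers pathways from an ontology rather than a hard-coded dictionary):

```python
# Illustrative conversion knowledge base: technology -> (inputs, outputs).
CONVERSIONS = {
    "anaerobic_digestion": ({"animal_slurry"}, {"biogas", "digestate"}),
    "pyrolysis": ({"forest_residue"}, {"bio_oil", "char"}),
    "chp_engine": ({"biogas"}, {"heat", "power"}),
    "upgrading": ({"bio_oil"}, {"bio_fuel"}),
}
FINAL_PRODUCTS = {"heat", "power", "bio_fuel"}

def pathways(feedstock, path=()):
    """Enumerate chains of (technology, product) pairs leading from a
    feedstock to a final bio energy product."""
    found = []
    for tech, (inputs, outputs) in CONVERSIONS.items():
        if feedstock in inputs and tech not in (t for t, _ in path):
            for product in outputs:
                chain = path + ((tech, product),)
                if product in FINAL_PRODUCTS:
                    found.append(chain)
                else:
                    # intermediate product: keep chaining technologies
                    found.extend(pathways(product, chain))
    return found
```

For example, `pathways("animal_slurry")` chains anaerobic digestion into the CHP engine, yielding pathways that end in heat and in power.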
Abstract:
Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May, 2016
Abstract:
This thesis addressed the problem of risk analysis in mental healthcare, with respect to the GRiST project at Aston University. That project provides a risk-screening tool based on the knowledge of 46 experts, captured as mind maps that describe relationships between risks and patterns of behavioural cues. Mind mapping, though, fails to impose control over content, and is not considered to formally represent knowledge. In contrast, this thesis treated GRiST's mind maps as a rich knowledge base in need of refinement; that process drew on existing techniques for designing databases and knowledge bases. Identifying well-defined mind map concepts, though, was hindered by spelling mistakes, and by ambiguity and lack of coverage in the tools used for researching words. A novel use of the Edit Distance overcame those problems, by assessing similarities between mind map texts, and between spelling mistakes and suggested corrections. That algorithm further identified stems, the shortest text string found in related word-forms. As opposed to existing approaches' reliance on built-in linguistic knowledge, this thesis devised a novel, more flexible text-based technique. An additional tool, Correspondence Analysis, found patterns in word usage that allowed machines to determine likely intended meanings for ambiguous words. Correspondence Analysis further produced clusters of related concepts, which in turn drove the automatic generation of novel mind maps. Such maps underpinned adjuncts to the mind mapping software used by GRiST; one such new facility generated novel mind maps to reflect the collected expert knowledge on any specified concept. Mind maps from GRiST are stored as XML, which suggested storing them in an XML database. In fact, the entire approach here is "XML-centric", in that all stages rely on XML as far as possible. An XML-based query language allows users to retrieve information from the mind map knowledge base.
The approach, it was concluded, will prove valuable to mind mapping in general, and to detecting patterns in any type of digital information.
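The Edit Distance technique described above is, at its core, the classic Levenshtein dynamic programme. A minimal sketch, with a hypothetical `best_correction` helper (not the thesis's actual code) for matching a misspelling against candidate corrections:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via a single-row dynamic programme."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                         # deletion
                        dp[j - 1] + 1,                     # insertion
                        prev + (a[i - 1] != b[j - 1]))     # substitution
            prev = cur
    return dp[n]

def best_correction(word: str, vocabulary: list) -> str:
    """Pick the vocabulary entry closest to a (possibly misspelt) word."""
    return min(vocabulary, key=lambda v: edit_distance(word, v))
```

Comparing a misspelling such as "anxeity" against a vocabulary of well-defined concepts would then suggest "anxiety" as the nearest correction.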
Abstract:
Mammography equipment must be evaluated to ensure that images will be of acceptable diagnostic quality at the lowest radiation dose. Quality Assurance (QA) aims to provide systematic and constant improvement through a feedback mechanism addressing the technical, clinical and training aspects. Quality Control (QC), in relation to mammography equipment, comprises a series of tests to determine equipment performance characteristics. The introduction of digital technologies prompted changes in QC tests and protocols, and some tests are specific to each manufacturer. Within each country, specific QC tests should comply with regulatory requirements and guidance. Ideally, one mammography practitioner should take overarching responsibility for QC within a service, with all practitioners having responsibility for actual QC testing. All QC results must be documented to facilitate troubleshooting, internal audit and external assessment. Generally speaking, the practitioner's role includes performing, interpreting and recording the QC tests, as well as reporting any results outside action limits to their service lead. They must undertake additional continuing professional development to maintain their QC competencies. They are usually supported by technicians and medical physicists; in some countries the latter are mandatory. Technicians and/or medical physicists often perform many of the tests indicated within this chapter. It is important to recognise that this chapter is an attempt to encompass the main tests performed within European countries. You must familiarise yourself with, and adhere to, the specific tests related to the service within which you work.
Abstract:
Evolutionary robotics is a branch of artificial intelligence concerned with the automatic generation of autonomous robots. Usually the form of the robot is predefined and various computational techniques are used to control the machine's behaviour. One aspect is the spontaneous generation of walking in legged robots, and this can be used to investigate the mechanical requirements for efficient walking in bipeds. This paper demonstrates a bipedal simulator that spontaneously generates walking and running gaits. The model can be customised to represent a range of hominoid morphologies and used to predict performance parameters such as preferred speed and metabolic energy cost. Because it does not require any motion capture data, it is particularly suitable for investigating locomotion in fossil animals. The predictions for modern humans are highly accurate in terms of energy cost for a given speed, and thus the values predicted for other bipeds are likely to be good estimates. To illustrate this, the cost of transport is calculated for Australopithecus afarensis. The model allows the degree of maximum extension at the knee to be varied, causing the model to adopt walking gaits varying from chimpanzee-like to human-like. The energy costs associated with these gait choices can thus be calculated, and this information used to evaluate possible locomotor strategies in early hominids.
Abstract:
In recent years, digital controllers have become a very interesting alternative (lower cost and higher accuracy) to traditional analogue or hydrodynamic controllers in water-supply canal automation, in order to match water supply to water demand. This kind of hydraulic system needs particular research for control applications because these are large-scale, open systems characterised by long delays and great inertia. This paper presents several digital control modes tested on an experimental canal that will be used as a research platform in the automatic canal control domain. Canal operation and control-mode selection are supervised by a SCADA system developed and configured for this particular canal.
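The paper does not specify its digital control modes, but a typical building block for canal water-level regulation is a discrete PI controller with output clamping and anti-windup. The sketch below uses invented gains, sample time and gate limits:

```python
class DiscretePI:
    """Minimal discrete PI controller for a water level (illustrative;
    the gains, sample time and actuator limits are made up)."""

    def __init__(self, kp=0.8, ki=0.1, dt=60.0, u_min=0.0, u_max=1.0):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0

    def update(self, setpoint, measured):
        """One control step: returns a gate command in [u_min, u_max]."""
        error = setpoint - measured
        self.integral += error * self.dt
        u = self.kp * error + self.ki * self.integral
        if u > self.u_max:
            self.integral -= error * self.dt   # anti-windup: undo step
            u = self.u_max
        elif u < self.u_min:
            self.integral -= error * self.dt
            u = self.u_min
        return u
```

In a setup like the one described, such a loop would run inside the SCADA system, with the long transport delays of the canal motivating the slow sample time.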
Abstract:
Previous work by Professor John Frazer on Evolutionary Architecture provides a basis for the development of a system evolving architectural envelopes in a generic and abstract manner. Recent research by the authors has focused on the implementation of a virtual environment for the automatic generation and exploration of complex forms and architectural envelopes based on solid modelling techniques and the integration of evolutionary algorithms, enhanced computational and mathematical models. Abstract data types are introduced for genotypes in a genetic algorithm in order to develop complex models using generative and evolutionary computing techniques. Multi-objective optimisation techniques are employed for defining the fitness function in the evaluation process.
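As a rough illustration of the evolutionary machinery described (not the authors' actual implementation), the following bare-bones genetic algorithm uses bit-list genotypes and a weighted sum as a simple stand-in for a multi-objective fitness function:

```python
import random

def evolve(objectives, genome_len=8, pop_size=30, generations=40, seed=0):
    """Bare-bones genetic algorithm: bit-list genotypes, truncation
    selection, one-point crossover, single-bit mutation, and a
    weighted sum of objective functions as the fitness."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]

    def fitness(g):
        return sum(weight * f(g) for weight, f in objectives)

    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, genome_len)
            child = a[:cut] + b[cut:]            # one-point crossover
            child[rng.randrange(genome_len)] ^= 1   # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

In an architectural setting the genotype would encode envelope parameters and each objective would score a property of the generated solid model; here the objectives are toy functions of the bit string.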
Abstract:
Nowadays people heavily rely on the Internet for information and knowledge. Wikipedia is an online multilingual encyclopaedia that contains a very large number of detailed articles covering most written languages. It is often considered to be a treasury of human knowledge. It includes extensive hypertext links between documents of the same language for easy navigation. However, the pages in different languages are rarely cross-linked except for direct equivalent pages on the same subject in different languages. This could pose serious difficulties to users seeking information or knowledge from different lingual sources, or where there is no equivalent page in one language or another. In this thesis, a new information retrieval task—cross-lingual link discovery (CLLD) is proposed to tackle the problem of the lack of cross-lingual anchored links in a knowledge base such as Wikipedia. In contrast to traditional information retrieval tasks, cross language link discovery algorithms actively recommend a set of meaningful anchors in a source document and establish links to documents in an alternative language. In other words, cross-lingual link discovery is a way of automatically finding hypertext links between documents in different languages, which is particularly helpful for knowledge discovery in different language domains. This study is specifically focused on Chinese / English link discovery (C/ELD). Chinese / English link discovery is a special case of cross-lingual link discovery task. It involves tasks including natural language processing (NLP), cross-lingual information retrieval (CLIR) and cross-lingual link discovery. To justify the effectiveness of CLLD, a standard evaluation framework is also proposed. The evaluation framework includes topics, document collections, a gold standard dataset, evaluation metrics, and toolkits for run pooling, link assessment and system evaluation. 
With the evaluation framework, performance of CLLD approaches and systems can be quantified. This thesis contributes to the research on natural language processing and cross-lingual information retrieval in CLLD: 1) a new simple but effective Chinese segmentation method, n-gram mutual information, is presented for determining the boundaries of Chinese text; 2) a voting mechanism for named entity translation is demonstrated, achieving high precision in English / Chinese machine translation; 3) a link mining approach that mines the existing link structure for anchor probabilities achieves encouraging results in suggesting cross-lingual Chinese / English links in Wikipedia. This approach was examined in the experiments, carried out as part of the study, for better automatic generation of cross-lingual links. The overall major contribution of this thesis is the provision of a standard evaluation framework for cross-lingual link discovery research. It is important in CLLD evaluation to have this framework, which helps in benchmarking the performance of various CLLD systems and in identifying good CLLD realisation approaches. The evaluation methods and the evaluation framework described in this thesis have been utilised to quantify system performance in the NTCIR-9 Crosslink task, which is the first information retrieval track of this kind.
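The n-gram mutual information idea for segmentation can be sketched with bigrams: place a word boundary wherever the pointwise mutual information between adjacent characters drops below a threshold. The counts, the threshold and the Latin-letter example below are illustrative simplifications, not the thesis's actual formulation:

```python
import math
from collections import Counter

def train_counts(corpus):
    """Character unigram and bigram counts from a list of strings."""
    uni, bi = Counter(), Counter()
    for text in corpus:
        uni.update(text)
        bi.update(text[i:i + 2] for i in range(len(text) - 1))
    return uni, bi

def segment(text, uni, bi, threshold=0.0):
    """Insert a boundary wherever the adjacent-character pointwise
    mutual information falls below the threshold."""
    total_uni = sum(uni.values())
    total_bi = max(sum(bi.values()), 1)
    words, start = [], 0
    for i in range(len(text) - 1):
        a, b = text[i], text[i + 1]
        p_ab = bi[a + b] / total_bi
        if p_ab > 0:
            mi = math.log(p_ab / ((uni[a] / total_uni) * (uni[b] / total_uni)))
        else:
            mi = float("-inf")        # unseen bigram: certain boundary
        if mi < threshold:
            words.append(text[start:i + 1])
            start = i + 1
    words.append(text[start:])
    return words
```

Trained on a corpus where "ab" and "cd" are frequent units, the segmenter splits "abcd" into ["ab", "cd"]; the same mechanism applies directly to Chinese characters.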
Abstract:
Modulation and control of a cascade multilevel static synchronous compensator (STATCOM) configuration to improve the quality of voltage generated by wind power systems are presented. The proposed STATCOM configuration needs only four dc-link capacitors and 24 switches to synthesise nine-level operation. In addition, switching losses are further reduced by splitting the voltage source inverter of the STATCOM into two units called the 'bulk inverter' and the 'conditioning inverter'. The high-power bulk inverter is operated at low frequency whereas the low-power conditioning inverter is operated at high frequency to suppress harmonics produced by the bulk inverter. Fluctuations at the point of common coupling voltage, caused by sudden wind changes, are suppressed by controlling reactive power of the STATCOM. Simulation and experimental results are presented to verify the efficacy of the proposed modulation and control techniques used in the STATCOM.
Abstract:
A Z-source inverter based grid-interface for a variable-speed wind turbine connected to a permanent magnet synchronous generator is proposed. A control system is designed to harvest maximum wind energy under varied wind conditions with the use of the permanent magnet synchronous generator, diode-rectifier and Z-source inverter. Control systems for speed regulation of the generator and for DC- and AC- sides of the Z-source inverter are investigated using computer simulations and laboratory experiments. Simulation and experimental results verify the efficacy of the proposed approach.
Abstract:
This paper presents an efficient algorithm for optimizing the operation of battery storage in a low-voltage distribution network with a high penetration of PV generation. A predictive control solution is presented that uses wavelet neural networks to predict the load and PV generation at hourly intervals for twelve hours into the future. The load and generation forecast, together with the previous twelve hours of load and generation history, are used to assemble a load profile. A diurnal charging profile can be compactly represented by a vector of Fourier coefficients, allowing a direct search optimization algorithm to be applied. The optimal profile is updated hourly, allowing the state-of-charge profile to respond to changing load forecasts.
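The compact Fourier representation and the direct-search step can be sketched as follows. The coordinate-wise search and all parameters are illustrative stand-ins for the paper's unspecified optimiser:

```python
import math

def profile_from_fourier(coeffs, hours=24):
    """Reconstruct a diurnal charging profile (one value per hour) from
    a compact coefficient vector [a0, a1, b1, a2, b2, ...]."""
    n_harm = (len(coeffs) - 1) // 2
    profile = []
    for h in range(hours):
        t = 2 * math.pi * h / hours
        v = coeffs[0]
        for k in range(1, n_harm + 1):
            v += coeffs[2 * k - 1] * math.cos(k * t)
            v += coeffs[2 * k] * math.sin(k * t)
        profile.append(v)
    return profile

def direct_search(cost, x0, step=0.5, tol=1e-3):
    """Coordinate-wise direct search: try +/- step moves on each
    coefficient, halving the step when no move improves the cost."""
    x, best = list(x0), cost(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = x[:]
                trial[i] += d
                c = cost(trial)
                if c < best:
                    x, best, improved = trial, c, True
        if not improved:
            step *= 0.5
    return x, best
```

A real cost function would score the state-of-charge trajectory implied by the profile against the net-load forecast; re-running the search each hour gives the hourly profile update the paper describes.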