10 results for Propagation ontology
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
Summary: Maintenance of the genetic material of male-sterile spring turnip rape in vitro by micropropagation
Abstract:
Combustion of wood is increasing because of the need to decrease carbon dioxide emissions and the amount of waste going to landfills. Wood-based fuels are often scattered over a large area. Transport distances should be short enough to avoid excessive costs, so heating and power plants using wood fuels are often rather small. Combustion technologies for small units have to be developed to reach efficient and environmentally friendly energy production. Furnaces that use packed-bed combustion or gasification techniques are often the most economic in small-scale energy production. The ignition front propagation rate affects the stability, heat release rate and emissions of packed-bed combustion. Ignition front propagation against the airflow in packed beds of wood fuels has been studied, mainly experimentally; theoretical aspects have been considered to draw conclusions from the experimental results. The effects of airflow rate, fuel moisture content, particle size, shape and density, and bed porosity on the propagation rate of the ignition front have been studied. The experiments were carried out in a pot furnace. The fuels used were mainly real wood fuels that are often burned in energy production: thin wood chips, sawdust, shavings, wood chips and pellets of different sizes, as well as a few mixtures of the above. An increase in the moisture content of the fuel decreases the propagation rate of the ignition front and narrows the range of possible airflow rates, because of the energy needed to evaporate the water and the dilution of the volatile gases by the evaporated steam. An increase in the airflow rate increases the ignition rate until a maximum rate of propagation is reached, after which it decreases. The maximum flame propagation rate is not always reached in stoichiometric combustion conditions. Increasing particle size and density shifts the optimum airflow rate towards fuel-lean conditions. Mixing small and large particles is often advantageous, because small particles make it possible to reach the maximum ignition rate in fuel-rich conditions, and large particles widen the range of possible airflow rates. A correlation was found for the maximum rate of ignition front propagation in different wood fuels: according to the correlation, the maximum ignition mass flux increases when the sphericity of the particles and the porosity of the bed increase and the moisture content of the fuel decreases. Another fit was found between sphericity and porosity: an increase in sphericity decreases the porosity of the bed. The reasons for the observed results are discussed.
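The abstract states the trends of the correlation but not its functional form. A purely illustrative monotone relation consistent with those trends (the power-law shape and all symbols here are assumptions, not the thesis's actual fit) would be

$$\dot{m}''_{\mathrm{ign,max}} = C\,\phi^{a}\,\varepsilon^{b}\,(1 - M)^{c}, \qquad a,\, b,\, c > 0,$$

where $\dot{m}''_{\mathrm{ign,max}}$ is the maximum ignition mass flux, $\phi$ the particle sphericity, $\varepsilon$ the bed porosity, $M$ the fuel moisture fraction, and $C, a, b, c$ fitted constants.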
Abstract:
The present thesis is focused on minimizing the experimental effort needed to predict pollutant propagation in rivers, by means of mathematical modelling and knowledge re-use. The mathematical modelling is based on the well-known advection-dispersion equation, while the knowledge re-use approach employs the methods of case-based reasoning, graphical analysis and text mining. The thesis contributes to the pollutant transport research field: (1) analytical and numerical models for pollutant transport prediction; (2) two novel techniques that enable the use of variable parameters along rivers in analytical models; (3) models for estimating the characteristic parameters of pollutant transport (velocity, dispersion coefficient and nutrient transformation rates) as functions of water flow, channel characteristics and/or seasonality; (4) a graphical analysis method for identifying pollution sources along rivers; (5) a case-based reasoning tool for identifying crucial information related to pollutant transport modelling; and (6) the application of a software tool for the reuse of information during pollutant transport modelling research. These support tools are applicable both in water quality research and in practice, as they can be involved in multiple activities. The models can predict pollutant propagation along rivers in cases of both ordinary pollution and accidents. They can also be applied to other, similar rivers when modelling pollutant transport with little experimental concentration data available, because the parameter estimation models developed in the thesis allow the transport characteristic parameters to be calculated as functions of river hydraulic parameters and/or seasonality. The similarity between rivers is assessed using case-based reasoning tools, and additional necessary information can be identified using the software for information reuse. Such systems support users and open up possibilities for new modelling methods, monitoring facilities and better river water quality management tools. They are also useful for estimating the environmental impact of possible technological changes and can be applied in the pre-design stage and/or in the practical use of processes.
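For reference, the one-dimensional advection-dispersion equation underlying such models, written here with an optional first-order transformation term (the exact form used in the thesis may differ), is

$$\frac{\partial C}{\partial t} + u\,\frac{\partial C}{\partial x} = D\,\frac{\partial^{2} C}{\partial x^{2}} - kC,$$

where $C$ is the pollutant concentration, $u$ the flow velocity, $D$ the longitudinal dispersion coefficient, and $k$ a first-order transformation (e.g. nutrient decay) rate.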
Abstract:
Software plays an important role in our society and economy. Software development is an intricate process comprising many different tasks: gathering requirements, designing new solutions that fulfil those requirements, and implementing the designs in a programming language to produce a working system. Consequently, the development of high-quality software is a core problem in software engineering. This thesis focuses on the validation of software designs. The analysis of designs is of great importance, since errors originating in designs may appear in the final system, and it is considered economical to rectify problems as early in the software development process as possible. Practitioners often create and visualize designs using modeling languages, one of the more popular being the Unified Modeling Language (UML). The analysis of designs can be done manually, but for large systems mechanisms are needed that analyze the designs automatically. In this thesis, we propose an automatic approach to analyzing UML-based designs using logic reasoners. The approach first translates the UML-based designs into a language understandable by reasoners, in the form of logic facts, and then shows how to use the logic reasoners to infer the logical consequences of these facts. We have implemented the proposed translations as a tool that can be used with any standard-compliant UML modeling tool. Moreover, we validate the proposed approach by automatically checking hundreds of UML-based designs, consisting of thousands of model elements, available in an online model repository. The proposed approach is limited in scope, but it is fully automatic and does not require any expertise in logic languages from the user. We exemplify the proposed approach with two applications: the validation of domain-specific languages and the validation of web service interfaces.
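As an illustration only (the thesis's actual translation targets a logic reasoner and is not reproduced in the abstract), the following minimal Python sketch encodes a tiny UML class model as logic-style facts and checks one consistency rule: the generalization hierarchy must be acyclic. The class names and the fact encoding are hypothetical.

```python
# Minimal sketch: a tiny UML class model encoded as logic-style facts,
# plus one consistency check (no cycles in the generalization hierarchy).
# Class names and the fact encoding are hypothetical.

facts = [
    ("class", "Vehicle"),
    ("class", "Car"),
    ("class", "SportsCar"),
    ("generalization", "Car", "Vehicle"),       # Car specializes Vehicle
    ("generalization", "SportsCar", "Car"),
    # ("generalization", "Vehicle", "SportsCar"),  # uncomment: creates a cycle
]

def parents(cls):
    """Direct superclasses of cls according to the generalization facts."""
    return [p for (kind, child, p) in
            [f for f in facts if f[0] == "generalization"]
            if child == cls]

def has_generalization_cycle():
    """Depth-first search from every class, looking for a path back to it."""
    classes = [name for (kind, name) in
               [f for f in facts if f[0] == "class"]]

    def reaches(start, current, seen):
        for p in parents(current):
            if p == start:
                return True
            if p not in seen:
                seen.add(p)
                if reaches(start, p, seen):
                    return True
        return False

    return any(reaches(c, c, set()) for c in classes)

print("generalization cycle:", has_generalization_cycle())  # False as given
```

A real reasoner-based pipeline would of course delegate such inferences to the reasoner rather than hand-code each rule; the point here is only the shape of the fact representation.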
Abstract:
Negative refractive index materials, and the propagation of electromagnetic waves in them, began to attract scientific attention relatively recently. This review highlights historically important and recent papers on practical and theoretical aspects of these issues: the basic properties and peculiarities of such materials, both in their design and in wave propagation within them; experimental verification of predictions made theoretically for them; and possible practical applications and prospects in this area.
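As standard background (not specific to this review): in a medium where both the relative permittivity and the relative permeability are negative, the refractive index must be taken with the negative sign,

$$n = -\sqrt{\varepsilon_{r}\,\mu_{r}} \qquad (\varepsilon_{r} < 0,\ \mu_{r} < 0),$$

so Snell's law $n_{1}\sin\theta_{1} = n_{2}\sin\theta_{2}$ still holds, but a ray entering from an ordinary medium is refracted to the same side of the normal as the incident ray.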
Abstract:
A growing concern for organisations is how they should deal with increasing amounts of collected data. With fierce competition and smaller margins, organisations that are able to fully realize the potential of the data they collect can gain an advantage over their competitors. It is almost impossible to avoid imprecision when processing large amounts of data; still, many of the available information systems are not capable of handling imprecise data, even though it can offer various advantages. Expert knowledge stored as linguistic expressions is a good example of imprecise but valuable data, i.e. data that is hard to pin down to a definite value. There is an obvious concern among organisations about how this problem should be handled; finding new methods for processing and storing imprecise data is therefore a key issue. It is equally important to show that tacit knowledge and imprecise data can be used with success, which encourages organisations to analyse their imprecise data. The objective of the research was therefore to explore how fuzzy ontologies could facilitate the exploitation and mobilisation of tacit knowledge and imprecise data in organisational and operational decision making processes. The thesis introduces both practical and theoretical advances on how fuzzy logic, ontologies (fuzzy ontologies) and OWA operators can be utilized for different decision making problems. It demonstrates how a fuzzy ontology can model tacit knowledge collected from wine connoisseurs; the approach can be generalised and applied to other practically important problems, such as intrusion detection. Additionally, a fuzzy ontology is applied in a novel consensus model for group decision making. By combining the fuzzy ontology with Semantic Web affiliated techniques, novel applications have been designed; these applications show how the mobilisation of knowledge can successfully utilize imprecise data as well. An important part of decision making processes is undeniably aggregation, which in combination with a fuzzy ontology provides a promising basis for demonstrating the benefits of handling imprecise data. The new aggregation operators defined in the thesis often provide new possibilities to handle imprecision and expert opinions, which is demonstrated through both theoretical examples and practical implementations. This thesis shows the benefits of utilizing all the data one possesses, including imprecise data. By combining the concept of fuzzy ontology with the Semantic Web movement, it aspires to show the corporate world and industry the benefits of embracing fuzzy ontologies and imprecision.
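As an illustration of the kind of aggregation mentioned above, here is a minimal Python sketch of the standard Ordered Weighted Averaging (OWA) operator. The weights and scores are invented for the example; the thesis's own operators are variations not reproduced here.

```python
# Minimal sketch of a standard OWA (Ordered Weighted Averaging) operator.
# Unlike a plain weighted mean, the weights attach to *positions* in the
# sorted order of the inputs, not to particular inputs.

def owa(values, weights):
    """Sort the inputs in descending order, then take the weighted sum."""
    assert len(values) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    ranked = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ranked))

# Example: aggregate three expert scores for a wine, emphasising the
# higher opinions ("optimistic" weighting). Scores/weights are invented.
scores = [0.9, 0.4, 0.7]
print(owa(scores, [0.5, 0.3, 0.2]))  # 0.9*0.5 + 0.7*0.3 + 0.4*0.2 = 0.74
```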
Abstract:
This study examines information security as a process (information securing) in terms of what it does, especially beyond its obvious role as protector. It investigates concepts related to the 'ontology of becoming' and examines what it is that information securing produces. The research is theory-driven and draws upon three fields: sociology (especially actor-network theory), philosophy (especially Gilles Deleuze and Félix Guattari's concepts of 'machine', 'territory' and 'becoming', and Michel Serres's concept of the 'parasite'), and information systems science (the subject of information security). Social engineering (used here in the sense of breaking into systems through non-technical means) and software cracker groups (groups which remove copy protection systems from software) are analysed as examples of breaches of information security. Firstly, the study finds that information securing is always interruptive: every entity (regardless of whether or not it is malicious) that becomes connected to information security is interrupted. Furthermore, every entity changes, becomes different, as it makes a connection with information security (ontology of becoming). Moreover, information security organizes entities into different territories. However, the territories – the insides and outsides of information systems – are ontologically similar; the only difference is in the order of the territories, not in the ontological status of the entities that inhabit them. In other words, malicious software is ontologically similar to benign software; both are users in terms of the system. The difference is based on the order of the system and its users: who uses the system and what the system is used for. Secondly, the research shows that information security is always external (in the terms of this study, a 'parasite') to the information system that it protects. Information securing creates and maintains order while simultaneously disrupting the existing order of the system that it protects. For example, in terms of the software itself, the implementation of a copy protection system is an entirely external addition; in fact, this parasitic addition makes the software different. Thus, information security disrupts that which it is supposed to defend from disruption. Finally, it is asserted that, in its interruption, information security is a connector that creates passages: it connects users to systems while also creating its own threats. For example, copy protection systems invite crackers, and information security policies entice social engineers to use and exploit information security techniques in novel ways.
Abstract:
Ontology matching is an important task when data from multiple sources is integrated. Problems of ontology matching have been studied widely in the research literature, and many different solutions and approaches have been proposed, also in commercial software tools. In this survey, well-known approaches to ontology matching, and to its subtype schema matching, are reviewed and compared. The aim of this report is to summarize the knowledge about state-of-the-art solutions from the research literature, discuss how the methods work in different application domains, and analyze the pros and cons of different open source and academic tools in the commercial world.
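For a concrete sense of what the simplest class of matchers does, here is a minimal Python sketch of name-based schema matching, one of the basic techniques such surveys cover. The element names and the threshold are invented for the example; real matchers combine many similarity measures and structural evidence.

```python
# Minimal sketch of name-based schema matching: propose correspondences
# between two schemas' element names using normalized string similarity.
# Element names and the threshold are illustrative only.

from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Normalized string similarity between two element labels."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match(source: list[str], target: list[str], threshold: float = 0.6):
    """Return (source, target, score) triples above the threshold."""
    return [(s, t, round(similarity(s, t), 2))
            for s in source for t in target
            if similarity(s, t) >= threshold]

schema_a = ["customerName", "zipCode", "phone"]
schema_b = ["CustomerName", "postalCode", "phoneNumber"]
print(match(schema_a, schema_b))
# [('customerName', 'CustomerName', 1.0), ('phone', 'phoneNumber', 0.62)]
```

Note that purely lexical matching misses 'zipCode'/'postalCode', which share meaning but little surface form; this is exactly the gap that the semantic and structural techniques reviewed in the survey are meant to close.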