19 results for Geometry of Fuzzy sets
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
This thesis presents a topological approach to studying fuzzy sets by means of modifier operators. Modifier operators are mathematical models, e.g., for hedges, and we briefly present different approaches to studying modifier operators. We are interested in compositional modifier operators, modifiers for short, and these modifiers depend on binary relations. We show that if a modifier depends on a reflexive and transitive binary relation on U, then there exists a unique topology on U such that this modifier is the closure operator in that topology. Also, if U is finite, then there exists a lattice isomorphism between the class of all reflexive and transitive relations and the class of all topologies on U. We define a topological similarity relation "≈" between L-fuzzy sets in a universe U, and show that the class L^U/≈ is isomorphic to the class of all topologies on U, if U is finite and L is suitable. We consider finite bitopological spaces as approximation spaces, and we show that lower and upper approximations can be computed by means of α-level sets also in the case of equivalence relations. This means that approximations in the sense of Rough Set Theory can be computed by means of α-level sets. Finally, we present an application to data analysis: we study an approach to detecting dependencies of attributes in database-like systems, called information systems.
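The closure-operator correspondence described above is easy to experiment with on a small finite universe. The following minimal sketch (the encoding and names are my own, not the thesis's) builds the operator C(A) = {x ∈ U : (x, a) ∈ R for some a ∈ A} from a reflexive and transitive relation R, and recovers the induced topology as the complements of the fixed points of C:

```python
from itertools import combinations

U = {0, 1, 2}
# a reflexive and transitive relation on U
R = {(0, 0), (1, 1), (2, 2), (0, 1), (0, 2), (1, 2)}

def closure(A):
    # C(A) = {x in U : (x, a) in R for some a in A}; reflexivity gives
    # A <= C(A), transitivity gives idempotence, so C is a Kuratowski
    # closure operator and determines a unique (Alexandrov) topology
    return frozenset(x for x in U if any((x, a) in R for a in A))

def powerset(s):
    s = list(s)
    for r in range(len(s) + 1):
        yield from (frozenset(c) for c in combinations(s, r))

# closed sets are the fixed points of C; open sets are their complements
closed = {A for A in powerset(U) if closure(A) == A}
opens = {frozenset(U - A) for A in closed}
print(sorted(map(sorted, opens)))  # the topology on U induced by R
```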
Abstract:
Fuzzy set theory and fuzzy logic are studied from a mathematical point of view. The main goal is to investigate common mathematical structures in various fuzzy logical inference systems and to establish a general mathematical basis for fuzzy logic when considered as a multi-valued logic. The study is composed of six distinct publications. The first paper deals with Mattila's LPC+Ch Calculus. This fuzzy inference system is an attempt to introduce linguistic objects to mathematical logic without defining these objects mathematically. LPC+Ch Calculus is analyzed from an algebraic point of view, and it is demonstrated that a suitable factorization of the set of well-formed formulae (in fact, the Lindenbaum algebra) leads to a structure called ET-algebra, introduced in the beginning of the paper. On its basis, all the theorems presented by Mattila and many others can be proved in a simple way, as demonstrated in Lemmas 1 and 2 and Propositions 1-3. The conclusion critically discusses some other issues of LPC+Ch Calculus, especially that no formal semantics for it is given. In the second paper, Sanchez's characterization of the solvability of the relational equation R∘X=T, where R, X, T are fuzzy relations, X the unknown one, and ∘ the composition induced by the minimum, is extended to compositions induced by more general products in a general value lattice. Moreover, the procedure also applies to systems of equations. In the third publication, common features in various fuzzy logical systems are investigated. It turns out that adjoint couples and residuated lattices are very often present, though not always explicitly expressed. Some minor new results are also proved. The fourth study concerns Novak's paper, in which Novak introduced first-order fuzzy logic and proved, among other things, the semantico-syntactical completeness of this logic. He also demonstrated that the algebra of his logic is a generalized residuated lattice. It is proved that the examination of Novak's logic can be reduced to the examination of locally finite MV-algebras. In the fifth paper, a multi-valued sentential logic with values of truth in an injective MV-algebra is introduced and the axiomatizability of this logic is proved. The paper develops some ideas of Goguen and generalizes the results of Pavelka on the unit interval. Our proof of completeness is purely algebraic. A corollary of the Completeness Theorem is that fuzzy logic on the unit interval is semantically complete if, and only if, the algebra of the values of truth is a complete MV-algebra. The Compactness Theorem holds in our well-defined fuzzy sentential logic, while the Deduction Theorem and the Finiteness Theorem do not. Because of its generality and good behaviour, MV-valued logic can be regarded as a mathematical basis of fuzzy reasoning. The last paper is a continuation of the fifth study. The semantics and syntax of fuzzy predicate logic with values of truth in an injective MV-algebra are introduced, and a list of universally valid sentences is established. The system is proved to be semantically complete. The proof is based on an idea utilizing some elementary properties of injective MV-algebras and MV-homomorphisms, and is purely algebraic.
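Sanchez's characterization, which the second paper extends, has a compact computational form in the finite [0, 1]-valued case. The sketch below (matrix conventions and names are my own assumptions) computes the greatest solution candidate via the Gödel implication and tests solvability by substitution:

```python
import numpy as np

def sup_min(R, X):
    # sup-min composition: (R o X)[i, k] = max_j min(R[i, j], X[j, k])
    return np.max(np.minimum(R[:, :, None], X[None, :, :]), axis=1)

def goedel_alpha(x, y):
    # Goedel implication: alpha(x, y) = 1 if x <= y, else y
    return np.where(x <= y, 1.0, y)

def greatest_candidate(R, T):
    # X_hat[j, k] = min_i alpha(R[i, j], T[i, k]); Sanchez showed that
    # R o X = T is solvable iff X_hat solves it, and X_hat is then the
    # greatest solution
    return np.min(goedel_alpha(R[:, :, None], T[:, None, :]), axis=0)

R = np.array([[0.8, 0.3], [0.5, 1.0]])
T = np.array([[0.3], [0.5]])
X_hat = greatest_candidate(R, T)
print(X_hat, np.allclose(sup_min(R, X_hat), T))  # solvable here: True
```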
Abstract:
Since its introduction, fuzzy set theory has become a useful tool in the mathematical modelling of problems in Operations Research and many other fields. The number of applications is growing continuously. In this thesis we investigate a special type of fuzzy set, namely fuzzy numbers. Fuzzy numbers (which will be considered in the thesis as possibility distributions) have been widely used in quantitative analysis in recent decades. In this work, two measures of interactivity are defined for fuzzy numbers: the possibilistic correlation and the correlation ratio. We focus on both the theoretical and practical applications of these new indices. The approach is based on the level sets of the fuzzy numbers and on the concept of the joint distribution of marginal possibility distributions. The measures possess properties similar to those of the corresponding probabilistic correlation and correlation ratio. The connections to real life decision making problems are emphasized, focusing on financial applications. We extend the definitions of possibilistic mean value, variance, covariance and correlation to quasi fuzzy numbers and prove necessary and sufficient conditions for the finiteness of the possibilistic mean value and variance. The connection between the concepts of probabilistic and possibilistic correlation is investigated using an exponential distribution. The use of fuzzy numbers in practical applications is demonstrated by the Fuzzy Pay-Off method. This model for real option valuation is based on findings from earlier real option valuation models. We illustrate the use of a number of different types of fuzzy numbers and mean value concepts with the method and provide a real life application.
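The level-set approach mentioned above can be illustrated numerically. The sketch below assumes the possibilistic mean and variance in the sense of Carlsson and Fullér, computed by integrating over γ-level sets; the triangular fuzzy number and its parameters are illustrative choices, not thesis data:

```python
import numpy as np

def tri_level_set(gamma, a, left, right):
    # gamma-cut [a1(gamma), a2(gamma)] of a triangular fuzzy number
    # with core a, left width `left` and right width `right`
    return a - (1 - gamma) * left, a + (1 - gamma) * right

def possibilistic_mean_var(level_set, n=10001):
    gammas = np.linspace(0.0, 1.0, n)
    lo, hi = level_set(gammas)
    # Carlsson-Fuller: M(A) = int_0^1 gamma (a1 + a2) d gamma,
    # Var(A) = (1/2) int_0^1 gamma (a2 - a1)^2 d gamma
    mean = np.trapz(gammas * (lo + hi), gammas)
    var = 0.5 * np.trapz(gammas * (hi - lo) ** 2, gammas)
    return mean, var

m, v = possibilistic_mean_var(lambda g: tri_level_set(g, a=2.0, left=1.0, right=1.0))
print(m, v)  # for (2, 1, 1): mean = 2, variance = (1 + 1)^2 / 24 = 1/6
```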
Abstract:
Keyhole welding, meaning that the laser beam forms a vapour cavity inside the steel, is one of the two types of laser welding processes, and it is currently used in only a few industrial applications. Modern high power solid state lasers are coming into wider general use, but not all fundamentals and phenomena of the process are well known, and understanding them helps to improve the quality of final products. This study concentrates on the process fundamentals and the behaviour of the keyhole welding process by means of real time high speed x-ray videography. One of the problem areas in laser welding has been the mixing of the filler wire into the weld; the phenomena are explained, and one possible solution to this problem is also presented in this study. The argument of this thesis is that the keyhole laser welding process has three keyhole modes that behave differently. These modes are trap, cylinder and kaleidoscope. Two of these have sub-modes, in which the keyhole behaves similarly but the molten pool changes behaviour and the geometry of the resulting weld is different. X-ray videography was used to visualize the actual keyhole side view profile during the welding process. Several methods were applied to analyse and compile the high speed x-ray video data to achieve a clearer image of the keyhole side view. Averaging was used to measure the keyhole side view outline, which was used to reconstruct a 3D model of the actual keyhole. This 3D model was taken as the basis for calculating the vapour volume inside the keyhole for each laser parameter combination and joint geometry. Four different joint geometries were tested: partial penetration bead on plate, partial penetration I-butt joint, full penetration bead on plate, and full penetration I-butt joint. The comparison was performed with selected pairs and also across all combinations together.
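As a rough illustration of the volume computation step, the sketch below assumes the keyhole can be idealized as axisymmetric, so that an averaged side-view outline r(z) gives the vapour volume as a solid of revolution; the thesis reconstructs an actual 3D model, so this is a simplification, and the radius profile here is invented:

```python
import numpy as np

# axisymmetric approximation: V = pi * integral of r(z)^2 dz
z = np.linspace(0.0, 5.0e-3, 200)       # depth below surface [m]
r = 0.3e-3 * np.exp(-z / 3.0e-3)        # illustrative outline radius [m]
volume = np.pi * np.trapz(r ** 2, z)    # vapour volume [m^3]
print(f"estimated keyhole volume: {volume * 1e9:.3f} mm^3")
```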
Abstract:
Subshifts are sets of configurations over an infinite grid defined by a set of forbidden patterns. In this thesis, we study two-dimensional subshifts of finite type (2D SFTs), where the underlying grid is Z² and the set of forbidden patterns is finite. We are mainly interested in the interplay between the computational power of 2D SFTs and their geometry, examined through the concept of expansive subdynamics. 2D SFTs with expansive directions form an interesting and natural class of subshifts that lie between dimensions 1 and 2. An SFT that has only one non-expansive direction is called extremely expansive. We prove that in many aspects, extremely expansive 2D SFTs display the totality of behaviours of general 2D SFTs. For example, we construct an aperiodic extremely expansive 2D SFT and we prove that the emptiness problem is undecidable even when restricted to the class of extremely expansive 2D SFTs. We also prove that every Medvedev class contains an extremely expansive 2D SFT and we provide a characterization of the sets of directions that can be the set of non-expansive directions of a 2D SFT. Finally, we prove that for every computable sequence of 2D SFTs with an expansive direction, there exists a universal object that simulates all of the elements of the sequence. We use the so-called hierarchical, self-simulating or fixed-point method for constructing 2D SFTs, which has previously been used by Gács, Durand, Romashchenko and Shen.
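The basic objects above are concrete enough to encode directly. The following sketch (my own encoding, not the thesis construction) represents forbidden patterns as finite rectangular blocks and checks that a finite patch of a configuration avoids all of them:

```python
def windows(patch, h, w):
    # all h-by-w sub-blocks of a rectangular patch (list of rows)
    rows, cols = len(patch), len(patch[0])
    for i in range(rows - h + 1):
        for j in range(cols - w + 1):
            yield tuple(tuple(patch[i + di][j + dj] for dj in range(w))
                        for di in range(h))

def is_locally_admissible(patch, forbidden):
    # forbidden: set of rectangular patterns given as tuples of row-tuples
    shapes = {(len(p), len(p[0])) for p in forbidden}
    return not any(win in forbidden
                   for (h, w) in shapes
                   for win in windows(patch, h, w))

# hard-squares-like example over {0, 1}: no two horizontally adjacent 1s
forbidden = {((1, 1),)}
print(is_locally_admissible([[1, 0, 1], [0, 1, 0]], forbidden))  # True
print(is_locally_admissible([[1, 1, 0]], forbidden))             # False
```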
Abstract:
This thesis gives an overview of the validation process for thermal hydraulic system codes, and it presents in more detail the assessment and validation of the French code CATHARE for VVER calculations. Three assessment cases are presented: loop seal clearing, core reflooding and flow in a horizontal steam generator. The experience gained during these assessment and validation calculations has been used to analyze the behavior of the horizontal steam generator and the natural circulation in the geometry of the Loviisa nuclear power plant. The cases presented are not exhaustive, but they give a good overview of the work performed by the personnel of Lappeenranta University of Technology (LUT). A large part of the work has been performed in co-operation with the CATHARE team in Grenoble, France. The design of a Russian-type pressurized water reactor, VVER, differs from that of a Western-type PWR. Most thermal-hydraulic system codes are validated only for Western-type PWRs. Thus, the codes should be assessed and validated also for the VVER design in order to identify any weaknesses in the models. This information is needed before the codes can be used for safety analysis. The results of the assessment and validation calculations presented here show that the CATHARE code can also be used for thermal-hydraulic safety studies of VVER type plants. However, some areas have been indicated which need to be reassessed after further experimental data become available. These areas are mostly connected to the horizontal steam generators, like condensation and phase separation in primary side tubes. The work presented in this thesis covers a large number of the phenomena included in the CSNI code validation matrices for small and intermediate leaks and for transients. Also some of the phenomena included in the matrix for large break LOCAs are covered. The matrices for code validation for VVER applications should be used when future experimental programs are planned for code validation.
Abstract:
Over 70% of the total costs of an end product are consequences of decisions that are made during the design process. A search for optimal cross-sections will often have only a marginal effect on the amount of material used if the geometry of a structure is fixed and if the cross-sectional characteristics of its elements are properly designed by conventional methods. In recent years, optimal geometry has become a central area of research in the automated design of structures. It is generally accepted that no single optimisation algorithm is suitable for all engineering design problems. An appropriate algorithm, therefore, must be selected individually for each optimisation situation. Modelling is the most time consuming phase in the optimisation of steel and metal structures. In this research, the goal was to develop a method and computer program which reduce the modelling and optimisation time for structural design. The program needed an optimisation algorithm that is suitable for various engineering design problems. Because finite element modelling is commonly used in the design of steel and metal structures, the interaction between a finite element tool and an optimisation tool needed a practical solution. The developed method and computer programs were tested with standard optimisation tests and practical design optimisation cases. Three generations of computer programs were developed. The programs combine an optimisation problem modelling tool and an FE-modelling program using three alternative methods. The modelling and optimisation were demonstrated in the design of a new boom construction and the steel structures of flat and ridge roofs. This thesis demonstrates that the time consumed in modelling is significantly reduced. Modelling errors are reduced and the results are more reliable. A new selection rule for the evolution algorithm, which eliminates the need for constraint weight factors, is tested with optimisation cases of steel structures that include hundreds of constraints. It is seen that the tested algorithm can be used nearly as a black box, without parameter settings and penalty factors for the constraints.
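As an illustration of how a selection rule can avoid constraint weight and penalty factors, the sketch below implements Deb's well-known feasibility rule; this is a representative example of such rules, not necessarily the rule developed in the thesis:

```python
def violation(g_values):
    # constraints encoded as g(x) <= 0; sum of positive parts = violation
    return sum(max(0.0, g) for g in g_values)

def better(a, b):
    # a, b: (objective_value, [constraint_values]); minimisation assumed.
    # Deb's rule: feasible beats infeasible; two feasibles compare by
    # objective; two infeasibles compare by total violation. No weights
    # or penalty factors are needed.
    fa, va = a[0], violation(a[1])
    fb, vb = b[0], violation(b[1])
    if va == 0.0 and vb == 0.0:
        return a if fa <= fb else b
    if va == 0.0 or vb == 0.0:
        return a if va == 0.0 else b
    return a if va <= vb else b

print(better((10.0, [0.0]), (12.0, [-1.0])))  # both feasible -> (10.0, [0.0])
print(better((5.0, [2.0]), (9.0, [0.0])))     # feasible beats infeasible
```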
Abstract:
It is commonly observed that complex fabricated structures subject to fatigue loading fail at the welded joints. Some problems can be corrected by proper detail design, but fatigue performance can also be improved using post-weld improvement methods. In general, improvement methods can be divided into two main groups: weld geometry modification methods and residual stress modification methods. The former remove weld toe defects and/or reduce the stress concentration, while the latter introduce compressive stress fields in the area where fatigue cracks are likely to initiate. Ultrasonic impact treatment (UIT) is a novel post-weld treatment method that influences both the residual stress distribution and the local geometry of the weld. The structural fatigue strength of non-load carrying attachments in the as-welded condition has been experimentally compared to the structural fatigue strength of ultrasonic impact treated welds. Longitudinal attachment specimens made of two thicknesses of steel S355 J0 were tested to determine the efficiency of ultrasonic impact treatment. Treated welds were found to have about 50% greater structural fatigue strength when the slope of the S-N curve is three. High mean stress fatigue testing based on the Ohta method decreased the degree of weld improvement by only 19%. This indicated that the method could also be applied to large fabricated structures operating under high reactive residual stresses equilibrated within the volume of the structure. The thickness of the specimens had no significant effect on the structural fatigue strength; the fatigue class difference between the 5 mm and 8 mm specimens was only 8%. It was hypothesized that the UIT method added a significant crack initiation period to the total fatigue life of the welded joints. Crack initiation life was estimated by a local strain approach. Material parameters were defined using a modified Uniform Material Law developed in Germany. Finite element analysis and X-ray diffraction were used to define, respectively, the stress concentration and the mean stress. The theoretical fatigue life was found to agree well with the experimental fatigue tests. The predictive behaviour of the local strain approach combined with the Uniform Material Law was excellent for the joint types and conditions studied in this work.
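The 50% strength improvement and the slope of three quoted above combine through the standard S-N relation N·S^m = C. A small worked example, using only the figures quoted in the abstract:

```python
# S-N relation N * S^m = C with slope m = 3: at a fixed stress range,
# fatigue life scales as the m-th power of the strength ratio
m = 3.0
strength_ratio = 1.5   # treated vs. as-welded structural fatigue strength

life_ratio = strength_ratio ** m
print(f"life improvement factor at constant stress range: {life_ratio:.3f}")
# 1.5^3 = 3.375, i.e. a ~50% strength gain more than triples fatigue life
```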
Abstract:
In metallurgical plants, high quality metal production is always required. Nowadays, soft computing applications are increasingly used for the automation of manufacturing processes and quality control in place of mechanical techniques. This thesis presents an overview of soft computing methods. As an example of a soft computing application, an effective fuzzy expert system model for the automatic quality control of a steel degassing process was developed. The purpose of this work is to describe the fuzzy relations as quality hypersurfaces by varying the number of linguistic variables and fuzzy sets.
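As a flavour of how such a fuzzy expert system evaluates a rule, the following Mamdani-style sketch uses one invented rule with triangular membership functions and centroid defuzzification; the variables, sets and numbers are assumptions for illustration, not the system developed in the thesis:

```python
import numpy as np

def tri(x, a, b, c):
    # triangular membership function on support points a <= b <= c
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# invented rule: IF oxygen_level IS high THEN quality IS poor
oxygen = 42.0                              # measured input (ppm, assumed)
firing = tri(oxygen, 30.0, 50.0, 70.0)     # degree of "oxygen IS high"

quality = np.linspace(0.0, 10.0, 501)      # quality score axis
poor = np.minimum(tri(quality, 0.0, 2.0, 5.0), firing)  # clipped consequent
score = np.trapz(poor * quality, quality) / np.trapz(poor, quality)
print(f"rule fires at {firing:.2f}; defuzzified quality: {score:.2f}")
```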
Abstract:
The objective of this study is to show that bone strains due to dynamic mechanical loading during physical activity can be analysed using the flexible multibody simulation approach. Strains within the bone tissue play a major role in bone (re)modeling. Based on previous studies, it has been shown that dynamic loading seems to be more important for bone (re)modeling than static loading. The finite element method has been used previously to assess bone strains. However, the finite element method may be limited to static analysis of bone strains due to the expensive computation required for dynamic analysis, especially for a biomechanical system consisting of several bodies. Further, in vivo implementation of strain gauges on the surfaces of bone has been used previously in order to quantify the mechanical loading environment of the skeleton. However, in vivo strain measurement requires invasive methodology, which is challenging and limited to certain regions of superficial bones only, such as the anterior surface of the tibia. In this study, an alternative numerical approach to analyzing in vivo strains, based on the flexible multibody simulation approach, is proposed. In order to investigate the reliability of the proposed approach, three 3-dimensional musculoskeletal models, where the right tibia is assumed to be flexible, are used as demonstration examples. The models are employed in a forward dynamics simulation in order to predict the tibial strains during a level walking exercise. The flexible tibial model is developed using the actual geometry of the subject's tibia, which is obtained from a 3-dimensional reconstruction of Magnetic Resonance Images. Inverse dynamics simulation based on motion capture data obtained from walking at a constant velocity is used to calculate the desired contraction trajectory for each muscle. In the forward dynamics simulation, a proportional derivative servo controller is used to calculate each muscle force required to reproduce the motion, based on the desired muscle contraction trajectory obtained from the inverse dynamics simulation. Experimental measurements are used to verify the models and check their accuracy in replicating the realistic mechanical loading environment measured in the walking test. The strain results predicted by the models are consistent with in vivo strain measurements reported in the literature. In conclusion, the non-invasive flexible multibody simulation approach may be used as a surrogate for experimental bone strain measurement, and thus be of use in detailed strain estimation of bones in different applications. Consequently, the information obtained from the present approach might be useful in clinical applications, including optimizing implant design and devising exercises to prevent bone fragility, accelerate fracture healing and reduce osteoporotic bone loss.
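The proportional derivative servo controller described above has a simple core. The sketch below shows the control law for a single muscle at one time step; the gains and signal values are illustrative assumptions, not the study's parameters:

```python
def pd_muscle_force(desired_len, actual_len, desired_vel, actual_vel,
                    kp=2000.0, kd=50.0):
    # force tracks the desired contraction trajectory via the length
    # error and its rate of change; a full model would clamp to pulling
    # forces only, since muscles cannot push
    return kp * (desired_len - actual_len) + kd * (desired_vel - actual_vel)

# one control step during the simulated gait cycle (illustrative values)
force = pd_muscle_force(desired_len=0.245, actual_len=0.240,
                        desired_vel=-0.02, actual_vel=-0.01)
print(f"commanded muscle force: {force:.1f} N")
```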
Abstract:
In this thesis I argue that the psychological study of concepts and categorisation, and the philosophical study of reference are deeply intertwined. I propose that semantic intuitions are a variety of categorisation judgements, determined by concepts, and that because of this, concepts determine reference. I defend a dual theory of natural kind concepts, according to which natural kind concepts have distinct semantic cores and non-semantic identification procedures. Drawing on psychological essentialism, I suggest that the cores consist of externalistic placeholder essence beliefs. The identification procedures, in turn, consist of prototypes, sets of exemplars, or possibly also theory-structured beliefs. I argue that the dual theory is motivated both by experimental data and theoretical considerations. The thesis consists of three interrelated articles. Article I examines philosophical causal and description theories of natural kind term reference, and argues that they involve, or need to involve, certain psychological elements. I propose a unified theory of natural kind term reference, built on the psychology of concepts. Article II presents two semantic adaptations of psychological essentialism, one of which is a strict externalistic Kripkean-Putnamian theory, while the other is a hybrid account, according to which natural kind terms are ambiguous between internalistic and externalistic senses. We present two experiments, the results of which support the strict externalistic theory. Article III examines Fodor’s influential atomistic theory of concepts, according to which no psychological capacities associated with concepts constitute them, or are necessary for reference. I argue, contra Fodor, that the psychological mechanisms are necessary for reference.
Abstract:
This study focuses on the intersection of three sets of activities in a company: expert work, development work and supply chain management (SCM). Experts and expert work represent the set of individuals whose efficiency and impact this study is intended to improve, while development work defines the set of organizational activities to focus on. SCM as an expertise area acts as the platform on which this study is built. The study has two aims. Firstly, it aims to derive a model helping an SCM expert to increase the effectiveness of expert work in development tasks by understanding the encountered organizational situations and processes better, reflecting his/her past and future actions against organizational processes, and selecting and adjusting the processes and contents of his/her work accordingly. Secondly, it aims to develop applicable approaches and methods to understand, evaluate and manage the organizational processes and situations in development work. The integrative model of approaches and methods for improving the effectiveness of development processes is split into two aggregate dimensions: the technical performance of the developed solution and the consumption of resources of the development process. Six potential approaches and methods aimed at helping in the management of organizational dimensions are presented in the enclosed publications. The approaches focus on three subtasks of development work: decision making, implementation and change, and knowledge accumulation. The approaches and methods have been tested in case studies representing typical development processes in the area of supply chain management. As a result, four suggestions are presented. Firstly, SCM experts are advised to consider SCM development work as consisting of development processes. Secondly, inside these processes they should identify and evaluate the risk of difficult decision making related to organizational factors. Thirdly, they are prompted to take an active role in implementation and change, supporting the implementation throughout the whole process. Finally, development should be seen in a holistic view, taking into account the stage of knowledge and the organizational issues related to it, and adopting a knowledge development strategy.
Abstract:
This study combines several projects related to flows in vessels with complex shapes representing different chemical apparatuses. Three major cases were studied. The first one is a two-phase plate reactor with a complex structure of intersecting micro channels engraved on one plate, which is covered by another plain plate. The second case is a tubular microreactor, consisting of two subcases. The first subcase is a multi-channel two-component commercial micromixer (slit interdigital) used to mix two liquid reagents before they enter the reactor. The second subcase is a micro-tube, where the distribution of the heat generated by the reaction was studied. The third case is a conventionally packed column. However, flow, reactions or mass transfer were not modeled there. Instead, the research focused on how to describe mathematically the realistic geometry of the column packing, which is rather random and cannot be created using conventional computer-aided design or engineering (CAD/CAE) methods. Several modeling approaches were used to describe the performance of the processes in the considered vessels. Computational fluid dynamics (CFD) was used to describe the details of the flow in the plate microreactor and the micromixer. A space-averaged mass transfer model based on Fick's law was used to describe the exchange of the species through the gas-liquid interface in the microreactor. This model utilized data, namely the values of the interfacial area, obtained from the corresponding CFD model. A common heat transfer model was used to find the heat distribution in the micro-tube. To generate the column packing, an additional multibody dynamic model was implemented. An auxiliary simulation was carried out to determine the position and orientation of every packing element in the column. This data was then exported into a CAD system to generate the desired geometry, which could further be used for CFD simulations. The results demonstrated that the CFD model of the microreactor could predict the flow pattern well enough and agreed with experiments. The mass transfer model allowed the mass transfer coefficient to be estimated. Modeling for the second case showed that the flow in the micromixer and the heat transfer in the tube could be excluded from the larger model which describes the chemical kinetics in the reactor. Results of the third case demonstrated that the auxiliary simulation could successfully generate complex random packing, not only for the column but also for other similar cases.
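The space-averaged, Fick's-law-type mass transfer model mentioned above reduces, in film-model form, to a rate proportional to the interfacial area and the concentration driving force. A minimal sketch with illustrative values, the interfacial area standing in for a CFD result:

```python
# film-model form of interfacial mass transfer: rate = k_L * a * (c* - c)
k_L = 2.0e-4        # liquid-side mass transfer coefficient [m/s] (assumed)
a_interf = 1.2e-5   # interfacial area, e.g. from a CFD model [m^2] (assumed)
c_star = 1.0        # equilibrium (saturation) concentration [mol/m^3]
c_bulk = 0.3        # bulk liquid concentration [mol/m^3]

rate = k_L * a_interf * (c_star - c_bulk)   # absorbed species flow [mol/s]
print(f"absorption rate: {rate:.3e} mol/s")
```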
Abstract:
A growing concern for organisations is how they should deal with increasing amounts of collected data. With fierce competition and smaller margins, organisations that are able to fully realize the potential in the data they collect can gain an advantage over their competitors. It is almost impossible to avoid imprecision when processing large amounts of data. Still, many of the available information systems are not capable of handling imprecise data, even though handling it can offer various advantages. Expert knowledge stored as linguistic expressions is a good example of imprecise but valuable data, i.e. data that is hard to pinpoint to a definitive value. There is an obvious concern among organisations about how this problem should be handled; finding new methods for processing and storing imprecise data is therefore a key issue. Additionally, it is equally important to show that tacit knowledge and imprecise data can be used with success, which encourages organisations to analyse their imprecise data. The objective of the research conducted was therefore to explore how fuzzy ontologies could facilitate the exploitation and mobilisation of tacit knowledge and imprecise data in organisational and operational decision making processes. The thesis introduces both practical and theoretical advances on how fuzzy logic, ontologies (fuzzy ontologies) and OWA operators can be utilized for different decision making problems. It is demonstrated how a fuzzy ontology can model tacit knowledge that was collected from wine connoisseurs. The approach can be generalised and applied to other practically important problems as well, such as intrusion detection. Additionally, a fuzzy ontology is applied in a novel consensus model for group decision making. By combining the fuzzy ontology with Semantic Web affiliated techniques, novel applications have been designed. These applications show how the mobilisation of knowledge can successfully utilize imprecise data as well. An important part of decision making processes is undeniably aggregation, which in combination with a fuzzy ontology provides a promising basis for demonstrating the benefits that can be gained from handling imprecise data. The new aggregation operators defined in the thesis often provide new possibilities to handle imprecision and expert opinions. This is demonstrated through both theoretical examples and practical implementations. This thesis shows the benefits of utilizing all the available data one possesses, including imprecise data. By combining the concept of fuzzy ontology with the Semantic Web movement, it aspires to show the corporate world and industry the benefits of embracing fuzzy ontologies and imprecision.
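One of the aggregation tools named above, Yager's OWA operator, is compact enough to sketch directly: the weights are applied to the sorted inputs, so they encode an aggregation attitude rather than per-criterion importance. The example values are my own:

```python
def owa(values, weights):
    # ordered weighted averaging: sort inputs in descending order, then
    # take the weighted sum; weights must sum to 1
    assert abs(sum(weights) - 1.0) < 1e-9 and len(values) == len(weights)
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

scores = [0.9, 0.4, 0.7]                # e.g. expert scores for one option
print(owa(scores, [1.0, 0.0, 0.0]))     # max-like attitude -> 0.9
print(owa(scores, [0.0, 0.0, 1.0]))     # min-like attitude -> 0.4
print(owa(scores, [1/3, 1/3, 1/3]))     # plain average -> ~0.667
```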
Abstract:
This master's thesis introduces the fuzzy tolerance/equivalence relation and its application in cluster analysis. The work presents the construction of fuzzy equivalence relations using increasing generators. Here, we investigate the role of increasing generators in the creation of intersection, union and complement operators. The objective is to develop different varieties of fuzzy tolerance/equivalence relations using different varieties of increasing generators. Finally, we perform a comparative study of these developed varieties of fuzzy tolerance/equivalence relations in their application to a clustering method.
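A standard route from a fuzzy tolerance relation to clusters, consistent with the application described above, is the max-min transitive closure followed by α-cuts; the sketch below uses an invented 3×3 relation, and the thesis's generator-based constructions are not reproduced here:

```python
import numpy as np

def max_min_closure(R):
    # upgrade a fuzzy tolerance (reflexive, symmetric) relation to a
    # fuzzy equivalence relation by iterating max-min composition
    T = R.copy()
    while True:
        comp = np.max(np.minimum(T[:, :, None], T[None, :, :]), axis=1)
        T2 = np.maximum(T, comp)
        if np.array_equal(T2, T):
            return T
        T = T2

R = np.array([[1.0, 0.8, 0.0],
              [0.8, 1.0, 0.4],
              [0.0, 0.4, 1.0]])
T = max_min_closure(R)
print(T)          # T[0, 2] rises to min(0.8, 0.4) = 0.4
print(T >= 0.5)   # alpha-cut at 0.5: items 0 and 1 cluster together, 2 apart
```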