38 results for Geometry of Fuzzy sets
in Aston University Research Archive
Abstract:
Descriptions of vegetation communities are often based on vague semantic terms describing species presence and dominance. For this reason, some researchers advocate the use of fuzzy sets in the statistical classification of plant species data into communities. In this study, spatially referenced vegetation abundance values collected from Greek phrygana were analysed by ordination (DECORANA), and classified on the resulting axes using fuzzy c-means to yield a point data-set representing local memberships in characteristic plant communities. The fuzzy clusters matched vegetation communities noted in the field, which tended to grade into one another, rather than occupying discrete patches. The fuzzy set representation of the community exploited the strengths of detrended correspondence analysis while retaining richer information than a TWINSPAN classification of the same data. Thus, in the absence of phytosociological benchmarks, meaningful and manageable habitat information could be derived from complex, multivariate species data. We also analysed the influence of the reliability of different surveyors' field observations by multiple sampling at a selected sample location. We show that the impact of surveyor error was more severe in the Boolean than the fuzzy classification. © 2007 Springer.
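A minimal numpy sketch of the standard fuzzy c-means update used to derive such membership values is given below. It assumes the quadrat scores on the first DECORANA ordination axes are already available as an (n_samples, n_axes) array; the cluster count, fuzziness exponent and variable names are illustrative, not taken from the study.

```python
import numpy as np

def fuzzy_c_means(X, c=4, m=2.0, max_iter=200, tol=1e-5, seed=0):
    """Standard fuzzy c-means: returns cluster centres and an
    (n_samples, c) membership matrix whose rows sum to 1."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=X.shape[0])      # random initial memberships
    for _ in range(max_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]  # membership-weighted centres
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
        d2 = np.fmax(d2, 1e-12)                         # guard against zero distance
        # u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1)), computed with squared distances
        U_new = 1.0 / ((d2[:, :, None] / d2[:, None, :]) ** (1.0 / (m - 1))).sum(axis=2)
        if np.abs(U_new - U).max() < tol:
            return centres, U_new
        U = U_new
    return centres, U

# Hypothetical usage: 'scores' stands in for quadrat scores on two ordination axes.
scores = np.random.rand(120, 2)
centres, memberships = fuzzy_c_means(scores, c=4)
```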
Abstract:
The study here highlights the potential that analytical methods based on Knowledge Discovery in Databases (KDD) methodologies have to aid both the resolution of unstructured marketing/business problems and the process of scholarly knowledge discovery. The authors present and discuss the application of KDD in these situations prior to the presentation of an analytical method based on fuzzy logic and evolutionary algorithms, developed to analyze marketing databases and uncover relationships among variables. A detailed implementation on a pre-existing data set illustrates the method. © 2012 Published by Elsevier Inc.
Abstract:
On 20 October 1997 the London Stock Exchange introduced a new trading system called SETS. This system was to replace the dealer system SEAQ, which had been in operation since 1986. Using the iterative sum of squares test introduced by Inclan and Tiao (1994), we investigate whether there was a change in the unconditional variance of opening and closing returns, at the time SETS was introduced. We show that for the FTSE-100 stocks traded on SETS, on the days following its introduction, there was a widespread increase in the volatility of both opening and closing returns. However, no synchronous volatility changes were found to be associated with the FTSE-100 index or FTSE-250 stocks. We conclude therefore that the introduction of the SETS trading mechanism caused an increase in noise at the time the system was introduced.
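For reference, a sketch of the single-break step underlying the Inclan and Tiao (1994) procedure is shown below (the full method applies this search iteratively over sub-samples). The mean adjustment and the 1.358 asymptotic 5% critical value are standard choices, not details quoted from the paper.

```python
import numpy as np

def cusum_of_squares_break(returns, crit=1.358):
    """Centred cumulative-sum-of-squares statistic D_k = C_k/C_T - k/T.

    Rejects constant unconditional variance when max_k sqrt(T/2)|D_k|
    exceeds the asymptotic critical value; returns the decision, the
    candidate break index and the statistic."""
    r = np.asarray(returns, dtype=float)
    r = r - r.mean()                      # mean-adjust the return series
    T = r.size
    C = np.cumsum(r ** 2)                 # C_k, cumulative sum of squared returns
    D = C / C[-1] - np.arange(1, T + 1) / T
    stat = np.sqrt(T / 2.0) * np.abs(D)
    k_star = int(stat.argmax())
    return stat[k_star] > crit, k_star, float(stat[k_star])
```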
Abstract:
Grafting of antioxidants and other modifiers onto polymers by reactive extrusion has been performed successfully by the Polymer Processing and Performance Group at Aston University. Traditionally, the optimum conditions for the grafting process have been established within a Brabender internal mixer. Transfer of this batch process to a continuous processor, such as an extruder, has typically been empirical. To have more confidence in the success of direct transfer of the process requires knowledge of, and comparison between, residence times, mixing intensities, shear rates and flow regimes in the internal mixer and in the continuous processor. The continuous processor chosen for the current work is the closely intermeshing, co-rotating twin-screw extruder (CICo-TSE). CICo-TSEs contain screw elements that convey material with a self-wiping action and are widely used for polymer compounding and blending. Of the different mixing modules contained within the CICo-TSE, the trilobal elements, which impose intensive mixing, and the mixing discs, which impose extensive mixing, are of importance when establishing the intensity of mixing. In this thesis, the flow patterns within the various regions of the single-flighted conveying screw elements and within both the trilobal element and mixing disc zones of a Betol BTS40 CICo-TSE have been modelled using the computational fluid dynamics package Polyflow. A major obstacle encountered when solving the flow problem within all of these sets of elements arises from both the complex geometry and the time-dependent flow boundaries as the elements rotate about their fixed axes. Simulation of the time-dependent boundaries was overcome by selecting a number of sequential 2D and 3D geometries used to represent partial mixing cycles. The flow fields were simulated using the ideal rheological properties of polypropylene and characterised in terms of velocity vectors, shear stresses generated and a parameter known as the mixing efficiency. The majority of the large 3D simulations were performed on the Cray J90 supercomputer situated at the Rutherford Appleton Laboratory, with pre- and post-processing operations achieved via a Silicon Graphics Indy workstation. A mechanical model was constructed consisting of various CICo-TSE elements rotating within a transparent outer barrel. A technique has been developed using coloured viscous clays whereby the flow patterns and mixing characteristics within the CICo-TSE may be visualised. In order to test and verify the simulated predictions, the patterns observed within the mechanical model were compared with the flow patterns predicted by the computational model. The flow patterns within the single-flighted conveying screw elements in particular showed good agreement between the experimental and simulated results.
Abstract:
This work is undertaken in an attempt to understand the processes at work at the cutting edge of the twist drill. Extensive drill life testing performed by the University has reinforced a survey of previously published information. This work demonstrated that there are two specific aspects of drilling which have not previously been explained comprehensively. The first concerns the interrelating of process data between differing drilling situations. There is no method currently available which allows the cutting geometry of drilling to be defined numerically, so such comparisons, where made, are purely subjective. Section one examines this problem by taking as an example a 4.5 mm drill suitable for use with aluminium. This drill is examined using a prototype solid modelling program to explore how the required numerical information may be generated. The second aspect is the analysis of drill stiffness. What aspects of drill stiffness produce the very great difference in performance between short, medium and long flute length drills? These differences exist between drills of identical point geometry, and the practical superiority of short drills has been known to shop-floor drilling operatives since drilling was first introduced. This problem has been dismissed repeatedly as over-complicated, but section two provides a first approximation and shows that, at least for smaller drills of 4.5 mm, the effects are highly significant. Once the cutting action of the twist drill is defined geometrically, there is a huge body of machinability data that becomes applicable to the drilling process. Work remains to interpret the very high inclination angles of the drill cutting process in terms of cutting forces and tool wear, but aspects of drill design may already be looked at in new ways, with the prospect of a more analytical approach rather than the present mix of experience and trial and error. Other problems are specific to the twist drill, such as the behaviour of the chips in the flute. It is now possible to predict the initial direction of chip flow leaving the drill cutting edge. For the future, the parameters of further chip behaviour may also be explored within this geometric model.
Abstract:
The investigations described in this thesis concern the molecular interactions between polar solute molecules and various aromatic compounds in solution. Three different physical methods were employed. Nuclear magnetic resonance (n.m.r.) spectroscopy was used to determine the nature and strength of the interactions and the geometry of the transient complexes formed. Cryoscopic studies were used to provide information on the stoichiometry of the complexes. Dielectric constant studies were conducted in an attempt to confirm and supplement the spectroscopic investigations. The systems studied were those between nitromethane, chloroform and acetonitrile (solutes) and various methyl-substituted benzenes. In the n.m.r. work the dependence of the solute chemical shift upon the compositions of the solutions was determined. From this the equilibrium quotients (K) for the formation of each complex and the shift induced in the solute proton by the aromatic in the complex were evaluated. The thermodynamic parameters for the interactions were obtained from the determination of K at several temperatures. The stoichiometries of the complexes obtained from cryoscopic studies were found to agree with those deduced from spectroscopic investigations. For most systems it is suggested that only one type of complex, of 1:1 stoichiometry, predominates, except that for the acetonitrile-benzene system a 1:2 complex is formed. Two sets of dielectric studies were conducted, the first to show that the nature of the interaction is dipole-induced dipole and the second to calculate K. The equilibrium quotients obtained from spectroscopic and dielectric studies are compared. Time-averaged geometries of the complexes are proposed. The orientation of the solute with respect to the aromatic for the 1:1 complexes appears to be one in which the solute lies symmetrically about the aromatic six-fold axis, whereas for the 1:2 complex a sandwich structure is proposed. It is suggested that the complexes are formed through a dipole-induced dipole interaction and that steric factors play some part in complex formation.
Abstract:
This paper introduces a new mathematical method for improving the discrimination power of data envelopment analysis and completely ranking the efficient decision-making units (DMUs). The fuzzy-set concept is utilised. For this purpose, all DMUs are first evaluated with the CCR model. Thereafter, the resulting weights for each output are treated as fuzzy sets and are then converted to fuzzy numbers. The introduced model is a multi-objective linear model whose endpoints are the highest and lowest of the weighted values. An added advantage of the model is its ability to handle the infeasibility situation sometimes faced by previously introduced models.
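For context, the CCR model referred to is conventionally stated in multiplier form as below; the optimal output weights u_r from each evaluation are the quantities subsequently treated as fuzzy sets. The notation (inputs x_{ij}, outputs y_{rj}, DMU under evaluation o) is the textbook one and may differ from the paper's.

```latex
\max_{u,v}\ \sum_{r=1}^{s} u_r y_{ro}
\quad\text{s.t.}\quad
\sum_{i=1}^{m} v_i x_{io} = 1,\qquad
\sum_{r=1}^{s} u_r y_{rj} - \sum_{i=1}^{m} v_i x_{ij} \le 0 \ \ (j=1,\dots,n),\qquad
u_r,\, v_i \ge 0 .
```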
Abstract:
Developing a means of predicting tool life has been, and continues to be, a focus of much research effort. A common experience in attempting to replicate such efforts is an inability to achieve the levels of agreement between theory and practice reported by the original researcher, or to extrapolate the work to materials or cutting conditions different from those originally used. This thesis sets out to examine why most equations or models, when replicated, do not give good agreement. One reason found is that predictions in the wear-prediction literature are limited because researchers generally fail to properly identify the nature of the wear mechanisms operative in their studies. They also fail to identify or recognise factors having a significant influence on wear, such as bar diameter. In this research the similarities and differences between the two processes of single-point turning and drilling are also examined through a series of tests. A literature survey was undertaken on wear and wear prediction. It was found that there is a paucity of information and research on drilling compared with the turning operation, extending to the lack of standards that exist for the drilling operation. One reason for this scarcity of information on drilling is the complexity of the drilling process and of the drill's tool geometry. In the comparative drilling and turning tests performed in this work, the same tool material (HSS) and similar work material were used in order to eliminate the differences which may occur due to this factor. Results of the tests were evaluated and compared for the two operations, and SEM photographs were taken of the chips produced. Specific test results were obtained for the cutting temperatures and forces at the tool. It was found that cutting temperature is influenced by various factors such as tool geometry and cutting speed, and that the temperature itself influenced the tool wear and the wear mechanisms acting on the tool. It was found and proven that bar diameter influences the temperature, a factor not considered previously.
Abstract:
This thesis explores the process of developing a principled approach for translating a model of mental-health risk expertise into a probabilistic graphical structure. Probabilistic graphical structures combine graph theory and probability theory, and provide numerous advantages for representing domains involving uncertainty, such as the mental health domain. This thesis builds on the advantages that probabilistic graphical structures offer in representing such domains. The Galatean Risk Screening Tool (GRiST) is a psychological model for mental health risk assessment based on fuzzy sets. In this thesis the knowledge encapsulated in the psychological model was used to develop the structure of the probability graph by exploiting the semantics of the clinical expertise. The thesis describes how a chain graph can be developed from the psychological model to provide a probabilistic evaluation of risk that complements the one generated by GRiST's clinical expertise. The GRiST knowledge structure was decomposed into component parts, which were in turn mapped onto equivalent probabilistic graphical structures such as Bayesian belief nets and Markov random fields to produce a composite chain graph that provides a probabilistic classification of risk expertise to complement the expert clinical judgements.
Abstract:
Linear programming (LP) is the most widely used optimization technique for solving real-life problems because of its simplicity and efficiency. Although conventional LP models require precise data, managers and decision makers dealing with real-world optimization problems often do not have access to exact values. Fuzzy sets have been used in the fuzzy LP (FLP) problems to deal with the imprecise data in the decision variables, objective function and/or the constraints. The imprecisions in the FLP problems could be related to (1) the decision variables; (2) the coefficients of the decision variables in the objective function; (3) the coefficients of the decision variables in the constraints; (4) the right-hand-side of the constraints; or (5) all of these parameters. In this paper, we develop a new stepwise FLP model where fuzzy numbers are considered for the coefficients of the decision variables in the objective function, the coefficients of the decision variables in the constraints and the right-hand-side of the constraints. In the first step, we use the possibility and necessity relations for fuzzy constraints without considering the fuzzy objective function. In the subsequent step, we extend our method to the fuzzy objective function. We use two numerical examples from the FLP literature for comparison purposes and to demonstrate the applicability of the proposed method and the computational efficiency of the procedures and algorithms. © 2013-IOS Press and the authors. All rights reserved.
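As background for the first step, the standard possibility and necessity indices of a fuzzy quantity relative to a crisp right-hand side are recalled below, together with the familiar crisp reduction for a triangular fuzzy number; these are generic definitions, not the paper's exact chance-constrained reformulation.

```latex
\operatorname{Pos}(\tilde{A} \le b) = \sup_{x \le b} \mu_{\tilde{A}}(x),
\qquad
\operatorname{Nec}(\tilde{A} \le b) = 1 - \sup_{x > b} \mu_{\tilde{A}}(x);
\qquad
\tilde{A} = (a_l, a_m, a_u)\ \text{triangular}:\ \
\operatorname{Pos}(\tilde{A} \le b) \ge \alpha
\iff a_l + \alpha\,(a_m - a_l) \le b .
```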
Developing a probabilistic graphical structure from a model of mental-health clinical risk expertise
Abstract:
This paper explores the process of developing a principled approach for translating a model of mental-health risk expertise into a probabilistic graphical structure. The Galatean Risk Screening Tool [1] is a psychological model for mental health risk assessment based on fuzzy sets. This paper details how the knowledge encapsulated in the psychological model was used to develop the structure of the probability graph by exploiting the semantics of the clinical expertise. These semantics are formalised by a detailed specification for an XML structure used to represent the expertise. The component parts were then mapped to equivalent probabilistic graphical structures such as Bayesian Belief Nets and Markov Random Fields to produce a composite chain graph that provides a probabilistic classification of risk expertise to complement the expert clinical judgements. © Springer-Verlag 2010.
Abstract:
Objective: The aims of this study were to establish the structure of the potent anticonvulsant enaminone methyl 4-(4′-bromophenyl)amino-6-methyl-2-oxocyclohex-3-en-1-oate (E139), and to determine the energetically preferred conformation of the molecule, which is responsible for the biological activity. Materials and Methods: The structure of the molecule was determined by X-ray crystallography. Theoretical ab initio calculations with different basis sets were used to compare the energies of the different enantiomers and with other structurally related compounds. Results: The X-ray crystal structure revealed two independent molecules of E139, both with absolute configuration C11(S), C12(R), and their inverse. Ab initio calculations with the 6-31G, 3-21G and STO-3G basis sets confirmed that the C11(S), C12(R) enantiomer with both substituents equatorial had the lowest energy. Compared with relevant crystal structures, the geometry of the theoretical structures shows a longer C-N and shorter C=O distance, with more cyclohexene ring puckering in the isolated molecule. Conclusion: Based on a pharmacophoric model, it is suggested that the enaminone system HN-C=C-C=O and the 4-bromophenyl group in E139 are necessary to confer anticonvulsant properties, a finding that could lead to the design of new and improved anticonvulsant agents. Copyright © 2003 S. Karger AG, Basel.
Abstract:
Neural networks can be regarded as statistical models, and can be analysed in a Bayesian framework. Generalisation is measured by the performance on independent test data drawn from the same distribution as the training data. Such performance can be quantified by the posterior average of the information divergence between the true and the model distributions. Averaging over the Bayesian posterior guarantees internal coherence; using information divergence guarantees invariance with respect to representation. The theory generalises the least mean squares theory for linear Gaussian models to general problems of statistical estimation. The main results are: (1) the ideal optimal estimate is always given by the average over the posterior; (2) the optimal estimate within a computational model is given by the projection of the ideal estimate onto the model. This incidentally shows that some currently popular methods dealing with hyperpriors are in general unnecessary and misleading. The extension of information divergence to positive normalisable measures reveals a remarkable relation between the δ dual affine geometry of statistical manifolds and the geometry of the corresponding dual pair of Banach spaces. It therefore offers conceptual simplification to information geometry. The general conclusion on the issue of evaluating neural network learning rules and other statistical inference methods is that such evaluations are only meaningful under three assumptions: the prior P(p), describing the environment of all the problems; the divergence D_δ, specifying the requirement of the task; and the model Q, specifying the available computing resources.
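A hedged illustration of result (1) in the familiar Kullback-Leibler special case: if generalisation is scored by the divergence D(p‖q) between the true distribution p and the model q, the estimate minimising the posterior expected divergence is the posterior average of p. The thesis works with more general (δ-) divergences, so this is only the simplest instance, in notation chosen here rather than quoted from the abstract.

```latex
\hat{q} \;=\; \arg\min_{q}\ \mathbb{E}_{\pi(p\mid\mathcal{D})}\!\left[D(p\,\|\,q)\right]
\;=\; \mathbb{E}_{\pi(p\mid\mathcal{D})}[\,p\,],
\qquad
D(p\,\|\,q) = \int p(x)\,\log\frac{p(x)}{q(x)}\,dx .
```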
Abstract:
The whole set of nickel(II) complexes with non-derivatized edta-type hexadentate ligands has been investigated with respect to their structural and electronic properties. Two more complexes were prepared in order to complete the set: trans(O5)-[Ni(ED3AP)]2- and trans(O5O6)-[Ni(EDA3P)]2-. The trans(O5) geometry has been verified crystallographically, and the trans(O5O6) geometry of the second complex has been predicted by DFT calculations and spectral analysis. A mutual dependence has been established between the number of five-membered carboxylate rings, the octahedral/tetrahedral deviation of metal-ligand/nitrogen-neighbour-atom angles, the charge-transfer energies (CTE) calculated by Morokuma's energy decomposition analysis, the energy of the absorption bands and the HOMO–LUMO gap.