867 results for Geometry of Fuzzy sets
Abstract:
The scattering behaviour of fractal-based metallodielectric structures loaded over metallic targets of different shapes, such as a flat plate, a cylinder and a dihedral corner reflector, is investigated for both TE and TM polarizations of the incident wave. Of the various fractal structures studied, the square Sierpinski carpet structure is found to give backscattering reduction over an appreciable range of frequencies. The frequency of minimum backscattering depends on the geometry of the structure as well as on the thickness of the substrate. When loaded over a dihedral corner reflector, this structure shows an enhancement in RCS for corner angles other than 90°.
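For readers unfamiliar with the geometry named above, the short Python sketch below generates the occupancy mask of a square Sierpinski carpet of a chosen order. It is purely illustrative: the recursion depth, the 0/1 grid coding and the reading of filled cells as metallic patches are assumptions made here, not details taken from the abstract.

import numpy as np

def sierpinski_carpet(order):
    """Return a 0/1 occupancy mask of a square Sierpinski carpet of the given order."""
    mask = np.ones((1, 1), dtype=int)
    for _ in range(order):
        n = mask.shape[0]
        grown = np.zeros((3 * n, 3 * n), dtype=int)
        for i in range(3):
            for j in range(3):
                if (i, j) != (1, 1):          # the central ninth of every square stays empty
                    grown[i * n:(i + 1) * n, j * n:(j + 1) * n] = mask
        mask = grown
    return mask

carpet = sierpinski_carpet(3)                 # 27 x 27 grid; 1 = patch present, 0 = hole
print(carpet.sum(), "of", carpet.size, "cells filled")   # 512 of 729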
Abstract:
This thesis describes the development and analysis of an Isosceles Trapezoidal Dielectric Resonator Antenna (ITDRA), realizing different DR orientations with suitable feed configurations so that it can be used for multiband, dual-band dual-polarized and wideband applications. The work is motivated by the need for compact, highly efficient, low-cost antennas suitable for multiband operation, dual-band dual-polarized operation and broadband operation, compatible with MICs, and supporting less expensive, more efficient and higher-quality wireless communication systems. To satisfy these demands, a novel-shaped Dielectric Resonator (DR) is fabricated and investigated for the required properties by trying out different orientations of the DR on a simple microstrip feed, and with a slotted ground plane as well. The thesis first reviews recent and past developments in the microwave industry on this topic through a concise survey of the literature. The theoretical aspects of DRAs and different feeding techniques are then described, followed by the fabrication and characterization of the DRA. Both simulations and experimental measurements were undertaken to meet the requirements above. A 3-D finite element method (FEM) electromagnetic simulation tool, HFSS™ by Agilent, is used to determine the optimum geometry of the dielectric resonator; it was found useful for producing approximate results, although it has some limitations. A numerical technique, the finite difference time domain (FDTD) method, is used to validate the wideband design at the end, and MATLAB is used for modeling the ITDR and implementing the FDTD analysis. In conclusion, this work offers a new, efficient and relatively simple alternative antenna for multiple requirements in wireless communication systems.
Abstract:
Frames are the most widely used structural system for multistorey buildings. A building frame is a three-dimensional discrete structure consisting of a number of high-rise bays in two directions at right angles to each other in the vertical plane. Multistorey frames are three-dimensional lattice structures that are statically indeterminate. Frames sustain gravity loads and resist the lateral forces acting on them. India lies at the north-western end of the Indo-Australian tectonic plate and is identified as an active tectonic area. Under horizontal shaking of the ground, horizontal inertial forces are generated at the floor levels of a multistorey frame. These lateral inertia forces are transferred by the floor slab to the beams, subsequently to the columns and finally to the soil through the foundation system. Many parameters affect the response of a structure to ground excitation, such as the shape, size and geometry of the structure, the type of foundation, and the soil characteristics. Soil Structure Interaction (SSI) effects refer to the influence of the supporting soil medium on the behaviour of the structure when it is subjected to different types of loads. The interaction between the structure, its supporting foundation and the soil, treated as a complete system, has been modelled with finite elements. Numerical investigations have been carried out on a four-bay, twelve-storeyed regular multistorey frame considering the depth of fixity at ground level, at the characteristic depth of the pile and at full depth. Soil structure interaction effects have been studied by considering two models for the soil, viz. discrete and continuum. Linear static analysis has been conducted to study the interaction effects under static load. Free vibration analysis and, further, shock spectrum analysis have been conducted to study the interaction effects under time-dependent loads. The study has been extended to four types of soil, viz. laterite, sand, alluvium and layered soil. The structural responses evaluated in the finite element analysis are bending moment, shear force and axial force for columns, and bending moment and shear force for beams. These responses increase with increase in the founding depth; however, they show minimal increase beyond the characteristic length of the pile. When soil structure interaction effects are incorporated in the analysis, the aforesaid responses of the frame increase up to the characteristic depth and decrease when the frame is analysed for the full depth. It has been observed that shock spectrum analysis gives a wider variation of responses in the frame compared to linear elastic analysis. Both increases and decreases in responses have been observed in the interior storeys. The good agreement shown by the two finite element models, viz. discrete and continuum, in linear static analysis is absent in shock spectrum analysis.
Abstract:
Mathematical models are often used to describe physical realities. However, physical realities are imprecise, while mathematical concepts are required to be precise and perfect. Even mathematicians like H. Poincaré worried about this; he observed that mathematical models are over-idealizations, remarking, for instance, that only in mathematics is equality a transitive relation. A first attempt to remedy this situation was perhaps made by K. Menger in 1951, who introduced the concept of a statistical metric space, in which the distance between points is a probability distribution on the set of nonnegative real numbers rather than a single nonnegative real number. Other attempts were made by M.J. Frank, U. Höhle, B. Schweizer, A. Sklar and others. An aspect common to all these approaches is that they model impreciseness in a probabilistic manner; they cannot deal with situations in which the impreciseness is not apparently of a probabilistic nature. This thesis is confined to introducing and developing a theory of fuzzy semi-inner product spaces.
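For context, the notion of a statistical (Menger) metric space mentioned above is usually stated as follows; this is the textbook formulation and is not quoted from the thesis. A statistical metric space assigns to every pair of points \(p,q\) a distribution function \(F_{pq}\) on \([0,\infty)\), with \(F_{pq}(t)\) read as "the probability that the distance between \(p\) and \(q\) is smaller than \(t\)", subject to
\[
F_{pq}(0)=0,\qquad F_{pq}(t)=1\ \text{for all } t>0 \iff p=q,\qquad F_{pq}=F_{qp},
\]
and, in Menger's formulation, to the triangle inequality
\[
F_{pr}(s+t)\;\ge\;T\bigl(F_{pq}(s),\,F_{qr}(t)\bigr)
\]
for a suitable triangular norm \(T\).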
Abstract:
It is believed that every fuzzy generalization should be formulated in such a way that it contains the ordinary set-theoretic notion as a special case. Therefore the definition of fuzzy topology in the line of C.L. Chang [9], with an arbitrary complete and distributive lattice as the membership set, is adopted. Almost all the results proved and presented in this thesis can, in a sense, be called generalizations of corresponding results in ordinary set theory and set topology; however, the tools and the methods have, in many cases, to be new. Here an attempt is made to solve the problem of complementation in the lattice of fuzzy topologies on a set. It is proved that, in general, the lattice of fuzzy topologies is not complemented. Complements of some fuzzy topologies are determined. It is observed that (L,X) is not uniquely complemented. However, a complete analysis of the problem of complementation in the lattice of fuzzy topologies remains open.
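As a reminder of the definition referred to above, Chang's notion of a fuzzy topology, written here with a complete and distributive lattice \(L\) as the membership set, is roughly the following (a standard formulation, not quoted from the thesis). A fuzzy topology on a set \(X\) is a family \(\delta\) of \(L\)-fuzzy sets (maps \(X\to L\)) such that
\[
\underline{0},\,\underline{1}\in\delta,\qquad
u,v\in\delta\ \Rightarrow\ u\wedge v\in\delta,\qquad
\{u_i\}_{i\in I}\subseteq\delta\ \Rightarrow\ \bigvee_{i\in I}u_i\in\delta,
\]
with \(\wedge\) and \(\bigvee\) taken pointwise in \(L\). The fuzzy topologies on \(X\) form a lattice under inclusion, and it is complementation in this lattice that the thesis investigates.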
Abstract:
Crystals of molecular nanomagnets are found to exhibit enhanced magnetic relaxation when placed inside a resonant cavity. A strong dependence of the magnetization curve on the geometry of the cavity has been observed, providing indirect evidence of coherent microwave radiation by the crystals. A similar dependence has been found for a crystal placed between superconducting Fabry-Perot mirrors.
Abstract:
The magnetic-field dependence of the magnetization of cylinders, disks, and spheres of pure type-I superconducting lead was investigated by means of isothermal measurements of first magnetization curves and hysteresis cycles. Depending on the geometry of the sample and on the direction and intensity of the applied magnetic field, the intermediate state exhibits different irreversible features that become particularly evident in minor hysteresis cycles. The irreversibility is noticeably observed in cylinders and disks only when the magnetic field is parallel to the axis of revolution, and is very subtle in spheres. When the magnetic field decreases from the normal state, the irreversibility appears at a temperature-dependent value whose distance to the thermodynamic critical field depends on the sample geometry. The irreversible features in the disks are altered when they are subjected to an annealing process. These results agree well with very recent high-resolution magneto-optical experiments on similar materials that were interpreted in terms of transitions between different topological structures for the flux configuration in the intermediate state. A discussion is given of the relative roles of geometrical barriers for flux entry and exit and of pinning effects as the causes of the magnetic irreversibility.
Abstract:
The work is intended to study the following important aspects of document image processing and to develop new methods: (1) segmentation of document images using an adaptive interval-valued neuro-fuzzy method; (2) improvement of the segmentation procedure using the Simulated Annealing technique; (3) development of optimized compression algorithms using a Genetic Algorithm and a parallel Genetic Algorithm; (4) feature extraction of document images; (5) development of IV (interval-valued) fuzzy rules. The work also supports feature extraction and foreground/background identification. The proposed work incorporates evolutionary and hybrid methods for the segmentation and compression of document images. A study of the different neural networks used in image processing and of developments in the area of fuzzy logic is also carried out in this work.
Abstract:
Mathematical models are often used to describe physical realities. However, physical realities are imprecise, while mathematical concepts are required to be precise and perfect. The first chapter gives a brief summary of the arithmetic of fuzzy real numbers and of the fuzzy normed algebra M(I), and explains a few preliminary definitions and results required in the later chapters. Fuzzy real numbers were introduced by Hutton, B. [HU] and Rodabaugh, S.E. [ROD]. Our definition differs slightly from these, with an additional minor restriction; the definition of Clementina Felbin [CL1] is entirely different. The notations of [HU] and [M;Y] are retained in spite of the slight difference in the concept. In the third chapter, using the completion M'(I) of M(I), we give a fuzzy extension of the real Hahn-Banach theorem, and some consequences of this extension are obtained. The idea of a real fuzzy linear functional on a fuzzy normed linear space is introduced and some of its properties are studied. In the complex case we obtain only a slightly weaker analogue of the Hahn-Banach theorem than the one [B;N] in the crisp case.
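For reference, the crisp real Hahn-Banach theorem that the fuzzy version extends reads, in its standard form (again not quoted from the thesis): if \(p:X\to\mathbb{R}\) is a sublinear functional on a real vector space \(X\), \(M\subseteq X\) is a subspace, and \(f:M\to\mathbb{R}\) is linear with \(f(x)\le p(x)\) for all \(x\in M\), then there exists a linear functional \(F:X\to\mathbb{R}\) with
\[
F\big|_{M}=f \qquad\text{and}\qquad F(x)\le p(x)\ \ \text{for all } x\in X .
\]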
Abstract:
Relativistic density functional theory is widely applied in molecular calculations with heavy atoms, where relativistic and correlation effects have to be treated on the same footing. Variational stability of the Dirac Hamiltonian has been an important field of research since the beginning of relativistic molecular calculations, alongside efforts concerning accuracy, efficiency and density functional formulation. One- or two-component approximations and the search for suitable basis sets are the two major means of obtaining good projection power against the negative continuum. The minimax two-component spinor linear combination of atomic orbitals (LCAO) is applied in the present work to both light and super-heavy one-electron systems, providing good approximations over the whole energy spectrum, close to the benchmark minimax finite element method (FEM) values and free of spurious and contaminated states, in contrast to the presence of these artifacts in the traditional four-component spinor LCAO. Variational stability ensures that the minimax LCAO is bounded from below. New balanced basis sets, kinetic and potential defect balanced (TVDB), following the minimax idea, are applied with the Dirac Hamiltonian. Their performance on the same super-heavy one-electron quasi-molecules also shows very good projection capability against variational collapse, with the minimax LCAO taken as the best projection for comparison. The TVDB method has twice as many basis coefficients as the four-component spinor LCAO, but it becomes linear and thus overcomes the great time consumption of the minimax method. Calculations with both the TVDB method and the traditional LCAO method for dimers of the group-11 elements of the periodic table investigate the difference between the two. Larger basis sets than in previous research are constructed, achieving high accuracy within the functionals involved. The difference in total energy is much smaller than the basis incompleteness error, showing that the traditional four-spinor LCAO retains enough projection power from the numerical atomic orbitals and is suitable for research in relativistic quantum chemistry. In scattering investigations carried out for the same comparison purpose, the failure of the traditional LCAO method to provide a stable spectrum with increasing basis set size contrasts with the TVDB method, which contains no spurious states even without pre-orthogonalization of the basis sets. Keeping the same conditions, including the accuracy of the matrix elements, shows that the variational instability prevails over the linear dependence of the basis sets. The success of the TVDB method demonstrates its capability not only in relativistic quantum chemistry but also for scattering and under the influence of strong external electric and magnetic fields. The good accuracy in total energy with large basis sets and the good projection property encourage wider research on different molecules, with better functionals, and on small effects.
Abstract:
Background: The most common application of imputation is to infer genotypes of a high-density panel of markers on animals that are genotyped for a low-density panel. However, the increase in accuracy of genomic predictions resulting from an increase in the number of markers tends to reach a plateau beyond a certain density. Another application of imputation is to increase the size of the training set with un-genotyped animals. This strategy can be particularly successful when a set of closely related individuals are genotyped.
Methods: Imputation of completely un-genotyped dams was performed using known genotypes from the sire of each dam, one offspring and the offspring's sire. Two methods were applied, based on either allele or haplotype frequencies, to infer genotypes at ambiguous loci. Results of these methods and of two available software packages were compared. The quality of imputation under different population structures was assessed. The impact of using imputed dams to enlarge training sets on the accuracy of genomic predictions was evaluated for different populations, heritabilities and sizes of training sets.
Results: Imputation accuracy ranged from 0.52 to 0.93, depending on the population structure and the method used. The method that used allele frequencies performed better than the method based on haplotype frequencies. Accuracy of imputation was higher for populations with higher levels of linkage disequilibrium and with larger proportions of markers with more extreme allele frequencies. Inclusion of imputed dams in the training set increased the accuracy of genomic predictions. Gains in accuracy ranged from close to zero to 37.14%, depending on the simulated scenario. Generally, the larger the accuracy already obtained with the genotyped training set, the lower the increase in accuracy achieved by adding imputed dams.
Conclusions: Whenever a reference population resembling the family configuration considered here is available, imputation can be used to achieve an extra increase in the accuracy of genomic predictions by enlarging the training set with completely un-genotyped dams. This strategy was shown to be particularly useful for populations with lower levels of linkage disequilibrium, for genomic selection on traits with low heritability, and for species or breeds for which the size of the reference population is limited.
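Although the abstract does not give the imputation rules in detail, the idea of inferring a dam's genotype from her own sire, one offspring and the offspring's sire, with population allele frequencies resolving ambiguous cases, can be sketched at a single biallelic locus as below. This is a deliberately simplified, single-locus Bayesian illustration under Hardy-Weinberg and Mendelian assumptions; the 0/1/2 coding, the function names and the exact formula are choices made here and do not reproduce the methods evaluated in the paper.

def transmit_prob(genotype):
    """Probability that a parent with 0/1/2 copies of allele B transmits a B allele."""
    return genotype / 2.0

def offspring_likelihood(offspring_g, dam_g, sire_g):
    """P(offspring genotype | dam and sire genotypes) under Mendelian inheritance."""
    pd, ps = transmit_prob(dam_g), transmit_prob(sire_g)
    return {
        0: (1 - pd) * (1 - ps),
        1: pd * (1 - ps) + (1 - pd) * ps,
        2: pd * ps,
    }[offspring_g]

def dam_prior(dams_sire_g, p):
    """Prior over the dam's genotype: paternal allele from her genotyped sire,
    maternal allele drawn at the population B-allele frequency p."""
    p_pat, p_mat = transmit_prob(dams_sire_g), p
    return {
        0: (1 - p_pat) * (1 - p_mat),
        1: p_pat * (1 - p_mat) + (1 - p_pat) * p_mat,
        2: p_pat * p_mat,
    }

def impute_dam_dosage(dams_sire_g, offspring_g, offsprings_sire_g, p):
    """Posterior expected gene content (0-2) of the un-genotyped dam at one locus."""
    prior = dam_prior(dams_sire_g, p)
    post = {g: prior[g] * offspring_likelihood(offspring_g, g, offsprings_sire_g)
            for g in (0, 1, 2)}
    z = sum(post.values())
    if z == 0:
        raise ValueError("genotypes are Mendelian-inconsistent at this locus")
    return sum(g * w / z for g, w in post.items())

# Toy example: dam's sire heterozygous, offspring heterozygous, offspring's sire BB, p = 0.3
print(round(impute_dam_dosage(1, 1, 2, 0.3), 3))   # -> 0.417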
Abstract:
Mesh generation is an important step in many numerical methods. We present the "Hierarchical Graph Meshing" (HGM) method as a novel approach to mesh generation, based on algebraic graph theory. The HGM method can be used to systematically construct configurations exhibiting multiple hierarchies and complex symmetry characteristics. The hierarchical description of structures provided by the HGM method can be exploited to increase the efficiency of multiscale and multigrid methods. In this paper, the HGM method is employed for the systematic construction of super carbon nanotubes of arbitrary order, which are a pertinent example of structurally and geometrically complex, yet highly regular, structures. The HGM algorithm is computationally efficient and exhibits good scaling characteristics. In particular, it scales linearly for super carbon nanotube structures and works much faster than geometry-based methods employing neighborhood search algorithms. Its modular character makes it conducive to automation. For the generation of a mesh, the information about the geometry of the structure in a given configuration is added in a way that relates geometric symmetries to structural symmetries. The intrinsically hierarchic description of the resulting mesh greatly reduces the effort of determining mesh hierarchies for multigrid and multiscale applications and helps to exploit symmetry-related methods in the mechanical analysis of complex structures.
Abstract:
The objects with which the hand interacts may significantly change the dynamics of the arm. How does the brain adapt control of arm movements to these new dynamics? We show that adaptation occurs via composition of a model of the task's dynamics. By exploring the generalization capabilities of this adaptation we infer some of the properties of the computational elements with which the brain formed this model: the elements have broad receptive fields and encode the learned dynamics as a map structured in an intrinsic coordinate system closely related to the geometry of the skeletomusculature. The low-level nature of these elements suggests that they may represent a set of primitives with which a movement is represented in the CNS.
Abstract:
The Aitchison vector space structure for the simplex is generalized to a Hilbert space structure A2(P) for distributions and likelihoods on arbitrary spaces. Central notions of statistics, such as information or likelihood, can be identified within the algebraic structure of A2(P) and related to their corresponding notions in compositional data analysis, such as the Aitchison distance or the centered log-ratio transform. In this way very elaborate aspects of mathematical statistics can be understood easily in the light of a simple vector space structure and of compositional data analysis. For example, combinations of statistical information, such as Bayesian updating, combination of likelihoods and robust M-estimation functions, are simple additions/perturbations in A2(P_prior). Weighting observations corresponds to a weighted addition of the corresponding evidence. Likelihood-based statistics for general exponential families turn out to have a particularly easy interpretation in terms of A2(P): regular exponential families form finite-dimensional linear subspaces of A2(P), and they correspond to finite-dimensional subspaces formed by their posteriors in the dual information space A2(P_prior). The Aitchison norm can be identified with the mean Fisher information. The closing constant itself is identified with a generalization of the cumulant function and is shown to be the Kullback-Leibler directed information. Fisher information is the local geometry of the manifold induced by the A2(P) derivative of the Kullback-Leibler information, and the space A2(P) can therefore be seen as the tangential geometry of statistical inference at the distribution P. The discussion of A2(P)-valued random variables, such as estimating functions or likelihoods, gives a further interpretation of Fisher information as the expected squared norm of evidence and a scale-free understanding of unbiased reasoning.
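For readers unfamiliar with the compositional-data notions invoked above, the classical finite-dimensional definitions from Aitchison geometry (standard material, not specific to this abstract) are as follows. For a composition \(x=(x_1,\dots,x_D)\) in the simplex,
\[
\operatorname{clr}(x)=\Bigl(\ln\tfrac{x_1}{g(x)},\dots,\ln\tfrac{x_D}{g(x)}\Bigr),\qquad
g(x)=\Bigl(\textstyle\prod_{i=1}^{D}x_i\Bigr)^{1/D},
\]
perturbation (the vector addition of the Aitchison geometry) is \(x\oplus y=\mathcal{C}(x_1y_1,\dots,x_Dy_D)\), with \(\mathcal{C}\) the closure to unit sum, and the Aitchison inner product and distance are
\[
\langle x,y\rangle_A=\sum_{i=1}^{D}\operatorname{clr}(x)_i\,\operatorname{clr}(y)_i,\qquad
d_A(x,y)=\bigl\lVert \operatorname{clr}(x)-\operatorname{clr}(y)\bigr\rVert_2 .
\]
The Hilbert space A2(P) of the abstract generalizes this structure from finitely many parts to densities and likelihoods on arbitrary spaces.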
Abstract:
A joint distribution of two discrete random variables with finite support can be displayed as a two-way table of probabilities adding to one. Assume that this table has n rows and m columns and that all probabilities are non-null. Such a table can be seen as an element of the simplex of n·m parts. In this context, the marginals are identified as compositional amalgams and the conditionals (rows or columns) as subcompositions. Also, simplicial perturbation appears as Bayes' theorem. However, the Euclidean elements of the Aitchison geometry of the simplex can also be translated into the table of probabilities: subspaces, orthogonal projections, distances. Two important questions are addressed: (a) given a table of probabilities, which is the nearest independent table to the initial one? (b) which is the largest orthogonal projection of a row onto a column, or, equivalently, which is the information in a row explained by a column, thus explaining the interaction? To answer these questions three orthogonal decompositions are presented: (1) by columns and a row-wise geometric marginal, (2) by rows and a column-wise geometric marginal, (3) by independent two-way tables and fully dependent tables representing row-column interaction. An important result is that the nearest independent table is the product of the two (row- and column-wise) geometric marginal tables. A corollary is that, in an independent table, the geometric marginals conform with the traditional (arithmetic) marginals. These decompositions can be compared with standard log-linear models. Key words: balance, compositional data, simplex, Aitchison geometry, composition, orthonormal basis, arithmetic and geometric marginals, amalgam, dependence measure, contingency table
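The central result quoted above, that the nearest independent table is the closed product of the two geometric marginal tables, can be checked numerically. The following Python sketch uses names and a toy table chosen here purely for illustration; it is not code from the paper.

import numpy as np

def closure(t):
    """Rescale a strictly positive array so that its entries sum to one."""
    return t / t.sum()

def geometric_marginals(p):
    """Closed row-wise and column-wise geometric marginals of a positive probability table."""
    row_gm = np.exp(np.log(p).mean(axis=1))   # geometric mean of each row
    col_gm = np.exp(np.log(p).mean(axis=0))   # geometric mean of each column
    return closure(row_gm), closure(col_gm)

def nearest_independent_table(p):
    """Closed outer product of the geometric marginals (the result stated in the abstract)."""
    r, c = geometric_marginals(p)
    return closure(np.outer(r, c))

p = closure(np.array([[0.10, 0.20, 0.05],
                      [0.15, 0.30, 0.20]]))
q = nearest_independent_table(p)
print(np.round(q, 4))

# An independent table is its own nearest independent table ...
print(np.allclose(nearest_independent_table(q), q))
# ... and its geometric marginals agree with the arithmetic (row- and column-sum) marginals.
r_gm, c_gm = geometric_marginals(q)
print(np.allclose(r_gm, q.sum(axis=1)), np.allclose(c_gm, q.sum(axis=0)))

The two final checks illustrate, respectively, the idempotence of the construction and the corollary stated in the abstract that geometric and arithmetic marginals conform in an independent table.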