897 results for Minkowski metric
Abstract:
There are many ways to generate geometric models for numerical simulation, and most of them start with a segmentation step to extract the boundaries of the regions of interest. This paper presents an algorithm to generate a patient-specific three-dimensional geometric model, based on a tetrahedral mesh, without an initial extraction of contours from the volumetric data. Using the information directly available in the data, such as gray levels, we built a metric to drive a mesh adaptation process. The metric is used to specify the size and orientation of the tetrahedral elements everywhere in the mesh. Our method, which produces anisotropic meshes, gives good results with synthetic and real MRI data. The resulting model quality has been evaluated qualitatively and quantitatively by comparing it with an analytical solution and with a segmentation made by an expert. Results show that, in 90% of the cases, our method gives meshes as good as or better than those of a similar isotropic method, based on the accuracy of the volume reconstruction for a given mesh size. Moreover, a comparison of the Hausdorff distances between the adapted meshes of both methods and ground-truth volumes shows that our method decreases reconstruction errors faster. Copyright © 2015 John Wiley & Sons, Ltd.
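For illustration, the symmetric Hausdorff distance used in the evaluation above can be computed on point clouds sampled from the two surfaces. This is a minimal sketch using SciPy; the sampling of points from the adapted mesh and from the ground-truth volume is assumed here, and is not part of the paper.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two (N, 3) point clouds,
    e.g. points sampled from a reconstructed mesh surface (a) and
    from the ground-truth surface (b)."""
    d_ab = directed_hausdorff(a, b)[0]  # sup over a of dist to nearest b
    d_ba = directed_hausdorff(b, a)[0]
    return max(d_ab, d_ba)

# Hypothetical usage with synthetic point clouds.
rng = np.random.default_rng(0)
truth = rng.normal(size=(1000, 3))
recon = truth + rng.normal(scale=0.01, size=(1000, 3))
print(hausdorff(recon, truth))
```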
Abstract:
The main purpose of the study is to extend the concept of the class of spaces called 'generalized metric spaces' to the fuzzy context and to investigate its properties. Any class of spaces defined by a property possessed by all metric spaces could technically be called a class of 'generalized metric spaces'. But the term is usually reserved for classes that are 'close' to metrizable spaces in some sense or are well behaved under certain kinds of mappings. The theory of generalized metric spaces is closely related to metrization theory. Classes of spaces such as Morita's M-spaces, Borges's wΔ-spaces, Arhangelskii's p-spaces and Okuyama's σ-spaces play major roles in the theory of generalized metric spaces. The thesis introduces fuzzy metrizable spaces and fuzzy submetrizable spaces and proves some characterizations of fuzzy submetrizable spaces; it also introduces fuzzy generalized metric spaces such as fuzzy wΔ-spaces, fuzzy Moore spaces, fuzzy M-spaces, fuzzy k-spaces and fuzzy σ-spaces, studies their properties, and proves some equivalent conditions for fuzzy p-spaces. The concept of a network is one of the most useful tools in the theory of generalized metric spaces. The σ-spaces are a class of generalized metric spaces defined in terms of networks.
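For context, one standard formulation of a fuzzy metric space, due to George and Veeramani (the thesis may work with a different variant), is a triple (X, M, ∗) with ∗ a continuous t-norm and M : X × X × (0, ∞) → [0, 1] satisfying, for all x, y, z ∈ X and s, t > 0:

\[
M(x,y,t) > 0; \qquad M(x,y,t) = 1 \iff x = y; \qquad M(x,y,t) = M(y,x,t);
\]
\[
M(x,z,t+s) \ \ge\ M(x,y,t) * M(y,z,s); \qquad M(x,y,\cdot) : (0,\infty) \to [0,1] \ \text{is continuous.}
\]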
Abstract:
The present study is on chaos and fractals in general topological spaces. Chaos theory originated with the work of Edward Lorenz. The phenomenon which changes order into disorder is known as chaos. The theory of fractals has its origin in the framework of Benoit Mandelbrot in 1977. Fractals are irregular objects. In this study different properties of topological entropy in chaos spaces are studied, including hyperspaces. Topological entropy is a measure of the complexity of a space, and can be used to compare different chaos spaces. The concept of fractals cannot be extended directly to general topological spaces, since it involves Hausdorff dimension. The relations between Hausdorff dimension and packing dimension are examined. Regular sets are defined in metric spaces using packing measures, in the same way as regular sets were defined in ℝⁿ by K. Falconer using Hausdorff measures, and different properties of these regular sets are obtained. Some properties of self-similar sets and partial self-similar sets are studied. We can associate a directed graph with each partial self-similar set, and dimension properties of partial self-similar sets are studied using this graph. Super-self-similar sets are introduced as a generalization of self-similar sets, and it is proved that chaotic self-similar sets are dense in the hyperspace. The study concludes with some relationships between different kinds of dimension and fractals.
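For reference, the Hausdorff dimension invoked above is the standard one: for E a subset of a metric space and s ≥ 0,

\[
\mathcal{H}^{s}(E) = \lim_{\delta \to 0}\, \inf \Bigl\{ \sum_i |U_i|^{s} : E \subseteq \bigcup_i U_i,\ |U_i| \le \delta \Bigr\},
\qquad
\dim_H E = \inf \{ s \ge 0 : \mathcal{H}^{s}(E) = 0 \},
\]

and the packing dimension is defined analogously with coverings replaced by packings of disjoint balls centred in E, which is why both notions require a metric rather than a bare topology.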
Abstract:
This thesis is entitled 'Geometric Algebra and Einstein's Electron: Deterministic Field Theories'. The work in this thesis clarifies an important part of Koga's theory. Koga also developed a theory of the electron incorporating its gravitational field, using his substitutes for Einstein's equation. The third chapter deals with the application of geometric algebra to Koga's approach to the Dirac equation. In Chapter 4 we study some aspects of the work of Mendel Sachs [35, 36, 37]. Sachs's stated aim is to show how quantum mechanics is a limiting case of a general relativistic unified field theory. Chapter 5 contains a critical study and comparison of the work of Koga and Sachs. In particular, we conclude that the incorporation of Mach's principle is not necessary in Sachs's treatment of the Dirac equation.
Abstract:
The service quality of any sector has two major aspects, namely technical and functional. Technical quality can be attained by maintaining the technical specifications decided by the organization. Functional quality refers to the manner in which the service is delivered to the customer, which can be assessed through customer feedback. A field survey was conducted based on the management tool SERVQUAL, by designing 28 constructs under 7 dimensions of service quality. Stratified sampling techniques were used to get 336 valid responses, and the gap scores between expectations and perceptions were analyzed using statistical techniques to identify the weakest dimension. To assess the technical aspects of availability, six months of live outage data of base transceiver stations were collected. Statistical and exploratory techniques were used to model the network performance. The failure patterns were modeled with competing risk models, and the probability distributions of service outage and restoration were parameterized. Since the availability of a network is a function of the reliability and maintainability of the network elements, any service provider who wishes to keep up their service level agreements on availability should be aware of the variability of these elements and the effects of their interactions. The availability variations were studied by designing a discrete-time event simulation model with probabilistic input parameters. The probability distribution parameters derived from the live data analysis were used to design experiments to define the availability domain of the network under consideration. The availability domain can be used as a reference for planning and implementing maintenance activities. A new metric is proposed which incorporates a consistency index along with key service parameters and can be used to compare the performance of different service providers. The developed tool can be used for reliability analysis of mobile communication systems and assumes greater significance in the wake of the mobile portability facility. It also makes possible a relative measure of the effectiveness of different service providers.
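The kind of event simulation described above can be sketched as follows. The distributional choices here (exponential time-to-failure, lognormal restoration) and all parameter values are illustrative assumptions, standing in for the parameters fitted from the live outage data.

```python
import random

def simulate_availability(mtbf: float, repair_mu: float, repair_sigma: float,
                          horizon: float, runs: int = 1000) -> float:
    """Monte Carlo estimate of availability over a horizon: alternate
    exponential up-times (mean mtbf) with lognormal restoration times."""
    total_up = 0.0
    for _ in range(runs):
        t = up = 0.0
        while t < horizon:
            ttf = random.expovariate(1.0 / mtbf)        # time to failure
            up += min(ttf, horizon - t)                 # credit up-time
            t += ttf
            if t >= horizon:
                break
            t += random.lognormvariate(repair_mu, repair_sigma)  # outage
        total_up += up
    return total_up / (runs * horizon)

# Illustrative parameters: ~six months (in hours) of simulated operation.
print(simulate_availability(mtbf=500.0, repair_mu=1.0, repair_sigma=0.5,
                            horizon=24 * 182.0))
```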
Abstract:
Mathematical models are often used to describe physical realities. However, physical realities are imprecise, while mathematical concepts are required to be precise and perfect. Even mathematicians like H. Poincaré worried about this. He observed that mathematical models are over-idealizations; for instance, he said that only in mathematics is equality a transitive relation. A first attempt to remedy this situation was perhaps made by K. Menger in 1951 by introducing the concept of a statistical metric space, in which the distance between points is a probability distribution on the set of nonnegative real numbers rather than a mere nonnegative real number. Other attempts were made by M.J. Frank, U. Höhle, B. Schweizer, A. Sklar and others. An aspect common to all these approaches is that they model impreciseness in a probabilistic manner. They are not able to deal with situations in which the impreciseness is not apparently of a probabilistic nature. This thesis is confined to introducing and developing a theory of fuzzy semi inner product spaces.
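In Menger's formulation, the distance between points x and y is a distribution function F_{x,y} on the nonnegative reals, and the triangle inequality is expressed through a t-norm T; in the standard statement,

\[
F_{x,y}(t) = 1 \ \text{for all } t > 0 \ \iff\ x = y, \qquad
F_{x,y} = F_{y,x}, \qquad
F_{x,z}(t+s) \ \ge\ T\bigl(F_{x,y}(t),\ F_{y,z}(s)\bigr),
\]

so that F_{x,y}(t) is read as the probability that the distance between x and y is less than t.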
Abstract:
With the stabilization of world finfish catches in general, and the depletion of a number of fish stocks that used to support industrial-scale fisheries, increasing attention is now being paid to the so-called unconventional marine resources, which include many species of cephalopods. One such important cephalopod resource is the tropical Indo-Pacific pelagic oceanic squid Sthenoteuthis oualaniensis. It is the most abundant large-sized squid in the Indo-Pacific region, with an estimated biomass of 8-11 metric tons. However, its distribution, biology, life cycle and nutrient value off the south west coast of India are still poorly known. Any new information on this species in the waters off the south west coast of India therefore has scientific significance for the effective and rational utilization of this oceanic fishery resource, especially at a time of depletion of shallow water resources. In view of this, the study investigated different aspects of Sthenoteuthis oualaniensis, such as morphometry, growth, mortality, maturation, spawning, food, feeding and biochemical composition, off the south west coast of India, to understand its prospective importance for commercial fishing and the management of its fishery.
Abstract:
The composition and variability of heterotrophic bacteria along the shelf sediments of the south west coast of India, and their relationship with the sediment biogeochemistry, were investigated. The bacterial abundance ranged from 1.12 × 10³ to 1.88 × 10⁶ CFU g⁻¹ dry wt. of sediment. The population showed significant positive correlations with silt (r = 0.529, p < 0.05), organic carbon (OC) (r = 0.679, p < 0.05), total nitrogen (TN) (r = 0.638, p < 0.05), total protein (TPRT) (r = 0.615, p < 0.05) and total carbohydrate (TCHO) (r = 0.675, p < 0.05), and a significant negative correlation with sand (r = -0.488, p < 0.05). The community was mainly composed of Bacillus, Alteromonas, Vibrio, Coryneforms, Micrococcus, Planococcus, Staphylococcus, Moraxella, Alcaligenes, Enterobacteriaceae, Pseudomonas, Acinetobacter, Flavobacterium and Aeromonas. BIOENV analysis identified the environmental parameters best explaining the distribution pattern of the heterotrophic bacterial population in shelf sediments: carbohydrate, total nitrogen, temperature, pH and sand at 50 m depth, and organic matter, BPC, protein, lipid and temperature at 200 m depth. Principal Component Analysis (PCA) of the environmental variables showed that the first and second principal components accounted for 65% and 30.6% of the data variance, respectively. Canonical Correspondence Analysis (CCA) revealed a strong correspondence between bacterial distribution and environmental variables in the study area. Moreover, non-metric MDS (Multidimensional Scaling) analysis demarcated the northern and southern latitudes of the study area based on the bioavailable organic matter.
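A variance decomposition of the kind reported above (65% and 30.6% on the first two principal components) is obtained as sketched below; the data matrix here is synthetic, standing in for the standardized environmental variables.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical matrix: rows = sediment stations, columns = standardized
# environmental variables (silt, OC, TN, TPRT, TCHO, sand, ...).
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 6))
X = (X - X.mean(axis=0)) / X.std(axis=0)  # z-score each variable

pca = PCA(n_components=2)
scores = pca.fit_transform(X)             # station scores on PC1, PC2
print(pca.explained_variance_ratio_)      # variance fraction per component
```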
Abstract:
Axial brain slices containing similar anatomical structures are retrieved using features derived from the histogram of Local Binary Patterns (LBP). A rotation-invariant description of texture, in terms of texture patterns and their strength, is obtained by incorporating local variance into the LBP, giving the Modified LBP (MOD-LBP). In this paper, we compare histogram-based features of LBP (HF/LBP) against histogram-based features of MOD-LBP (HF/MOD-LBP) in retrieving similar axial brain images. We show that replacing the local histogram with a local distance transform based similarity metric further improves the performance of MOD-LBP based image retrieval.
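A minimal sketch of the plain-LBP histogram feature (HF/LBP) follows, using scikit-image; the MOD-LBP variant, which additionally incorporates local variance, and the distance transform based similarity metric are the paper's contributions and are not reproduced here.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(image: np.ndarray, P: int = 8, R: float = 1.0) -> np.ndarray:
    """Normalized histogram of uniform LBP codes for one image/slice."""
    codes = local_binary_pattern(image, P, R, method="uniform")
    n_bins = P + 2  # P+1 uniform patterns plus one bin for the rest
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist / hist.sum()

# Hypothetical usage on a synthetic 8-bit "slice"; retrieval would then
# compare such histograms between the query and the database images.
slice_ = (np.random.default_rng(2).random((128, 128)) * 255).astype(np.uint8)
print(lbp_histogram(slice_))
```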
Abstract:
The present work discusses various properties and reliability aspects of higher order equilibrium distributions in the continuous, discrete and multivariate cases, contributing to the study of equilibrium distributions. We first consolidate the existing literature on equilibrium distributions, for which some basic concepts in reliability are needed; these are discussed in Chapter 2. In Chapter 3, some identities connecting the failure rate functions and moments of residual life of the univariate, non-negative continuous equilibrium distributions of higher order and those of the baseline distribution are derived. These identities are then used to characterize the generalized Pareto model, mixtures of exponentials and the gamma distribution. An approach using characteristic functions is also discussed with illustrations. Moreover, characterizations of ageing classes using stochastic orders are discussed. Part of the results of this chapter has been reported in Nair and Preeth (2009). Various properties of equilibrium distributions of non-negative discrete univariate random variables are discussed in Chapter 4. Some characterizations of the geometric, Waring and negative hypergeometric distributions are then presented. Moreover, the ageing properties of the original distribution and the nth order equilibrium distributions are compared. Part of the results of this chapter has been reported in Nair, Sankaran and Preeth (2012). Chapter 5 is a continuation of Chapter 4. Here, several conditions, in terms of stochastic orders connecting the baseline and its equilibrium distributions, are derived. These conditions can be used to redefine certain ageing notions. Equilibrium distributions of two random variables are then compared in terms of various stochastic orders that have implications in reliability applications. In Chapter 6, we make two approaches to defining multivariate equilibrium distributions of order n. Various properties, including characterizations of higher order equilibrium distributions, are then presented. Part of the results of this chapter has been reported in Nair and Preeth (2008). The thesis is concluded in Chapter 7, where a discussion of further studies on equilibrium distributions is also given.
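For reference, the equilibrium distribution of order n is built by iterating the standard construction: if the baseline has survival function \(\bar F\) and finite mean \(\mu\), the first-order equilibrium density and the recursion for higher orders are

\[
f_1(x) = \frac{\bar F(x)}{\mu}, \qquad
f_n(x) = \frac{\bar F_{n-1}(x)}{\mu_{n-1}}, \quad n \ge 2,
\]

where \(\bar F_{n-1}\) and \(\mu_{n-1}\) are the survival function and mean of the (n-1)th order equilibrium distribution.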
Abstract:
Biometrics is an efficient technology with great possibilities in the area of security system development for official and commercial applications. Biometrics has recently become a significant part of any efficient person authentication solution. The advantage of using biometric traits is that they cannot be stolen, shared or even forgotten. The thesis addresses one of the emerging topics in authentication systems, viz., the implementation of an Improved Biometric Authentication System using Multimodal Cue Integration, since operator-assisted identification turns out to be tedious, laborious and time consuming. In order to derive the best performance for the authentication system, an appropriate feature selection criterion has been evolved. It has been seen that selecting too many features leads to deterioration in authentication performance and efficiency. In the work reported in this thesis, various judiciously chosen components of the biometric traits and their feature vectors are used for realizing the newly proposed Biometric Authentication System using Multimodal Cue Integration. The feature vectors generated from the noisy biometric traits are compared with the feature vectors available in the knowledge base, and the most closely matching pattern is identified for the purpose of user authentication. In an attempt to improve the success rate of the feature vector based authentication system, the proposed system has been augmented with a user-dependent weighted fusion technique.
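The user-dependent weighted fusion mentioned above can be sketched at score level as follows; the modality scores, weights and threshold are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def fuse_scores(scores: np.ndarray, weights: np.ndarray) -> float:
    """Weighted score-level fusion of per-modality match scores.

    scores  : similarity scores from each biometric matcher, in [0, 1]
    weights : user-dependent weights for the same modalities
    """
    weights = weights / weights.sum()   # normalize per user
    return float(np.dot(weights, scores))

# Hypothetical user whose first modality is the more reliable one.
scores = np.array([0.82, 0.55])   # e.g. [modality 1, modality 2]
weights = np.array([0.7, 0.3])    # user-dependent weights
threshold = 0.6                   # illustrative decision threshold
print("accept" if fuse_scores(scores, weights) >= threshold else "reject")
```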
Abstract:
This paper presents a new paradigm for signal reconstruction and superresolution, Correlation Kernel Analysis (CKA), that is based on the selection of a sparse set of bases from a large dictionary of class-specific basis functions. The basis functions that we use are the correlation functions of the class of signals we are analyzing. To choose the appropriate features from this large dictionary, we use Support Vector Machine (SVM) regression and compare this to traditional Principal Component Analysis (PCA) for the tasks of signal reconstruction, superresolution, and compression. The testbed we use in this paper is a set of images of pedestrians. This paper also presents results of experiments in which we use a dictionary of multiscale basis functions and then use Basis Pursuit De-Noising to obtain a sparse, multiscale approximation of a signal. The results are analyzed and we conclude that 1) when used with a sparse representation technique, the correlation function is an effective kernel for image reconstruction and superresolution, 2) for image compression, PCA and SVM have different tradeoffs, depending on the particular metric used to evaluate the results, 3) in sparse representation techniques, L_1 is not a good proxy for the true measure of sparsity, L_0, and 4) the L_epsilon norm may be a better error metric for image reconstruction and compression than the L_2 norm, though the exact psychophysical metric should take into account higher-order structure in images.
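Basis Pursuit De-Noising, used above to obtain the sparse multiscale approximation, is the standard convex program (stated generically, with A the dictionary, b the signal and ε a noise tolerance):

\[
\min_{x} \ \|x\|_1 \quad \text{subject to} \quad \|Ax - b\|_2 \le \varepsilon,
\]

where the L_1 objective acts as a convex surrogate for the non-convex sparsity measure L_0; conclusion 3) above is precisely a caution about the limits of this surrogate.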
Abstract:
Observations in daily practice are sometimes registered as positive values larger than a given threshold α. The sample space in this case is the interval (α, +∞), α > 0, which can be structured as a real Euclidean space in different ways. This fact opens the door to alternative statistical models that depend not only on the assumed distribution function, but also on the metric considered appropriate, i.e. the way differences, and thus variability, are measured.
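One concrete way to structure (α, +∞) as a real Euclidean space, given here purely as an illustration and not necessarily the construction the paper adopts, is via the bijection x ↦ ln(x − α), which induces the metric

\[
d(x, y) = \left| \ln \frac{x - \alpha}{y - \alpha} \right|,
\]

under which differences are relative to the threshold rather than absolute, so variability near α is weighted quite differently than under the ordinary metric |x − y|.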
Abstract:
The performance of high-speed network communication frequently rests on how the data stream is distributed. In this paper, a dynamic data-stream balancing architecture based on link information is first introduced and discussed. Algorithms are then proposed for rapidly and simultaneously acquiring the nodes and links along a path between any two source-destination nodes, together with a dynamic data-stream distribution plan. Some related topics, such as data fragment disposal and fair service, are further studied and discussed. In addition, the performance and efficiency of the proposed algorithms, especially for fair service and convergence, are evaluated through a demonstration with regard to the rate of bandwidth utilization. We hope the discussion presented here will be helpful to application developers in selecting an effective strategy for planning the distribution of data streams.
Abstract:
One of the disadvantages of old age is that there is more past than future: this, however, may be turned into an advantage if the wealth of experience and, hopefully, wisdom gained in the past can be reflected upon and throw some light on possible future trends. To an extent, then, this talk is necessarily personal, certainly nostalgic, but also self-critical and inquisitive about our understanding of the discipline of statistics. A number of almost philosophical themes will run through the talk: the search for appropriate modelling in relation to the real problem envisaged, emphasis on sensible balances between simplicity and complexity, the relative roles of theory and practice, the nature of communication of inferential ideas to the statistical layman, and the inter-related roles of teaching, consultation and research. A list of keywords might be: identification of the sample space and its mathematical structure, choices between transform and stay, the role of parametric modelling, the role of a sample space metric, the underused hypothesis lattice, and the nature of compositional change, particularly in relation to the modelling of processes. While the main theme will be relevance to compositional data analysis, we shall point to substantial implications for general multivariate analysis arising from experience of the development of compositional data analysis…