18 results for Concept vector
at Cochin University of Science
Abstract:
School of Legal Studies, Cochin University of Science & Technology
Abstract:
Sharing information with those who need it has always been an idealistic goal of networked environments. With the proliferation of computer networks, information is so widely distributed among systems that well-organized schemes for retrieval, and also for discovery, are imperative. This thesis investigates the problems associated with such schemes and proposes a software architecture aimed at achieving meaningful discovery. The use of information elements as a modelling base for efficient information discovery in distributed systems is demonstrated with the aid of a novel conceptual entity called the infotron. The investigations focus on distributed systems and their associated problems. The study was directed towards identifying a suitable software architecture and incorporating it in an environment where information growth is phenomenal, so that a proper mechanism for carrying out information discovery becomes feasible. An empirical study undertaken with the aid of an election database of geographically distributed constituencies provided the required insights. This is manifested in the Election Counting and Reporting Software (ECRS) system, an essentially distributed software system designed to prepare reports for district administrators about the election counting process and to generate other miscellaneous statutory reports. Most distributed systems of the nature of ECRS possess a "fragile architecture" that makes them prone to collapse when minor faults occur. This is resolved with the proposed penta-tier architecture, which places five different technologies at the different tiers of the architecture. The results of the experiments conducted, and their analysis, show that such an architecture helps keep the different components of the software insulated from internal and external faults.
The architecture thus evolved needed a mechanism to support information processing and discovery. This necessitated the introduction of the novel concept of infotrons. Further, when a computing machine has to perform any meaningful extraction of information, it is guided by what is termed an infotron dictionary. The other empirical study was to find out which of the two prominent markup languages, HTML and XML, is better suited for the incorporation of infotrons. A comparative study of 200 documents in HTML and XML was undertaken; the result favoured XML. The concepts of infotron and infotron dictionary thus developed were applied to implement an Information Discovery System (IDS). IDS is essentially a system that starts with the infotron(s) supplied as clue(s) and distils the information required to satisfy the information discoverer's need from the documents available at its disposal (its information space). The various components of the system and their interactions follow the penta-tier architectural model and can therefore be considered fault-tolerant. IDS is generic in nature, and its characteristics and specifications were drawn up accordingly. Many subsystems interacted with multiple infotron dictionaries maintained in the system. In order to demonstrate the working of IDS, and to discover information without modifying a typical Library Information System (LIS), an Information Discovery in Library Information System (IDLIS) application was developed. IDLIS is essentially a wrapper for the LIS, which maintains all the databases of the library. The purpose was to demonstrate that the functionality of a legacy system could be enhanced by augmenting it with IDS, leading to an information discovery service. IDLIS demonstrates IDS in action.
IDLIS shows that any legacy system can be effectively augmented with IDS to provide the additional functionality of an information discovery service. Possible applications of IDS and the scope for further research in the field are also covered.
Abstract:
Multivariate lifetime data arise in various forms, including recurrent event data, when individuals are followed to observe the sequence of occurrences of a certain type of event, and correlated lifetimes, when an individual is followed for the occurrence of two or more types of events or when distinct individuals have dependent event times. In most studies there are covariates, such as treatments, group indicators, individual characteristics, or environmental conditions, whose relationship to lifetime is of interest. This leads to the consideration of regression models. The well-known Cox proportional hazards model and its variations, using the marginal hazard functions employed in the literature for the analysis of multivariate survival data, are not sufficient to explain the complete dependence structure of a pair of lifetimes on the covariate vector. Motivated by this, in Chapter 2 we introduce a bivariate proportional hazards model using the vector hazard function of Johnson and Kotz (1975), in which the covariates under study have different effects on the two components of the vector hazard function. The proposed model is useful in real-life situations for studying the dependence structure of a pair of lifetimes on the covariate vector. The well-known partial likelihood approach is used for the estimation of the parameter vectors. We then introduce a bivariate proportional hazards model for gap times of recurrent events in Chapter 3. The model incorporates both marginal and joint dependence of the distribution of gap times on the covariate vector. In many fields of application, the mean residual life function is considered a more informative concept than the hazard function. Motivated by this, in Chapter 4 we consider a new semi-parametric model, the bivariate proportional mean residual life model, to assess the relationship between mean residual life and covariates for gap times of recurrent events. The counting process approach is used for the inference procedures for the gap times of recurrent events. In many survival studies, the distribution of lifetime may depend on the distribution of censoring time. In Chapter 5, we introduce a proportional hazards model for duration times and develop inference procedures under dependent (informative) censoring. In Chapter 6, we introduce a bivariate proportional hazards model for competing risks data under right censoring. The asymptotic properties of the estimators of the parameters of the different models developed in the previous chapters are studied, and the proposed models are applied to various real-life situations.
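The abstract does not spell out the model equations; as a schematic sketch in standard notation, the classical Cox model and the kind of component-wise bivariate extension described (with the vector hazard of Johnson and Kotz, 1975, and component-specific coefficient vectors) take the following form:

```latex
% Standard Cox proportional hazards model for a covariate vector Z:
\lambda(t \mid Z) = \lambda_0(t)\,\exp\!\left(\beta^{\top} Z\right)

% Schematic bivariate extension using the vector hazard
% (h_1, h_2) of Johnson and Kotz (1975), where each component
% carries its own regression coefficient vector \beta_k:
h_k(t_1, t_2 \mid Z) = h_{0k}(t_1, t_2)\,\exp\!\left(\beta_k^{\top} Z\right),
\qquad k = 1, 2.
```

Allowing distinct β₁ and β₂ is what lets the covariates act differently on the two components of the vector hazard, which is the feature the marginal Cox formulation cannot capture.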
Abstract:
In this thesis we study possible invariants in hydrodynamics and hydromagnetics. The concepts of flux preservation and line preservation of vector fields, especially vorticity fields, have been studied from the very beginning of fluid mechanics by Helmholtz and others. In ideal magnetohydrodynamic flows the magnetic field satisfies the same conservation laws as the vorticity field in ideal hydrodynamic flows. Apart from these, there are many other fields in ideal hydrodynamic and magnetohydrodynamic flows which preserve flux across a surface or whose vector lines are preserved. A general study using this analogy had not been made for a long time. Moreover, there are other physical quantities which are also invariant under the flow, such as the Ertel invariant. Using the calculus of differential forms, Tur and Yanovsky classified the possible invariants in hydrodynamics. This mathematical abstraction of physical quantities to topological objects is needed for an elegant and complete analysis of invariants. Many authors have used a four-dimensional space-time manifold for analysing fluid flows. We have also used such a space-time manifold to obtain invariants of the usual three-dimensional flows. In chapter one we discuss the invariants related to the vorticity field using the vorticity two-form ω² in E⁴. Corresponding to the invariance of the four-form ω² ∧ ω² we obtain the invariance of the quantity E · ω. We show that in an isentropic flow this quantity is invariant over an arbitrary volume. In chapter three we extend this method to any divergence-free frozen-in field. In the four-dimensional space-time manifold we define a closed two-form, and its potential one-form, corresponding to such a frozen-in field. Using this potential one-form ω¹, it is possible to define the forms dω¹, ω¹ ∧ dω¹ and dω¹ ∧ dω¹. Corresponding to the invariance of the four-form we obtain an additional invariant of the usual hydrodynamic flows, which cannot be obtained by considering three-dimensional space alone. In chapter four we classify the possible integral invariants associated with physical quantities that can be expressed using a one-form or a two-form in a three-dimensional flow. After deriving some general results which hold on an arbitrary-dimensional manifold, we illustrate them in the context of flows in three-dimensional Euclidean space ℝ³. If the Lie derivative of a differential p-form ω does not vanish, the surface integral of ω over all p-surfaces need not be a constant of the flow. Even then, there exist special p-surfaces over which the integral is a constant of motion, provided the Lie derivative of ω satisfies certain conditions. Such surfaces can be utilised for investigating the qualitative properties of a flow in the absence of invariance over all p-surfaces. We also discuss the conditions for line preservation and surface preservation of vector fields, and see that surface preservation need not imply line preservation. Examples illustrating the above results are given. The study in this thesis continues that begun by Vedan et al., who, as mentioned earlier, used a four-dimensional space-time manifold to obtain invariants of flow from a variational formulation and an application of Noether's theorem, from the point of view of hydrodynamic stability studies using Arnold's method. The use of a four-dimensional manifold has great significance in the study of knots and links. In the context of hydrodynamics, helicity is a measure of the knottedness of vortex lines. We are interested in the use of differential forms in E⁴ in the study of vortex knots and links. The knowledge of surface invariants given in chapter 4 may also be utilised for the analysis of vortex and magnetic reconnections.
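The flux-preservation statements the abstract relies on can be summarised by one standard transport identity; as a reference sketch in conventional notation (v the velocity field, 𝓛ᵥ the Lie derivative, iᵥ the interior product, S_t a p-surface carried by the flow):

```latex
% Transport theorem for a time-dependent p-form \omega
% integrated over a p-surface S_t moving with the flow:
\frac{d}{dt} \int_{S_t} \omega
  \;=\; \int_{S_t} \left( \partial_t \omega + \mathcal{L}_v \omega \right)

% Hence \int_{S_t} \omega is a constant of the motion (flux
% preservation) precisely when \partial_t \omega + \mathcal{L}_v \omega = 0,
% the "frozen-in" condition.  Cartan's formula expands the Lie derivative:
\mathcal{L}_v \omega \;=\; i_v\, d\omega + d\!\left( i_v \omega \right)
```

The special p-surfaces mentioned in chapter four are exactly those on which the right-hand side of the transport identity vanishes even though 𝓛ᵥω itself does not.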
Abstract:
The doctoral thesis focuses on studies on fuzzy matroids and related topics. Since the publication of the classical paper on fuzzy sets by L. A. Zadeh in 1965, the theory of fuzzy mathematics has gained more and more recognition from researchers in a wide range of scientific fields. Among the various branches of pure and applied mathematics, convexity was one of the areas where the notion of a fuzzy set was applied. Many researchers have been involved in extending the notion of abstract convexity to the broader framework of the fuzzy setting, and as a result a number of concepts have been formulated and explored. However, many concepts are yet to be fuzzified. The main objective of this thesis was to extend some basic concepts and results in convexity theory to the fuzzy setting. Concepts like matroids and independence structures, and the classical convex invariants such as the Helly number, Carathéodory number, Radon number and exchange number, form an important area of study in crisp convexity theory. In this thesis we try to generalize some of these concepts to the fuzzy setting. Finally, we define different types of fuzzy matroids derived from vector spaces and discuss some of their properties.
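As a concrete reference point for what "fuzzifying" convexity means, Zadeh's 1965 paper already contains a definition of a convex fuzzy set; in standard notation (μ the membership function on ℝⁿ):

```latex
% A fuzzy set with membership function \mu : \mathbb{R}^n \to [0,1]
% is convex (Zadeh, 1965) when, for all x, y \in \mathbb{R}^n
% and all \lambda \in [0,1],
\mu\big(\lambda x + (1-\lambda)\, y\big) \;\geq\; \min\{\mu(x),\, \mu(y)\}
```

Crisp convexity is recovered when μ takes only the values 0 and 1; invariants such as the Helly and Carathéodory numbers are then restated for such membership functions.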
Abstract:
The present work is organized into six chapters. The bivariate extension of the Burr system is the subject matter of Chapter II, where the author introduces a general structure for the family in two dimensions and presents some properties of such a system; some new distributions, which are bivariate extensions of the univariate distributions in Burr (1942), are also presented. Chapter III concentrates on characterization problems for different forms of the bivariate Burr system. A detailed study of the distributional properties of each member of the Burr system has not been undertaken in the literature; with this aim in mind, Chapter IV discusses two forms of the bivariate Burr III distribution. In Chapter V the author considers the type XII, type II and type IX distributions. The present work concludes with Chapter VI by pointing out the multivariate extension of the Burr system; in this chapter the concept of multivariate reversed hazard rates, as both scalar and vector quantities, is also introduced.
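For reference, the univariate Burr types most prominent here have simple closed-form distribution functions; a sketch, using the standard two-parameter parametrisation (c, k > 0, x > 0):

```latex
% Burr type XII distribution function:
F_{\mathrm{XII}}(x) = 1 - \left(1 + x^{c}\right)^{-k}

% Burr type III distribution function:
F_{\mathrm{III}}(x) = \left(1 + x^{-c}\right)^{-k}

% One commonly used bivariate Burr XII form (shown only as an
% illustration of how such extensions look; the thesis's own
% general structure is not reproduced here) couples the marginals
% through a shared parameter k via the joint survival function:
\bar F(x_1, x_2) = \left(1 + x_1^{c_1} + x_2^{c_2}\right)^{-k}
```

Setting either argument to zero in the bivariate survival function recovers a univariate Burr XII marginal, which is the structural property bivariate extensions of the system aim to preserve.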
Abstract:
The term reliability of an equipment or device is often meant to indicate the probability that it carries out the functions expected of it adequately, or without failure and within specified performance limits, at a given age and for a desired mission time, when put to use under the designated application and operating environmental stress. A broad classification of the approaches employed in reliability studies can be made as probabilistic and deterministic: the main interest in the former is to devise tools and methods to identify, through a proper statistical framework, the random mechanism governing the failure process, while the latter addresses the question of finding the causes of failure and the steps to reduce individual failures, thereby enhancing reliability. In the probabilistic approach, to which the present study subscribes, the concept of a life distribution, a mathematical idealisation that describes the failure times, is fundamental, and a basic question a reliability analyst has to settle is the form of the life distribution. It is for no other reason that a major share of the literature on the mathematical theory of reliability is focussed on methods of arriving at reasonable models of failure times and on exhibiting the failure patterns that induce such models. The application of the methodology of lifetime distributions is not confined to the assessment of the endurance of equipment and systems only, but ranges over a wide variety of scientific investigations in which the term lifetime may not refer to the length of life in the literal sense, but can be conceived in its most general form as a non-negative random variable. Thus the tools developed in connection with modelling lifetime data have found applications in other areas of research such as actuarial science, engineering, biomedical sciences, economics, and extreme value theory.
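The basic objects of this probabilistic framework are tied together by a few standard identities; as a reference sketch for a non-negative lifetime T with density f:

```latex
% Survival function, hazard rate, and their relationship:
S(t) = P(T > t), \qquad
\lambda(t) = \frac{f(t)}{S(t)}, \qquad
S(t) = \exp\!\left(-\int_0^{t} \lambda(u)\, du\right)

% Mean residual life: the expected remaining life at age t,
m(t) = E\!\left[\,T - t \mid T > t\,\right]
     = \frac{1}{S(t)} \int_t^{\infty} S(u)\, du
```

Choosing the form of the life distribution amounts to choosing any one of these functions, since each determines the others.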
Abstract:
This paper presents the application of wavelet processing to handwritten character recognition. To attain a high recognition rate, robust feature extractors and powerful classifiers that are invariant to the variability of human writing are needed. The proposed scheme consists of two stages: a feature extraction stage, based on the Haar wavelet transform, and a classification stage that uses a support vector machine classifier. Experimental results show that the proposed method is effective.
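A minimal sketch of such a two-stage pipeline, assuming NumPy and scikit-learn; the 8×8 "character" images and the strokes they contain are synthetic stand-ins invented for illustration, not the paper's data, and the single-level Haar decomposition here is only one plausible reading of the feature extractor:

```python
import numpy as np
from sklearn.svm import SVC

def haar2d(img):
    """One level of the 2-D Haar transform: split the image into an
    approximation subband (ll) and three detail subbands, each half-size."""
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # approximation
    lh = (a + b - c - d) / 4.0   # horizontal detail
    hl = (a - b + c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

def features(img):
    """Flatten all four subbands into one feature vector."""
    return np.concatenate([s.ravel() for s in haar2d(img)])

# Synthetic demo: two "character" classes as noisy 8x8 stroke patterns.
rng = np.random.default_rng(0)

def samples(cls, n):
    base = np.zeros((8, 8))
    if cls == 0:
        base[:, 3:5] = 1.0   # vertical stroke
    else:
        base[3:5, :] = 1.0   # horizontal stroke
    return [base + 0.1 * rng.standard_normal((8, 8)) for _ in range(n)]

X = np.array([features(im) for c in (0, 1) for im in samples(c, 20)])
y = np.array([c for c in (0, 1) for _ in range(20)])

# Classification stage: RBF support vector machine on the Haar features.
clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```

In practice the transform would be applied recursively for several levels and the classifier tuned on held-out data; the single level shown keeps the sketch short.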
Abstract:
In our study we use a kernel-based learning technique, support vector machine regression, for predicting the melting point of drug-like compounds in terms of topological descriptors, topological charge indices, connectivity indices, and 2D autocorrelations. The machine learning model was designed, trained, and tested using a dataset of 100 compounds, and it was found that an SVM regression model with an RBF kernel could predict the melting point with a mean absolute error of 15.5854 and a root mean squared error of 19.7576.
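A minimal sketch of this modelling setup, assuming scikit-learn; the descriptor matrix and melting-point values below are synthetic stand-ins invented for illustration (the actual 100-compound dataset and its descriptor definitions are not reproduced here), and the hyperparameters are arbitrary:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-in: 100 "compounds" x 10 "descriptors".
rng = np.random.default_rng(42)
X = rng.standard_normal((100, 10))
# Hypothetical melting points (deg C) depending smoothly on two descriptors,
# plus noise; purely illustrative.
y = 150 + 30 * np.tanh(X[:, 0]) + 10 * X[:, 1] + 5 * rng.standard_normal(100)

# RBF-kernel support vector regression; descriptors are standardised first.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=1.0))
model.fit(X[:80], y[:80])          # train on 80 compounds

pred = model.predict(X[80:])       # evaluate on the held-out 20
mae = np.mean(np.abs(pred - y[80:]))
rmse = np.sqrt(np.mean((pred - y[80:]) ** 2))
print(f"MAE={mae:.2f}  RMSE={rmse:.2f}")
```

The MAE/RMSE pair mirrors how the study reports its errors; RMSE is always at least as large as MAE, and the gap between them indicates how heavy-tailed the residuals are.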
Abstract:
HINDI
Abstract:
HINDI