29 results for Method Development
Abstract:
The study is entitled "HUMAN RESOURCES DEVELOPMENT IN HIGHER EDUCATION IN KERALA". The concept of "Human Resource Development" is highly valued in business and industry and has been applied for years. In industry and business the 'human' element is considered a resource, and hence its development and protection are essential and inevitable. Of all the factors of production, human resource is the only one with a rational faculty, and it must therefore be handled with utmost care. Right recruitment, right training and right induction, followed by faultless monitoring and welfare measures, are decisive factors in business and industry; altogether, constant attention is paid to the human factor there. But this is not the practice in education. So far there has been no such measure of care, close watch and performance analysis of human resources on the education front, and this may be the main reason for the lack of accountability in the sphere of education. The present study reveals the importance of introducing HRD practices in higher educational institutions in Kerala; it is a basic requirement for ensuring human capital formation through education. Higher educational institutions should follow the methods of industry and commerce, because education can be treated as an industry in the service sector: right recruitment, right training and promotion, delegation, performance analysis and accountability checking of human resources can be practised there as well. HRD is a powerful idea for transforming human beings into highly productive and contributing factors. The HRD of students is the sum total of the HRD of teachers; recalling the ancient saying 'Yatha Raja Thadha Praja', the quality of the faculty is reflected in the students. The quality of administrative staff in colleges also affects the quality of higher education. Hence, it is high time to introduce the managerial method of HRD, with all its paraphernalia, in higher educational institutions so as to assure proper human capital formation in higher education in India.
Abstract:
Controlling inorganic nitrogen by manipulating the carbon/nitrogen ratio is a method gaining importance in aquaculture systems. Nitrogen control is induced by feeding bacteria with carbohydrates, and through the subsequent uptake of nitrogen from the water for the synthesis of microbial protein. The relationship between the addition of carbohydrates, the reduction of ammonium and the production of microbial protein depends on the microbial conversion coefficient; the carbon/nitrogen ratio in the microbial biomass is related to the carbon content of the added material. The addition of carbonaceous substrate was found to reduce inorganic nitrogen in shrimp culture ponds, and the resultant microbial protein is taken up by the shrimp. Thus part of the feed protein is replaced and feeding costs are reduced in culture systems. The use of various locally available substrates for periphyton-based aquaculture increases production and profitability. However, these techniques have not so far been evaluated for extensive shrimp farming. Moreover, an evaluation of artificial substrates together with a carbohydrate-source-based farming system in reducing inorganic nitrogen in culture systems has not yet been carried out, and variations in water and soil quality, periphyton production and shrimp production of the whole system have also not been determined so far. This thesis starts with a general introduction and a brief review of the most relevant literature, presents the results of various experiments, and concludes with a summary (Chapter 9). The chapters are organised in accordance with the objectives of the present study. The major objectives of this thesis are to improve the sustainability of shrimp farming through carbohydrate addition and periphyton-substrate-based shrimp production, and to improve nutrient utilisation in aquaculture systems.
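As a minimal sketch of the dosing arithmetic behind C/N-ratio manipulation, the following follows the relation widely used in the biofloc literature (after Avnimelech, 1999); the coefficient values are illustrative assumptions, not figures from this thesis.

```python
# Minimal sketch of the C/N-ratio dosing arithmetic: carbohydrate needed to
# immobilise excreted ammonium-N as microbial protein (after Avnimelech, 1999).
# All numeric parameters are illustrative assumptions, not data from the thesis.

def carbohydrate_needed(n_excreted_kg: float,
                        carb_c_fraction: float = 0.50,      # assumed C content of carbohydrate
                        microbial_efficiency: float = 0.40, # assumed microbial conversion coefficient
                        microbial_cn_ratio: float = 4.0) -> float:
    """Carbohydrate (kg) required to immobilise the given ammonium-N (kg):
    N uptake per kg carbohydrate = %C * E / (C/N)_microbial."""
    n_per_kg_carb = carb_c_fraction * microbial_efficiency / microbial_cn_ratio
    return n_excreted_kg / n_per_kg_carb

# Example: immobilising 1 kg of excreted ammonium-N needs ~20 kg carbohydrate.
print(carbohydrate_needed(1.0))  # -> 20.0
```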
Abstract:
Identification and control of nonlinear dynamical systems are challenging problems for control engineers. The topic is equally relevant in communication, weather prediction, biomedical systems and even social systems, where nonlinearity is an integral part of system behaviour. Most real-world systems are nonlinear in nature, and nonlinear system identification/modelling has wide applications. The basic approach in analysing a nonlinear system is to build a model from known behaviour manifest in the form of the system output. The modelling problem boils down to computing a suitably parameterised model representing the process; the parameters of the model are adjusted to optimise a performance function based on the error between the given process output and the identified model output. While linear system identification is well established, with many classical approaches, most of those methods cannot be directly applied to nonlinear system identification. The problem becomes more complex if the system is completely unknown and only the output time series is available; the blind recognition problem is the direct consequence of such a situation, and the thesis concentrates on such problems. The capability of Artificial Neural Networks to approximate many nonlinear input-output maps makes them predominantly suitable for building a function for the identification of nonlinear systems where only the time series is available. The literature is rich with a variety of algorithms to train the neural network model, but a comprehensive study of the computation of the model parameters using the different algorithms, and a comparison among them to choose the best technique, is still a demanding requirement of practical system designers and is not available in a concise form in the literature. The thesis is thus an attempt to develop and evaluate some of the well-known algorithms, and to propose some new techniques, in the context of blind recognition of nonlinear systems. It also attempts to establish the relative merits and demerits of the different approaches; comprehensiveness is achieved by utilising well-known evaluation techniques from statistics. The study concludes by providing the results of implementing the currently available, modified and newly introduced techniques for nonlinear blind system modelling, followed by a comparison of their performance. It is expected that such a comprehensive study and comparison will be of great relevance in many fields, including chemical, electrical, biological, financial and weather data analysis. Further, the results reported would be of immense help to practical system designers and analysts in selecting the most appropriate method, based on the goodness of the model, for a particular context.
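As a minimal sketch of the output-only identification setting described above, the following trains a small neural network to model an unknown nonlinear system from its time series alone; scikit-learn's MLPRegressor and the toy system are assumptions for illustration, not the algorithms developed in the thesis.

```python
# Minimal sketch of blind nonlinear system identification from an output
# time series alone, using a small neural network. scikit-learn is an
# assumed tool; the toy system below stands in for an unknown process.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy "unknown" nonlinear system: only its output time series is observed.
rng = np.random.default_rng(0)
y = np.zeros(600)
y[0], y[1] = 0.1, 0.2
for n in range(2, len(y)):
    y[n] = (0.3 * y[n-1] + 0.6 * y[n-2] + 0.6 * np.sin(np.pi * y[n-1])
            + 0.01 * rng.standard_normal())

# Build lagged regressors: predict y[n] from (y[n-1], ..., y[n-k]).
k = 4
X = np.column_stack([y[k - i - 1: len(y) - i - 1] for i in range(k)])
t = y[k:]

model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
model.fit(X[:400], t[:400])                       # identify the model
print("test MSE:", np.mean((model.predict(X[400:]) - t[400:]) ** 2))
```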
Abstract:
This thesis presents the results of an investigation conducted for the development of a new type of feed horn antenna called the "Simulated Scalar Feed". A schematic presentation of the work is given below. A review of past important work in the field of conventional/multimode electromagnetic horn antennas is presented in the first part of the second chapter; work carried out on corrugated horns and surfaces is included in the second part of the review, and in the third part, work on dielectric and dielectric-loaded metal horns is reviewed. In all parts of the review, special emphasis is given to theoretical design considerations. The methodology adopted for the experimental investigations is presented in the third chapter. The instrumentation utilized and the details of fabrication of the new simulated scalar feed are described, and the method of measuring the radiation characteristics of the antenna is also explained in this chapter. In the fourth chapter, the outcome of the experimental investigations carried out on horn antennas fabricated with different physical dimensions and different parameters for the E-plane boundary walls is highlighted. The theory used to explain the experimental results is given in the fifth chapter of the thesis, and a comparison between the experimental and theoretical results is also presented in this chapter. In chapter six, the conclusions drawn from the experimental as well as the theoretical investigations are discussed; the advantages and features of the newly developed simulated scalar feed are examined, and the scope for further investigation in this field is also discussed at the end of this chapter.
Abstract:
Electroanalytical techniques represent a class of powerful and versatile analytical methods based on the electrical properties of a solution of the analyte when it is made part of an electrochemical cell. They offer high sensitivity, accuracy, precision and a large linear dynamic range, and the cost of instrumentation is relatively low compared to other instrumental methods of analysis. Many solid-state electrochemical sensors have now been commercialised. Potentiometry is a very simple electroanalytical technique with extraordinary analytical capabilities. Since valinomycin was introduced as an ionophore for K+, ion-selective electrodes have become among the best-studied and best-understood analytical devices. They can be used for the determination of substances ranging from simple inorganic ions to complex organic molecules, and are a very attractive option owing to their wide range of applications and the ease of use of the instruments employed. They also possess the advantages of short response time, high selectivity and very low detection limits; moreover, analysis by these electrodes is non-destructive and adaptable to small sample volumes. Potentiometry has become a standard technique for medical researchers, biologists, geologists and environmental specialists. This thesis presents the synthesis and characterisation of five ionophores. Based on these ionophores, nine potentiometric sensors are fabricated for the determination of ions such as Pb2+, Mn2+, Ni2+, Cu2+ and the salicylate ion (Sal-). The electrochemical characterisation and analytical application studies of the developed sensors are also described. The thesis is divided into eight chapters.
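As a minimal sketch of the Nernstian response that underlies potentiometric ion-selective electrodes of the kind described above, the following evaluates the textbook relation E = E0 + (RT/zF) ln a; this is the standard equation, not a calibration from the thesis.

```python
# Minimal sketch of the Nernstian response of an ion-selective electrode:
# E = E0 + (RT / zF) * ln(a). Textbook relation, not thesis data.
import math

R, F, T = 8.314, 96485.0, 298.15  # J/(mol K), C/mol, K (25 degC assumed)

def electrode_potential(activity: float, z: int, e0: float = 0.0) -> float:
    """Electrode potential (V) for an ion of charge z at the given activity."""
    return e0 + (R * T / (z * F)) * math.log(activity)

# Slope check: ~59.2 mV per decade for monovalent ions such as K+,
# ~29.6 mV per decade for divalent ions such as Pb2+.
for z in (1, 2):
    slope = (electrode_potential(1e-3, z) - electrode_potential(1e-4, z)) * 1e3
    print(f"z={z}: {slope:.1f} mV/decade")
```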
Abstract:
This thesis is the outcome of investigations carried out on the development of an Artificial Neural Network (ANN) model to implement the 2-D DFT at high speed. A new formulation of the 2-D DFT relation is presented. This formulation enables the DFT computation to be organised in stages involving only real additions, except at the final stage of computation, and the number of stages is always fixed at four. Two different strategies are proposed: 1) a visual representation of 2-D DFT coefficients, and 2) a neural network approach. The visual representation scheme can be used to compute, analyse and manipulate 2-D signals such as images in the frequency domain in terms of symbols derived from the 2x2 DFT, which in turn can be represented in terms of real data; this approach can help analyse signals in the frequency domain even without computing the DFT coefficients. A hierarchical neural network model is developed to implement the 2-D DFT. Presently, this model is capable of implementing the 2-D DFT for orders N such that ((N))4 = 2 (i.e. N mod 4 = 2), and it can be developed into one that implements the 2-D DFT for any order N up to a set maximum limited by hardware constraints. The reported method shows potential for implementing the 2-D DFT in hardware as a VLSI/ASIC.
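The 2x2 DFT mentioned above needs only real additions and subtractions, because all of its twiddle factors are +1 or -1. A minimal sketch verifying this building block against a standard FFT follows; it illustrates the arithmetic only and is not the thesis's ANN model.

```python
# Minimal sketch: the 2x2 DFT of real data uses only additions and
# subtractions, since every twiddle factor is +1 or -1. This illustrates
# the building block mentioned above, not the thesis's ANN implementation.
import numpy as np

def dft2x2(x):
    """2x2 DFT of a real 2x2 block using only additions and subtractions."""
    a, b = x[0, 0], x[0, 1]
    c, d = x[1, 0], x[1, 1]
    return np.array([[a + b + c + d, a - b + c - d],
                     [a + b - c - d, a - b - c + d]], dtype=float)

x = np.array([[1.0, 2.0], [3.0, 4.0]])
# For a real 2x2 input the DFT is purely real, so comparing .real is exact.
assert np.allclose(dft2x2(x), np.fft.fft2(x).real)
print(dft2x2(x))
```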
Abstract:
Biometrics deals with the physiological and behavioural characteristics of an individual to establish identity. Fingerprint-based authentication is the most advanced biometric authentication technology. The minutiae-based fingerprint identification method offers a reasonable identification rate, but the minutiae map consists of about 70-100 minutia points, and matching accuracy drops as the database grows. Hence it is inevitable that the fingerprint feature code be made as small as possible so that identification becomes easier. In this research, a novel global-singularity-based fingerprint representation is proposed. The fingerprint baseline, which is the line between the distal and intermediate phalangeal joint lines in the fingerprint, is taken as the reference line. A polygon is formed with the singularities and the fingerprint baseline, and the feature vectors are the polygonal angles, sides, area and type, together with the ridge counts between the singularities. A 100% recognition rate is achieved with this method, which is compared with the conventional minutiae-based recognition method in terms of computation time, receiver operating characteristics (ROC) and feature vector length. Speech is a behavioural biometric modality and can be used for speaker identification. In this work, MFCCs of text-dependent speech are computed and clustered using the k-means algorithm, and a backpropagation-based Artificial Neural Network is trained to identify the clustered speech code; the performance of the neural network classifier is compared with a VQ-based Euclidean minimum-distance classifier. Biometric systems that use a single modality are usually affected by problems like noisy sensor data, non-universality and/or lack of distinctiveness of the biometric trait, unacceptable error rates, and spoof attacks. A multi-finger feature-level-fusion-based fingerprint recognition system is therefore developed, and its performance is measured in terms of the ROC curve. Score-level fusion of the fingerprint- and speech-based recognition systems is performed, and 100% accuracy is achieved over a considerable range of matching thresholds.
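As a minimal sketch of the speech branch described above (MFCC extraction, k-means clustering, neural-network classification), the following uses librosa and scikit-learn, which are assumed tools; the filenames are hypothetical and the thesis does not specify its implementation.

```python
# Minimal sketch of the speech pipeline described above: MFCC features,
# k-means clustering into a codebook, and an MLP classifier. librosa and
# scikit-learn are assumed tools; the wav filenames are hypothetical.
import numpy as np
import librosa
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

def speaker_codebook(wav_path: str, sr: int = 16000, n_clusters: int = 8):
    """Cluster an utterance's MFCC frames into a fixed-size codebook vector."""
    y, _ = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T   # frames x 13
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(mfcc)
    return km.cluster_centers_.ravel()   # flattened codebook as feature vector

# Hypothetical training utterances: one codebook vector per file.
files = ["spk0_a.wav", "spk0_b.wav", "spk1_a.wav", "spk1_b.wav"]
labels = [0, 0, 1, 1]
X = np.stack([speaker_codebook(f) for f in files])

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X, labels)
print(clf.predict([speaker_codebook("spk0_test.wav")]))  # hypothetical file
```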
Abstract:
The contemporary explanations and discussions of the relationship between medicine, health and society centre around assumptions that can be broadly classified into three sets. The first set considers health and illness as predominantly 'biological', and therefore as having nothing to do with the social and economic environment in which they occur; the struggle to combat illness therefore lies entirely within the purview of modern medicine, which is neutral to economic or social change. The second considers the practice of medicine a natural science. It allows the doctor to separate himself from his subject matter, the patient, in the same way as the natural scientist is assumed to separate himself from his subject matter, the natural world. As a 'science', and with the scientific method, it can produce an unchallengeable and autonomous body of knowledge which is free from the wider social and economic context. The third, different from the above, recognises the relationship between health, medicine and society. Social and environmental aspects as determinants of illness or of health come into sharp focus here, and it assigns to medicine the status of a mediator, the only viable mediator, between people and diseases. In this scheme of things the usefulness of medicine is unquestionable, but the problem lies in not having enough of it to go around.
Abstract:
Cerebral glioma is the most prevalent primary brain tumour; gliomas are classified broadly into low and high grades according to the degree of malignancy. High-grade gliomas are highly malignant, carry a poor prognosis, and patients survive less than eighteen months after diagnosis; low-grade gliomas are slow-growing, least malignant and respond better to therapy. To date, histological grading is the standard technique for diagnosis, treatment planning and survival prediction. The main objective of this thesis is to propose novel methods for the automatic extraction of low- and high-grade glioma and other brain tissues, grade-detection techniques for glioma using conventional magnetic resonance imaging (MRI) modalities, and 3D modelling of glioma from segmented tumour slices in order to assess tumour growth rate. Two new methods are developed for extracting tumour regions, of which the second, named the Adaptive Gray level Algebraic set Segmentation Algorithm (AGASA), can also extract white matter and grey matter from T1 FLAIR and T2-weighted images. The methods were validated against manual ground-truth images, with promising results; they were also compared with the widely used fuzzy c-means clustering technique, and the robustness of the algorithm was checked at different noise levels. Image texture can provide significant information on the (ab)normality of tissue, and this thesis extends this idea to tumour texture grading and detection. Based on thresholds of discriminant first-order and gray-level co-occurrence matrix (GLCM) based second-order statistical features, three feature sets were formulated and a decision system was developed for grade detection of glioma from the conventional T2-weighted MRI modality. Quantitative performance analysis using the ROC curve showed 99.03% accuracy in distinguishing between advanced (aggressive) and early-stage (non-aggressive) malignant glioma. The developed brain-texture analysis techniques can improve the physician's ability to detect and analyse pathologies, leading to more reliable diagnosis and treatment of disease. The segmented tumours were also used for volumetric modelling, which can give an idea of the tumour growth rate; this can be used for assessing response to therapy and patient prognosis.
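As a minimal sketch of first-order and GLCM second-order texture features of the kind used above for grade detection, the following uses scikit-image, which is an assumed tool; the decision threshold is an illustrative placeholder, not the thesis's trained decision system.

```python
# Minimal sketch of first-order and GLCM second-order texture features of
# the kind used for glioma grade detection. scikit-image is an assumed
# tool, and the threshold below is an illustrative placeholder.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(roi: np.ndarray) -> dict:
    """First-order stats plus GLCM contrast/homogeneity for an 8-bit ROI."""
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return {
        "mean": roi.mean(),                                     # first-order
        "variance": roi.var(),                                  # first-order
        "contrast": graycoprops(glcm, "contrast").mean(),       # second-order
        "homogeneity": graycoprops(glcm, "homogeneity").mean(), # second-order
    }

# Random 8-bit patch standing in for a tumour region of interest.
roi = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)
f = texture_features(roi)
# Placeholder decision rule; real thresholds would come from training data.
print("high grade" if f["contrast"] > 50.0 else "low grade", f)
```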
Abstract:
Optimum conditions and experimental details for the formation of γ-Fe2O3 from goethite have been worked out. In another method, a cheap complexing medium of starch was employed for precipitating acicular ferrous oxalate, which on decomposition in nitrogen and subsequent oxidation yielded acicular γ-Fe2O3. On the basis of thermal decomposition in dry and moist nitrogen, DTA, XRD, GC and thermodynamic arguments, the mechanism of decomposition was elucidated. New materials obtained by doping γ-Fe2O3 with 1-16 atomic percent magnesium, cobalt, nickel and copper were synthesised and characterised.
Abstract:
In this paper an attempt has been made to determine the number of Premature Ventricular Contraction (PVC) cycles accurately in a given electrocardiogram (ECG) using a wavelet constructed from multiple Gaussian functions. It is difficult to assess the ECGs of patients who are continuously monitored over a long period, so the proposed classification method will help doctors determine the severity of PVC in a patient. Principal Component Analysis (PCA) and a simple classifier have been used in addition to the specially developed wavelet transform. The proposed wavelet has been designed using multiple Gaussian functions which, when summed, resemble a normal ECG cycle; the number of Gaussians used depends on the number of peaks present in a normal ECG. The developed wavelet satisfies all the properties of a traditional continuous wavelet, and it was optimised using a genetic algorithm (GA). ECG records from the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) database were used for validation. Of the 8694 ECG cycles used for evaluation, the classification algorithm responded with an accuracy of 97.77%. To compare the performance of the new wavelet, classification was also performed using standard wavelets such as Morlet, Meyer, bior3.9, db5, db3, sym3 and Haar; the new wavelet outperforms the rest.
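As a minimal sketch of how a candidate mother wavelet can be built from a sum of Gaussians shaped like the P-QRS-T deflections, the following constructs such a function and enforces the zero-mean (admissibility) and unit-energy properties; the centre/width/amplitude values are illustrative guesses, not the paper's GA-optimised parameters.

```python
# Minimal sketch: a candidate mother wavelet as a sum of Gaussians shaped
# like the P, Q, R, S and T peaks of an ECG cycle. The numeric parameters
# are illustrative guesses, not the GA-optimised values of the paper.
import numpy as np

t = np.linspace(-1.0, 1.0, 2000)
dt = t[1] - t[0]

# (amplitude, centre, width) for five ECG-like deflections: P, Q, R, S, T.
peaks = [(0.15, -0.45, 0.06), (-0.10, -0.12, 0.02), (1.00, 0.00, 0.02),
         (-0.25, 0.10, 0.02), (0.35, 0.45, 0.08)]

psi = sum(a * np.exp(-((t - mu) ** 2) / (2 * s ** 2)) for a, mu, s in peaks)
psi -= psi.mean()                           # enforce zero mean (admissibility)
psi /= np.sqrt(np.sum(psi ** 2) * dt)       # normalise to unit energy

print("zero mean  :", abs(psi.mean()) < 1e-12)
print("unit energy:", abs(np.sum(psi ** 2) * dt - 1.0) < 1e-9)
```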
Abstract:
In the past, natural resources were plentiful and people were scarce, but the situation is rapidly reversing. Our challenge is to find a way to balance human consumption and nature's limited productivity so as to ensure that our communities are sustainable locally, regionally and globally. Kochi, the commercial capital of Kerala, South India, and the second most important city on the western coast after Mumbai, contains a wide variety of residential environments. Due to rapid population growth, changing lifestyles, food habits and living standards, institutional weaknesses, improper choice of technology and public apathy, the present pattern of the city can be classified as haphazard growth, with the typical problems characteristic of unplanned urban development. Ecological Footprint Analysis (EFA) is a physical accounting method, developed by William Rees and M. Wackernagel, which focuses on land appropriation, using land as its "currency"; it provides a means of measuring and communicating human-induced environmental impacts upon the planet. The aim of applying EFA to Kochi city is to quantify the consumption and waste generation of a population and to compare it with the existing biocapacity. By quantifying the ecological footprint we can formulate strategies to reduce the footprint and thereby achieve sustainable living. In this paper, an attempt is made to explore the tool of Ecological Footprint Analysis and to calculate and analyse the ecological footprint of the residential areas of Kochi city. The paper also discusses and analyses the waste footprint of the city, and suggests strategies to reduce the footprint, thereby making the city sustainable.
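As a minimal sketch of the accounting EFA performs (consumption converted to appropriated land per category and compared with biocapacity), the following uses placeholder numbers throughout; they are not results for Kochi.

```python
# Minimal sketch of ecological-footprint accounting: each consumption
# category is converted to appropriated land (gha per capita) and the total
# is compared with available biocapacity. All numbers are illustrative
# placeholders, not results for Kochi.

# category: (annual consumption per capita, land appropriated per unit, gha)
consumption = {
    "food (t)":      (0.8, 1.2),      # placeholder values
    "energy (MWh)":  (1.5, 0.3),
    "built-up (m2)": (40.0, 0.0001),
    "waste (t)":     (0.4, 0.5),
}

footprint = sum(q * intensity for q, intensity in consumption.values())
biocapacity = 0.45  # placeholder biocapacity, gha per capita

print(f"footprint  : {footprint:.2f} gha/capita")
print(f"biocapacity: {biocapacity:.2f} gha/capita")
print("ecological deficit" if footprint > biocapacity else "ecological reserve")
```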
Abstract:
The super-resolution problem is an inverse problem: producing a high-resolution (HR) image from one or more low-resolution (LR) observations. It includes upsampling the image, thereby increasing the maximum spatial frequency, and removing degradations that arise during image capture, namely aliasing and blurring. The work presented in this thesis is based on learning-based single-image super-resolution. In learning-based super-resolution algorithms, a training set or database of available HR images is used to construct the HR version of an image captured with an LR camera; in the training set, images are stored as patches or as coefficients of feature representations such as the wavelet transform, DCT, etc. Single-frame image super-resolution can be used in applications where a database of HR images is available, and its advantage is that by skilfully creating a database of suitable training images, one can improve the quality of the super-resolved image. A new super-resolution method based on the wavelet transform is developed, and it outperforms conventional wavelet-transform-based methods and standard interpolation methods. Super-resolution techniques based on a skewed anisotropic transform called the directionlet transform are then developed to convert a small low-resolution image into a large high-resolution image. The super-resolution algorithm not only increases the size but also reduces the degradations that occur while capturing the image; this method outperforms the standard interpolation methods and the wavelet methods, both visually and in terms of SNR values, and artifacts like aliasing and ringing are also eliminated. The super-resolution methods are implemented using both critically sampled and oversampled directionlets. The conventional directionlet transform is computationally complex, so the lifting scheme is used for implementing directionlets; the new single-image super-resolution method based on the lifting scheme reduces computational complexity and thereby computation time. The quality of the super-resolved image depends on the type of wavelet basis used, and a study is conducted to find the effect of different wavelets on the single-image super-resolution method. Finally, the new method, implemented on grey images, is extended to colour images and noisy images.
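As a minimal sketch of the learning-based idea described above, the following builds a database of LR/HR patch pairs from a training image and super-resolves a test image by nearest-neighbour patch lookup; this pixel-domain substitution stands in for the wavelet/directionlet formulations developed in the thesis, and all images are random placeholders.

```python
# Minimal sketch of example-based single-image super-resolution: an HR
# training image is cut into LR/HR patch pairs, and each LR patch of the
# test image is replaced by the HR patch of its nearest LR neighbour.
import numpy as np

def build_database(hr_img, scale=2, lp=4):
    """Return paired (LR patch, HR patch) arrays from one HR training image."""
    lr_img = hr_img[::scale, ::scale]                 # crude decimation
    lrs, hrs = [], []
    for i in range(lr_img.shape[0] - lp + 1):
        for j in range(lr_img.shape[1] - lp + 1):
            lrs.append(lr_img[i:i+lp, j:j+lp].ravel())
            hrs.append(hr_img[i*scale:(i+lp)*scale, j*scale:(j+lp)*scale])
    return np.stack(lrs), np.stack(hrs)

def super_resolve(lr_img, lrs, hrs, scale=2, lp=4):
    """Nearest-neighbour patch substitution over non-overlapping LR patches."""
    H, W = lr_img.shape[0] // lp * lp, lr_img.shape[1] // lp * lp
    out = np.zeros((H * scale, W * scale))
    for i in range(0, H, lp):
        for j in range(0, W, lp):
            q = lr_img[i:i+lp, j:j+lp].ravel()
            k = np.argmin(((lrs - q) ** 2).sum(axis=1))   # closest LR patch
            out[i*scale:(i+lp)*scale, j*scale:(j+lp)*scale] = hrs[k]
    return out

rng = np.random.default_rng(0)
train_hr = rng.random((64, 64))          # placeholder HR training image
lrs, hrs = build_database(train_hr)
test_lr = rng.random((16, 16))           # placeholder LR test image
print(super_resolve(test_lr, lrs, hrs).shape)  # -> (32, 32)
```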
Abstract:
The 20th century witnessed the extensive use of microwaves in industrial, scientific and medical (ISM) fields. A major hindrance to many developments in the ISM field is the lack of knowledge about the effect of microwaves on the materials used in various applications. The study of the interaction of microwaves with materials demands knowledge of the dielectric properties of those materials; however, the dielectric properties of many of them are still unknown or little studied. This thesis is an effort to shed light on the dielectric properties of some materials used in medical, scientific and industrial fields. Microwave phantoms are materials used in microwave simulation applications, and effort has been taken to develop and characterise low-cost, eco-friendly phantoms from biomaterials and bioceramics. The interaction of microwaves with living tissue paved the way for the development of materials for electromagnetic shielding: materials with good conductivity/absorption properties can be used for EMI shielding applications, and conducting polymer materials are developed and characterised in this context. The materials developed and analysed in this thesis are biomaterials, bioceramics and conducting polymers. The use of materials of biological origin in scientific and medical applications provides an eco-friendly pathway. The microwave characterisation of the materials was done using the cavity material perturbation method. Low-cost and eco-friendly biomaterial films were developed from arrowroot and chitosan; the developed films could be used as microwave phantom material, capsule material in pharmaceutical applications, transdermal patch material and eco-friendly band-aids. Bioceramics with better bioresorption and biocompatibility were synthesised; bioceramics such as hydroxyapatite, beta-tricalcium phosphate and biphasic calcium phosphate were studied, and the prepared bioceramics could be used as phantom materials representing collagen, bone marrow, human abdominal wall fat and human chest fat. Conducting polymers based on polyaniline are developed and characterised; the developed materials can be used in electromagnetic shielding applications such as anechoic chambers, transmission cables, etc.
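As a minimal sketch of the cavity perturbation method named above, the following evaluates the commonly used first-order perturbation relations (for a small sample at an E-field maximum); variants of these formulas exist, and the input values are illustrative placeholders, not measurements from the thesis.

```python
# Minimal sketch of the standard cavity perturbation relations: relative
# permittivity from the shift in resonant frequency and quality factor when
# a small sample is inserted. Inputs are illustrative placeholders.

def cavity_perturbation(fc, fs, qc, qs, vc, vs):
    """Real and imaginary relative permittivity from the resonant-frequency
    shift (fc -> fs) and Q-factor change (qc -> qs) when a sample of volume
    vs is inserted into a cavity of volume vc (sample at an E-field maximum)."""
    eps_real = 1.0 + (vc * (fc - fs)) / (2.0 * vs * fs)
    eps_imag = (vc / (4.0 * vs)) * (1.0 / qs - 1.0 / qc)
    return eps_real, eps_imag

# Placeholder numbers: empty cavity at 2.440 GHz, sample-loaded at 2.432 GHz.
er, ei = cavity_perturbation(fc=2.440e9, fs=2.432e9, qc=2500.0, qs=1800.0,
                             vc=5.0e-4, vs=1.0e-7)
print(f"eps' = {er:.2f}, eps'' = {ei:.3f}, tan(delta) = {ei/er:.4f}")
```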