15 results for Tutorial on Computing
at Cochin University of Science
Abstract:
One of the fastest expanding areas of computer exploitation is embedded systems, whose prime function is not computing as such, but which nevertheless require information processing in order to carry out their prime function. Advances in hardware technology have made multi-microprocessor systems a viable alternative to uniprocessor systems in many embedded application areas. This thesis reports the results of investigations carried out on multi-microprocessors oriented towards embedded applications, with a view to enhancing throughput and reliability. An ideal controller for multiprocessor operation is developed which smoothens the sharing of routines and enables more powerful and efficient code/data interchange. Results of performance evaluation are appended. A typical application scenario is presented, which calls for classifying tasks based on characteristic features that were identified. The different classes are introduced along with a partitioned storage scheme. Theoretical analysis is also given. A review of schemes available for reducing disc access time is carried out and a new scheme presented. This is found to speed up database transactions in embedded systems. The significance of software maintenance and adaptation in such applications is highlighted. A novel scheme of providing a maintenance folio to system firmware is presented, along with experimental results. Processing reliability can be enhanced if a facility exists to check whether a particular instruction in a stream is appropriate. Estimating the likelihood of occurrence of a particular instruction is more practical when the number of instructions in the set is small. A new organisation is derived to form the basis for further work. Some early results that would help steer the course of the work are presented.
Abstract:
Sharing information with those in need of it has always been an idealistic goal of networked environments. With the proliferation of computer networks, information is so widely distributed among systems that it is imperative to have well-organized schemes for retrieval and also for discovery. This thesis investigates the problems associated with such schemes and suggests a software architecture aimed at achieving meaningful discovery. The use of information elements as a modelling base for efficient information discovery in distributed systems is demonstrated with the aid of a novel conceptual entity called the infotron. The investigations are focused on distributed systems and their associated problems. The study was directed towards identifying a suitable software architecture and incorporating it in an environment where information growth is phenomenal and a proper mechanism for carrying out information discovery becomes feasible. An empirical study undertaken with the aid of an election database of geographically distributed constituencies provided the insights required. This is manifested in the Election Counting and Reporting Software (ECRS) System. The ECRS system is an essentially distributed software system designed to prepare reports for district administrators about the election counting process and to generate other miscellaneous statutory reports. Most distributed systems of the nature of ECRS possess a "fragile architecture" which makes them liable to collapse with the occurrence of minor faults. This is resolved with the help of the proposed penta-tier architecture, which places five different technologies at the different tiers of the architecture. The results of the experiments conducted, and their analysis, show that such an architecture helps to keep the different components of the software intact and impermeable to internal or external faults. The architecture thus evolved needed a mechanism to support information processing and discovery. This necessitated the introduction of the novel concept of the infotron. Further, when a computing machine has to perform any meaningful extraction of information, it is guided by what is termed an infotron dictionary. The other empirical study was to find out which of the two prominent markup languages, HTML and XML, is better suited for the incorporation of infotrons. A comparative study of 200 documents in HTML and XML was undertaken; the result was in favor of XML. The concepts of the infotron and the infotron dictionary were then applied to implement an Information Discovery System (IDS). IDS is essentially a system that starts with the infotron(s) supplied as clue(s) and brews the information required to satisfy the need of the information discoverer, utilizing the documents available at its disposal (the information space). The various components of the system and their interactions follow the penta-tier architectural model and can therefore be considered fault-tolerant. IDS is generic in nature, and its characteristics and specifications were drawn up accordingly. Many subsystems interact with the multiple infotron dictionaries maintained in the system. In order to demonstrate the working of the IDS, and to discover information without modification of a typical Library Information System (LIS), an Information Discovery in Library Information System (IDLIS) application was developed.
IDLIS is essentially a wrapper for the LIS, which maintains all the databases of the library. The purpose was to demonstrate that the functionality of a legacy system could be enhanced by augmenting it with IDS, leading to an information discovery service. IDLIS demonstrates IDS in action, and proves that any legacy system could be augmented with IDS effectively to provide the additional functionality of an information discovery service. Possible applications of IDS and the scope for further research in the field are covered.
Abstract:
We present a novel approach to computing the orientation moments and rheological properties of a dilute suspension of spheroids in a simple shear flow at arbitrary Peclet number, based on a generalised Langevin equation method. This method differs from the diffusion equation method commonly used to model similar systems in that the actual equations of motion for the orientations of the individual particles are used in the computations, instead of a solution of the diffusion equation of the system. It also differs from the method of Brownian dynamics simulations in that the equations used for the simulations are deterministic differential equations even in the presence of noise, and not stochastic differential equations as in Brownian dynamics simulations. One advantage of the present approach over the Fokker-Planck equation formalism is that it employs a common strategy that can be applied across a wide range of shear and diffusion parameters. Also, since deterministic differential equations are easier to simulate than stochastic differential equations, the Langevin equation method presented in this work is more efficient and less computationally intensive than Brownian dynamics simulations. We derive the Langevin equations governing the orientations of the particles in the suspension and evolve a procedure for obtaining the equation of motion for any orientation moment. A computational technique is described for simulating the orientation moments dynamically from a set of time-averaged Langevin equations, which can be used to obtain the moments when the governing equations are harder to solve analytically. The results obtained using this method are in good agreement with those available in the literature. The above computational method is also used to investigate the effect of rotational Brownian motion on the rheology of the suspension under the action of an external force field. The force field is assumed to be either constant or periodic. In the case of constant external fields earlier results in the literature are reproduced, while for the case of periodic forcing certain parametric regimes corresponding to weak Brownian diffusion are identified where the rheological parameters evolve chaotically and settle onto a low-dimensional attractor. The response of the system to variations in the magnitude and orientation of the force field and in the strength of diffusion is also analyzed through numerical experiments. It is also demonstrated that the aperiodic behaviour exhibited by the system could not have been picked up by the diffusion equation approach as presently used in the literature. The main contributions of this work include the preparation of the basic framework for applying the Langevin method to standard flow problems, the quantification of rotary Brownian effects using the new method, the paired-moment scheme for computing the moments and its use in solving an otherwise intractable problem, especially in the limit of small Brownian motion where the problem becomes singular, and a demonstration of how systems governed by a Fokker-Planck equation can be explored for possible chaotic behaviour.
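As a concrete illustration of the deterministic backbone such a method builds on, the sketch below integrates Jeffery's equation (the standard noise-free orientation dynamics of a spheroid in simple shear) for an ensemble of particles and estimates the second orientation moment; the aspect ratio, shear rate and ensemble size are illustrative assumptions, and the thesis's time-averaged noise terms are not reproduced here.

```python
# Minimal sketch: deterministic orientation dynamics (Jeffery's equation)
# for spheroids in simple shear, with the second moment <pp> estimated by
# averaging over an ensemble of initial orientations. This shows only the
# deterministic part of a Langevin-type scheme, not the thesis's method.
import numpy as np
from scipy.integrate import solve_ivp

gamma_dot = 1.0          # shear rate (assumed)
r = 5.0                  # spheroid aspect ratio (assumed)
G = (r**2 - 1.0) / (r**2 + 1.0)

# Velocity gradient for simple shear u = (gamma_dot * y, 0, 0)
grad_u = np.zeros((3, 3))
grad_u[0, 1] = gamma_dot
E = 0.5 * (grad_u + grad_u.T)   # rate-of-strain tensor
W = 0.5 * (grad_u - grad_u.T)   # vorticity tensor

def jeffery(t, p):
    """dp/dt = W.p + G*(E.p - (p.E.p) p), keeping |p| = 1."""
    p = p / np.linalg.norm(p)
    return W @ p + G * (E @ p - (p @ E @ p) * p)

rng = np.random.default_rng(0)
ensemble = rng.normal(size=(200, 3))
ensemble /= np.linalg.norm(ensemble, axis=1, keepdims=True)

# Integrate each particle and accumulate the second orientation moment <pp>
moment = np.zeros((3, 3))
for p0 in ensemble:
    sol = solve_ivp(jeffery, (0.0, 10.0), p0, rtol=1e-8)
    p = sol.y[:, -1] / np.linalg.norm(sol.y[:, -1])
    moment += np.outer(p, p)
moment /= len(ensemble)
print("second orientation moment <pp>:\n", moment)
```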
Abstract:
This thesis is an outcome of studies carried out by the author on the Equatorial Undercurrent and the Equatorial Jet, an interesting and unique phenomenon discovered recently in the Indian Ocean (Wyrtki, 1973). The main objective of the thesis is to carry out a detailed investigation of the seasonal, latitudinal and longitudinal variation of the Equatorial Undercurrent in the Indian Ocean, and also of the Equatorial Jet, by mapping the vertical distribution of the oceanographic properties across the equator along various longitudes for all the months of a year, between 5°N and 5°S, utilising the oceanographic data collected during the International Indian Ocean Expedition and subsequently in the equatorial Indian Ocean. As the distribution of the hydrographic properties gives only a qualitative identification of the Undercurrent, a novel technique of computing the zonal flux through the bivariate distribution of salinity and thermosteric anomaly, introduced by Montgomery and Stroup (1962), is adopted in order to obtain the quantitative variation of the Equatorial Undercurrent and the Equatorial Jet. Finally, an attempt is made to give a plausible explanation of the features observed.
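In schematic form (our notation, following the general idea of the Montgomery and Stroup method rather than the thesis's exact formulation), the zonal volume flux through a meridional section is obtained by partitioning the section into classes of salinity $S_i$ and thermosteric anomaly $\delta_{T,j}$ and summing the transport of each class:

$$\Phi_x \;\approx\; \sum_i \sum_j \bar{u}(S_i, \delta_{T,j})\, A(S_i, \delta_{T,j}),$$

where $\bar{u}$ is the mean zonal velocity within a class and $A$ is the cross-sectional area it occupies; this is simply the section integral $\Phi_x = \iint u\,\mathrm{d}A$ decomposed over water-mass classes.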
Abstract:
This thesis is an outcome of the investigations carried out on the development of an Artificial Neural Network (ANN) model to implement the 2-D DFT at high speed. A new definition of the 2-D DFT relation is presented. This new definition enables DFT computation to be organized in stages involving only real addition, except at the final stage of computation. The number of stages is always fixed at 4. Two different strategies are proposed: 1) a visual representation of 2-D DFT coefficients; 2) a neural network approach. The visual representation scheme can be used to compute, analyze and manipulate 2-D signals such as images in the frequency domain in terms of symbols derived from the 2x2 DFT. This, in turn, can be represented in terms of real data. This approach can help analyze signals in the frequency domain even without computing the DFT coefficients. A hierarchical neural network model is developed to implement the 2-D DFT. Presently, this model is capable of implementing the 2-D DFT for a particular order N such that ((N))_4 = 2, i.e., N mod 4 = 2. The model can be developed into one that can implement the 2-D DFT for any order N, up to a maximum set by hardware constraints. The reported method shows potential for implementing the 2-D DFT in hardware as a VLSI/ASIC.
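For the 2x2 building block mentioned above, the absence of complex arithmetic is easy to see: the N = 2 twiddle factors are ±1, so all four coefficients are plain sums and differences of the input values. A minimal sketch (the symbol encoding built on top of these sums is the thesis's contribution and is not reproduced here):

```python
# The 2x2 DFT of a real block needs only real additions/subtractions,
# because the N = 2 twiddle factors are +1 and -1.
import numpy as np

def dft2x2(block):
    """2x2 DFT computed with real additions and subtractions only."""
    (a, b), (c, d) = block
    return np.array([[a + b + c + d, a - b + c - d],
                     [a + b - c - d, a - b - c + d]], dtype=float)

block = np.array([[1.0, 2.0], [3.0, 4.0]])
assert np.allclose(dft2x2(block), np.fft.fft2(block).real)
print(dft2x2(block))
```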
Abstract:
Biometrics deals with the physiological and behavioral characteristics of an individual in order to establish identity. Fingerprint-based authentication is the most advanced biometric authentication technology. The minutiae-based fingerprint identification method offers a reasonable identification rate. The minutiae feature map consists of about 70-100 minutia points, and matching accuracy drops as the size of the database grows. Hence it is essential to make the fingerprint feature code as small as possible so that identification becomes easier. In this research, a novel global-singularity-based fingerprint representation is proposed. The fingerprint baseline, which is the line between the distal and intermediate phalangeal joint lines in the fingerprint, is taken as the reference line. A polygon is formed from the singularities and the fingerprint baseline. The feature vectors comprise the polygonal angles, sides, area and type, and the ridge counts between the singularities. A 100% recognition rate is achieved with this method. The method is compared with the conventional minutiae-based recognition method in terms of computation time, receiver operating characteristics (ROC) and feature vector length. Speech is a behavioural biometric modality and can be used for identification of a speaker. In this work, MFCCs of text-dependent speech are computed and clustered using the k-means algorithm. A backpropagation-based Artificial Neural Network is trained to identify the clustered speech code. The performance of the neural network classifier is compared with that of a VQ-based Euclidean minimum-distance classifier. Biometric systems that use a single modality are usually affected by problems like noisy sensor data, non-universality and/or lack of distinctiveness of the biometric trait, unacceptable error rates, and spoof attacks. A multi-finger feature-level fusion based fingerprint recognition system is developed and its performance is measured in terms of the ROC curve. Score-level fusion of the fingerprint and speech based recognition systems is performed, and 100% accuracy is achieved for a considerable range of matching thresholds.
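As a sketch of the score-level fusion step (the weights, score ranges and threshold below are illustrative assumptions, not values from the work):

```python
# Hypothetical score-level fusion: min-max normalise the match scores from
# two matchers (fingerprint, speech) and combine them with a weighted sum
# before thresholding. All constants are illustrative assumptions.
def minmax(score, lo, hi):
    """Map a raw matcher score into [0, 1]."""
    return (score - lo) / (hi - lo)

def fused_decision(fp_score, sp_score, w_fp=0.6, w_sp=0.4, threshold=0.5):
    """Accept iff the weighted sum of normalised scores clears the threshold."""
    fp = minmax(fp_score, lo=0.0, hi=100.0)   # assumed fingerprint score range
    sp = minmax(sp_score, lo=0.0, hi=1.0)     # assumed speech score range
    fused = w_fp * fp + w_sp * sp
    return fused >= threshold, fused

accept, fused = fused_decision(fp_score=72.0, sp_score=0.65)
print(accept, round(fused, 3))   # True 0.692
```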
Abstract:
Microarray data analysis is one of the data mining tools used to extract meaningful information hidden in biological data. One of the major focuses of microarray data analysis is the reconstruction of gene regulatory networks, which may be used to provide a broader understanding of the functioning of complex cellular systems. Since cancer is a genetic disease arising from abnormal gene function, the identification of cancerous genes and the regulatory pathways they control will provide a better platform for understanding tumor formation and development. The major focus of this thesis is to understand the regulation of genes responsible for the development of cancer, particularly colorectal cancer, by analyzing microarray expression data. In this thesis, four computational algorithms, namely a fuzzy logic algorithm, a modified genetic algorithm, a dynamic neural fuzzy network and a Takagi-Sugeno-Kang-type recurrent neural fuzzy network, are used to extract cancer-specific gene regulatory networks from a plasma RNA dataset of colorectal cancer patients. Plasma RNA is highly attractive for cancer analysis since it requires only a small amount of blood and can be obtained at any time in a repetitive fashion, allowing the analysis of disease progression and treatment response.
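As a deliberately simplified stand-in for what "network reconstruction" means here (a thresholded-correlation sketch, not any of the four algorithms used in the thesis; gene names and the cutoff are illustrative assumptions):

```python
# Toy network reconstruction: link genes whose expression profiles co-vary
# strongly across samples as candidate regulatory interactions.
import numpy as np

rng = np.random.default_rng(1)
genes = ["APC", "KRAS", "TP53", "SMAD4", "MYC"]       # example colorectal genes
expr = rng.normal(size=(len(genes), 30))              # genes x samples (toy data)
expr[1] = 0.9 * expr[0] + 0.1 * rng.normal(size=30)   # plant one dependency

corr = np.corrcoef(expr)                              # pairwise Pearson correlation
cutoff = 0.8                                          # assumed significance cutoff
edges = [(genes[i], genes[j], round(corr[i, j], 2))
         for i in range(len(genes)) for j in range(i + 1, len(genes))
         if abs(corr[i, j]) >= cutoff]
print(edges)   # recovers the planted APC-KRAS link
```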
Abstract:
Decimal multiplication is an integral part of financial, commercial, and internet-based computations. This paper presents a novel double-digit decimal multiplication (DDDM) technique that offers low latency and high throughput. The design performs two digit multiplications simultaneously in one clock cycle. Double-digit fixed-point decimal multipliers for 7-digit, 16-digit and 34-digit operands are simulated using Leonardo Spectrum from Mentor Graphics Corporation with an ASIC library. The paper also presents area and delay comparisons for these fixed-point multipliers on Xilinx, Altera, Actel and QuickLogic FPGAs. The multiplier design can be extended to support decimal floating-point multiplication for the IEEE 754-2008 standard.
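A behavioural software analogy of the double-digit idea (our sketch, not the paper's hardware design): consume two multiplier digits per iteration, one iteration standing in for one clock cycle.

```python
# Behavioural sketch of double-digit decimal multiplication: fold in the
# partial products for a pair of multiplier digits per "cycle", processing
# digits least-significant first.
def double_digit_decimal_multiply(a: int, b: int) -> int:
    digits = [int(d) for d in str(b)[::-1]]          # multiplier digits, LSD first
    if len(digits) % 2:
        digits.append(0)                             # pad to a whole digit pair
    acc, cycles = 0, 0
    for i in range(0, len(digits), 2):               # one "clock cycle" per pair
        lo, hi = digits[i], digits[i + 1]
        acc += a * lo * 10**i + a * hi * 10**(i + 1)
        cycles += 1
    print(f"{cycles} cycles for {len(digits)} multiplier digits")
    return acc

assert double_digit_decimal_multiply(9_876_543, 1_234_567) == 9_876_543 * 1_234_567
```

A 7-digit multiplier thus takes 4 such cycles instead of 7, which is the latency benefit the paper attributes to processing digit pairs.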
Abstract:
Speech signals are one of the most important means of communication among human beings. In this paper, a comparative study of two feature extraction techniques is carried out for recognizing speaker-independent spoken isolated words. The first is a hybrid approach combining Linear Predictive Coding (LPC) and Artificial Neural Networks (ANN); the second uses a combination of Wavelet Packet Decomposition (WPD) and Artificial Neural Networks. Voice signals are sampled directly from the microphone and then processed using these two techniques to extract the features. Words from Malayalam, one of the four major Dravidian languages of southern India, are chosen for recognition. Training, testing and pattern recognition are performed using Artificial Neural Networks, with the backpropagation method used to train the ANN. The proposed method is implemented for 50 speakers uttering 20 isolated words each. Both methods produce good recognition accuracy, but Wavelet Packet Decomposition is found to be more suitable for recognizing speech because of its multi-resolution characteristics and efficient time-frequency localization.
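A minimal sketch of WPD-based feature extraction using the PyWavelets library; the wavelet family, decomposition depth and log-energy features are illustrative assumptions rather than the paper's stated settings.

```python
# Sketch: decompose an utterance into 2**3 = 8 wavelet-packet sub-bands and
# use the log energy of each sub-band as the ANN input vector.
import numpy as np
import pywt

def wpd_features(signal, wavelet="db4", level=3):
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                            mode="symmetric", maxlevel=level)
    nodes = wp.get_level(level, order="freq")        # 2**level sub-bands
    energies = np.array([np.sum(np.square(n.data)) for n in nodes])
    return np.log(energies + 1e-12)                  # log energy per sub-band

fs = 16_000                                          # assumed sampling rate
t = np.arange(fs) / fs
utterance = np.sin(2 * np.pi * 440 * t)              # stand-in for a sampled word
print(wpd_features(utterance))                       # 8-dimensional feature vector
```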
Abstract:
Due to advancements in mobile devices and wireless networks, mobile cloud computing, which combines mobile computing and cloud computing, has gained momentum since 2009. The characteristics of mobile devices and wireless networks make the implementation of mobile cloud computing more complicated than that of fixed clouds, and several major issues arise in Mobile Cloud Computing. One of the key issues is the end-to-end delay in servicing a request. Data caching is one of the techniques widely used in wired and wireless networks to improve data access efficiency. In this paper we explore the possibility of a cooperative caching approach to enhance data access efficiency in mobile cloud computing. The proposed approach is based on cloudlets, one of the architectures designed for mobile cloud computing.
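A toy sketch of the general cooperative idea (our assumption of the lookup order, not the paper's protocol): a cloudlet checks its own cache, then its peers, and only then pays the cloud round trip.

```python
# Cooperative caching sketch: local cache -> peer cloudlets -> distant cloud,
# caching the result locally on the way back to cut future end-to-end delay.
class Cloudlet:
    def __init__(self, name, peers=None):
        self.name = name
        self.cache = {}
        self.peers = peers or []

    def fetch_from_cloud(self, key):
        print(f"{self.name}: cloud round trip for {key!r}")  # highest-latency path
        return f"data-for-{key}"

    def get(self, key):
        if key in self.cache:                     # 1. local cloudlet cache
            return self.cache[key]
        for peer in self.peers:                   # 2. cooperative peer lookup
            if key in peer.cache:
                self.cache[key] = peer.cache[key]
                return self.cache[key]
        value = self.fetch_from_cloud(key)        # 3. fall back to the cloud
        self.cache[key] = value
        return value

a, b = Cloudlet("cloudlet-A"), Cloudlet("cloudlet-B")
a.peers, b.peers = [b], [a]
b.get("video:42")         # miss everywhere -> one cloud round trip
print(a.get("video:42"))  # served from peer cloudlet B, no cloud round trip
```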
Abstract:
On-line handwriting recognition has been a frontier area of research for the last few decades within the purview of pattern recognition. Word processing turns out to be a vexing experience in Indian languages, even with the assistance of an alphanumeric keyboard. A natural solution to this problem is offered through online character recognition. There is abundant literature on the handwriting recognition of Western, Chinese and Japanese scripts, but very little related to the recognition of Indic scripts such as Malayalam. This paper presents an efficient Online Handwritten character Recognition System for Malayalam Characters (OHR-M) using the K-NN algorithm. It would help in recognizing Malayalam text entered using pen-like devices. A novel feature extraction method, combining time-domain features with a dynamic representation of the writing direction and its curvature, is used for recognizing Malayalam characters. This writer-independent system gives an excellent accuracy of 98.125% with a recognition time of 15-30 milliseconds.
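A hedged sketch of the direction-and-curvature idea (the resampling length, feature layout and k are our illustrative assumptions, not the paper's exact design): take the writing direction as the angle of each stroke segment, the curvature as the change in that angle, and classify the fixed-length feature vector with k-NN.

```python
# Direction/curvature features from a pen trajectory, classified with k-NN.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def direction_curvature_features(points, n=16):
    pts = np.asarray(points, dtype=float)
    idx = np.linspace(0, len(pts) - 1, n).astype(int)   # crude resample to n points
    pts = pts[idx]
    d = np.diff(pts, axis=0)
    theta = np.arctan2(d[:, 1], d[:, 0])                # writing direction per segment
    curvature = np.diff(theta)                          # change of direction
    return np.concatenate([theta, curvature])

# Toy trajectories standing in for two character classes
line = [(i, i) for i in range(32)]
arc = [(np.cos(a), np.sin(a)) for a in np.linspace(0, np.pi, 32)]
X = [direction_curvature_features(line), direction_curvature_features(arc)]
y = ["class-line", "class-arc"]

knn = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(knn.predict([direction_curvature_features(arc)]))  # ['class-arc']
```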
Abstract:
In today's complicated computing environment, managing data has become the primary concern of all industries. Information security is the greatest challenge, and it has become essential to secure enterprise system resources like databases and operating systems from the attacks of unknown outsiders. Our approach plays a major role in detecting and managing vulnerabilities in complex computing systems. It allows enterprises to assess two primary tiers through a single interface, as a vulnerability scanner tool that provides a secure system compatible with industry security compliance. It provides an overall view of the vulnerabilities in the database by automatically scanning them with minimum overhead, and gives a detailed view of the risks involved and their corresponding ratings. Based on these priorities, an appropriate mitigation process can be implemented to ensure a secured system. The results show that our approach can effectively optimize the time and cost involved compared to existing systems.
Abstract:
This paper presents an efficient Online Handwritten character Recognition System for Malayalam Characters (OHR-M) using a Kohonen network. It would help in recognizing Malayalam text entered using pen-like devices, since entering text with a pen is a more natural and efficient way for users than a keyboard and mouse. To capture the difference between similar characters in Malayalam, a novel feature extraction method has been adopted: a combination of a context bitmap and normalized (x, y) coordinates. The system reported a writer-independent accuracy of 88.75% with a recognition time of 15-32 milliseconds.
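A hedged sketch of the two features named above (the grid size and normalisation details are our illustrative assumptions): scale the pen trajectory into the unit square to obtain the normalised (x, y) coordinates, and rasterise it onto a small grid to obtain a context bitmap of the stroke's neighbourhood.

```python
# Normalised coordinates + context bitmap as a flat feature vector.
import numpy as np

def normalise(points):
    pts = np.asarray(points, dtype=float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    return (pts - lo) / np.where(hi - lo == 0, 1, hi - lo)  # into [0, 1]^2

def context_bitmap(norm_pts, grid=8):
    bitmap = np.zeros((grid, grid), dtype=np.uint8)
    cells = np.minimum((norm_pts * grid).astype(int), grid - 1)
    bitmap[cells[:, 1], cells[:, 0]] = 1           # mark every visited cell
    return bitmap

stroke = [(10, 40), (12, 30), (15, 20), (20, 12), (30, 10), (40, 11)]
norm = normalise(stroke)
features = np.concatenate([norm.ravel(), context_bitmap(norm).ravel()])
print(features.shape)   # flattened input vector for the Kohonen network
```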
Abstract:
Post-transcriptional gene silencing by RNA interference is mediated by small interfering RNA called siRNA. This gene silencing mechanism can be exploited therapeutically against a wide variety of disease-associated targets, especially in AIDS, neurodegenerative diseases, cholesterol disorders and cancer in mice, with the hope of extending these approaches to treat humans. Over the recent past, a significant amount of work has been undertaken to understand the gene silencing mediated by exogenous siRNA. The design of efficient exogenous siRNA sequences is challenging because of many issues related to siRNA. While designing efficient siRNA, target mRNAs must be selected such that their corresponding siRNAs are likely to be efficient against that target and unlikely to accidentally silence other transcripts due to sequence similarity. So before performing gene silencing with siRNAs, it is essential to analyze their off-target effects in addition to their inhibition efficiency against a particular target. Hence designing exogenous siRNA with good knock-down efficiency and target specificity is an area of concern to be addressed. Some methods have already been developed that consider both the inhibition efficiency and the off-target possibility of siRNA against a gene. Of these methods, only a few have achieved good inhibition efficiency, specificity and sensitivity. The main focus of this thesis is to develop computational methods to optimize the efficiency of siRNA in terms of "inhibition capacity and off-target possibility" against target mRNAs with improved efficacy, which may be useful in the area of gene silencing and drug design for tumor development. This study aims to investigate the currently available siRNA prediction approaches and to devise a better computational approach to tackle the problem of siRNA efficacy in terms of inhibition capacity and off-target possibility. The strengths and limitations of the available approaches are investigated and taken into consideration in devising an improved solution. The approaches proposed in this study extend some of the well-scoring previous state-of-the-art techniques by incorporating machine learning and statistical approaches and thermodynamic features, like whole stacking energy, to improve prediction accuracy, inhibition efficiency, sensitivity and specificity. Here, we propose one Support Vector Machine (SVM) model and two Artificial Neural Network (ANN) models for siRNA efficiency prediction. In the SVM model, the classification property is used to classify whether an siRNA is efficient or inefficient in silencing a target gene. The first ANN model, named siRNA Designer, is used for optimizing the inhibition efficiency of siRNA against target genes. The second ANN model, named Optimized siRNA Designer (OpsiD), produces efficient siRNAs with high inhibition efficiency to degrade target genes, with improved sensitivity and specificity, and identifies the off-target knockdown possibility of an siRNA against non-target genes. The models are trained and tested against a large dataset of siRNA sequences. The validations are conducted using the Pearson Correlation Coefficient, the Matthews Correlation Coefficient, Receiver Operating Characteristic analysis, prediction accuracy, sensitivity and specificity. It is found that the approach, OpsiD, is capable of predicting the inhibition capacity of an siRNA against a target mRNA with improved results over the state-of-the-art techniques. We are also able to understand the influence of whole stacking energy on the efficiency of siRNA.
The model is further improved by including the ability to identify the "off-target possibility" of a predicted siRNA on non-target genes. Thus the proposed model, OpsiD, can predict optimized siRNA by considering both "inhibition efficiency on target genes and off-target possibility on non-target genes", with improved inhibition efficiency, specificity and sensitivity. Since we have taken efforts to optimize siRNA efficacy in terms of "inhibition efficiency and off-target possibility", we hope that the risk of "off-target effects" in gene silencing across various bioinformatics applications can be overcome to a great extent. These findings may provide new insights into cancer diagnosis, prognosis and therapy by gene silencing. The approach may be found useful for designing exogenous siRNA for therapeutic applications and gene silencing techniques in different areas of bioinformatics.
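A minimal sketch of the SVM classification step using scikit-learn; the one-hot sequence encoding and the toy labels are illustrative assumptions, not the thesis's feature set or data.

```python
# Sketch: encode each 19-nt siRNA guide strand as a flat one-hot vector and
# train an SVM to separate "efficient" (1) from "inefficient" (0) silencers.
import numpy as np
from sklearn.svm import SVC

BASES = "AUGC"

def one_hot(sirna: str) -> np.ndarray:
    """Flat one-hot encoding of an siRNA sequence."""
    vec = np.zeros((len(sirna), len(BASES)))
    for i, nt in enumerate(sirna):
        vec[i, BASES.index(nt)] = 1.0
    return vec.ravel()

# Toy training data: sequences paired with efficient/inefficient labels
seqs = ["AUGCUAGCUAGCUAGCUAG", "GGGGCCCCGGGGCCCCGGG",
        "AUAUAUAUAUAUAUAUAUA", "GCGCGCGCGCGCGCGCGCG"]
labels = [1, 0, 1, 0]

model = SVC(kernel="rbf")
model.fit([one_hot(s) for s in seqs], labels)

query = "AUGCUAGCUAGCUAGCUAA"
print(model.predict([one_hot(query)]),              # efficient / inefficient
      model.decision_function([one_hot(query)]))    # signed margin
```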
Abstract:
Nonlinear optics is a broad field of research and technology that encompasses subject matter in the fields of Physics, Chemistry, and Engineering. It is the branch of Optics that describes the behavior of light in nonlinear media, that is, media in which the dielectric polarization P responds nonlinearly to the electric field E of the light. This nonlinearity is typically only observed at very high light intensities. The area has applications in all-optical and electro-optical devices used for communication, optical storage and optical computing. Many nonlinear optical effects have proved to be versatile probes for understanding basic and applied problems. Nonlinear optical devices use the nonlinear dependence of the refractive index or absorption coefficient on the applied field. These nonlinear optical devices are passive devices and are referred to as intelligent or smart materials, owing to the fact that the sensing, processing and activating functions required for optical processes are inherent to them, whereas these functions are separate in dynamic devices. The large interest in nonlinear optical crystalline materials has been motivated by their potential use in the fabrication of all-optical photonic devices. Transparent crystalline materials can exhibit different kinds of optical nonlinearities, which are associated with a nonlinear polarization. The choice of the most suitable crystal material for a given application is often far from trivial; it should involve the consideration of many aspects. A high nonlinearity for frequency conversion of ultra-short pulses does not help if the interaction length is strongly limited by a large group velocity mismatch and a low damage threshold limits the applicable optical intensities. Also, it can be highly desirable to use a crystal material which can be critically phase-matched at room temperature. Among the different types of nonlinear crystals, metal halides and tartrates have attracted attention due to their importance in photonics. Metal halides like lead halides have drawn attention because they exhibit interesting features from the standpoint of the electron-lattice interaction; these materials are important for their luminescent properties. Tartrate single crystals show many interesting physical properties such as ferroelectric, piezoelectric, dielectric and optical characteristics. They are used for nonlinear optical devices based on their optical transmission characteristics. Among the several tartrate compounds, strontium tartrate, calcium tartrate and cadmium tartrate have received greater attention on account of their ferroelectric, nonlinear optical and spectral characteristics. The present thesis reports the linear and nonlinear aspects of these crystals and their potential applications in the field of photonics.
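The nonlinear response referred to above is conventionally written as a power-series expansion of the polarization in the electric field (the standard textbook relation, quoted here for context rather than taken from the thesis):

$$P = \varepsilon_0 \left( \chi^{(1)} E + \chi^{(2)} E^2 + \chi^{(3)} E^3 + \cdots \right),$$

where $\chi^{(1)}$ is the linear susceptibility and the higher-order susceptibilities $\chi^{(2)}, \chi^{(3)}, \ldots$ give rise to effects such as second-harmonic generation and the intensity-dependent refractive index.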