11 results for Software Design Pattern
at Cochin University of Science and Technology
Abstract:
The study was motivated by the need to understand the factors that shape software exports and competitiveness, both positively and negatively. The influence of each factor on export competitiveness must be understood in depth in order to assess the industry's sustainability. India is emulated as an example of a successful strategy for software development and exports, and its software industry is hailed as one of the most globally competitive in the world. The major objectives are to model the growth pattern of India's software and services exports and domestic sales, and to identify the factors influencing the growth of the Indian software industry. The thesis also compares the growth pattern of India's software industry with those of Ireland and Israel, critically examines the various problems faced by the Indian software industry and its exports, and models the variables of competitiveness of emerging software-producing nations.
Abstract:
The goal of this work was to develop a query processing system using software agents. The Open Agent Architecture framework is used for system development. The system supports queries in both Hindi and Malayalam, two prominent regional languages of India. Natural language processing techniques are used to extract meaning from the plain-text query, and information from the database is returned to the user in his or her native language. The system architecture is structured so that it can be adapted to other regional languages of India. The system can be effectively used in application areas such as e-governance, agriculture, rural health, education, national resource planning, disaster management, and information kiosks, where people from all walks of life are involved.
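The abstract gives no implementation details, but the pipeline it describes (extract the intended meaning from a plain query, look the answer up in a database, reply in the user's native language) can be sketched roughly as follows. This is a minimal, hypothetical illustration, not the thesis system: the transliterated lexicon, the crops table and the helper function are all invented for the example.

```python
import sqlite3

# Hypothetical miniature lexicon mapping transliterated Hindi/Malayalam
# query words to database columns, a stand-in for the thesis's
# NLP-based meaning-extraction step.
FIELD_LEXICON = {"keemat": "price", "vila": "price", "alavu": "quantity"}

def answer(conn, query):
    """Extract the requested field and entity, then look them up."""
    words = query.lower().split()
    field = next((FIELD_LEXICON[w] for w in words if w in FIELD_LEXICON), None)
    entity = next((w for w in words if w not in FIELD_LEXICON), None)
    if field is None or entity is None:
        return "query not understood"
    # field comes only from our fixed lexicon, so interpolation is safe here.
    row = conn.execute(
        f"SELECT {field} FROM crops WHERE name = ?", (entity,)
    ).fetchone()
    return row[0] if row else "no data"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE crops (name TEXT, price REAL, quantity REAL)")
conn.execute("INSERT INTO crops VALUES ('rice', 32.0, 500)")
print(answer(conn, "rice vila"))  # -> 32.0
```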
Abstract:
Analog-to-Digital Converters (ADCs) have an important impact on the overall performance of signal processing systems. This research explores efficient techniques for the design of sigma-delta ADCs, especially for multi-standard wireless transceivers. In particular, the aim is to develop novel models and algorithms to address this problem and to implement software tools that are able to assist the designer's decisions in the system-level exploration phase. To this end, this thesis presents a framework of techniques for designing sigma-delta analog-to-digital converters. A 2-2-2 reconfigurable sigma-delta modulator is proposed which can meet the design specifications of three wireless communication standards, namely GSM, WCDMA and WLAN. A sigma-delta modulator design tool is developed using the Graphical User Interface Development Environment (GUIDE) in MATLAB. A Genetic Algorithm (GA) based search method is introduced to find the optimum values of the scaling coefficients and to maximize the dynamic range of a sigma-delta modulator.
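The GA-based coefficient search named above can be sketched as a generic genetic algorithm. This is not the thesis's design tool: the fitness function below is a toy surrogate standing in for a dynamic-range estimate that would come from a modulator model, and all hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(coeffs):
    """Placeholder objective: in the thesis this would be the estimated
    dynamic range for a given vector of scaling coefficients."""
    return -np.sum((coeffs - 0.5) ** 2)  # toy surrogate, peak at 0.5

def ga_search(n_coeffs=6, pop_size=40, generations=100,
              mutation_rate=0.1, bounds=(0.0, 1.0)):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, (pop_size, n_coeffs))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        # Truncation selection: keep the better half as parents.
        parents = pop[np.argsort(scores)[-pop_size // 2:]]
        # Single-point crossover between random parent pairs.
        pairs = rng.integers(0, len(parents), (pop_size, 2))
        cut = rng.integers(1, n_coeffs, pop_size)
        children = np.where(np.arange(n_coeffs) < cut[:, None],
                            parents[pairs[:, 0]], parents[pairs[:, 1]])
        # Gaussian mutation, clipped back to the coefficient bounds.
        mask = rng.random(children.shape) < mutation_rate
        children[mask] += rng.normal(0, 0.05, mask.sum())
        pop = np.clip(children, lo, hi)
    return pop[np.argmax([fitness(ind) for ind in pop])]

print("best coefficients:", np.round(ga_search(), 3))
```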
Abstract:
Systems which employ underwater acoustic energy for observation or communication are called sonar systems. Active and passive sonars are the two types of systems used for the detection and localisation of underwater targets. Active sonar involves the transmission of an acoustic signal which, when reflected from a target, provides the sonar receiver with a basis for detection and estimation. Passive sonar bases its detection and estimation on sounds which emanate from the target itself: machinery noise, flow noise, transmissions from its own active sonar, etc. Electroacoustic transducers are used in sonar systems for the transmission and detection of acoustic energy. The transducer used for the transmission of acoustic energy is called a projector, and the one used for reception is called a hydrophone. Since a single transducer is not sufficient for long-range and directional transmission, a properly distributed array of transducers is to be used [9-11]. The need for spatial processing to generate the most favourable directivity patterns for transducer systems used in underwater applications has already been analysed by several investigators [12-21]. The desired directivity pattern can be generated by the use of suitable focussing techniques, by an array of non-directional sensor elements whose arrangement, spacing and mode of excitation provide the required radiation pattern, or by a combination of these. While computing the directivity pattern, it is assumed that the source strengths of the elements are unaffected by the acoustic pressure at each source. However, in closely packed arrays, the acoustic interaction effects experienced among the elements will modify the behaviour of individual elements, and in turn will reduce the acoustic source level with respect to the maximum theoretical value as well as degrade the beam pattern. This effect should be reduced in systems that are intended to generate high acoustic power output and unperturbed beam patterns [2,22-31]. The work presented herein includes an approach for designing efficient and well-behaved underwater transducer arrays, taking into account the acoustic interaction effect experienced among closely packed multielement arrays, along with architectural modifications that reduce the interaction effect for different radiating apertures.
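Under the very assumption the thesis relaxes, that element source strengths are unaffected by neighbouring elements, the directivity pattern of a uniform line array has a standard closed form. A brief sketch of that idealized, interaction-free pattern (element count and spacing chosen arbitrarily for illustration):

```python
import numpy as np

def line_array_pattern(theta, n, d_over_lambda):
    """Normalized beam pattern of n evenly spaced omnidirectional
    elements driven in phase (acoustic interaction neglected)."""
    psi = np.pi * d_over_lambda * np.sin(theta)
    den = n * np.sin(psi)
    safe = np.where(np.abs(den) < 1e-12, 1.0, den)
    # At broadside the ratio sin(n*psi)/(n*sin(psi)) tends to 1.
    return np.where(np.abs(den) < 1e-12, 1.0, np.sin(n * psi) / safe)

angles = np.radians([-90, -60, -30, 0, 30, 60, 90])
print(np.round(line_array_pattern(angles, n=8, d_over_lambda=0.5), 3))
```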
Abstract:
Embedded systems are usually designed for a single task or a specified set of tasks. This specificity means the system design, as well as its hardware/software development, can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints and rapid development. This necessitates the adoption of static machine-code analysis tools, running on a host machine, for the validation and optimization of embedded system code; such tools can help meet all of these goals, can significantly improve software quality, and remain a challenging field. This dissertation contributes an architecture-oriented code validation, error localization and optimization technique that assists the embedded system designer in software debugging, making early detection of otherwise hard-to-detect software bugs more effective through static analysis of machine code. The focus of this work is to develop methods that automatically localize faults as well as optimize the code, and thus improve both the debugging process and the quality of the code. Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae, and their compliance is tested individually in all possible execution paths of the application programs.
Incorrect machine-code sequences are identified using slicing techniques on the control flow graph generated from the machine code. An algorithm is proposed to assist the compiler in eliminating redundant bank-switching code and in deciding on the optimum data allocation to banked memory, resulting in the minimum number of bank-switching instructions in embedded system software. A relation matrix and a state transition diagram, formed for the active memory bank state transitions corresponding to each bank selection instruction, are used for the detection of redundant code. Instances of code redundancy based on the stipulated rules for the target processor are identified. This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of the compiler/assembler, and applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly by machine-code patterns, which drastically reduces the state space created, contributing to improved model checking. Though the technique described is general, the implementation is architecture oriented, and hence the feasibility study was conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices towards correct use of difficult microcontroller features when developing embedded systems.
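As an illustration of the redundant bank-switch detection idea (a toy sketch, not the thesis's tool), the code below tracks the active-bank state along a straight-line PIC16F87X-style instruction sequence and flags bank-select instructions that leave the state unchanged. Branching code, which the thesis handles through the control flow graph and a state transition diagram, is ignored here.

```python
# Toy sketch: flag redundant PIC16F87X bank-select instructions
# (bcf/bsf on the STATUS register's RP0/RP1 bits) in straight-line code.
BANK_BITS = {"RP0", "RP1"}

def redundant_bank_switches(instructions):
    """Yield indices of bank-select instructions that leave the
    active-bank state unchanged; each bit starts in an unknown state."""
    state = {"RP0": None, "RP1": None}
    for i, (op, reg, bit) in enumerate(instructions):
        if reg != "STATUS" or bit not in BANK_BITS:
            continue
        new = 1 if op == "bsf" else 0
        if state[bit] == new:        # this bank is already selected
            yield i
        state[bit] = new

program = [
    ("bsf", "STATUS", "RP0"),    # select bank 1
    ("movwf", "TRISB", None),    # write a bank-1 register
    ("bsf", "STATUS", "RP0"),    # redundant: bank 1 is already active
    ("bcf", "STATUS", "RP0"),    # back to bank 0
]
print(list(redundant_bank_switches(program)))   # -> [2]
```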
Abstract:
Most commercial and financial data are stored in decimal form. Recently, support for decimal arithmetic has received increased attention due to its growing importance in financial analysis, banking, tax calculation, currency conversion, insurance, telephone billing and accounting. Performing decimal arithmetic on systems that do not support decimal computations may give results with representation errors, conversion errors, and/or rounding errors. In this world of precision, such errors are no longer tolerable. These errors can be eliminated, and better accuracy achieved, if decimal computations are done using Decimal Floating Point (DFP) units. But the floating-point arithmetic units in today's general-purpose microprocessors are based on the binary number system, and decimal computations are done using binary arithmetic. Only a few common decimal numbers can be exactly represented in Binary Floating Point (BFP). In many cases, the law requires that results generated from financial calculations performed on a computer exactly match manual calculations. Currently, many applications involving fractional decimal data perform decimal computations either in software or with a combination of software and hardware. Performance can be dramatically improved by complete hardware DFP units, and this leads to the design of processors that include DFP hardware. VLSI implementations using the same modular building blocks can decrease system design and manufacturing cost. A multiplexer realization is a natural choice from the viewpoint of cost and speed. This thesis focuses on the design and synthesis of efficient decimal MAC (Multiply ACcumulate) architectures for high-speed decimal processors based on the IEEE Standard for Floating-Point Arithmetic (IEEE 754-2008). The research goal is to design and synthesize decimal MAC architectures that achieve higher performance. Efficient design methods and architectures are developed for a high-performance DFP MAC unit as part of this research.
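The representation problem described here is easy to demonstrate in software: only a few common decimal fractions are exact in binary floating point, while a decimal type reproduces the manual calculation exactly. A short Python illustration:

```python
from decimal import Decimal

# Binary floating point: 0.10 has no exact base-2 representation,
# so summing ten amounts of 0.10 drifts from the exact total.
binary_total = sum(0.10 for _ in range(10))
print(binary_total == 1.0)          # False
print(f"{binary_total:.17f}")       # 0.99999999999999989

# Decimal arithmetic matches the manual calculation exactly.
decimal_total = sum(Decimal("0.10") for _ in range(10))
print(decimal_total == Decimal("1.00"))  # True
```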
Abstract:
Data mining is one of the hottest research areas nowadays, as it has a wide variety of applications in the common man's life to make the world a better place to live. It is all about finding interesting hidden patterns in a huge historical database. As an example, from a sales database one can use data mining to find an interesting pattern like “people who buy magazines tend to buy newspapers also”. From the sales point of view, the advantage is that these items can be placed together in the shop to increase sales. In this research work, data mining is applied to a domain called placement chance prediction, since taking a wise career decision is crucial for anybody. In India, technical manpower analysis is carried out by an organization named the National Technical Manpower Information System (NTMIS), established in 1983-84 by India's Ministry of Education & Culture. The NTMIS comprises a lead centre in the IAMR, New Delhi, and 21 nodal centres located in different parts of the country. The Kerala State Nodal Centre is located at Cochin University of Science and Technology. The nodal centre collects placement information by sending postal questionnaires to graduated students on a regular basis. From this raw data available in the nodal centre, a historical database was prepared. Each record in this database includes entrance rank range, reservation, sector, sex, and a particular engineering branch. For each such combination of attributes in the historical database of student records, the corresponding placement chance is computed and stored. From this data, various popular data mining models are built and tested. These models can be used to predict the most suitable branch for a new student matching one of the above combinations of criteria. A detailed performance comparison of the various data mining models is also presented. This research work proposes to use a combination of data mining models, namely a hybrid stacking ensemble, for better predictions. A strategy to predict the overall absorption rate for various branches, as well as the time it takes for all the students of a particular branch to get placed, is also proposed. Finally, this research work puts forward a new data mining algorithm, namely C4.5*stat, for numeric data sets, which has been shown to have competitive accuracy on standard benchmarking data sets (the UCI data sets). It also proposes an optimization strategy called parameter tuning to improve the standard C4.5 algorithm. In summary, this research work passes through all four dimensions of a typical data mining research work, namely application to a domain, development of classifier models, optimization, and ensemble methods.
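As a hedged illustration of the stacking idea, the scikit-learn sketch below combines two base learners through a logistic-regression meta-learner; the thesis's actual base models and placement data are not reproduced, and a decision tree stands in for a C4.5-style learner.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Stand-in data; the thesis uses student placement records instead.
X, y = load_iris(return_X_y=True)

stack = StackingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(max_depth=4, random_state=0)),
        ("nb", GaussianNB()),
    ],
    final_estimator=LogisticRegression(max_iter=1000),  # meta-learner
    cv=5,  # out-of-fold base predictions feed the meta-learner
)
print(cross_val_score(stack, X, y, cv=5).mean())
```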
Abstract:
With the recent progress and rapid growth in the field of communication, the design of antennas for small mobile terminals with enhanced radiation characteristics is acquiring great importance. Compactness, efficiency, and high data-rate capacity are the major criteria for new-generation antennas. The challenging task for microwave scientists and engineers is to design a compact printed radiating structure having broadband behaviour along with good efficiency and enhanced gain. Printed antenna technology has gained popularity among antenna scientists since the introduction of planar transmission lines in the mid-seventies. When the antenna is viewed through a transmission-line concept, the mechanism behind any electromagnetic radiator is quite simple and interesting: any electromagnetic system with a discontinuity radiates electromagnetic energy. The size, shape and orientation of the discontinuities control the radiation characteristics of the system, such as radiation pattern, gain and polarization, and the radiator can be either resonant or non-resonant. This thesis deals with antennas developed from a class of transmission lines known as coplanar strips (CPS), a planar analogue of the parallel-pair transmission line. The speciality of CPS is its symmetric structure compared to other transmission lines, which makes antenna structures developed from CPS quite simple to design and fabricate. Structural modifications on either metallic strip of the CPS result in different antennas. The first part of the thesis discusses single-band and dual-band designs derived from open-ended slot lines, which are very suitable for 2.4 and 5.2 GHz WLAN applications. The second section of the study is directed towards the development of enhanced-gain dipoles. A single-band dipole and a wideband enhanced-gain dipole suitable for the 5.2/5.8 GHz band and imaging applications are developed and discussed. The last part of the thesis discusses the development of directional UWB antennas. Three different types of ultra-compact UWB antennas are developed, and almost all frequency-domain and time-domain analyses of the structures are discussed.
Abstract:
A marine isolate of Micrococcus MCCB 104 has been identified as an aquaculture probiotic antagonistic to Vibrio. In the present study, different carbon and nitrogen sources and growth factors in a mineral base medium were optimized for enhanced biomass production and antagonistic activity against the target pathogen, Vibrio harveyi, following response surface methodology (RSM). Accordingly, the minimum and maximum limits of the selected variables were determined, and a set of fifty experiments was programmed employing the central composite design (CCD) of RSM for the final optimization. The response surface plots of biomass showed a pattern similar to that of antagonistic activity, indicating a strong correlation between biomass and antagonism. The optimum concentrations of the carbon sources, nitrogen sources, and growth factors for both biomass and antagonistic activity were glucose (17.4 g/L), lactose (17 g/L), sodium chloride (16.9 g/L), ammonium chloride (3.3 g/L), and mineral salts solution (18.3 mL/L).
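As a rough illustration of the RSM step (a toy two-factor version, not the study's actual five-variable, 50-run central composite design), the sketch below fits a full second-order model to designed data and solves for the stationary point in coded units:

```python
import numpy as np

# Toy two-factor design: three coded levels per factor on a grid.
levels = [-1, 0, 1]
X1, X2 = np.meshgrid(levels, levels)
x1, x2 = X1.ravel(), X2.ravel()

# Hypothetical measured response (e.g. biomass) with a peak inside
# the design region plus noise; stands in for real experiments.
rng = np.random.default_rng(1)
y = 10 - 2 * (x1 - 0.3) ** 2 - 3 * (x2 + 0.2) ** 2 \
    + rng.normal(0, 0.1, x1.size)

# Fit the full second-order model:
# y = b0 + b1 x1 + b2 x2 + b11 x1^2 + b22 x2^2 + b12 x1 x2
A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
b, *_ = np.linalg.lstsq(A, y, rcond=None)

# Stationary point of the fitted surface: set the gradient to zero.
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
g = -np.array([b[1], b[2]])
print("optimum (coded units):", np.linalg.solve(H, g))
```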
Abstract:
The purpose of this paper is to describe the design and development of a digital library at Cochin University of Science and Technology (CUSAT), India, using the DSpace open source software. The study covers the structure, contents and usage of the CUSAT digital library. Design/methodology/approach – This paper examines the possibilities of applying open source software in libraries. An evaluative approach is used to explore the features of the CUSAT digital library, and the Google Analytics service is employed to measure the amount of use of the digital library by users across the world. Findings – CUSAT has successfully applied the DSpace open source software to building a digital library. The digital library has had visits from 78 countries, with the major share from India. The distribution of documents in the digital library is uneven: past exam question papers make up the major part of the collection, while the number of research papers, articles and rare documents is comparatively small. Originality/value – The study is the first of its type that tries to understand digital library design and development using DSpace open source software in a university environment, with a focus on analysing the distribution of items and measuring value through usage statistics from the Google Analytics service. The digital library model can be useful for designing similar systems.
Abstract:
The basic concepts of digital signal processing are taught to students in engineering and science. The focus of such courses is on linear, time-invariant systems. The question of what happens when the system is governed by a quadratic or cubic equation remains unanswered in the vast majority of the literature on signal processing. Light was shed on this problem when V. John Mathews and Giovanni L. Sicuranza published the book Polynomial Signal Processing, which opened up an unseen vista of polynomial systems for signal and image processing. The book presented the theory and implementation of both adaptive and non-adaptive FIR and IIR quadratic systems, which offer improved performance over conventional linear systems. The theory of quadratic systems is a largely unexplored area of research that offers computationally intensive work. Once the area of research was selected, the next issue was the choice of software tool to carry out the work. Conventional languages like C and C++ were easily eliminated, as they are not interpreted and lack good-quality plotting libraries. MATLAB proved to be very slow, as did SCILAB and Octave. The search for a language for scientific computing that was as fast as C, but with a good-quality plotting library, ended in Python, a distant relative of LISP, which proved to be ideal for scientific computing. An account of the use of Python, its scientific computing package scipy and the plotting library pylab is given in the appendix. Initially, the work focused on designing predictors that exploit the polynomial nonlinearities inherent in speech generation mechanisms. Soon, the work was diverted into medical image processing, which offered more potential for the use of quadratic methods. The major focus in this area is on quadratic edge-detection methods for retinal images and fingerprints, as well as de-noising raw MRI signals.
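The FIR quadratic systems mentioned above are second-order Volterra filters. A minimal numpy sketch of one, with small illustrative kernels that are not taken from the thesis:

```python
import numpy as np

def quadratic_fir(x, h1, h2):
    """Second-order Volterra (FIR quadratic) filter:
    y[n] = sum_i h1[i] x[n-i] + sum_{i,j} h2[i,j] x[n-i] x[n-j]."""
    m = len(h1)
    x_pad = np.concatenate([np.zeros(m - 1), x])
    y = np.empty(len(x))
    for n in range(len(x)):
        # Most recent m samples, newest first: x[n], x[n-1], ...
        window = x_pad[n:n + m][::-1]
        y[n] = h1 @ window + window @ h2 @ window
    return y

# Illustrative kernels: a linear smoother plus a small quadratic term.
h1 = np.array([0.5, 0.3, 0.2])
h2 = 0.05 * np.eye(3)
x = np.sin(np.linspace(0, 2 * np.pi, 8))
print(np.round(quadratic_fir(x, h1, h2), 4))
```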