13 results for parallel processing systems
at Cochin University of Science
Abstract:
This work compares Residue Number System (RNS) based Finite Impulse Response (FIR) digital filters with traditional FIR filters. The research is motivated by the importance of efficient filter implementations for digital signal processing. The comparison is made in terms of speed and area requirements for various filter specifications. RNS-based FIR filters operate more than three times faster and consume only about 60% of the area of traditional filters when the number of filter taps exceeds 32. The area of the RNS filter also grows at a lower rate than that of the traditional filter, resulting in lower power consumption. RNS is a non-weighted number system with no carry propagation between residue digits, which enables simultaneous parallel processing of all digits and hence high-speed addition and multiplication in the RNS domain.
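As a minimal illustration of the carry-free arithmetic the abstract refers to (an assumed example, not taken from the thesis), the sketch below represents integers by their residues with respect to a small moduli set and performs addition and multiplication digit by digit:

# Illustrative sketch (assumed example, not from the thesis): carry-free addition
# and multiplication in a Residue Number System with moduli (7, 11, 13).
# Each residue digit is processed independently of the others, which is what
# permits the simultaneous parallel processing the abstract refers to.
from functools import reduce

MODULI = (7, 11, 13)   # pairwise coprime; dynamic range M = 7*11*13 = 1001

def to_rns(x, moduli=MODULI):
    """Forward conversion: integer -> tuple of residue digits."""
    return tuple(x % m for m in moduli)

def rns_add(a, b, moduli=MODULI):
    """Digit-wise modular addition; no carry propagates between digits."""
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, moduli))

def rns_mul(a, b, moduli=MODULI):
    """Digit-wise modular multiplication, equally carry-free and parallel."""
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, moduli))

def from_rns(r, moduli=MODULI):
    """Reverse conversion via the Chinese Remainder Theorem."""
    M = reduce(lambda p, m: p * m, moduli, 1)
    x = 0
    for ri, mi in zip(r, moduli):
        Mi = M // mi
        x += ri * Mi * pow(Mi, -1, mi)   # pow(.., -1, mi) is the modular inverse
    return x % M

if __name__ == "__main__":
    a, b = to_rns(12), to_rns(34)
    print(from_rns(rns_add(a, b)))   # 46
    print(from_rns(rns_mul(a, b)))   # 408

Because each digit depends only on the corresponding digits of the operands, the per-digit adders and multipliers of a hardware FIR filter can run concurrently, which is the source of the reported speed advantage.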
Abstract:
This paper presents a performance analysis of reversible, fault-tolerant VLSI implementations of carry select and hybrid decimal adders suitable for multi-digit BCD addition. The designs enable partially parallel processing of all digits, allowing high-speed addition in the decimal domain. When the number of digits is more than 25, the hybrid decimal adder can operate 5 times faster than a conventional decimal adder using classical logic gates. The speed-up factor of the hybrid adder rises above 10 when the number of decimal digits is more than 25 for the reversible logic implementation. Such high-speed decimal adders find applications in real-time processors and Internet-based applications. The implementations use only reversible conservative Fredkin gates, which makes them suitable for VLSI circuits.
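For reference, the Fredkin gate mentioned above is a reversible, conservative controlled swap. The toy sketch below (an illustration only, not the paper's adder circuit) shows its behaviour and how AND and NOT, the building blocks of an adder, can be obtained from it with constant inputs:

# Illustrative sketch (not the paper's circuit): the reversible, conservative
# Fredkin gate is a controlled swap of two target bits. AND and NOT, from which
# an adder can be built, are obtained by fixing one input to a constant.

def fredkin(c, a, b):
    """Controlled swap: if the control bit c is 1, the targets a and b swap.
    The mapping is reversible (a bijection on 3 bits) and conservative
    (the number of 1s at the output equals the number at the input)."""
    return (c, b, a) if c else (c, a, b)

def fredkin_and(x, y):
    """Third output of fredkin(x, y, 0) equals x AND y."""
    return fredkin(x, y, 0)[2]

def fredkin_not(x):
    """Third output of fredkin(x, 0, 1) equals NOT x."""
    return fredkin(x, 0, 1)[2]

if __name__ == "__main__":
    print(fredkin(1, 0, 1))                    # (1, 1, 0): control set, targets swapped
    print(fredkin_and(1, 1), fredkin_not(0))   # 1 1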
Abstract:
During the 1990s the Wavelet Transform emerged as an important signal processing tool with potential applications in time-frequency analysis and non-stationary signal processing. Wavelets have gained popularity in a broad range of disciplines such as signal/image compression, medical diagnostics, boundary value problems, geophysical signal processing, statistical signal processing, pattern recognition, underwater acoustics, etc. In 1993, G. Evangelista introduced the Pitch-Synchronous Wavelet Transform, which is particularly suited for pseudo-periodic signal processing. The work presented in this thesis mainly concentrates on two interrelated topics in signal processing, viz. Wavelet Transform based signal compression and the computation of the Discrete Wavelet Transform. A new compression scheme is described in which the Pitch-Synchronous Wavelet Transform technique is combined with the popular Linear Predictive Coding method for pseudo-periodic signal processing. Subsequently, a novel Parallel Multiple Subsequence structure is presented for the efficient computation of the Wavelet Transform. Case studies are also presented to highlight the potential applications.
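As background on the filter-bank view of the transform (an assumed illustration using the Haar wavelet; it is not the thesis' Pitch-Synchronous transform nor the Parallel Multiple Subsequence structure), one decomposition level and its perfect reconstruction can be written as:

# Illustrative sketch (assumed, not from the thesis): one level of a discrete
# wavelet transform using the Haar wavelet, the simplest case of the filter-bank
# decomposition the thesis builds on. numpy is the only dependency.
import numpy as np

def haar_dwt_level(x):
    """Split an even-length signal into approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass branch
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass branch
    return approx, detail

def haar_idwt_level(approx, detail):
    """Perfect reconstruction from the two coefficient bands."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

if __name__ == "__main__":
    signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
    a, d = haar_dwt_level(signal)
    print(np.allclose(haar_idwt_level(a, d), signal))   # True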
Abstract:
Sharing information with those in need of it has always been an idealistic goal of networked environments. With the proliferation of computer networks, information is so widely distributed among systems that it is imperative to have well-organized schemes for its retrieval and discovery. This thesis investigates the problems associated with such schemes and suggests a software architecture aimed at achieving meaningful discovery. The use of information elements as a modelling base for efficient information discovery in distributed systems is demonstrated with the aid of a novel conceptual entity called the infotron. The investigations focus on distributed systems and their associated problems. The study was directed towards identifying a suitable software architecture and incorporating it in an environment where information growth is phenomenal, so that a proper mechanism for carrying out information discovery becomes feasible. An empirical study undertaken with the aid of an election database of geographically distributed constituencies provided the required insights. This is manifested in the Election Counting and Reporting Software (ECRS) system, a distributed software system designed to prepare reports for district administrators about the election counting process and to generate other miscellaneous statutory reports. Most distributed systems of the nature of ECRS normally possess a "fragile architecture" that makes them liable to collapse with the occurrence of minor faults. This is resolved with the help of the proposed penta-tier architecture, which places five different technologies at the different tiers of the architecture. The results of the experiments conducted and their analysis show that such an architecture helps to keep the different components of the software intact and impermeable to internal or external faults. The architecture thus evolved needed a mechanism to support information processing and discovery, which necessitated the introduction of the novel concept of infotrons. Further, when a computing machine has to perform any meaningful extraction of information, it is guided by what is termed an infotron dictionary. The other empirical study was to find out which of the two prominent markup languages, HTML and XML, is better suited for the incorporation of infotrons. A comparative study of 200 documents in HTML and XML was undertaken; the result was in favour of XML. The concepts of the infotron and the infotron dictionary thus developed were applied to implement an Information Discovery System (IDS). IDS is essentially a system that starts with the infotron(s) supplied as clue(s) and brews the information required to satisfy the need of the information discoverer from the documents available at its disposal (its information space). The various components of the system and their interactions follow the penta-tier architectural model and can therefore be considered fault-tolerant. IDS is generic in nature, and its characteristics and specifications were drawn up accordingly. Many subsystems interacted with multiple infotron dictionaries maintained in the system. In order to demonstrate the working of the IDS, and to discover information without modifying a typical Library Information System (LIS), an Information Discovery in Library Information System (IDLIS) application was developed.
IDLIS is essentially a wrapper for the LIS, which maintains all the databases of the library. The purpose was to demonstrate that the functionality of a legacy system can be enhanced by augmenting it with IDS, leading to an information discovery service. IDLIS demonstrates IDS in action and shows that any legacy system can be effectively augmented with IDS to provide the additional functionality of an information discovery service. Possible applications of IDS and the scope for further research in the field are also covered.
Abstract:
Hevea latex is a natural biological liquid of very complex composition. Besides rubber hydrocarbons, it contains many proteinaceous and resinous substances, carbohydrates, inorganic matter, water and other constituents. The Dry Rubber Content (DRC) of latex varies with season, tapping system, weather, soil conditions, clone, age of the tree, etc. The true DRC of the latex must be determined to ensure fair prices for the latex during commercial exchange. The DRC of Hevea latex is a very familiar term to everyone in the rubber industry. It has been the basis for incentive payments to tappers who bring in more than the daily agreed poundage of latex, and it is an important parameter for rubber and latex processing industries in automation and various decision-making processes. This thesis embodies my efforts to determine the DRC of rubber latex using different analytical tools such as MIR absorption, thermal analysis, dielectric spectroscopy and NIR reflectance. The rubber industry is still looking for a compact instrument that is accurate, economical, easy to use and environment-friendly. I hope the results presented in this thesis will help to realise this goal in the near future.
Abstract:
This thesis studied the impact of market orientation on business performance in the seafood industry, a unique setting in which the firms are all 100% export oriented. The study was able to show that, in the context of the seafood industry, implementation of market orientation principles leads to an increase in business performance. The business performance variables were measured under two heads, namely economic performance and non-economic performance. Market orientation in Indian seafood firms was significantly and positively related to both performance measures. Under non-economic performance were the customer and employee consequences; again, market orientation was positively and significantly related to both. Thus, the implication arising from the study is this: market orientation in Indian seafood processing firms increases their business performance. The implementation of market orientation will help the seafood firms gain competitive advantages in exporting. This in turn will result in increased exports, and the position of Indian seafood in the global market will be strengthened, making it a leading player in the global fish trade. Next, the focus was on the effect of the antecedents on the market orientation of a firm. Several factors were found to be antecedents to the adoption of market orientation principles, including top management emphasis, conflict, centralization and the reward system. Top management emphasis and support are vital to a market orientation programme; the top management needs to adopt market oriented behaviour and reinforce the need for being market oriented for it to percolate down the line. Interdepartmental conflict is seen to affect market orientation positively. A large percentage of Indian seafood firms are traditionally family-owned companies rather than professionally managed firms. This results in the promulgation of old ideas of management whereby conflict was seen as a healthy exercise that helped to build up each department's efficiency; in the long run, however, this view proves detrimental to the firm's performance, and conflict must therefore be kept to a bare minimum, if any. Decentralisation of decision making facilitates the participation of lower level employees and builds up their motivation and commitment to the firm; employees are thus encouraged to make their own decisions, so that they can deal with customers faster and more efficiently. Reward systems help improve an employee's morale, provide encouragement and help inculcate commitment and loyalty. They improve the employee's self-worth and fulfil his need for achievement. A satisfied employee works better, produces more output, needs less supervision and is happy, thereby reducing the costs of replacement and retraining should the employee quit. Competitive intensity plays a moderating role in the relationship between market orientation and business performance: in times of greater competition, the relationship between market orientation and business performance grows stronger. Thus, this thesis was successful in establishing a positive relationship between business performance and market orientation.
Abstract:
Identification and control of nonlinear dynamical systems are challenging problems for control engineers. The topic is equally relevant in communication, weather prediction, biomedical systems and even in social systems, where nonlinearity is an integral part of the system behaviour. Most real-world systems are nonlinear in nature, and nonlinear system identification/modeling has wide applications. The basic approach in analyzing nonlinear systems is to build a model from the known behaviour manifest in the form of the system output. The modeling problem boils down to computing a suitably parameterized model representing the process, whose parameters are adjusted to optimize a performance function based on the error between the given process output and the identified process/model output. While linear system identification is well established with many classical approaches, most of those methods cannot be directly applied to nonlinear system identification. The problem becomes more complex if the system is completely unknown and only the output time series is available. The blind recognition problem is the direct consequence of such a situation, and the thesis concentrates on such problems. The capability of Artificial Neural Networks to approximate many nonlinear input-output maps makes them particularly suitable for building a function for the identification of nonlinear systems where only the time series is available. The literature is rich with a variety of algorithms to train the neural network model. A comprehensive study of the computation of the model parameters using the different algorithms, and a comparison among them to choose the best technique, is still a demanding requirement from practical system designers and is not available in a concise form in the literature. The thesis is thus an attempt to develop and evaluate some of the well-known algorithms and to propose some new techniques in the context of blind recognition of nonlinear systems. It also attempts to establish the relative merits and demerits of the different approaches. Comprehensiveness is achieved by utilizing well-known evaluation techniques from statistics. The study concludes by providing the results of implementing the currently available, modified and newly introduced techniques for nonlinear blind system modeling, followed by a comparison of their performance. It is expected that such a comprehensive study and comparison process will be of great relevance in many fields including chemical, electrical, biological, financial and weather data analysis. Further, the results reported should be of immense help to practical system designers and analysts in selecting the most appropriate method, based on the goodness of the model, for a particular context.
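A minimal sketch of the identification setting described above, under assumed choices (a surrogate "unknown" system, an embedding dimension of 4, and scikit-learn's MLPRegressor standing in for the thesis' training algorithms):

# Illustrative sketch (assumed setup, not the thesis' algorithms): identifying an
# unknown nonlinear system from its output time series alone, by training a small
# neural network to predict the next sample from a window of past samples.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Surrogate "unknown" system: only its output time series is observed.
rng = np.random.default_rng(0)
y = np.zeros(2000)
for n in range(2, 2000):
    y[n] = 0.5 * y[n-1] - 0.3 * y[n-2] + 0.4 * np.tanh(y[n-1]) + 0.05 * rng.standard_normal()

# Build regressor inputs: sliding windows of past outputs (embedding dimension 4).
p = 4
X = np.array([y[n-p:n] for n in range(p, len(y))])
t = y[p:]

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:1500], t[:1500])

# Goodness of the identified model on unseen data (one-step-ahead prediction).
pred = model.predict(X[1500:])
mse = np.mean((pred - t[1500:]) ** 2)
print(f"one-step-ahead MSE on held-out data: {mse:.5f}")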
Abstract:
The interfacing of various subjects generates new fields of study and research that help in advancing human knowledge. One of the latest such fields is Neurotechnology, an effective amalgamation of neuroscience, physics, biomedical engineering and computational methods. Neurotechnology provides a platform for physicists, neurologists and engineers to interact and break methodology- and terminology-related barriers. Advancements in computational capability and the wider scope of applications of nonlinear dynamics and chaos in complex systems have enhanced the study of neurodynamics; however, there is still a need for an effective dialogue among physicists, neurologists and engineers. The application of computer-based technology in the field of medicine through signal and image processing, the creation of clinical databases for helping clinicians, etc. is widely acknowledged. Such synergic effects between widely separated disciplines may help in enhancing the effectiveness of existing diagnostic methods. One of the recent methods in this direction is the analysis of the electroencephalogram with the help of methods from nonlinear dynamics. This thesis is an effort to understand the functional aspects of the human brain by studying the electroencephalogram. The algorithms and other related methods developed in the present work can be interfaced with a digital EEG machine to unfold the information hidden in the signal, and can ultimately be used as a diagnostic tool.
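As one concrete example of the kind of nonlinear-dynamics analysis mentioned above (an assumed illustration, not the thesis' algorithms), the sketch below delay-embeds a surrogate single-channel signal and computes its correlation sum, the quantity underlying correlation-dimension estimates:

# Illustrative sketch (assumption, not the thesis' implementation): a common
# nonlinear-dynamics measure applied to an EEG-like signal, the correlation sum
# of a delay-embedded time series (Grassberger-Procaccia style).
import numpy as np

def delay_embed(x, dim, tau):
    """Reconstruct a state-space trajectory from a scalar time series."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def correlation_sum(traj, r):
    """Fraction of point pairs closer than radius r (excluding self-pairs)."""
    d = np.linalg.norm(traj[:, None, :] - traj[None, :, :], axis=-1)
    n = len(traj)
    return (np.sum(d < r) - n) / (n * (n - 1))

if __name__ == "__main__":
    # Surrogate signal standing in for one EEG channel.
    t = np.linspace(0, 8, 800)
    x = np.sin(2 * np.pi * 1.3 * t) + 0.3 * np.sin(2 * np.pi * 4.7 * t)
    traj = delay_embed(x, dim=5, tau=8)
    for r in (0.1, 0.2, 0.4):
        print(r, correlation_sum(traj, r))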
Abstract:
An important step in residue number system (RNS) based signal processing is the conversion of the signal into the residue domain. Many implementations of this conversion have been proposed for various goals, one of which is direct conversion from an analogue input. A novel approach for analogue-to-residue conversion is proposed in this research using the most popular Sigma-Delta analogue-to-digital converter (SD-ADC). In this approach, the front end is the same as in a traditional SD-ADC that uses a Sigma-Delta (SD) modulator with appropriate dynamic range, but the filtering is done by a filter implemented using RNS arithmetic. Hence, the natural output of the filter is an RNS representation of the input signal. The resolution, conversion speed, hardware complexity and cost of implementation of the proposed SD based analogue-to-residue converter are compared with existing analogue-to-residue converters based on Nyquist rate ADCs.
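A rough sketch of the idea under stated assumptions (a first-order SD modulator, a simple boxcar decimation filter and a toy moduli set; this is not the proposed converter's architecture): if the decimation filter's accumulators work modulo each RNS modulus, its decimated output is already a residue representation of the input sample.

# Illustrative sketch (assumed, not the proposed converter): obtaining a
# residue-domain output by doing the SD decimation filtering with modular (RNS)
# accumulators, one per modulus, instead of one wide binary accumulator.
import numpy as np

MODULI = (7, 11, 13)

def sigma_delta_modulate(x):
    """First-order SD modulator: input in (-1, 1) -> stream of +/-1 output bits."""
    v, y, bits = 0.0, 0.0, []
    for sample in x:
        v += sample - y                 # integrate the error between input and feedback
        y = 1.0 if v >= 0 else -1.0
        bits.append(int(y))
    return bits

def decimate_to_residues(bits, osr=64, moduli=MODULI):
    """Boxcar decimation filter whose accumulators work modulo each RNS modulus.
    Each decimated output is then already a tuple of residue digits (negative
    boxcar sums simply wrap modulo the dynamic range)."""
    out = []
    for k in range(0, len(bits) - osr + 1, osr):
        window = bits[k:k + osr]
        out.append(tuple(sum(b % m for b in window) % m for m in moduli))
    return out

if __name__ == "__main__":
    t = np.arange(4096)
    x = 0.5 * np.sin(2 * np.pi * t / 1024)       # slow sine well inside the band
    residues = decimate_to_residues(sigma_delta_modulate(x))
    print(residues[:4])                          # each entry: (r mod 7, r mod 11, r mod 13)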
Abstract:
This paper describes JERIM-320, a new 320-bit hash function for ensuring message integrity, and details a comparison with popular hash functions of similar design. JERIM-320 and FORK-256 operate on four parallel lines of message processing, while RIPEMD-320 operates on two parallel lines. Popular hash functions like MD5 and SHA-1 use serial successive iterations in their compression functions and are hence less secure. The parallel branches help JERIM-320 achieve a higher level of security using multiple iterations and processing of the message blocks. The focus of this work is to prove the ability of JERIM-320 to ensure the integrity of messages to a higher degree, to suit fast-growing Internet applications.
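Purely as a structural illustration of the parallel-line idea (a toy with a shortened state, explicitly not the JERIM-320 compression function and not cryptographically secure), the sketch below mixes each 512-bit message block on four independent lines and concatenates the line states:

# Toy structural sketch only (assumption): four independent "lines" each absorb the
# same message block with different constants, and the line states are concatenated.
# This illustrates the parallel-line layout the paper describes; it is NOT the
# actual JERIM-320 design and must not be used for real integrity protection.
import struct

MASK = 0xFFFFFFFF
LINE_CONSTANTS = (0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476)  # arbitrary toy values

def rotl(x, r):
    return ((x << r) | (x >> (32 - r))) & MASK

def mix_line(words, state):
    """One line: absorb the 16 message words into a single 32-bit running state."""
    for i, w in enumerate(words):
        state = rotl(((state ^ w) + (state >> 3)) & MASK, (i % 31) + 1)
    return state

def toy_parallel_hash(message: bytes) -> str:
    # Pad to a whole number of 64-byte blocks (toy padding, length not encoded).
    message += b"\x80" + b"\x00" * ((-len(message) - 1) % 64)
    states = list(LINE_CONSTANTS)
    for off in range(0, len(message), 64):
        words = struct.unpack("<16I", message[off:off + 64])
        # The four lines are independent, so they could run in parallel.
        states = [mix_line(words, s ^ c) for s, c in zip(states, LINE_CONSTANTS)]
    return "".join(f"{s:08x}" for s in states)

if __name__ == "__main__":
    print(toy_parallel_hash(b"message integrity demo"))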
Abstract:
Cyber-Physical Systems (CPS) connect the physical world with the cyber world. The events happening in the real world are enormous in number, most of them go unnoticed and the information is lost. CPS makes it possible to embed tiny smart devices that capture data and send it to the Internet for further processing. The entire set-up poses many challenges and opens up new research problems. This talk is a journey through the landscape of research problems in this emerging area.
Abstract:
Statistical Machine Translation (SMT) is one of the potential applications in the field of Natural Language Processing. The translation process in SMT is carried out by acquiring translation rules automatically from parallel corpora. However, for many language pairs (e.g. Malayalam-English) such corpora are available only in very limited quantities, so a large portion of the phrases encountered at run time will be unknown. This paper focuses on methods for handling such out-of-vocabulary (OOV) words in Malayalam that cannot be translated to English using conventional phrase-based statistical machine translation systems. The OOV words in the source sentence are pre-processed to obtain the root word and its suffix. Different inflected forms of the OOV root are generated, and a match for these word variants is looked up in the phrase translation table of the translation model. A vocabulary filter is used to choose the best among the translations of these word variants by their unigram counts. A match for the OOV suffix is also looked up in the phrase entries, and the target translations are filtered out. The filtered phrases are structured, and the SMT translation model is extended by adding the OOV word with its new phrase translations. The results of the manual evaluation show that the amount of OOV words in the input is reduced considerably.
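A small sketch of the root-variant part of the OOV pipeline described above, using entirely toy data (the segmenter, suffix list, phrase table and unigram counts below are invented placeholders, not the paper's resources, and the suffix-lookup step is omitted):

# Illustrative sketch (assumed data structures, not the paper's system): handle an
# out-of-vocabulary (OOV) source word by generating inflected variants of its root,
# looking them up in the phrase table, and ranking the candidate translations with
# a unigram-count "vocabulary filter".

PHRASE_TABLE = {            # source phrase -> candidate target translations (toy)
    "kuttikal": ["children", "kids"],
    "kutti": ["child"],
}
UNIGRAM_COUNTS = {"children": 1200, "kids": 450, "child": 900}   # toy counts
KNOWN_SUFFIXES = ["kal", "yude", "il", ""]                       # toy suffix list

def segment(oov_word):
    """Very rough root/suffix split: strip the longest known suffix (assumption)."""
    for suf in sorted(KNOWN_SUFFIXES, key=len, reverse=True):
        if suf and oov_word.endswith(suf):
            return oov_word[: -len(suf)], suf
    return oov_word, ""

def translate_oov(oov_word):
    root, suffix = segment(oov_word)
    variants = [root + suf for suf in KNOWN_SUFFIXES]            # inflected forms of the root
    candidates = [t for v in variants for t in PHRASE_TABLE.get(v, [])]
    if not candidates:
        return None
    # Vocabulary filter: prefer the candidate with the highest unigram count.
    return max(candidates, key=lambda t: UNIGRAM_COUNTS.get(t, 0))

if __name__ == "__main__":
    # Variants of the root "kutti" match; "children" has the highest unigram count.
    print(translate_oov("kuttiyude"))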
Abstract:
Modeling nonlinear systems using the Volterra series is a century-old method, but practical realizations were hampered by inadequate hardware to handle the increased computational complexity stemming from its use. Interest has recently been renewed in designing and implementing filters that can model much of the polynomial nonlinearity inherent in practical systems. The key advantage of resorting to the Volterra power series for this purpose is that nonlinear filters so designed can be made to work in parallel with existing LTI systems, yielding improved performance. This paper describes the inclusion of a quadratic predictor (with nonlinearity of order 2) alongside a linear predictor in an analog source coding system. Analog coding schemes generally ignore the source generation mechanism and focus on high-fidelity reconstruction at the receiver. The widely used method of differential pulse code modulation (DPCM) for speech transmission uses a linear predictor to estimate the next value of the input speech signal, but a linear system does not account for the inherent nonlinearities in speech signals arising from multiple reflections in the vocal tract. So a quadratic predictor is designed and implemented in parallel with the linear predictor to yield improved mean square error performance. The augmented speech coder is tested on speech signals transmitted over an additive white Gaussian noise (AWGN) channel.
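A minimal sketch of a second-order (quadratic) Volterra predictor fitted alongside a linear predictor, under assumed choices (least-squares fitting on a synthetic speech-like signal; this is not the paper's coder):

# Illustrative sketch (assumed formulation, not the paper's coder): a second-order
# Volterra (quadratic) predictor run in parallel with a linear predictor, as in the
# DPCM setup described above. Coefficients are fitted by least squares with numpy.
import numpy as np

def build_features(x, p):
    """For each sample: the past-p linear terms plus their pairwise (quadratic) products."""
    rows = []
    for n in range(p, len(x)):
        past = x[n - p:n][::-1]                               # x[n-1], ..., x[n-p]
        quad = [past[i] * past[j] for i in range(p) for j in range(i, p)]
        rows.append(np.concatenate([past, quad]))
    return np.array(rows), x[p:]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = np.arange(4000)
    # Speech-like surrogate: a couple of tones plus a mild nonlinearity and noise.
    s = np.sin(0.07 * t) + 0.4 * np.sin(0.19 * t)
    s = s + 0.15 * s**2 + 0.02 * rng.standard_normal(len(s))

    p = 4
    X, target = build_features(s, p)
    coefs, *_ = np.linalg.lstsq(X, target, rcond=None)
    mse_full = np.mean((target - X @ coefs) ** 2)

    # Linear-only predictor for comparison (the first p columns are the linear terms).
    lin_coefs, *_ = np.linalg.lstsq(X[:, :p], target, rcond=None)
    mse_lin = np.mean((target - X[:, :p] @ lin_coefs) ** 2)
    print(f"linear MSE: {mse_lin:.5f}   linear+quadratic MSE: {mse_full:.5f}")

The quadratic terms capture exactly the kind of second-order products a parallel Volterra branch contributes on top of the linear DPCM predictor.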