921 results for systems approach
Abstract:
This thesis presents a detailed account of a cost-effective approach towards enhanced production of alkaline protease at profitable levels using different fermentation designs employing cheap agro-industrial residues. It involves the optimisation of process parameters for the production of a thermostable alkaline protease by Vibrio sp. V26 under solid-state, submerged and biphasic fermentations, production of the enzyme using cell immobilisation technology, and the application of the crude enzyme to the deproteinisation of crustacean waste. The present investigation thus suggests an economical route towards improved production of alkaline protease at profitable levels employing different fermentation designs utilising inexpensive agro-industrial residues. Moreover, the use of agro-industrial and other solid waste substrates for fermentation provides an alternative that helps conserve the already dwindling global energy resources. Another route to economically feasible production is the use of immobilisation techniques, which avoid the wasteful expense of continually growing microorganisms. The high protease-producing potential of the organism under study supports its exploitation in the utilisation and management of wastes. However, strain improvement studies for the production of high-yielding variants, using mutagens or gene transfer, are required before recommending it to industry. Industries all over the world have made several attempts to exploit the microbial diversity of this planet. For sustainable development, it is essential to discover, develop and defend this natural wealth. The industrial development of any country is critically dependent on the intellectual and financial investment in this area. The need of the hour is to harness the beneficial uses of microbes for maximum utilisation of natural resources and technological yields. Owing to the multitude of applications in a variety of industrial sectors, there has always been an increasing demand for novel producers and sources of alkaline proteases as well as for innovative methods of production at a commercial scale. This investigation forms a humble endeavour towards this goal and bequeaths hope and inspiration for inventions to follow.
Abstract:
Upwelling regions occupy only a small portion of the global ocean surface; however, they account for a large fraction of the oceanic primary production as well as fishery yields. Understanding and quantifying upwelling is therefore of great importance for marine resource management. Most of the coastal upwelling zones in the Arabian Sea are wind-driven, uniform systems. Mesoscale studies along the southwest coast of India have shown high spatial and temporal variability in the forcing mechanism and intensity of upwelling. An equatorward component of wind stress exists, similar to most upwelling zones along the eastern oceanic boundaries, so an offshore component of surface Ekman transport is expected throughout the year. However, several studies supported by in situ evidence have revealed that the process recurs on a purely seasonal basis. An explanation based on local wind forcing alone is therefore not sufficient to account for the observations, and it is assumed that upwelling along the South Eastern Arabian Sea is an effect of basin-wide wind forcing rather than local wind forcing. In the present study an integrated approach has been taken to understand the process of upwelling in the South Eastern Arabian Sea. The latitudinal and seasonal variations (based on Sea Surface Temperature, wind forcing, Chlorophyll a and primary production), the forcing mechanisms (local wind and remote forcing) and the factors influencing the system (Arabian Sea High Saline Water, Bay of Bengal water, runoff, coastal geomorphology) are addressed herewith.
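As a minimal illustration of the link between an equatorward alongshore wind stress and offshore surface transport, the classical Ekman relation can be sketched as below; the latitude, wind stress and density values are placeholder assumptions, not observations or results from this study.

```python
# Illustrative sketch: offshore Ekman volume transport per unit length of coast
# driven by an equatorward (alongshore) wind stress in the Northern Hemisphere.
# All numbers are placeholder values, not data from this study.
import math

def coriolis_parameter(lat_deg, omega=7.2921e-5):
    """Coriolis parameter f = 2 * Omega * sin(latitude)."""
    return 2.0 * omega * math.sin(math.radians(lat_deg))

def offshore_ekman_transport(tau_alongshore, lat_deg, rho=1025.0):
    """Volume transport per unit coastline length (m^2/s), U_E = tau / (rho * f).

    In the Northern Hemisphere the net Ekman transport is directed 90 degrees
    to the right of the wind stress, so an equatorward alongshore stress off a
    west-facing coast drives surface water offshore and favours coastal upwelling.
    """
    f = coriolis_parameter(lat_deg)
    return tau_alongshore / (rho * f)

if __name__ == "__main__":
    # Hypothetical example: 0.05 N/m^2 alongshore stress at 10 degrees N.
    u_e = offshore_ekman_transport(0.05, 10.0)
    print(f"Offshore Ekman transport: {u_e:.2f} m^2/s per metre of coastline")
```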
Abstract:
Modern computer systems are plagued with stability and security problems: applications lose data, web servers are hacked, and systems crash under heavy load. Many of these problems or anomalies arise from rare program behavior caused by attacks or errors. A substantial percentage of web-based attacks are due to buffer overflows. Many methods have been devised to detect and prevent anomalous situations that arise from buffer overflows. The current state of the art in anomaly detection systems is relatively primitive and depends mainly on static code checking to take care of buffer overflow attacks. For protection, stack guards and heap guards are also widely used. This dissertation proposes an anomaly detection system based on the frequencies of system calls in the system call trace. System call traces represented as frequency sequences are profiled using sequence sets. A sequence set is identified by the starting sequence and the frequencies of specific system calls. The deviation of the current input sequence from the corresponding normal profile in the frequency pattern of system calls is computed and expressed as an anomaly score. A simple Bayesian model is used for accurate detection. Experimental results are reported which show that the frequency of system calls, represented using sequence sets, captures the normal behavior of programs under normal conditions of usage. This captured behavior allows the system to detect anomalies with a low rate of false positives. Data are presented which show that a Bayesian network over frequency variations responds effectively to induced buffer overflows. It can also help administrators detect deviations in program flow introduced by errors.
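A minimal sketch of the underlying idea, not the dissertation's actual algorithm: the relative frequency of each system call in a trace is compared against a stored normal profile, and the accumulated deviation is reported as an anomaly score. The profile format, scoring function and example traces below are illustrative assumptions.

```python
# Illustrative sketch of frequency-based system-call anomaly scoring.
# The profile representation and the simple absolute-deviation score are
# assumptions for illustration, not the dissertation's sequence-set/Bayesian model.
from collections import Counter

def build_profile(traces):
    """Average relative frequency of each system call over normal traces."""
    profile = Counter()
    for trace in traces:
        counts = Counter(trace)
        total = len(trace)
        for call, n in counts.items():
            profile[call] += n / total
    for call in profile:
        profile[call] /= len(traces)
    return profile

def anomaly_score(trace, profile):
    """Sum of absolute deviations between observed and expected frequencies."""
    counts = Counter(trace)
    total = len(trace)
    calls = set(profile) | set(counts)
    return sum(abs(counts.get(c, 0) / total - profile.get(c, 0.0)) for c in calls)

if __name__ == "__main__":
    normal = [["open", "read", "read", "close"], ["open", "read", "close"]]
    profile = build_profile(normal)
    print(anomaly_score(["open", "read", "close"], profile))              # low score
    print(anomaly_score(["open", "execve", "execve", "write"], profile))  # higher score
```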
Abstract:
The proliferation of wireless sensor networks across a large spectrum of applications has been spurred by rapid advances in MEMS (micro-electro-mechanical systems) based sensor technology coupled with low-power, low-cost digital signal processors and radio frequency circuits. A sensor network is composed of thousands of low-cost, portable devices with substantial sensing, computing and wireless communication capabilities. This large collection of tiny sensors can form a robust distributed data computing and communication system for automated information gathering and distributed sensing. The main attractive feature is that such a sensor network can be deployed in remote areas. Since the sensor nodes are battery powered, all of them must collaborate to form a fault-tolerant network and so provide efficient utilisation of precious network resources such as the wireless channel, memory and battery capacity. The most crucial constraint is energy consumption, which has become the prime challenge in the design of long-lived sensor nodes.
Abstract:
Identification and control of nonlinear dynamical systems are challenging problems for control engineers. The topic is equally relevant in communication, weather prediction, biomedical systems and even social systems, where nonlinearity is an integral part of the system behavior. Most real-world systems are nonlinear in nature, and nonlinear system identification and modeling have wide applications. The basic approach to analyzing nonlinear systems is to build a model from known behavior manifest in the form of the system output. The modeling problem boils down to computing a suitably parameterized model representing the process. The parameters of the model are adjusted to optimize a performance function based on the error between the given process output and the identified process/model output. While linear system identification is well established, with many classical approaches, most of those methods cannot be directly applied to nonlinear system identification. The problem becomes more complex if the system is completely unknown and only the output time series is available; the blind recognition problem is the direct consequence of such a situation, and the thesis concentrates on such problems. The capability of artificial neural networks to approximate many nonlinear input-output maps makes them predominantly suitable for building a function for the identification of nonlinear systems where only the time series is available. The literature is rich with a variety of algorithms to train the neural network model. A comprehensive study of the computation of the model parameters using the different algorithms, and a comparison among them to choose the best technique, is still a demanding requirement of practical system designers and is not available in a concise form in the literature. The thesis is thus an attempt to develop and evaluate some of the well-known algorithms and to propose some new techniques in the context of blind recognition of nonlinear systems. It also attempts to establish the relative merits and demerits of the different approaches. Comprehensiveness is achieved by utilizing the benefits of well-known evaluation techniques from statistics. The study concludes by providing the results of implementing the currently available and modified versions and the newly introduced techniques for nonlinear blind system modeling, followed by a comparison of their performance. It is expected that such a comprehensive study and comparison will be of great relevance in many fields, including chemical, electrical, biological, financial and weather data analysis. Further, the results reported would be of immense help to practical system designers and analysts in selecting the most appropriate method, based on the goodness of the model, for the particular context.
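To make the setting concrete, here is a minimal sketch (not one of the thesis's algorithms) of identifying an unknown nonlinear system from its output time series alone: a one-hidden-layer neural network is trained to predict the next sample from past samples, i.e. a simple nonlinear autoregressive model. The toy system, network size and learning rate are assumptions chosen purely for illustration.

```python
# Minimal sketch: fit a one-hidden-layer neural network to predict the next
# sample of an output time series from its past values. The toy system and all
# hyper-parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy nonlinear system known only through its output time series.
x = np.zeros(500)
for t in range(2, 500):
    x[t] = (0.6 * x[t - 1] - 0.2 * x[t - 2]
            + 0.3 * np.tanh(x[t - 1] * x[t - 2])
            + 0.05 * rng.standard_normal())

# Regressor matrix: predict x[t] from (x[t-1], x[t-2]).
X = np.column_stack([x[1:-1], x[:-2]])
y = x[2:]

# One hidden layer with tanh units, trained by batch gradient descent on MSE.
n_hidden = 8
W1 = 0.1 * rng.standard_normal((2, n_hidden))
b1 = np.zeros(n_hidden)
W2 = 0.1 * rng.standard_normal(n_hidden)
b2 = 0.0
lr = 0.05

for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    y_hat = h @ W2 + b2               # model prediction
    err = y_hat - y
    # Backpropagation of the mean-squared-error gradient.
    gW2 = h.T @ err / len(y)
    gb2 = err.mean()
    gh = np.outer(err, W2) * (1.0 - h ** 2)
    gW1 = X.T @ gh / len(y)
    gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

mse = np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2)
print(f"Training MSE of the identified model: {mse:.5f}")
```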
Abstract:
Embedded systems are usually designed for a single task or a specified set of tasks. This specificity means that the system design, as well as its hardware/software development, can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints and rapid development. This necessitates the adoption of static machine-code analysis tools, running on a host machine, for the validation and optimization of embedded system code, which can help meet all of these goals; this can significantly augment software quality and is still a challenging field. This dissertation contributes an architecture-oriented code validation, error localization and optimization technique that assists the embedded system designer in software debugging, making early detection of software bugs that are otherwise hard to detect more effective through static analysis of machine code. The focus of this work is to develop methods that automatically localize faults as well as optimize the code, and thus improve both the debugging process and the quality of the code. Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing the computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae, and their compliance is tested individually in all possible execution paths of the application programs.
An incorrect sequence of machine code patterns is identified using slicing techniques on the control flow graph generated from the machine code. An algorithm is proposed to assist the compiler in eliminating redundant bank-switching code and in deciding on an optimum allocation of data to banked memory, resulting in a minimum number of bank-switching instructions in embedded system software. A relation matrix and a state transition diagram, formed for the active memory bank state transitions corresponding to each bank selection instruction, are used for the detection of redundant code. Instances of code redundancy based on the stipulated rules for the target processor are identified. This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of the compiler/assembler and applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly by machine code patterns, which drastically reduces the state space created, contributing to an improvement over state-of-the-art model checking. Though the technique described is general, the implementation is architecture oriented, and hence the feasibility study is conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices towards correct use of difficult microcontroller features in developing embedded systems.
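A toy sketch of the redundancy idea on a straight-line instruction sequence follows. It only tracks the "active bank" state and flags a bank-select instruction that re-selects the bank already in effect; the instruction format and the `bank` mnemonic are simplified assumptions, not actual PIC16F87X encodings, and the dissertation's relation-matrix/state-diagram machinery is not reproduced here.

```python
# Toy sketch of redundant bank-switch detection on a straight-line instruction
# list. Instruction syntax is a simplified assumption for illustration only.

def find_redundant_bank_switches(instructions):
    """Return indices of bank-select instructions that do not change the bank."""
    redundant = []
    active_bank = None  # unknown at program entry
    for i, instr in enumerate(instructions):
        op, *operands = instr.split()
        if op == "bank":                 # e.g. "bank 1" selects memory bank 1
            target = int(operands[0])
            if target == active_bank:
                redundant.append(i)      # re-selecting the active bank is redundant
            active_bank = target
    return redundant

if __name__ == "__main__":
    program = [
        "bank 0",
        "movwf 0x20",
        "bank 0",      # redundant: bank 0 is already active
        "movwf 0x21",
        "bank 1",
        "movwf 0xA0",
    ]
    print(find_redundant_bank_switches(program))  # -> [2]
```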
Abstract:
The present thesis discusses the design and synthesis of polymers suitable for nonlinear optics. Most of the molecules studied have shown good nonlinear optical activity. The second-order nonlinear optical activity of the polymers was measured experimentally by the Kurtz and Perry powder technique. The thesis comprises eight chapters. The theory of the NLO phenomenon and a review of the various nonlinear optical polymers are given in Chapter 1. The review provides a survey of NLO-active polymeric materials with a general introduction covering the principles and origin of nonlinear optics, and gives emphasis to polymeric materials for nonlinear optics, including guest-host systems, side-chain polymers, main-chain polymers, crosslinked polymers, chiral polymers etc. Chapter 2 discusses the stability of metal-incorporated tetrapyrrole molecules: porphyrin, chlorin and bacteriochlorin. Chapter 3 presents the NLO properties of certain organic molecules obtained by computational tools; the chapter is divided into four parts, the first of which describes the nonlinear optical properties of chromophore (D-π-A) and bichromophore (D-π-A-A-π-D) systems separated by a methylene spacer, making use of DFT and semiempirical calculations. Chapter 4: a series of polyurethanes was prepared from cardanol, a renewable resource and a waste product of the cashew industry, based on bifunctional and multifunctional polymers previously designed using a quantum theoretical approach. Chapter 5: a series of chiral polyurethanes with main-chain bis-azo diol groups in the polymer backbone was designed, and the NLO activity was predicted by ZINDO/CV methods. In Chapter 7, polyurethanes were first designed by computational methods and the NLO properties were predicted by the correction vector method; the designed bifunctional and multifunctional polyurethanes were synthesized by varying the chiral-achiral diol compositions.
Abstract:
In recent years, protection of information in digital form has become more important. Image and video encryption has applications in various fields including Internet communications, multimedia systems, medical imaging, telemedicine and military communications. During storage as well as transmission, multimedia information is exposed to unauthorized entities unless adequate security measures are built around the information system. There are many kinds of security threats during the transmission of vital classified information through insecure communication channels. Various encryption schemes are available today to deal with information security issues. Data encryption is widely used to protect sensitive data against the security threat in the form of an “attack on confidentiality”. Secure transmission of information through insecure communication channels also requires encryption at the sending side and decryption at the receiving side. Encryption of large text messages and images takes time before they can be transmitted, causing considerable delay in successive transmission of information in real time. In order to minimize this latency, efficient encryption algorithms are needed. An encryption procedure with adequate security and high throughput is sought in multimedia encryption applications. Traditional symmetric-key block ciphers such as the Data Encryption Standard (DES), the Advanced Encryption Standard (AES) and the Escrowed Encryption Standard (EES) are not efficient when the data size is large. With the availability of fast computing tools and communication networks at relatively low cost today, these encryption standards appear not to be as fast as one would like. High-throughput encryption and decryption are becoming increasingly important in the area of high-speed networking, and fast encryption algorithms are needed for high-speed secure communication of multimedia data. It has been shown that public-key algorithms are not a substitute for symmetric-key algorithms: public-key algorithms are slow, whereas symmetric-key algorithms generally run much faster, and public-key systems are vulnerable to chosen-plaintext attack. In this research work, a fast symmetric-key encryption scheme, entitled “Matrix Array Symmetric Key (MASK) encryption” and based on matrix and array manipulations, has been conceived and developed. Fast conversion has been achieved with the use of matrix table look-up substitution, array-based transposition and circular shift operations performed in the algorithm. MASK encryption is a new concept in symmetric-key cryptography. It employs a matrix and array manipulation technique using secret information and data values. It is a block cipher operating on plaintext message (or image) blocks of 128 bits, using a secret key of 128 bits, and producing ciphertext message (or cipher image) blocks of the same size. This cipher has two advantages over traditional ciphers. First, the encryption and decryption procedures are much simpler and consequently much faster. Second, the key avalanche effect produced in the ciphertext output is better than that of AES.
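To illustrate the kinds of operations mentioned (table look-up substitution, array-based transposition and circular shifts), here is a toy, invertible round function on a 16-byte block. It is an illustrative sketch only; it is not the MASK cipher described in the thesis and is not a secure cipher. The substitution table, permutation and rotation amount are arbitrary assumptions.

```python
# Toy sketch of a substitution / transposition / circular-shift round on a
# 16-byte (128-bit) block. NOT the MASK cipher and NOT secure cryptography.

SBOX = list(range(256))[::-1]                  # trivial byte substitution table
INV_SBOX = [SBOX.index(i) for i in range(256)]
PERM = [5, 0, 11, 6, 1, 12, 7, 2, 13, 8, 3, 14, 9, 4, 15, 10]  # byte transposition
INV_PERM = [PERM.index(i) for i in range(16)]

def rotl8(b, n):
    """Circular left shift of an 8-bit value by n bits."""
    return ((b << n) | (b >> (8 - n))) & 0xFF

def encrypt_round(block, key):
    assert len(block) == 16 and len(key) == 16
    out = [SBOX[b ^ k] for b, k in zip(block, key)]          # key mix + substitution
    out = [out[PERM[i]] for i in range(16)]                  # transposition
    return bytes(rotl8(b, 3) for b in out)                   # circular shift

def decrypt_round(block, key):
    out = [rotl8(b, 5) for b in block]                       # undo 3-bit left rotate
    out = [out[INV_PERM[i]] for i in range(16)]              # undo transposition
    return bytes(INV_SBOX[b] ^ k for b, k in zip(out, key))  # undo substitution + key

if __name__ == "__main__":
    key = bytes(range(16))
    msg = b"example 16B blk!"
    ct = encrypt_round(msg, key)
    assert decrypt_round(ct, key) == msg
    print(ct.hex())
```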
Abstract:
The finite-size-dependent enhancement of pairing in mesoscopic Fermi systems is studied under the assumption that the BCS approach is valid and that the two-body force is size independent. Different systems are investigated such as superconducting metallic grains and films as well as atomic nuclei. It is shown that the finite size enhancement of pairing in these systems is in part due to the presence of a surface which accounts quite well for the data of nuclei and explains a good fraction of the enhancement in Al grains.
Abstract:
A Traffic Management System (TMS) comprises four major subsystems: the Network Database Management System, for information to passengers; the Transit Facility Management System, for service planning and the scheduling of vehicles and crews; the Congestion Management System, for traffic forecasting and planning; and the Safety Management System, concerned with the safety of passengers and the environment. This work has opened up a rather wide framework of model structures for application to traffic. The facets of these theories are so wide that it seems impossible to present all the necessary models in this work. However, it can be deduced from the study that the best Traffic Management System is the one which is realistic in all aspects, easy to understand, and easy to apply. As it is practically difficult to devise an ideal foolproof model, the attempt here has been to make some progress in that direction.
Abstract:
The present study deals with the different hydrogeological characteristics of the coastal region of central Kerala and provides a comparative analysis with the corresponding hard rock terrain. The coastal regions lie in areas where the aquifer systems ultimately discharge groundwater into the sea. Groundwater development in such regions requires a precise understanding of the complex mechanism of the saline and fresh water relationship, so that withdrawals can be regulated to avoid situations leading to upconing of saline groundwater bodies and to prevent sea water ingress from migrating further inland. The coastal tracts of Kerala are formed by several drainage systems and are underlain by a thick pile of semi-consolidated and consolidated sediments of Tertiary to Recent age. These sediments comprise phreatic and confined aquifer systems. The corresponding hard rock terrain is covered by laterites and underlain by Precambrian metamorphic rocks. The supply of water from the hard rock terrain is rather limited; this may be due to the small pore size, low degree of interconnectivity and low extent of weathering of the country rocks. Groundwater storage there is mostly controlled by the thickness and hydrological properties of the weathered zone and by the aquifer geometry. The over-exploitation of groundwater beyond the ‘safe yield’ limit causes undesirable effects such as continuous reduction in groundwater levels, reduction in river flows, reduction in wetland surface, degradation of groundwater quality, and many other environmental problems such as drought and famine.
Abstract:
The ab initio cluster model approach has been used to study the electronic structure and magnetic coupling of KCuF3 and K2CuF4 in their various ordered polytype crystal forms. Due to a cooperative Jahn-Teller distortion these systems exhibit strong anisotropies. In particular, the magnetic properties strongly differ from those of isomorphic compounds. Hence, KCuF3 is a quasi-one-dimensional (1D) nearest neighbor Heisenberg antiferromagnet whereas K2CuF4 is the only ferromagnet among the K2MF4 series of compounds (M=Mn, Fe, Co, Ni, and Cu) behaving all as quasi-2D nearest neighbor Heisenberg systems. Different ab initio techniques are used to explore the magnetic coupling in these systems. All methods, including unrestricted Hartree-Fock, are able to explain the magnetic ordering. However, quantitative agreement with experiment is reached only when using a state-of-the-art configuration interaction approach. Finally, an analysis of the dependence of the magnetic coupling constant with respect to distortion parameters is presented.
Abstract:
The ab initio periodic unrestricted Hartree-Fock method has been applied in the investigation of the ground-state structural, electronic, and magnetic properties of the rutile-type compounds MF2 (M=Mn, Fe, Co, and Ni). All electron Gaussian basis sets have been used. The systems turn out to be large band-gap antiferromagnetic insulators; the optimized geometrical parameters are in good agreement with experiment. The calculated most stable electronic state shows an antiferromagnetic order in agreement with that resulting from neutron scattering experiments. The magnetic coupling constants between nearest-neighbor magnetic ions along the [001], [111], and [100] (or [010]) directions have been calculated using several supercells. The resulting ab initio magnetic coupling constants are reasonably satisfactory when compared with available experimental data. The importance of the Jahn-Teller effect in FeF2 and CoF2 is also discussed.
Abstract:
The present study focuses on vibrios, especially Vibrio harveyi, isolated from shrimp (P. monodon) larval production systems on both the east and west coasts during times of mortality. A comprehensive approach has been made to work out their systematics through numerical taxonomy, to group them based on RAPD profiling, and to segregate the virulent from non-virulent isolates based on the presence of virulence genes as well as their phenotypic expression. The information gathered has helped to develop a simple scheme of identification based on phenotypic characters and to segregate the virulent from non-virulent strains of V. harveyi.
Abstract:
The source, fate and diagenetic pathway of sedimentary organic matter in estuaries are difficult to delineate due to the complexity of organic matter sources, intensive physical mixing and biological processes. A combination of bulk organic matter techniques and molecular biomarkers has been found successful in explaining organic matter dynamics in estuaries. The basic requirements for these multi-proxy approaches are that (i) the sources have significantly differing characteristics, (ii) there is a sufficient number of tracers to delineate all sources, and (iii) organic matter degradation and processing have little, similar or predictable effects on end-member characteristics. Although abundant research has attempted to tackle the difficulties related to the source and fate of organic matter in estuarine systems, our understanding remains limited, or rather inconsistent, regarding Indian estuaries. The Cochin estuary is the largest among the many extensive estuarine systems along the southwest coast of India, and it supports as much biological productivity and diversity as tropical rain forests. In this study, we have used a combination of bulk geochemical parameters and different groups of molecular biomarkers to define organic matter sources and thereby identify the various biogeochemical processes acting along the salinity gradient of the Cochin estuary.