22 results for k-Error linear complexity
at Cochin University of Science
Abstract:
A cryptosystem using linear codes was developed in 1978 by McEliece. Later, in 1985, Niederreiter and others developed a modified version of the cryptosystem using concepts from linear codes. But these systems were not used frequently because of their large key sizes. In this study we design a cryptosystem using the concepts of algebraic geometric codes with a smaller key size. Error detection and correction can be done efficiently by simple decoding methods using the cryptosystem developed. Approach: Algebraic geometric codes are codes generated using curves. The cryptosystem uses basic concepts of elliptic curve cryptography and a generator matrix. Decrypted information takes the form of a repetition code, which reduces the complexity of the decoding procedure. Error detection and correction can be carried out efficiently by solving a simple system of linear equations, thereby combining security with error detection and correction. Results: The algorithm was implemented in MATLAB and a comparative analysis was done on various parameters of the system. Attacks are common to all cryptosystems, but by securely choosing the curve, the field and the representation of the field elements, the attacks can be overcome and a stable system generated. Conclusion: The algorithm defined here protects the information both from an intruder and from errors in the communication channel by efficient error correction methods.
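As an illustration of the encryption step common to McEliece-type systems, the following minimal Python sketch encrypts with a disguised generator matrix over GF(2). The toy (7,4) Hamming generator, the scrambling matrices S and P, and the error weight t are stand-ins chosen for illustration; the system in the study builds its generator from algebraic geometric (elliptic curve) codes, which is not reproduced here.

```python
import numpy as np

# Minimal sketch of McEliece-style encryption over GF(2). A toy
# (7,4) Hamming generator matrix stands in for the algebraic
# geometric (elliptic curve) code used in the study; S, P and the
# error weight t are illustrative choices.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

rng = np.random.default_rng(1)

def random_invertible(k):
    """Draw random binary matrices until one is invertible mod 2."""
    while True:
        S = rng.integers(0, 2, (k, k))
        if round(np.linalg.det(S)) % 2 == 1:   # det odd <=> invertible mod 2
            return S

S = random_invertible(4)                       # secret scrambler
P = np.eye(7, dtype=int)[rng.permutation(7)]   # secret permutation
G_pub = S @ G @ P % 2                          # published generator

def encrypt(m, t=1):
    """c = m*G_pub + e (mod 2), where e has Hamming weight t."""
    e = np.zeros(7, dtype=int)
    e[rng.choice(7, t, replace=False)] = 1
    return (m @ G_pub + e) % 2

print(encrypt(np.array([1, 0, 1, 1])))         # a 7-bit ciphertext
```

A legitimate receiver undoes S and P and decodes with the structured code; an attacker sees only G_pub, which is what makes the key size of the underlying code family matter.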
Abstract:
In a sigma-delta analog to digital (A/D) converter, the most computationally intensive block is the decimation filter, and its hardware implementation may require millions of transistors. Since these converters are now targeted for portable applications, a hardware-efficient design is an implicit requirement. To this effect, this paper presents a computationally efficient polyphase implementation of non-recursive cascaded integrator comb (CIC) decimators for Sigma-Delta Converters (SDCs). The SDCs operate at high oversampling frequencies and hence require large sampling rate conversions. The filtering and rate reduction are performed in several stages to reduce hardware complexity and power dissipation. The CIC filters are widely adopted as the first stage of decimation due to their multiplier-free structure. In this research, the performance of the polyphase structure is compared with the CICs using recursive and non-recursive algorithms in terms of power, speed and area. This polyphase implementation offers high-speed operation and low power consumption. The polyphase implementation of a 4th order CIC filter with a decimation factor of '64' and input word length of '4-bits' offers about 70% and 37% power saving compared to the corresponding recursive and non-recursive implementations respectively. The same polyphase CIC filter can operate about 7 times faster than the recursive and about 3.7 times faster than the non-recursive CIC filters.
As most of the sigma-delta ADC applications require decimation filters with linear phase characteristics, symmetric Finite Impulse Response (FIR) filters are widely used for implementation. But the number of FIR filter coefficients will be quite large for implementing a narrow-band decimation filter. Implementing the decimation filter in several stages reduces the total number of filter coefficients, and hence reduces the hardware complexity and power consumption [2]. The first stage of the decimation filter can be implemented very efficiently using a cascade of integrators and comb filters which do not require multiplication or coefficient storage. The remaining filtering is performed either in a single stage or in two stages with more complex FIR or infinite impulse response (IIR) filters according to the requirements. The amount of passband aliasing or imaging error can be brought within prescribed bounds by increasing the number of stages in the CIC filter. The width of the passband and the frequency characteristics outside the passband are severely limited, so CIC filters are used to make the transition between high and low sampling rates, and conventional filters operating at the low sampling rate are used to attain the required transition bandwidth and stopband attenuation. Several papers are available in the literature that deal with different implementations of decimation filter architecture for sigma-delta ADCs. Hogenauer has described the design procedures for decimation and interpolation.
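A minimal Python sketch of the recursive (Hogenauer) CIC decimator may help fix ideas. The stage count N = 4 and decimation factor R = 64 match the example quoted in the abstract, but the polyphase non-recursive restructuring evaluated in the paper is not reproduced here.

```python
import numpy as np

def cic_decimate(x, R=64, N=4):
    """Recursive (Hogenauer) CIC decimator: N integrators at the
    input rate, decimation by R, then N combs at the output rate.
    Equivalent transfer function: H(z) = ((1 - z^-R)/(1 - z^-1))^N."""
    x = np.asarray(x, dtype=np.int64)
    for _ in range(N):                 # cascaded integrators (high rate)
        x = np.cumsum(x)
    y = x[::R]                         # rate reduction by R
    for _ in range(N):                 # cascaded combs (low rate)
        y = np.diff(y, prepend=0)
    return y

# A DC input of ones settles to the DC gain R**N = 64**4 = 16777216
y = cic_decimate(np.ones(64 * 32, dtype=np.int64))
print(y[-1] == 64**4)                  # True once the filter settles
```

In hardware the integrators are allowed to wrap in two's complement, which is what makes the structure multiplier-free and cheap; here 64-bit integers are simply wide enough.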
Abstract:
The effects of modifying blends of poly(vinyl chloride) (PVC) with linear low density polyethylene (LLDPE) by means of acrylic acid, maleic anhydride, phenolic resins and p-phenylene diamine were investigated. Modification by acrylic acid and maleic anhydride in the presence of dicumyl peroxide was found to be the most useful procedure for improving the mechanical behaviour and adhesion properties of the blend. The improvement was found to be due mainly to the grafting of the carboxylic acid to the polymer chains; grafting was found to be more effective in LLDPE/PVC blends than in pure LLDPE.
Abstract:
The Schiff base, 3-hydroxyquinoxaline-2-carboxalidine-4-aminoantipyrine, was synthesized by the condensation of 3-hydroxyquinoxaline-2-carboxaldehyde with 4-aminoantipyrine. HPLC, FT-IR and NMR spectral data revealed that the compound exists predominantly in the amide tautomeric form and exhibits both absorption and fluorescence solvatochromism, a large Stokes shift, two-electron quasireversible redox behaviour and good thermal stability, with a glass transition temperature of 104 °C. The third-order non-linear optical character was studied using open-aperture Z-scan methodology employing 7 ns pulses at 532 nm. The third-order non-linear absorption coefficient, β, was 1.48 × 10⁻⁶ cm W⁻¹ and the imaginary part of the third-order non-linear optical susceptibility, Im χ⁽³⁾, was 3.36 × 10⁻¹⁰ esu. The optical limiting threshold for the compound was found to be 340 MW cm⁻².
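For context, the reported β is conventionally obtained by fitting the normalized open-aperture Z-scan trace; a standard (Sheik-Bahae) fitting form, assumed here because the abstract does not reproduce the paper's expression, is:

```latex
% Standard open-aperture Z-scan transmittance (Sheik-Bahae form),
% assumed as the fitting model behind the reported beta:
T(z) = \sum_{m=0}^{\infty} \frac{\left[-q_0(z)\right]^{m}}{(m+1)^{3/2}},
\qquad
q_0(z) = \frac{\beta\, I_0\, L_{\mathrm{eff}}}{1 + z^{2}/z_0^{2}},
\qquad
L_{\mathrm{eff}} = \frac{1 - e^{-\alpha_0 L}}{\alpha_0}
```

where I_0 is the on-axis peak intensity at the focus, z_0 the Rayleigh range, \alpha_0 the linear absorption coefficient and L the sample thickness.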
Abstract:
The effect of residual cations in rare earth metal modified faujasite-Y zeolite has been monitored using magic angle spinning NMR spectral analysis and catalytic activity studies. The second metal ions used were Na⁺, K⁺ and Mg²⁺. From a comparison of the spectra of different samples, it is concluded that potassium and magnesium exchange causes a greater downfield shift in the ²⁹Si NMR peaks. Also, lanthanum-exchanged samples show migration from large cages to small cages, which causes the redistribution of the second counter cations. It is also observed that Mg²⁺ causes the most effective migration of lanthanum ions due to its greater charge. The prepared systems were effectively employed for the alkylation of benzene with 1-octene in the vapor phase. From the deactivation studies it is observed that the exchanged zeolites possess better stability towards the reaction conditions than the pure HFAU zeolite.
Abstract:
A simple and inexpensive linear magnetic field sweep generating system suitable for magnetic resonance experiments is described. The circuit, utilising a modified IC bootstrap configuration, generates field sweep over a wide range of sweep durations with excellent sweep linearity.
Abstract:
Gabion-faced retaining walls are essentially semi-rigid structures that can generally accommodate large lateral and vertical movements without excessive structural distress. Because of this inherent feature, they offer technical and economic advantages over conventional concrete gravity retaining walls. Although they can be constructed either as gravity type or reinforced soil type, this work mainly deals with gabion-faced reinforced earth walls as they are more suitable for larger heights. The main focus of the present investigation was the development of a viable plane-strain two-dimensional non-linear finite element analysis code which can predict the stress-strain behaviour of gabion-faced retaining walls, both gravity type and reinforced soil type. The gabion facing, backfill soil, in-situ soil and foundation soil were modelled using 2D four-noded isoparametric quadrilateral elements. The confinement provided by the gabion boxes was converted into an induced apparent cohesion as per the membrane correction theory proposed by Henkel and Gilbert (1952). The mesh reinforcement was modelled using 2D two-noded linear truss elements. The interactions between the soil and the mesh reinforcement, as well as between the facing and the backfill, were modelled using 2D four-noded zero-thickness line interface elements (Desai et al., 1974) by incorporating the non-linear hyperbolic formulation for the tangential shear stiffness. The well-known hyperbolic formulation by Duncan and Chang (1970) was used for modelling the non-linearity of the soil matrix. The failure of the soil matrix, the gabion facing and the interfaces was modelled using the Mohr-Coulomb failure criterion. The construction stages were also modelled. Experimental investigations were conducted on small-scale model walls (both in the field and in the laboratory) to suggest an alternative fill material for gabion-faced retaining walls. The same tests were also used to validate the finite element programme developed as a part of the study. The studies were conducted using different types of gabion fill materials. The variation was achieved by placing coarse aggregate and quarry dust in different proportions as layers one above the other, or by mixing them together in the required proportions. The deformation of the wall face was measured and the behaviour of the walls with the variation of fill materials was analysed. It was seen that 25% of the fill material in the gabions can be replaced by a soft material (any locally available material) without affecting the deformation behaviour to a large extent. In circumstances where some deformation can be allowed, even up to 50% replacement with soft material is possible. The developed finite element code was validated using the experimental test results and other published results. Encouraged by the close comparison between theory and experiment, an extensive and systematic parametric study was conducted in order to gain a closer understanding of the behaviour of the system. Geometric as well as material parameters were varied to understand their effect on the behaviour of the walls. The final phase of the study consisted of developing a simplified method for the design of gabion-faced retaining walls. The design was based on the limit state method, considering both stability and deformation criteria. The design parameters were selected for the system and converted to dimensionless parameters.
Thus the procedure for fixing the dimensions of the wall was simplified by eliminating the conventional trial-and-error procedure. Handy design charts were developed which would serve as a hands-on tool for design engineers at site. Economic studies were also conducted to prove the cost-effectiveness of the structures with respect to conventional RCC gravity walls, and cost prediction models and cost breakdown ratios were proposed. The studies as a whole are expected to contribute substantially to understanding the actual behaviour of gabion-faced retaining wall systems, with particular reference to lateral deformations.
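For reference, the Duncan and Chang (1970) hyperbolic model cited above gives the tangent modulus of the soil in its standard form (the thesis's exact parameter values are not given in the abstract):

```latex
% Duncan-Chang (1970) hyperbolic tangent modulus (standard form):
E_t = \left[\,1-\frac{R_f\,(1-\sin\phi)(\sigma_1-\sigma_3)}
{2c\cos\phi+2\sigma_3\sin\phi}\,\right]^{2} K\,P_a\left(\frac{\sigma_3}{P_a}\right)^{n}
```

where R_f is the failure ratio, c and \phi the Mohr-Coulomb strength parameters, K and n the modulus number and exponent, and P_a atmospheric pressure used for normalisation.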
Abstract:
Various compositions of linear low density polyethylene (LLDPE) containing a bio-filler (either starch or dextrin) of various particle sizes were prepared. Mechanical, thermal, FTIR, morphological (SEM), water absorption and melt flow (MFI) studies were carried out. Biodegradability of the compositions was determined using a shake culture flask containing amylase-producing bacteria (vibrios), which were isolated from the marine benthic environment, and by a soil burial test. The effect of low quantities of metal oxides and metal stearates as pro-oxidants in LLDPE and in the LLDPE-biofiller compositions was established by exposing the samples to ultraviolet light. The combination of a bio-filler and a pro-oxidant improves the degradation of linear low density polyethylene. The maleation of LLDPE improves the compatibility of the blend components, and the pro-oxidants enhance the photodegradability of the compatibilised blends. The reprocessability studies on the partially biodegradable LLDPE containing bio-fillers and pro-oxidants suggest that the blends can be repeatedly reprocessed without deterioration in mechanical properties.
Abstract:
New mathematical methods to analytically investigate linear acoustic radiation and scattering from cylindrical bodies and transducer arrays are presented. Three problems of interest involving cylinders in an infinite fluid are studied. In all three problems, the Helmholtz equation is used to model propagation through the fluid, and the beam patterns of arrays of transducers are studied. In the first problem, a method is presented to determine the omnidirectional and directional far-field pressures radiated by a cylindrical transducer array in an infinite rigid cylindrical baffle. The solution to the Helmholtz equation and the displacement continuity condition at the interface between the array and the surrounding water are used to determine the pressure. The displacement of the surface of each transducer is in the direction of the normal to the array and is assumed to be uniform. Expressions are derived for the pressure radiated by a sector of the array vibrating in-phase, the entire array vibrating in-phase, and a sector of the array phase-shaded to simulate radiation from a rectangular piston. It is shown that the uniform displacement required for generating a source level of 220 dB re 1 μPa @ 1 m that is omnidirectional in the azimuthal plane is of the order of 1 micron for typical arrays. Numerical results are presented to show that there is only a small difference between the on-axis pressures radiated by phased cylindrical arrays and planar arrays. The problem is of interest because cylindrical arrays of projectors are often used to search for underwater objects. In the second problem, the errors, when using data-independent, classical, energy and split-beam correlation methods, in finding the direction of arrival (DOA) of a plane acoustic wave, caused by the presence of a solid circular elastic cylindrical stiffener near a linear array of hydrophones, are investigated. Scattering from the effectively infinite cylinder is modeled using the exact axisymmetric equations of motion, and the total pressures at the hydrophone locations are computed. The effects of the radius of the cylinder, a, the distance between the cylinder and the array, b, the number of hydrophones in the array, 2H, and the angle of incidence of the wave, α, on the error in finding the DOA are illustrated using numerical results. For an array that is about 30 times the wavelength and for small angles of incidence (α < 10°), the error in finding the DOA using the energy method is less than that using the split-beam correlation method with the beam steered to α; in some cases, the error increases when b increases; and the errors in finding the DOA using the energy method and the split-beam correlation method with the beam steered to α vary approximately as a^(7/4). The problem is of interest because elastic stiffeners, in nearly acoustically transparent sonar domes that are used to protect arrays of transducers, scatter waves that are incident on them and cause an error in the estimated direction of arrival of the wave. In the third problem, a high-frequency ray-acoustics method is presented and used to determine the interior pressure field when a plane wave is normally incident on a fluid cylinder embedded in another infinite fluid. The pressure field is determined by using geometrical and physical acoustics. The interior pressure is expressed as the sum of the pressures due to all rays that pass through a point.
Numerical results are presented for ka = 20 to 100, where k is the acoustic wavenumber of the exterior fluid and a is the radius of the cylinder. The results are in good agreement with those obtained using field theory. The directional responses to the plane wave of sectors of a circular array of uniformly distributed hydrophones in the embedded cylinder are then computed. The sectors are used to simulate linear arrays with uniformly distributed normals by using delays. The directional responses are compared with the output from an array in an infinite homogeneous fluid. These outputs are of interest as they are used to determine the direction of arrival of the plane wave. Numerical results are presented for a circular array with 32 hydrophones and 12 hydrophones in each sector. The problem is of interest because arrays of hydrophones are housed inside sonar domes, and acoustic plane waves from distant sources are scattered by the dome filled with fresh water, causing deterioration in the performance of the array.
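To make the split-beam correlation method concrete, here is a minimal Python sketch for a uniform linear array in a homogeneous fluid. The frequency, spacing and element count are illustrative assumptions, and the scattered field from the nearby elastic stiffener (the subject of the second problem) is omitted.

```python
import numpy as np

# Minimal sketch of split-beam correlation DOA estimation for a
# uniform linear array of 2H hydrophones in a homogeneous fluid.
# Frequency, spacing and element count are illustrative assumptions.
c, f = 1500.0, 5.0e3               # sound speed (m/s), frequency (Hz)
lam = c / f                        # wavelength (0.3 m)
d, H = lam / 2, 16                 # element spacing, half-count (2H = 32)
alpha_true = np.deg2rad(1.5)       # true direction of arrival

x = np.arange(2 * H) * d           # hydrophone positions
# noiseless plane-wave phasors received across the array
p = np.exp(1j * 2 * np.pi * f * x * np.sin(alpha_true) / c)

# sum the fore and aft half-arrays (both beams steered to broadside)
fore, aft = p[:H].sum(), p[H:].sum()
# phase of the cross-correlation between the two half-beam outputs;
# the acoustic centres of the halves are separated by H*d, so the
# estimate is valid while |dphi| < pi (small incidence angles)
dphi = np.angle(aft * np.conj(fore))
alpha_est = np.arcsin(dphi * c / (2 * np.pi * f * H * d))
print(np.rad2deg(alpha_est))       # ~1.5 degrees in this noiseless case
```

Scattering from a nearby cylinder perturbs the phases at the hydrophone locations, which is precisely how the stiffener biases the estimated DOA in the thesis.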
Abstract:
Embedded systems are usually designed for a single or a specified set of tasks. This specificity means the system design as well as its hardware/software development can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints and rapid development. This necessitates the adoption of static machine-code analysis tools running on a host machine for the validation and optimization of embedded system code, which can help meet all of these goals. This could significantly augment software quality and is still a challenging field. This dissertation contributes an architecture-oriented code validation, error localization and optimization technique assisting the embedded system designer in software debugging, to make it more effective at early detection of software bugs that are otherwise hard to detect, using static analysis of machine code. The focus of this work is to develop methods that automatically localize faults as well as optimize the code, and thus improve the debugging process as well as the quality of the code. Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing the computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae and their compliance is tested individually in all possible execution paths of the application programs.
An incorrect sequence of machine code patterns is identified using slicing techniques on the control flow graph generated from the machine code. An algorithm to assist the compiler in eliminating redundant bank-switching code, and in deciding on the optimum data allocation to banked memory resulting in a minimum number of bank-switching instructions in embedded system software, is proposed. A relation matrix and a state transition diagram, formed for the active memory bank state transitions corresponding to each bank selection instruction, are used for the detection of redundant code. Instances of code redundancy based on the stipulated rules for the target processor are identified. This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of compiler/assembler and applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly with machine code patterns, which drastically reduces the state-space creation, contributing to improved state-of-the-art model checking. Though the technique described is general, the implementation is architecture-oriented, and hence the feasibility study was conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices towards correct use of difficult microcontroller features in developing embedded systems.
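The bank-switching idea can be sketched briefly. The Python fragment below tracks the PIC16F87X bank-select bits RP0/RP1 (STATUS bits 5 and 6) along a straight-line instruction sequence and flags bank-select instructions that leave the state unchanged; it is an illustrative reduction of the thesis's relation-matrix/state-transition approach, working on mnemonics rather than raw machine code.

```python
# Illustrative reduction of redundant bank-switch detection to a
# straight-line instruction sequence (the thesis works on the control
# flow graph of actual PIC16F87X machine code, not mnemonics). On the
# PIC16F87X the active bank is selected by STATUS bits RP0 (bit 5)
# and RP1 (bit 6), written with BSF/BCF.

def find_redundant_bank_switches(listing):
    """Return indices of bank-select instructions that leave the
    already-known RP0/RP1 state unchanged."""
    state = {5: None, 6: None}            # RP0, RP1: unknown at entry
    redundant = []
    for i, instr in enumerate(listing):
        op, *args = instr.split()
        if op in ("BSF", "BCF") and args and args[0].startswith("STATUS,"):
            bit = int(args[0].split(",")[1])
            if bit in state:
                new = 1 if op == "BSF" else 0
                if state[bit] == new:
                    redundant.append(i)   # bank already selected
                state[bit] = new
    return redundant

code = ["BSF STATUS,5",    # select bank 1
        "MOVWF TRISB",
        "BSF STATUS,5",    # redundant: RP0 is already 1
        "BCF STATUS,5",    # back to bank 0
        "MOVWF PORTB"]
print(find_redundant_bank_switches(code))   # -> [2]
```

On the control flow graph, the same state tracking is applied along every path, with the state reset to unknown at join points whose predecessors disagree.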
Abstract:
An immense variety of problems in theoretical physics are of the non-linear type. Non-linear partial differential equations (NPDEs) have almost become the rule rather than the exception in diverse branches of physics such as fluid mechanics, field theory, particle physics, statistical physics and optics, and the construction of exact solutions of these equations constitutes one of the most vigorous activities in theoretical physics today. The thesis entitled ‘Some Non-linear Problems in Theoretical Physics’ addresses various aspects of this problem at the classical level. For obtaining exact solutions we have used mathematical tools like the bilinear operator method, the base equation technique and the similarity method, with emphasis on their group-theoretical aspects. The thesis deals with certain methods of finding exact solutions of a number of non-linear partial differential equations of importance to theoretical physics. Some of these new solutions are of relevance from the applications point of view in diverse branches such as elementary particle physics, field theory, solid state physics and non-linear optics, and give some insight into the stable or unstable behaviour of dynamical systems. The thesis consists of six chapters.
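The bilinear operator method mentioned above is Hirota's; for reference, its D-operator has the standard definition (not specific to this thesis):

```latex
% Hirota's bilinear D-operator, the basis of the bilinear
% operator method cited in the abstract:
D_x^{m} D_t^{n}\, f \cdot g =
  \left(\frac{\partial}{\partial x} - \frac{\partial}{\partial x'}\right)^{m}
  \left(\frac{\partial}{\partial t} - \frac{\partial}{\partial t'}\right)^{n}
  f(x,t)\, g(x',t')\,\Big|_{x'=x,\ t'=t}
```

With the substitution u = 2(\ln f)_{xx}, for instance, the KdV equation u_t + 6uu_x + u_{xxx} = 0 reduces to the bilinear form D_x(D_t + D_x^3)\, f \cdot f = 0, from which multi-soliton solutions follow by a perturbation expansion of f.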
Abstract:
Nature is full of phenomena which we call "chaotic", the weather being a prime example. What we mean by this is that we cannot predict it to any significant accuracy, either because the system is inherently complex, or because some of the governing factors are not deterministic. However, during recent years it has become clear that random behaviour can occur even in very simple systems with very few degrees of freedom, without any need for complexity or indeterminacy. The discovery that chaos can be generated even with the help of systems having completely deterministic rules, often models of natural phenomena, has stimulated a lot of research interest recently. Not that this chaos has no underlying order, but it is of a subtle kind, one that has taken a great deal of ingenuity to unravel. In the present thesis, the author introduces a new nonlinear model, a ‘modulated’ logistic map, and analyses it from the viewpoint of ‘deterministic chaos’.
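The abstract does not specify the form of the modulation, so the Python sketch below is only a generic illustration of the idea: the control parameter of one logistic map is itself driven by a second map.

```python
import numpy as np

def modulated_logistic(x0=0.3, y0=0.2, n=2000):
    """Illustrative 'modulated' logistic map: the control parameter
    of the driven map is modulated by a second, fully chaotic map.
    The actual modulation used in the thesis is not specified in the
    abstract; this construction is an assumption for illustration."""
    x, y = x0, y0
    xs = np.empty(n)
    for i in range(n):
        y = 4.0 * y * (1.0 - y)        # driver map (fully chaotic)
        lam = 3.5 + 0.5 * y            # modulated parameter in [3.5, 4.0]
        x = lam * x * (1.0 - x)        # driven logistic map
        xs[i] = x
    return xs

print(modulated_logistic()[-5:])       # a chaotic-looking orbit in (0, 1)
```

Because lam never exceeds 4, the orbit stays confined to (0, 1), so the statistics of the driven map can be studied over long iterations.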
Abstract:
This thesis investigated the potential use of Linear Predictive Coding in speech communication applications. A Modified Block Adaptive Predictive Coder is developed, which reduces the computational burden and complexity without sacrificing speech quality, as compared to the conventional adaptive predictive coding (APC) system. For this, changes in the evaluation methods have been evolved. This method differs from the usual APC system in that the difference between the true and the predicted value is not transmitted. This allows the replacement of the high-order predictor in the transmitter section of a predictive coding system by a simple delay unit, which makes the transmitter quite simple. Also, the block length used in the processing of the speech signal is adjusted relative to the pitch period of the signal being processed, rather than choosing a constant length as hitherto done by other researchers. The efficiency of the newly proposed coder has been supported with results of computer simulation using real speech data. Three methods for voiced/unvoiced/silent/transition classification have been presented. The first one is based on energy, zero-crossing rate and the periodicity of the waveform. The second method uses the normalised correlation coefficient as the main parameter, while the third method utilizes a pitch-dependent correlation factor. The third algorithm, which gives the minimum error probability, has been chosen in a later chapter to design the modified coder. The thesis also presents a comparative study between the autocorrelation and the covariance methods used in the evaluation of the predictor parameters. It has been proved that the autocorrelation method is superior to the covariance method with respect to filter stability and also in an SNR sense, though the increase in gain is only small. The Modified Block Adaptive Coder applies a switching from pitch prediction to spectrum prediction when the speech segment changes from a voiced or transition region to an unvoiced region. The experiments conducted in coding, transmission and simulation used speech samples from Malayalam and English phrases. Proposals for a speaker recognition system and a phoneme identification system have also been outlined towards the end of the thesis.
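For context, a minimal sketch of the classical autocorrelation method with the Levinson-Durbin recursion, the baseline the thesis compares against, is given below (illustrative; the Modified Block Adaptive Predictive Coder itself is not reconstructed here). The guaranteed |k| < 1 of this method is what underlies the filter-stability advantage noted above.

```python
import numpy as np

def lpc_autocorrelation(frame, order=10):
    """Autocorrelation method with the Levinson-Durbin recursion.
    Returns predictor polynomial a (a[0] = 1) and residual energy.
    Reflection coefficients satisfy |k| < 1, so the synthesis filter
    is guaranteed stable."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = np.zeros(order + 1)
    a[0], err = 1.0, r[0]
    for i in range(1, order + 1):
        k = -np.dot(a[:i], r[i:0:-1]) / err    # reflection coefficient
        a[:i + 1] += k * a[:i + 1][::-1]       # Levinson update
        err *= 1.0 - k * k                     # shrinking residual energy
    return a, err

# Example on a synthetic voiced-like frame (a decaying resonance)
n = np.arange(240)
frame = np.sin(2 * np.pi * 0.06 * n) * np.exp(-0.01 * n)
a, err = lpc_autocorrelation(frame, order=4)
print(a, err)
```

The covariance method solves the same normal equations without the Toeplitz structure, which is why it cannot guarantee a stable predictor, in line with the comparison reported in the thesis.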