869 results for Coding Error Isolation
Abstract:
Of 33 phages isolated from various shrimp farms in Kerala, India, six were found to have broad-spectrum lytic efficiency towards 87 isolates of Vibrio harveyi, with cross-infecting potential against a few other important aquaculture pathogens. They were further tested on beneficial aquaculture micro-organisms, such as probiotics and nitrifying bacterial consortia, and proved to be non-infective. Morphological characterization by transmission electron microscopy (TEM) and molecular characterization by RAPD and SDS-PAGE proved them distinct and placed them in the order Caudovirales, within the families Myoviridae and Siphoviridae.
Abstract:
Coded OFDM is a transmission technique used in many practical communication systems. In a coded OFDM system, source data are coded, interleaved and multiplexed for transmission over many frequency sub-channels. In a conventional coded OFDM system, the transmission power of each subcarrier is the same regardless of the channel condition. However, some subcarriers can suffer deep fading in a multipath channel, and the power allocated to a faded subcarrier is likely to be wasted. In this paper, we compute the FER and BER bounds of a coded OFDM system as convex functions for a given channel coder, interleaver and channel response. The power optimization is shown to be a convex optimization problem that can be solved numerically with great efficiency. With the proposed power optimization scheme, a near-optimum power allocation for a given coded OFDM system and channel response, minimizing FER or BER under a constant transmission power constraint, is obtained.
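The abstract's power optimization targets FER/BER bounds for a specific coder and interleaver, which cannot be reproduced here. As an illustrative stand-in for the same kind of convex allocation under a total-power constraint, the sketch below solves classical water-filling across subcarriers by bisection on the water level; the channel gains and tolerance are assumptions.

```python
# Water-filling power allocation across OFDM subcarriers under a total
# power constraint. The paper optimizes FER/BER bounds; capacity-style
# water-filling is used here only as an illustrative convex allocation.
# The subcarrier gains and bisection tolerance are assumptions.

def waterfill(gains, total_power, tol=1e-9):
    """Allocate p_k = max(0, mu - 1/g_k) so that sum(p_k) = total_power."""
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    while hi - lo > tol:
        mu = (lo + hi) / 2
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > total_power:
            hi = mu
        else:
            lo = mu
    mu = (lo + hi) / 2
    return [max(0.0, mu - 1.0 / g) for g in gains]

gains = [2.0, 1.0, 0.25]          # channel power gains per subcarrier
alloc = waterfill(gains, total_power=3.0)
```

Note how the deeply faded subcarrier (gain 0.25) receives no power at all, which is the qualitative behaviour the abstract motivates: power spent on a faded subcarrier is largely wasted.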
Abstract:
Protease inhibitors are in great demand in medicine and biotechnology. We report here the purification and characterization of a protease inhibitor isolated from mature leaf extract of Moringa oleifera that showed maximum inhibitor activity. The protease inhibitor was purified 41.4-fold on Sephadex G-75, and its molecular mass was calculated as 23,600 Da. Inhibitory activity was confirmed by dot-blot and reverse zymogram analyses. Glycine, glutamic acid, alanine, proline and aspartic acid were found to be the major amino acids of the inhibitor protein. Maximal activity was recorded at pH 7 and at 40 °C. The inhibitor was stable over pH 5–10 and at 50 °C for 2 h. Thermostability was promoted by CaCl2, BSA and sucrose. Addition of Zn2+ and Mg2+, SDS, dithiothreitol and β-mercaptoethanol enhanced inhibitory activity, while DMSO and H2O2 reduced it. Modification of amino acids at the catalytic site by PMSF and DEPC led to an enhancement in the inhibitory activity. The stoichiometry of the trypsin–protease inhibitor interaction was 1:1.5, and 0.6 nM of inhibitor effected 50% inhibition. The low Ki value (1.5 nM) obtained indicates scope for utilization of the M. oleifera protease inhibitor against serine proteases.
Abstract:
Modeling nonlinear systems using Volterra series is a century-old method, but practical realizations were hampered by inadequate hardware to handle the increased computational complexity stemming from its use. Interest has recently been renewed in designing and implementing filters which can model much of the polynomial nonlinearity inherent in practical systems. The key advantage of resorting to the Volterra power series for this purpose is that nonlinear filters so designed can be made to work in parallel with existing LTI systems, yielding improved performance. This paper describes the inclusion of a quadratic predictor (with nonlinearity order 2) alongside a linear predictor in an analog source coding system. Analog coding schemes generally ignore the source generation mechanism and focus instead on high-fidelity reconstruction at the receiver. The widely used method of differential pulse code modulation (DPCM) for speech transmission uses a linear predictor to estimate the next value of the input speech signal. But this linear system does not account for the inherent nonlinearities in speech signals arising from multiple reflections in the vocal tract. So a quadratic predictor is designed and implemented in parallel with the linear predictor to yield improved mean-square-error performance. The augmented speech coder is tested on speech signals transmitted over an additive white Gaussian noise (AWGN) channel.
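A minimal sketch of the idea: a DPCM-style coder whose predictor combines a linear term with a quadratic (second-order Volterra) term on the previous sample, run in parallel as the abstract describes. The coefficients a1 and a2 are illustrative assumptions, not values from the paper, and quantization is omitted for clarity.

```python
# DPCM with a first-order linear predictor augmented by a quadratic
# Volterra term. a1 (linear) and a2 (quadratic) are assumed values;
# the residual quantizer of a real coder is left out.

def predict(prev, a1=0.9, a2=0.05):
    # linear prediction plus a quadratic Volterra correction
    return a1 * prev + a2 * prev * prev

def dpcm_encode(samples):
    """Return prediction residuals; the decoder mirrors this loop."""
    prev, residuals = 0.0, []
    for x in samples:
        residuals.append(x - predict(prev))
        prev = x  # no quantizer, so encoder and decoder states match
    return residuals

def dpcm_decode(residuals):
    prev, out = 0.0, []
    for r in residuals:
        x = predict(prev) + r
        out.append(x)
        prev = x
    return out
```

Because the quantizer is omitted, encode followed by decode reconstructs the signal exactly; in a real coder the quadratic term's job is to shrink the residuals that the quantizer must represent.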
Abstract:
The presence of microcalcifications in mammograms can be considered an early indication of breast cancer. A fast fractal block coding method to model mammograms for detecting the presence of microcalcifications is presented in this paper. The conventional fractal image coding method takes an enormous amount of time during the fractal block encoding procedure. In the proposed method, the image is divided into shade and non-shade blocks based on dynamic range, and only non-shade blocks are encoded using the fractal encoding technique. Since the number of image blocks in the matching domain search pool is considerably reduced, a saving of 97.996% of the encoding time is obtained compared to the conventional fractal coding method for modeling mammograms. The mammogram models so developed are used for detecting microcalcifications, and a diagnostic efficiency of 85.7% is obtained for the 28 mammograms used.
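The speed-up hinges on the shade/non-shade split, which can be sketched directly: blocks whose dynamic range (max minus min intensity) falls below a threshold are treated as shade and skipped by the costly domain search. The threshold value here is an assumption for illustration.

```python
# Split image blocks into "shade" (low dynamic range) and "non-shade"
# blocks, as the paper proposes; only non-shade blocks would enter the
# fractal domain-search stage. The threshold of 8 grey levels is an
# illustrative assumption.

def dynamic_range(block):
    flat = [p for row in block for p in row]
    return max(flat) - min(flat)

def split_blocks(blocks, threshold=8):
    shade, non_shade = [], []
    for b in blocks:
        (shade if dynamic_range(b) < threshold else non_shade).append(b)
    return shade, non_shade

blocks = [
    [[100, 101], [102, 100]],   # nearly uniform -> shade
    [[10, 200], [30, 250]],     # high contrast  -> non-shade
]
shade, non_shade = split_blocks(blocks)
```

Shade blocks are cheap to represent by their mean intensity alone, which is why excluding them from the matching pool saves most of the encoding time.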
Abstract:
The present paper deals with the chemistry, isolation, separation, characterisation and stabilisation of marigold oleoresin and its application as a natural food colorant. Marigold (Tagetes erecta L.), an ornamental plant belonging to the composite family, is a rich source of the natural antioxidant lutein. Xanthophylls, natural pigments, offer an alternative to synthetic dyes as food colorants due to their non-toxicity. Chromatographic separations of saponified and unsaponified oleoresin were performed, and trans-lutein was identified as the major constituent. Well-preserved flowers exhibit a high yield of xanthophyll content (105.19 g/kg), in contrast to the unpreserved flower sample (54.87 g/kg), emphasizing the significance of flower preservation in the extraction of xanthophyll. The stability and amount of xanthophyll also increased, from 105.19 g/kg to 226.88 g/kg, on saponification and subsequent purification with ethylene dichloride.
Abstract:
The problem of using information available from one variable X to make inference about another Y is classical in many physical and social sciences. In statistics this is often done via regression analysis, where the mean response is used to model the data. One stipulates the model Y = µ(X) + ɛ. Here µ(x) is the mean response at the predictor variable value X = x, and ɛ = Y - µ(X) is the error. In classical regression analysis both (X, Y) are observable, and one then proceeds to make inference about the mean response function µ(X). In practice there are numerous examples where X is not available, but a variable Z is observed which provides an estimate of X. As an example, consider the herbicide study of Rudemo et al. [3], in which a nominal measured amount Z of herbicide was applied to a plant but the actual amount absorbed by the plant, X, is unobservable. As another example, from Wang [5], an epidemiologist studies the severity of a lung disease, Y, among the residents of a city in relation to the amount of certain air pollutants. The amount of the air pollutants, Z, can be measured at certain observation stations in the city, but the actual exposure of the residents to the pollutants, X, is unobservable and may vary randomly from the Z-values. In both cases X = Z + error. This is the so-called Berkson measurement error model. In the more classical measurement error model one observes an unbiased estimator W of X and stipulates the relation W = X + error. An example of this model occurs when assessing the effect of nutrition X on a disease: measuring nutrition intake precisely within 24 hours is almost impossible. There are many similar examples in agricultural and medical studies; see, e.g., Carroll, Ruppert and Stefanski [1] and Fuller [2], among others.
In this talk we shall address the question of fitting a parametric model to the regression function µ(X) in the Berkson measurement error model: Y = µ(X) + ɛ, X = Z + η, where η and ɛ are random errors with E(ɛ) = 0, X and η are d-dimensional, and Z is an observable d-dimensional random variable.
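A small Monte-Carlo sketch of the Berkson setup Y = µ(X) + ɛ, X = Z + η may help fix ideas: the true predictor X deviates randomly from the observed value Z. The choice of µ, the error scales, and the sample size are illustrative assumptions. For a linear µ, E[Y | Z] = µ(Z) under Berkson error, so regressing Y on Z still recovers µ's slope, which is the feature that distinguishes this model from the classical W = X + error case.

```python
# Berkson measurement error: Y = mu(X) + eps, X = Z + eta, with Z
# observed and X latent. mu, the noise scales, and n are assumptions.
import random

random.seed(0)

def mu(x):                      # true (here linear) regression function
    return 2.0 * x + 1.0

def simulate(n=10000, sd_eta=0.5, sd_eps=0.1):
    data = []
    for _ in range(n):
        z = random.uniform(0.0, 1.0)       # nominal (observed) value
        x = z + random.gauss(0.0, sd_eta)  # actual, unobserved value
        y = mu(x) + random.gauss(0.0, sd_eps)
        data.append((z, y))
    return data

pairs = simulate()
n = len(pairs)
zbar = sum(z for z, _ in pairs) / n
ybar = sum(y for _, y in pairs) / n
# least-squares slope of Y on the observed Z
slope = (sum((z - zbar) * (y - ybar) for z, y in pairs)
         / sum((z - zbar) ** 2 for z, _ in pairs))
```

With these settings the fitted slope lands close to the true value 2, illustrating that Berkson error inflates residual variance rather than biasing a linear fit.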
Abstract:
The present study was initiated when several massive outbreaks of Chikungunya, Dengue and Japanese Encephalitis were frequently reported across the State of Kerala. Multiple symptoms persisted among the affected individuals, and the public health officials were in search of the aetiological agents responsible for the outbreaks; other than clinical samples, no resources were available. In this context, a study was undertaken focusing on mosquito larvae to investigate the viruses borne by them, which remain silently prevalent in the environment. The study was not a group-specific investigation limited to either arboviruses or enteroviruses, but took a broad-spectrum approach. The study encompassed the viral pathogens that could be isolated, their impact when passaged through cell lines, growth kinetics, the titre of the working stocks in a specific cell line, their structure by means of transmission electron microscopy (TEM), one-step growth, and molecular characterization using molecular tools.
Abstract:
The Cochin estuarine system is among the most productive aquatic environments along the southwest coast of India; it exhibits unique ecological features and possesses great socioeconomic relevance. Serious investigations carried out during the past decades on hydro-biogeochemical variables have pointed out variations in the health and ecological functioning of this ecosystem. Characterisation of organic matter in the estuary has been attempted in many investigations, but detailed studies covering the degradation state of organic matter using a molecular-level approach have not. The thesis, entitled "Provenance, Isolation and Characterisation of Organic Matter in the Cochin Estuarine Sediment: A Diagenetic Amino Acid Marker Scenario", is an integrated approach to evaluating the source, quantity, quality and degradation state of the organic matter in the surface sediments of the Cochin estuarine system through the combined application of bulk and molecular-level tools. Sediment and water samples from nine stations situated in the Cochin estuary were collected in five seasonal sampling campaigns for the biogeochemical assessment of sedimentary organic matter and its distribution pattern. The sampling seasons were described and abbreviated as follows: April 2009 (pre-monsoon: PRM09), August 2009 (monsoon: MON09), January 2010 (post-monsoon: POM09), April 2010 (pre-monsoon: PRM10) and September 2012 (monsoon: MON12). In order to evaluate the general environmental conditions of the estuary, water samples were analysed for water quality parameters, chlorophyll pigments and nutrients by standard methods. The investigations suggest that the hydrographical variables and nutrients in the Cochin estuary support diverse species of flora and fauna. Moreover, sedimentary variables such as pH, Eh, texture, TOC, and fractions of nitrogen and phosphorus were determined to assess the general geochemical setting as well as the redox status.
The periodically fluctuating oxic/anoxic conditions and texture serve as the most significant variables controlling the other variables of the aquatic environment. The organic matter in the estuary comprises a complex mixture of autochthonous as well as allochthonous materials. Autochthonous input is limited or enhanced by nutrient elements like N and P (in their various fractions), used as a tool to evaluate their bioavailability. A bulk-parameter approach, using biochemical composition, stoichiometric elemental ratios and the stable carbon isotope ratio, was also employed to assess the quality and quantity of sedimentary organic matter in the study area. Molecular-level characterisation of free sugars and amino acids was carried out by liquid chromatographic techniques. Carbohydrates are products of primary production, and their occurrence in sediments as free sugars can provide information on estuarine productivity. Amino acid biogeochemistry provided implications for system productivity, the nature of the organic matter, and the degradation status of the sedimentary organic matter in the study area. The predominance of carbohydrates over protein indicated faster mineralisation of proteinaceous organic matter in sediments, and that the estuary behaves as a detrital trap for the accumulation of aged organic matter. The higher lipid content and LPD/CHO ratio pointed towards a better food quality that supports benthic fauna, and better accumulation of lipid compounds in the sedimentary environment. Allochthonous addition of carbohydrates via terrestrial runoff was responsible for the lower PRT/CHO ratio estimated in the sediments, and the lower ratios also denoted a detrital heterotrophic environment.
Biopolymeric carbon (BPC) and the algal contribution to BPC provided important information for a better understanding of the trophic state of the estuarine system, and the higher values of the chlorophyll-a to phaeophytin ratio indicated rapid deposition of phytoplankton to the sediment. The estimated TOC/TN ratios implied a combined input of both terrestrial and autochthonous organic matter to the sediments. Among the free sugars, depleted levels of glucose in sediments at most of the stations and an abundance of mannose at station S5 were observed during the present investigation. Among the aldohexoses, the concentration of galactose was found to be higher at most of the stations. The relative abundance of amino acids (AAs) in the estuarine sediments by season followed the trend: PRM09: Leucine > Phenylalanine > Arginine > Lysine; MON09: Lysine > Aspartic acid > Histidine > Tyrosine > Phenylalanine; POM09: Lysine > Histidine > Phenylalanine > Leucine > Methionine > Serine > Proline > Aspartic acid; PRM10: Valine > Aspartic acid > Histidine > Phenylalanine > Serine > Proline; MON12: Lysine > Phenylalanine > Aspartic acid > Histidine > Valine > Tyrosine > Methionine. The classification of the study area into three zones based on salinity was employed in the present study for the sake of simplicity and generalized interpretation. The distribution of AAs in the three zones followed the trend: Fresh water zone (S1, S2): Phenylalanine > Lysine > Aspartic acid > Methionine > Valine ≈ Leucine > Proline > Histidine > Glycine > Serine > Glutamic acid > Tyrosine > Arginine > Alanine > Threonine > Cysteine > Isoleucine. Estuarine zone (S3, S4, S5, S6): Lysine > Aspartic acid > Phenylalanine > Leucine > Valine > Histidine > Methionine > Tyrosine > Serine > Glutamic acid > Proline > Glycine > Arginine > Alanine > Isoleucine > Cysteine > Threonine.
Riverine/Industrial zone (S7, S8, S9): Phenylalanine > Lysine > Aspartic acid > Histidine > Serine > Arginine > Tyrosine > Leucine > Methionine > Glutamic acid > Alanine > Glycine > Cysteine > Proline > Isoleucine > Threonine > Valine. The abundance of AAs like glutamic acid, aspartic acid, isoleucine, valine, tyrosine and phenylalanine in sediments of the study area indicated freshly derived organic matter.
Abstract:
The aim of this paper is the investigation of the error which results from the method of approximate approximations applied to functions defined only on compact intervals. This method, which is based on an approximate partition of unity, was introduced by V. Maz'ya in 1991 and has mainly been used for functions defined on the whole space up to now. For the treatment of differential equations and boundary integral equations, however, an efficient approximation procedure on compact intervals is needed. In the present paper we apply the method of approximate approximations to functions which are defined on compact intervals. In contrast to the whole-space case, a truncation error has to be controlled here in addition. For the resulting total error, pointwise estimates and L1-estimates are given, in which all the constants are determined explicitly.
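A minimal numerical sketch of the setting, assuming the standard Gaussian generating function used in Maz'ya-style approximate approximations: the quasi-interpolant M_h f(x) = (πD)^(-1/2) Σ_m f(mh) exp(-(x - mh)²/(Dh²)), truncated to nodes inside [0, 1] so that the truncation error the paper analyses actually appears. The grid size h and shape parameter D below are illustrative assumptions.

```python
# Gaussian quasi-interpolation on a compact interval [0, 1], with the
# node sum truncated at the endpoints. h and D are assumed values; the
# total error combines the O(h^2) term, the saturation error, and the
# truncation error near the boundary.
import math

def quasi_interp(f, x, h=0.02, D=2.0):
    """M_h f(x) = (pi*D)^(-1/2) * sum_m f(m*h) * exp(-(x - m*h)^2/(D*h^2)),
    summed only over nodes m*h in [0, 1]."""
    s = 0.0
    for m in range(int(1.0 / h) + 1):
        t = x - m * h
        s += f(m * h) * math.exp(-t * t / (D * h * h))
    return s / math.sqrt(math.pi * D)

f = lambda x: math.sin(math.pi * x)
# evaluate away from the endpoints, where the truncation error is small
err = max(abs(quasi_interp(f, x) - f(x)) for x in [0.3, 0.5, 0.7])
```

In the interior the truncated sum behaves like the whole-space quasi-interpolant; near 0 and 1 the missing Gaussian tails add the truncation error that the paper's estimates control explicitly.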
Abstract:
Information display technology is a rapidly growing research and development field. Using state-of-the-art technology, optical resolution can be increased dramatically with organic light-emitting diodes (OLEDs), since the light-emitting layer is very thin, under 100 nm. The main question is what pixel size is achievable technologically. The next generation of displays will consider three-dimensional image display. In 2D, one considers vertical and horizontal resolution; in 3D or holographic images, there is another dimension: depth. The major requirement is high resolution in the horizontal dimension, in order to sustain the third dimension using special lenticular glass or barrier masks that present separate views to each eye. A high-resolution 3D display offers hundreds more different views of an object or landscape. OLEDs have the potential to be a key technology for information displays in the future. The display technology presented in this work promises to bring into use bright colour 3D flat-panel displays in a unique way. Unlike conventional TFT matrices, OLED displays have constant brightness and colour, independent of the viewing angle, i.e. the observer's position in front of the screen. A sandwich (just 0.1 micron thick) of organic thin films between two conductors makes an OLED device. These special materials are called electroluminescent organic semiconductors (or organic photoconductors, OPCs). When an electrical current is applied, a bright light is emitted (electrophosphorescence) from the formed organic light-emitting diode. Usually an ITO layer is used as the transparent electrode of an OLED. Such displays were the first to reach volume manufacture, and only a few products are available on the market at present. The key challenges that OLED technology faces in its application areas are: producing high-quality white light; achieving low manufacturing costs; and increasing efficiency and lifetime at high brightness.
Looking towards the future, by combining OLED with specially constructed surface lenses and proper image management software it will be possible to achieve 3D images.
Abstract:
The aim of this paper is the numerical treatment of a boundary value problem for the system of Stokes' equations. For this we extend the method of approximate approximations to boundary value problems. This method was introduced by V. Maz'ya in 1991 and has been used until now for the approximation of smooth functions defined on the whole space and for the approximation of volume potentials. In the present paper we develop an approximation procedure for the solution of the interior Dirichlet problem for the system of Stokes' equations in two dimensions. The procedure is based on potential theoretical considerations in connection with a boundary integral equations method and consists of three approximation steps as follows. In a first step the unknown source density in the potential representation of the solution is replaced by approximate approximations. In a second step the decay behavior of the generating functions is used to gain a suitable approximation for the potential kernel, and in a third step Nyström's method leads to a linear algebraic system for the approximate source density. For every step a convergence analysis is established and corresponding error estimates are given.
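The third step of the procedure, Nyström's method, can be sketched on a model problem. The code below discretises a second-kind Fredholm equation u(x) - ∫₀¹ K(x,t) u(t) dt = g(x) with a trapezoid rule and solves the resulting linear system; the kernel, right-hand side and quadrature are illustrative assumptions, not the Stokes boundary integral formulation of the paper.

```python
# Nystrom discretisation of a second-kind Fredholm integral equation,
# as in the third approximation step of the paper (applied there to a
# boundary integral formulation of the Stokes problem). The manufactured
# kernel K(x,t) = x*t/2 with exact solution u(x) = x is an assumption.

def solve_linear(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def nystrom(K, g, n=21):
    """Solve u(x) - int_0^1 K(x,t) u(t) dt = g(x) with the trapezoid rule."""
    h = 1.0 / (n - 1)
    nodes = [i * h for i in range(n)]
    w = [h * (0.5 if i in (0, n - 1) else 1.0) for i in range(n)]
    A = [[(1.0 if i == j else 0.0) - w[j] * K(nodes[i], nodes[j])
          for j in range(n)] for i in range(n)]
    return nodes, solve_linear(A, [g(x) for x in nodes])

K = lambda x, t: x * t / 2.0
g = lambda x: 5.0 * x / 6.0          # chosen so that u(x) = x exactly
nodes, u = nystrom(K, g)
err = max(abs(ui - xi) for xi, ui in zip(nodes, u))
```

The discretisation error here is governed by the quadrature rule, mirroring how each of the paper's three steps contributes its own term to the total error estimate.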
Abstract:
Signalling off-chip requires significant current. As a result, a chip's power-supply current changes drastically during certain output-bus transitions. These current fluctuations cause a voltage drop between the chip and circuit board due to the parasitic inductance of the power-supply package leads. Digital designers often go to great lengths to reduce this "transmitted" noise. Cray, for instance, carefully balances output signals using a technique called differential signalling to guarantee a chip has constant output current. Transmitted-noise reduction costs Cray a factor of two in output pins and wires. Coding achieves similar results at smaller costs.
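One concrete coding scheme of the kind the abstract alludes to is data-bus inversion: a single extra wire signals whether a word is sent inverted, so that at most half the bus lines toggle on any transfer, bounding the supply-current spike. This particular scheme and the 8-bit width are illustrative assumptions, not necessarily the code the paper proposes.

```python
# Data-bus inversion: bound simultaneous output switching with one
# extra "invert" wire. If more than half the lines would toggle
# relative to the previously driven word, send the complement instead.
# The 8-bit bus width is an illustrative assumption.

WIDTH = 8
MASK = 2 ** WIDTH - 1

def encode(prev, word):
    """Return (bits_to_drive, invert_flag) minimising toggles vs prev."""
    toggles = bin(prev ^ word).count("1")
    if toggles > WIDTH // 2:
        return word ^ MASK, 1     # drive the complement
    return word, 0

def decode(bits, invert_flag):
    return bits ^ MASK if invert_flag else bits

prev = 0b00000000
word = 0b11111101                  # would toggle 7 of 8 lines directly
bits, flag = encode(prev, word)
```

Compared with Cray's differential signalling, which doubles pins to hold output current exactly constant, a code like this caps the current fluctuation at the cost of one wire per bus.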
Abstract:
Object recognition is complicated by clutter, occlusion, and sensor error. Since pose hypotheses are based on image feature locations, these effects can lead to false negatives and positives. In a typical recognition algorithm, pose hypotheses are tested against the image, and a score is assigned to each hypothesis. We use a statistical model to determine the score distribution associated with correct and incorrect pose hypotheses, and use binary hypothesis testing techniques to distinguish between them. Using this approach we can compare algorithms and noise models, and automatically choose values for internal system thresholds to minimize the probability of making a mistake.
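The threshold-selection step can be sketched as a textbook binary hypothesis test: given score distributions under correct and incorrect pose hypotheses, choose the threshold minimising the probability of a mistake. The Gaussian score models, their parameters, and the equal priors below are assumptions, not the paper's fitted distributions.

```python
# Binary hypothesis testing on a pose-hypothesis score: pick the
# decision threshold that minimises P(error) under assumed Gaussian
# score models with equal priors.
import math

def error_prob(threshold, m0, s0, m1, s1):
    """P(error) with equal priors: correct hypotheses (mean m1) scored
    below the threshold, plus incorrect ones (mean m0) scored above."""
    Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
    miss = Phi((threshold - m1) / s1)
    false_alarm = 1 - Phi((threshold - m0) / s0)
    return 0.5 * (miss + false_alarm)

# assumed models: incorrect ~ N(2, 1), correct ~ N(6, 1)
best_t = min((t * 0.01 for t in range(0, 801)),
             key=lambda t: error_prob(t, 2.0, 1.0, 6.0, 1.0))
```

With equal variances and priors, the minimising threshold lands midway between the two means, which is the internal system threshold the abstract says can be chosen automatically once the score distributions are modelled.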
Abstract:
In the accounting literature, interaction or moderating effects are usually assessed by means of OLS regression, and summated rating scales are constructed to reduce measurement error bias. Structural equation models and two-stage least squares regression could be used to eliminate this bias completely, but large samples are needed. Partial Least Squares is appropriate for small samples but does not correct measurement error bias. In this article, disattenuated regression is discussed as a small-sample alternative and is illustrated on the data of Bisbe and Otley (in press), who examine the interaction effect of innovation and style of use of budgets on performance. Sizeable differences emerge between OLS and disattenuated regression.
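The core of disattenuation is the classical Spearman correction: dividing an observed correlation by the square root of the product of the two measures' reliabilities estimates the correlation between the error-free constructs. The numbers below are illustrative, not from Bisbe and Otley.

```python
# Spearman correction for attenuation: r_true = r_xy / sqrt(r_xx * r_yy),
# where r_xx and r_yy are the reliabilities (e.g. Cronbach's alpha) of
# the two measures. The example values are assumptions.

def disattenuate(r_xy, r_xx, r_yy):
    """Estimate the correlation between the error-free constructs."""
    return r_xy / (r_xx * r_yy) ** 0.5

r_obs = 0.30      # observed correlation, attenuated by measurement error
r_true = disattenuate(r_obs, r_xx=0.75, r_yy=0.80)
```

Because measurement error always pulls observed correlations toward zero, the corrected value exceeds the observed one; with modest reliabilities the gap can be sizeable, which is why OLS on raw scales and disattenuated regression can disagree.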