874 results for nonsmooth optimization
Abstract:
The proliferation of wireless sensor networks across a large spectrum of applications has been spurred by rapid advances in MEMS (micro-electro-mechanical systems) based sensor technology, coupled with low-power, low-cost digital signal processors and radio frequency circuits. A sensor network is composed of thousands of low-cost, portable devices with sensing, computing and wireless communication capabilities. This large collection of tiny sensors can form a robust distributed computing and communication system for automated information gathering and distributed sensing. The main attractive feature is that such a sensor network can be deployed in remote areas. Since each sensor node is battery powered, all the sensor nodes should collaborate to form a fault-tolerant network that makes efficient use of precious network resources such as the wireless channel, memory and battery capacity. The most crucial constraint is energy consumption, which has become the prime challenge in the design of long-lived sensor nodes.
Abstract:
Faculty of Marine Sciences, Cochin University of Science and Technology
Abstract:
Embedded systems are usually designed for a single task or a specified set of tasks. This specificity means the system design, as well as its hardware/software development, can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints and rapid development. This necessitates the adoption of static machine-code analysis tools running on a host machine for the validation and optimization of embedded system code, which can help meet all of these goals. Such analysis could significantly improve software quality and remains a challenging field. This dissertation contributes an architecture-oriented code validation, error localization and optimization technique that assists the embedded system designer in software debugging, making it more effective at early detection of software bugs that are otherwise hard to detect, using static analysis of machine code. The focus of this work is to develop methods that automatically localize faults as well as optimize the code, and thus improve both the debugging process and the quality of the code. Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing the computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae and their compliance is tested individually in all possible execution paths of the application programs.
An incorrect sequence of machine code patterns is identified using slicing techniques on the control flow graph generated from the machine code. An algorithm is proposed to assist the compiler in eliminating redundant bank-switching code and in deciding on the optimum data allocation to banked memory, resulting in a minimum number of bank-switching instructions in embedded system software. A relation matrix and a state transition diagram, formed for the active memory bank state transitions corresponding to each bank selection instruction, are used for the detection of redundant code. Instances of code redundancy based on the stipulated rules for the target processor are identified. This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of compiler or assembler, applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly with machine code patterns, which drastically reduces state space creation and contributes to an improvement over state-of-the-art model checking. Though the technique described is general, the implementation is architecture oriented, and hence the feasibility study is conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices towards the correct use of difficult microcontroller features in developing embedded systems.
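As a rough illustration of the bank-switching redundancy idea described above (a simplified sketch, not the dissertation's tool; the instruction tuples and the BANKSEL/CALL mnemonics are placeholder abstractions of real PIC16F87X code), one can track the active memory bank along a straight-line instruction sequence and flag bank-select instructions that leave the state unchanged:

```python
# Hypothetical sketch: detect redundant bank-select instructions by tracking
# the active memory bank state along a straight-line code sequence.
# Mnemonics and operands are simplified placeholders, not real PIC16F87X syntax.

def find_redundant_bank_switches(instructions):
    """Return indices of bank-select instructions that leave the bank unchanged."""
    active_bank = None          # unknown bank state at entry
    redundant = []
    for idx, (mnemonic, operand) in enumerate(instructions):
        if mnemonic == "BANKSEL":           # bank-select pseudo-instruction
            if operand == active_bank:      # switching to the already-active bank
                redundant.append(idx)
            active_bank = operand
        elif mnemonic == "CALL":
            active_bank = None              # callee may change the bank: state unknown
    return redundant

program = [
    ("BANKSEL", 1), ("MOVWF", "TRISB"),
    ("BANKSEL", 1),                 # redundant: bank 1 is already active
    ("MOVWF", "TRISA"),
    ("CALL", "init"),
    ("BANKSEL", 1),                 # not redundant: bank state unknown after CALL
]
print(find_redundant_bank_switches(program))   # -> [2]
```

A full implementation would apply this along every path of the control flow graph and combine it with the relation matrix and state transition diagram mentioned in the abstract.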
Abstract:
In the early 19th century, the industrial revolution was fuelled mainly by the development of machine-based manufacturing and the increased use of coal. Later, the focal point shifted to oil, thanks to mass-production technology, ease of transport and storage, and lesser environmental issues in comparison with coal. By the dawn of the 21st century, with the depletion of oil reserves and the pollution resulting from heavy oil usage, the demand for clean energy was rising. This ever-growing demand has propelled research on photovoltaics, which has emerged successful and is currently looked to as a key solution for meeting present-day energy requirements. The proven PV technology on a commercial scale is based on silicon, but the recent boom in demand for photovoltaic modules has in turn created a shortage in the supply of silicon. The technology also remains inaccessible to the common man. This has set off research and development work on moderately efficient, eco-friendly and low-cost photovoltaic devices (solar cells). Thin film photovoltaic modules have made a breakthrough entry into the PV market on these grounds. Thin films have the potential to revolutionize the present cost structure of solar cells by eliminating the use of expensive silicon wafers, which alone account for above 50% of the total module manufacturing cost. Well-developed thin film photovoltaic technologies are based on amorphous silicon, CdTe and CuInSe2. However, the cell fabrication process using amorphous silicon requires the handling of very toxic gases (such as phosphine, silane and borane) and costly fabrication technologies. In the case of the other materials too, there are difficulties such as maintaining stoichiometry (especially in large-area films), alleged environmental hazards and the high cost of indium. Hence there is an urgent need for the development of materials that are easy to prepare, eco-friendly and available in abundance. The work presented in this thesis is an attempt towards the development of a cost-effective, eco-friendly material for thin film solar cells using a simple, economically viable technique. Sn-based window and absorber layers deposited using the Chemical Spray Pyrolysis (CSP) technique have been chosen for this purpose.
Abstract:
Aim: To develop a new medium for enhanced production of biomass of the aquaculture probiotic Pseudomonas MCCB 103 and of its antagonistic phenazine compound, pyocyanin. Methods and Results: Carbon and nitrogen sources and growth factors, such as amino acids and vitamins, were screened initially in a mineral medium for the biomass and antagonistic compound of Pseudomonas MCCB 103. The selected ingredients were further optimized using a full-factorial central composite design of the response surface methodology. The medium optimized as per the model for biomass contained mannitol (20 g l⁻¹), glycerol (20 g l⁻¹), sodium chloride (5 g l⁻¹), urea (3.3 g l⁻¹) and mineral salts solution (20 ml l⁻¹), and the one optimized for the antagonistic compound contained mannitol (2 g l⁻¹), glycerol (20 g l⁻¹), sodium chloride (5.1 g l⁻¹), urea (3.6 g l⁻¹) and mineral salts solution (20 ml l⁻¹). Subsequently, the model was validated experimentally, with a 19% increase in biomass and a fivefold increase in the antagonistic compound. Conclusion: A significant increase in biomass and antagonistic compound production could be obtained in the new media. Significance and Impact of the Study: Media formulation and optimization are the primary steps in bioprocess technology, an attempt not made so far in the production of aquaculture probiotics.
Abstract:
A marine isolate of Micrococcus MCCB 104 has been identified as an aquaculture probiotic antagonistic to Vibrio. In the present study, different carbon and nitrogen sources and growth factors in a mineral base medium were optimized for enhanced biomass production and antagonistic activity against the target pathogen, Vibrio harveyi, following response surface methodology (RSM). Accordingly, the minimum and maximum limits of the selected variables were determined and a set of fifty experiments was programmed employing the central composite design (CCD) of RSM for the final optimization. The response surface plots of biomass showed a pattern similar to that of antagonistic activity, indicating a strong correlation between biomass and antagonism. The optimum concentrations of the carbon sources, nitrogen sources and growth factors for both biomass and antagonistic activity were glucose (17.4 g/L), lactose (17 g/L), sodium chloride (16.9 g/L), ammonium chloride (3.3 g/L) and mineral salts solution (18.3 mL/L). © KSBB
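As a side note on the method, the following minimal sketch (synthetic responses and only two coded factors, so not the study's data or its fifty-run design) shows the core of a response-surface optimization: fit a second-order model to central-composite-style design points and solve for the stationary point.

```python
# Illustrative response-surface fit for two factors (synthetic data, not the
# study's measurements): y ≈ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
import numpy as np

# Coded factor levels of a small face-centred central-composite-style design
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]])
y = np.array([5.2, 6.1, 5.8, 6.9, 5.9, 6.8, 6.0, 6.6, 7.1])  # synthetic responses

# Build the second-order model matrix and fit by least squares
x1, x2 = X[:, 0], X[:, 1]
A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])
b = np.linalg.lstsq(A, y, rcond=None)[0]

# Stationary point: set the gradient of the fitted quadratic to zero
H = np.array([[2*b[3], b[5]], [b[5], 2*b[4]]])
g = -np.array([b[1], b[2]])
print("optimum (coded units):", np.linalg.solve(H, g))
```

In an actual RSM study the fitted coordinates would be converted back from coded units to the real concentrations of each medium ingredient.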
Abstract:
Coded OFDM is a transmission technique used in many practical communication systems. In a coded OFDM system, source data are coded, interleaved and multiplexed for transmission over many frequency sub-channels. In a conventional coded OFDM system, the transmission power of each subcarrier is the same regardless of the channel condition. However, some subcarriers can suffer deep fading in multipath channels, and the power allocated to a faded subcarrier is likely to be wasted. In this paper, we compute the FER and BER bounds of a coded OFDM system, given as convex functions for a given channel coder, interleaver and channel response. The power optimization is shown to be a convex optimization problem that can be solved numerically with great efficiency. With the proposed power optimization scheme, a near-optimum power allocation for a given coded OFDM system and channel response, minimizing FER or BER under a constant transmission power constraint, is obtained.
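To illustrate the kind of convex problem the paper refers to, the following sketch (assuming a generic exponential error bound and made-up channel gains, not the paper's actual FER/BER expressions) minimizes a convex error surrogate subject to a constant total-power constraint:

```python
# Illustrative convex power allocation over OFDM subcarriers (assumed
# exponential error bound, not the paper's exact FER/BER functions).
import numpy as np
from scipy.optimize import minimize

gains = np.array([2.0, 0.3, 1.2, 0.05, 0.8])   # example subcarrier channel gains |h_k|^2
P_total = 5.0                                   # total transmit power budget

def error_bound(p):
    # Convex surrogate: sum of exp(-gain_k * p_k) terms (decreasing in power)
    return np.sum(np.exp(-gains * p))

cons = ({"type": "eq", "fun": lambda p: np.sum(p) - P_total},)
bnds = [(0.0, None)] * len(gains)
p0 = np.full(len(gains), P_total / len(gains))

res = minimize(error_bound, p0, bounds=bnds, constraints=cons, method="SLSQP")
print("power allocation:", np.round(res.x, 3))
```

With these example gains the deeply faded subcarrier receives little or no power, consistent with the observation that power allocated to faded subcarriers is largely wasted.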
Abstract:
This work proposes a parallel genetic algorithm for compressing scanned document images. A fitness function is designed with the Hausdorff distance, which determines the terminating condition. The algorithm helps to locate the text lines. A higher compression ratio is achieved with less distortion.
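As a toy illustration of the idea (binary images of random pixels rather than scanned text lines, and a serial rather than parallel GA), the sketch below evolves candidate images using the symmetric Hausdorff distance as the fitness to be minimized:

```python
# Toy genetic algorithm using the Hausdorff distance as the fitness measure
# (illustrative only; the paper applies a parallel GA to scanned document images).
import numpy as np
from scipy.spatial.distance import directed_hausdorff

rng = np.random.default_rng(0)
H, W = 8, 8
target = (rng.random((H, W)) < 0.3).astype(int)          # reference binary image

def fitness(candidate):
    a = np.argwhere(candidate).astype(float)
    b = np.argwhere(target).astype(float)
    if len(a) == 0:                                       # empty image: worst score
        return np.inf
    # Symmetric Hausdorff distance between the two sets of foreground pixels
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

pop = [(rng.random((H, W)) < 0.3).astype(int) for _ in range(30)]
for gen in range(50):
    pop.sort(key=fitness)
    parents = pop[:10]                                    # truncation selection
    children = []
    for _ in range(20):
        p1, p2 = rng.choice(10, 2, replace=False)
        mask = rng.random((H, W)) < 0.5                   # uniform crossover
        child = np.where(mask, parents[p1], parents[p2])
        flip = rng.random((H, W)) < 0.02                  # mutation
        children.append(np.where(flip, 1 - child, child))
    pop = parents + children
print("best Hausdorff distance:", fitness(min(pop, key=fitness)))
```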
Abstract:
Short-term load forecasting is one of the key inputs for optimizing the management of a power system. Almost 60-65% of the revenue expenditure of a distribution company goes to power purchase. The cost of power depends on the source of power, so any optimization strategy involves optimizing the scheduling of power from various sources. As the scheduling involves many technical and commercial considerations and constraints, the efficiency of scheduling depends on the accuracy of the load forecast. Load forecasting is a much-visited topic in the research world, and a number of papers using different techniques have already been presented. The accuracy of the forecast for the purpose of merit-order dispatch decisions depends on the extent of the permissible variation in generation limits. For a system with a low load factor, the peak and the off-peak trough are prominent, and the forecast should identify these points with greater accuracy rather than merely minimizing the error in the energy content. In this paper an attempt is made to apply an Artificial Neural Network (ANN) with a supervised-learning-based approach to short-term load forecasting for a power system with a comparatively low load factor. Such power systems are usual in tropical areas with a concentrated rainy season for a considerable part of the year.
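A minimal sketch of such a supervised ANN forecaster is given below; the sinusoidal load profile, the 24-hour lag window and the MLP architecture are illustrative assumptions, not the paper's actual data or network.

```python
# Minimal supervised-ANN short-term load forecast on synthetic data
# (illustrative; not the paper's network architecture or input features).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
hours = np.arange(24 * 120)                                  # 120 days, hourly
load = 100 + 40 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, hours.size)

# Features: previous 24 hourly loads; target: load of the next hour
X = np.array([load[t - 24:t] for t in range(24, load.size)])
y = load[24:]

split = 24 * 100                                             # 100 days train, rest test
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
mape = np.mean(np.abs((y[split:] - pred) / y[split:])) * 100
print(f"test MAPE: {mape:.2f}%")
```

For a low-load-factor system, peak and trough hours could additionally be weighted more heavily in the error measure, reflecting the emphasis the paper places on capturing those points accurately.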
Abstract:
Over-sampling sigma-delta analogue-to-digital converters (ADCs) are one of the key building blocks of state-of-the-art wireless transceivers. In sigma-delta modulator design, the scaling coefficients determine the overall signal-to-noise ratio; selecting the optimum values of these coefficients is therefore very important. To this end, this paper addresses the design of a fourth-order multi-bit sigma-delta modulator for a Wireless Local Area Network (WLAN) receiver with a feed-forward path, with the optimum coefficients selected using a genetic algorithm (GA)-based search method. In particular, the proposed converter makes use of a low-distortion swing-suppression SDM architecture which is highly suitable for low oversampling ratios, in order to attain high linearity over a wide bandwidth. The focus of this paper is the identification of the best coefficients for the proposed topology, as well as the optimization of a set of system parameters, in order to achieve the desired signal-to-noise ratio. The GA-based search engine is a stochastic search method which can find the optimum solution within the given constraints.
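The sketch below illustrates a GA-style coefficient search under bound constraints; the scoring function is only a placeholder for what would, in a real design flow, be a behavioural simulation of the fourth-order modulator returning its signal-to-noise ratio, and the coefficient range is an assumption.

```python
# Sketch of a GA searching for modulator scaling coefficients (the scoring
# function here is a placeholder; a real design would simulate the fourth-order
# modulator and measure its signal-to-noise ratio for each candidate).
import numpy as np

rng = np.random.default_rng(2)
N_COEFF, POP, GENS = 4, 40, 60
LOW, HIGH = 0.1, 1.0                         # assumed feasible coefficient range

def score(c):
    # Placeholder objective standing in for a simulated SNR figure
    return -np.sum((c - np.array([0.3, 0.5, 0.4, 0.6])) ** 2)

pop = rng.uniform(LOW, HIGH, (POP, N_COEFF))
for _ in range(GENS):
    order = np.argsort([-score(ind) for ind in pop])      # best candidates first
    elite = pop[order[:POP // 2]]
    # Arithmetic crossover between random elite parents, plus Gaussian mutation
    parents = elite[rng.integers(0, len(elite), (POP - len(elite), 2))]
    w = rng.random((POP - len(elite), 1))
    children = w * parents[:, 0] + (1 - w) * parents[:, 1]
    children += rng.normal(0, 0.02, children.shape)
    pop = np.clip(np.vstack([elite, children]), LOW, HIGH)

best = max(pop, key=score)
print("best coefficients:", np.round(best, 3))
```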
Abstract:
SnS thin films were prepared using an automated chemical spray pyrolysis (CSP) technique. Single-phase, p-type, stoichiometric SnS films with a direct band gap of 1.33 eV and a very high absorption coefficient (of the order of 10⁵/cm) were deposited at a substrate temperature of 375 °C. The role of substrate temperature in determining the optoelectronic and structural properties of SnS films was established, and the concentration ratios of the anionic and cationic precursor solutions were optimized. n-type SnS samples were also prepared using the CSP technique at the same substrate temperature of 375 °C, which facilitates sequential deposition of an SnS homojunction. A comprehensive analysis of both types of films was carried out using X-ray diffraction, energy-dispersive X-ray analysis, scanning electron microscopy, atomic force microscopy, optical absorption and electrical measurements. The deposition temperatures required for the growth of other binary sulfide phases of tin, such as SnS2 and Sn2S3, were also determined.
Abstract:
This study was undertaken to isolate ligninase-producing white-rot fungi for use in the extraction of fibre from pineapple leaf agriwaste. Fifteen fungal strains were isolated from dead tree trunks and leaf litter. Ligninolytic enzymes (lignin peroxidase (LiP), manganese peroxidase (MnP) and laccase (Lac)) were produced by solid-state fermentation (SSF) using pineapple leaves as the substrate. Of the isolated strains, the one showing maximum production of ligninolytic enzymes was identified as Ganoderma lucidum by 18S ribotyping. Single-parameter optimization and response surface methodology of the different process variables were carried out for enzyme production. Incubation period, agitation and Tween-80 were identified as the most significant variables through a Plackett-Burman design. These variables were further optimized by a Box-Behnken design. The overall maximum yield of ligninolytic enzymes was achieved experimentally under these optimal conditions. Quantitative lignin analysis of pineapple leaves by the Klason lignin method showed significant degradation of lignin by Ganoderma lucidum under SSF.
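To make the screening step concrete, the following sketch (with an assumed two-level design matrix and synthetic activity values, not the study's measurements) ranks factors by the magnitude of their main effects, which is the computation underlying a Plackett-Burman screen:

```python
# Illustrative main-effects screening on a two-level design matrix
# (synthetic responses; the factor list is an assumption, not the study's full set).
import numpy as np

factors = ["incubation", "agitation", "tween80", "pH", "moisture"]
# Two-level (+1/-1) screening design: each row is one SSF run, columns are factors
design = np.array([
    [ 1,  1,  1, -1,  1],
    [-1,  1,  1,  1, -1],
    [-1, -1,  1,  1,  1],
    [ 1, -1, -1,  1,  1],
    [-1,  1, -1, -1,  1],
    [ 1, -1,  1, -1, -1],
    [ 1,  1, -1,  1, -1],
    [-1, -1, -1, -1, -1],
])
response = np.array([8.2, 6.9, 7.4, 6.0, 7.8, 5.1, 9.3, 4.2])  # e.g. enzyme activity

# Main effect of a factor = mean response at the high level minus mean at the low level
effects = {f: response[design[:, i] == 1].mean() - response[design[:, i] == -1].mean()
           for i, f in enumerate(factors)}
for name, eff in sorted(effects.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:>10}: {eff:+.2f}")
```

Factors with the largest absolute effects would then be carried forward into the Box-Behnken optimization stage.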
Abstract:
Marine yeasts have been regarded as safe and as having a beneficial impact on biotechnological processes. They provide good nutritional and dietary value, indicating their potential application as feed supplements in aquaculture. Brown et al. (1996) evaluated marine yeasts and characterised them as having high protein content, carbohydrate, good amino acid composition and high levels of saturated fats. However, there is a paucity of information on marine yeasts as feed supplements, and no feed formulation supplemented with them has been found either in the literature or on the market. This is supported by Zhenming et al. (2006), who reported that there is still a lack of feed composed of single cell protein (SCP) from marine yeasts with a high content of protein and other nutrients. Recent research has shown that marine yeasts also have high potential for use in the food, feed, medical and biofuel industries as well as in marine biotechnology (Chi et al., 2009; 2010). Sajeevan et al. (2006; 2009a) and Sarlin and Philip (2011) demonstrated that the marine yeast Candida sake serves as a high-quality, inexpensive nutrient source with proven immunostimulatory properties for cultured shrimps. This strain has been made part of the culture collection of the National Centre for Aquatic Animal Health, Cochin University of Science and Technology, as Candida MCCF 101. Over the years, marine yeasts have been gaining increased attention in the animal feed industry due to their nutritional value and immune-boosting properties. Therefore, the present study was undertaken, focusing on the nutritional quality, the optimization of large-scale production, and the evaluation of its protective effect on Koi carp against Aeromonas infection.
Abstract:
A study was undertaken to isolate phytase producers from the environment, to select the most efficient phytase producer, and to develop a bioprocess technology for commercial application. During this process, a potential phytase producer, Bacillus MCCB 242, was isolated and characterized phenotypically and genotypically. Subsequently, phytase production was optimized, the enzyme was purified and characterized, and an appropriate downstream process was standardized. In essence, through this work the environmental isolate Bacillus MCCB 242 could be established as a phytase producer for commercial application. The enzyme production could be optimized and characterized, and an appropriate downstream process standardized. Cytotoxicity studies revealed that the enzyme is safe for feed application, especially in fish.
Abstract:
Post-transcriptional gene silencing by RNA interference is mediated by small interfering RNA called siRNA. This gene silencing mechanism can be exploited therapeutically against a wide variety of disease-associated targets, especially in AIDS, neurodegenerative diseases, cholesterol and cancer in mice, with the hope of extending these approaches to treat humans. Over the recent past, a significant amount of work has been undertaken to understand the gene silencing mediated by exogenous siRNA. The design of efficient exogenous siRNA sequences is challenging because of many issues related to siRNA. While designing efficient siRNA, target mRNAs must be selected such that their corresponding siRNAs are likely to be efficient against that target and unlikely to accidentally silence other transcripts due to sequence similarity. So, before performing gene silencing with siRNAs, it is essential to analyze their off-target effects in addition to their inhibition efficiency against a particular target. Hence, designing exogenous siRNA with good knock-down efficiency and target specificity is an area of concern to be addressed. Some methods have already been developed that consider both the inhibition efficiency and the off-target possibility of siRNA against a gene. Of these methods, only a few have achieved good inhibition efficiency, specificity and sensitivity. The main focus of this thesis is to develop computational methods to optimize the efficiency of siRNA in terms of inhibition capacity and off-target possibility against target mRNAs with improved efficacy, which may be useful in the area of gene silencing and drug design for tumors. This study aims to investigate the currently available siRNA prediction approaches and to devise a better computational approach to tackle the problem of siRNA efficacy in terms of inhibition capacity and off-target possibility. The strengths and limitations of the available approaches are investigated and taken into consideration in devising an improved solution. The approaches proposed in this study extend some of the well-scoring state-of-the-art techniques by incorporating machine learning and statistical approaches, and thermodynamic features such as whole stacking energy, to improve prediction accuracy, inhibition efficiency, sensitivity and specificity. Here, we propose one Support Vector Machine (SVM) model and two Artificial Neural Network (ANN) models for siRNA efficiency prediction. In the SVM model, the classification property is used to classify whether an siRNA is efficient or inefficient in silencing a target gene. The first ANN model, named siRNA Designer, is used for optimizing the inhibition efficiency of siRNA against target genes. The second ANN model, named Optimized siRNA Designer (OpsiD), produces efficient siRNAs with high inhibition efficiency to degrade target genes with improved sensitivity and specificity, and identifies the off-target knockdown possibility of siRNA against non-target genes. The models are trained and tested against a large data set of siRNA sequences. The validations are conducted using the Pearson correlation coefficient, Matthews correlation coefficient, receiver operating characteristic analysis, prediction accuracy, sensitivity and specificity. It is found that the OpsiD approach is capable of predicting the inhibition capacity of siRNA against a target mRNA with improved results over the state-of-the-art techniques. We are also able to understand the influence of whole stacking energy on the efficiency of siRNA.
The model is further improved by including the ability to identify the off-target possibility of a predicted siRNA on non-target genes. Thus the proposed model, OpsiD, can predict optimized siRNA by considering both inhibition efficiency on target genes and off-target possibility on non-target genes, with improved inhibition efficiency, specificity and sensitivity. Since efforts have been taken to optimize siRNA efficacy in terms of inhibition efficiency and off-target possibility, we hope that the risk of off-target effects when performing gene silencing in various bioinformatics applications can be overcome to a great extent. These findings may provide new insights into cancer diagnosis, prognosis and therapy by gene silencing. The approach may be found useful for designing exogenous siRNA for therapeutic applications and gene silencing techniques in different areas of bioinformatics.
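As a bare-bones illustration of the SVM classification step (random sequences, hand-made labels and only three toy features, whereas the thesis uses curated siRNA datasets and richer thermodynamic features such as whole stacking energy), the sketch below encodes each 19-mer siRNA by simple sequence features and trains a classifier to separate efficient from inefficient silencers:

```python
# Bare-bones SVM sketch for siRNA efficiency classification (synthetic sequences
# and labels; the thesis models use real datasets and thermodynamic features).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
BASES = np.array(list("AUGC"))

def features(seq):
    # Simple per-sequence features: GC fraction, A/U at the 5' end, U at position 10
    gc = (seq.count("G") + seq.count("C")) / len(seq)
    return [gc, float(seq[0] in "AU"), float(seq[9] == "U")]

seqs = ["".join(rng.choice(BASES, 19)) for _ in range(400)]
X = np.array([features(s) for s in seqs])
# Synthetic labels loosely following common design rules (moderate GC, A/U 5' end)
y = ((X[:, 0] < 0.55) & (X[:, 1] == 1)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("test accuracy:", round(clf.score(X_te, y_te), 3))
```

A realistic pipeline would replace the synthetic labels with measured knockdown efficiencies and add an off-target screen (e.g. sequence-similarity search against non-target transcripts) on top of the classifier output.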