14 results for Slot-based task-splitting algorithms
at Cochin University of Science
Abstract:
The super-resolution problem is an inverse problem: it refers to the process of producing a high-resolution (HR) image from one or more low-resolution (LR) observations. It involves upsampling the image, thereby increasing the maximum spatial frequency, and removing degradations that arise during image capture, namely aliasing and blurring. The work presented in this thesis is based on learning-based single-image super-resolution. In learning-based super-resolution algorithms, a training set or database of available HR images is used to construct the HR version of an image captured with an LR camera. In the training set, images are stored as patches or as coefficients of feature representations such as the wavelet transform, DCT, etc. Single-frame image super-resolution can be used in applications where a database of HR images is available. The advantage of this method is that, by skilfully creating a database of suitable training images, one can improve the quality of the super-resolved image. A new super-resolution method based on the wavelet transform is developed; it outperforms conventional wavelet-transform-based methods and standard interpolation methods. Super-resolution techniques based on a skewed anisotropic transform called the directionlet transform are developed to convert a small low-resolution image into a large high-resolution image. The super-resolution algorithm not only increases the size but also reduces the degradations that occur during image capture. This method outperforms the standard interpolation methods and the wavelet methods, both visually and in terms of SNR values. Artifacts such as aliasing and ringing effects are also eliminated. The super-resolution methods are implemented using both critically sampled and oversampled directionlets. The conventional directionlet transform is computationally complex; hence the lifting scheme is used for its implementation. The new single-image super-resolution method based on the lifting scheme reduces computational complexity and thereby reduces computation time. The quality of the super-resolved image depends on the type of wavelet basis used, so a study is conducted to find the effect of different wavelets on the single-image super-resolution method. Finally, this new method, implemented on grey-scale images, is extended to colour images and noisy images.
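As an illustrative, hedged sketch (not the thesis's own algorithm), the following shows the classical wavelet zero-padding baseline that learning-based methods are typically compared against: the LR image is treated as the approximation subband of a one-level DWT, the missing detail subbands are set to zero (a learning-based variant would instead predict them from the training database), and the inverse DWT yields an image of roughly twice the size. The `pywt` package and the `'db2'` wavelet are assumptions.

```python
import numpy as np
import pywt  # assumption: PyWavelets is available

def wavelet_zero_padding_sr(lr_image: np.ndarray, wavelet: str = "db2") -> np.ndarray:
    """Upscale an LR image ~2x by treating it as the approximation subband of a
    one-level 2-D DWT and zero-filling the missing detail subbands. A learning-based
    method would instead estimate the detail subbands from a database of HR images."""
    zeros = np.zeros_like(lr_image, dtype=float)
    coeffs = (lr_image.astype(float), (zeros, zeros, zeros))  # (cA, (cH, cV, cD))
    return pywt.idwt2(coeffs, wavelet)

# usage on a hypothetical 64x64 LR patch
lr = np.random.rand(64, 64)
hr = wavelet_zero_padding_sr(lr)   # approximately 2x larger result
```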
Abstract:
Computational Biology is the research area that contributes to the analysis of biological data through the development of algorithms which address significant research problems. The data from molecular biology include DNA, RNA, protein and gene expression data. Gene expression data provide the expression level of genes under different conditions. Gene expression is the process of transcribing the DNA sequence of a gene into mRNA sequences, which in turn are later translated into proteins. The number of copies of mRNA produced is called the expression level of a gene. Gene expression data are organized in the form of a matrix: rows represent genes and columns represent experimental conditions. Experimental conditions can be different tissue types or time points, and the entries in the gene expression matrix are real values. Through the analysis of gene expression data it is possible to determine behavioural patterns of genes such as the similarity of their behaviour, the nature of their interaction, their respective contribution to the same pathways, and so on. Genes participating in the same biological process exhibit similar expression patterns. These patterns have immense relevance and application in bioinformatics and clinical research; in the medical domain they aid more accurate diagnosis, prognosis, treatment planning, drug discovery and protein network analysis. To identify such patterns from gene expression data, data mining techniques are essential. Clustering is an important data mining technique for the analysis of gene expression data. To overcome the problems associated with clustering, biclustering was introduced. Biclustering refers to the simultaneous clustering of both rows and columns of a data matrix: clustering is a global model, whereas biclustering is a local model. Discovering local expression patterns is essential for identifying many genetic pathways that are not apparent otherwise. It is therefore necessary to move beyond the clustering paradigm towards developing approaches capable of discovering local patterns in gene expression data. A bicluster is a submatrix of the gene expression data matrix; its rows and columns need not be contiguous as in the original matrix, and biclusters are not disjoint. Computation of biclusters is costly because all combinations of columns and rows must be considered to find all the biclusters. The search space for the biclustering problem is 2^(m+n), where m and n are the numbers of genes and conditions respectively; usually m+n is more than 3000, and the biclustering problem is NP-hard. Biclustering is a powerful analytical tool for the biologist. The research reported in this thesis addresses the problem of biclustering. Ten algorithms are developed for the identification of coherent biclusters from gene expression data. All these algorithms make use of a measure called the mean squared residue to search for biclusters; the objective is to identify biclusters of maximum size with a mean squared residue lower than a given threshold.
All these algorithms begin the search from tightly co-regulated submatrices called seeds, generated by the K-Means clustering algorithm. The algorithms developed can be classified as constraint-based, greedy and metaheuristic. Constraint-based algorithms use one or more constraints, namely the MSR threshold and the MSR difference threshold. The greedy approach makes a locally optimal choice at each stage with the objective of finding the global optimum. In the metaheuristic approaches, Particle Swarm Optimization (PSO) and variants of the Greedy Randomized Adaptive Search Procedure (GRASP) are used for the identification of biclusters. These algorithms are implemented on the Yeast and Lymphoma datasets. Biologically relevant and statistically significant biclusters, validated against the Gene Ontology database, are identified by all these algorithms, and all are compared with other biclustering algorithms. The algorithms developed in this work overcome some of the problems associated with existing algorithms. With the help of some of these algorithms, biclusters with very high row variance, higher than that reported by any other algorithm using the mean squared residue, are identified from both the Yeast and Lymphoma datasets. Such biclusters, which reflect significant changes in expression level, are highly relevant biologically.
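For reference, a minimal sketch of the mean squared residue (MSR) score used by these algorithms, together with the row variance, computed with NumPy; the variable names and the random seed matrix are illustrative.

```python
import numpy as np

def mean_squared_residue(bicluster: np.ndarray) -> float:
    """MSR of a bicluster (submatrix of the expression matrix):
    H = mean over (i, j) of (a_ij - row_mean_i - col_mean_j + overall_mean)^2.
    Lower values indicate a more coherent bicluster."""
    row_means = bicluster.mean(axis=1, keepdims=True)
    col_means = bicluster.mean(axis=0, keepdims=True)
    overall_mean = bicluster.mean()
    residue = bicluster - row_means - col_means + overall_mean
    return float((residue ** 2).mean())

def row_variance(bicluster: np.ndarray) -> float:
    """Mean row variance; high values flag biclusters with large fluctuations
    in expression level, which are biologically the most interesting."""
    return float(((bicluster - bicluster.mean(axis=1, keepdims=True)) ** 2).mean())

# usage on a hypothetical 5-gene x 4-condition seed
seed = np.random.rand(5, 4)
print(mean_squared_residue(seed), row_variance(seed))
```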
Abstract:
Cancer treatment is most effective when the cancer is detected early, and progress in treatment is closely related to the ability to reduce the proportion of misses in the cancer detection task. The effectiveness of algorithms for detecting cancers can be greatly increased if these algorithms work synergistically with those for characterizing normal mammograms. This research work combines computerized image analysis techniques and neural networks to separate out some fraction of the normal mammograms with extremely high reliability, based on normal tissue identification and removal. The presence of clustered microcalcifications is one of the most important, and sometimes the only, sign of cancer on a mammogram: 60% to 70% of non-palpable breast carcinomas demonstrate microcalcifications on mammograms [44], [45], [46]. Wavelet transform (WT) based techniques are applied to the remaining, potentially abnormal mammograms to detect possible microcalcifications. The goal of this work is to improve the detection performance and throughput of screening mammography, thus providing a 'second opinion' to the radiologists. The state-of-the-art DWT computation algorithms are not suitable for practical applications with memory and delay constraints, because the DWT is not a block transform. Hence, the development of a Block DWT (BDWT) computational structure with a low processing-memory requirement has also been taken up in this work.
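A minimal, hedged sketch of the block-wise DWT idea (the exact BDWT structure developed in the thesis is not reproduced here): the mammogram is processed in independent tiles so that only one block must be buffered at a time. The block size, the `'haar'` wavelet and the detail-energy flagging heuristic are illustrative assumptions.

```python
import numpy as np
import pywt  # assumption: PyWavelets is available

def block_dwt(image: np.ndarray, block: int = 64, wavelet: str = "haar"):
    """Apply a one-level 2-D DWT independently to non-overlapping blocks,
    so only `block x block` pixels are held in memory at a time.
    Yields (row, col, coefficients) for each block."""
    rows, cols = image.shape
    for r in range(0, rows - rows % block, block):
        for c in range(0, cols - cols % block, block):
            tile = image[r:r + block, c:c + block].astype(float)
            cA, details = pywt.dwt2(tile, wavelet)
            yield r, c, (cA, details)

# usage: flag blocks whose detail-subband energy is high
# (a crude stand-in for microcalcification candidates)
mammogram = np.random.rand(512, 512)
for r, c, (cA, details) in block_dwt(mammogram):
    detail_energy = sum(float((d ** 2).sum()) for d in details)
```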
Abstract:
The thesis introduces the octree and addresses the full range of problems encountered while building an imaging system based on octrees. An efficient bottom-up recursive algorithm, and its iterative counterpart, is developed for the raster-to-octree conversion of CAT scan slices. To improve the speed of generating the octree from the slices, the possibility of exploiting the inherent parallelism in the conversion programme is explored in this thesis. The octree node, which stores the volume information for a cube, often stores the average density, and this can lead to a "patchy" distribution of density during image reconstruction. In an attempt to alleviate this problem, the possibility of using vector quantization (VQ) to represent the information contained within a cube is explored. Considering the ease of compressing the information during the generation of octrees from CAT scan slices, the use of wavelet transforms to generate the compressed information in a cube is proposed. The modified algorithm for generating octrees from the slices is shown to accommodate the wavelet compression easily. Rendering the information stored in the form of an octree is a complex task, chiefly because of the requirement to display volumetric information. The rays traced from each cube in the octree sum up the density en route, accounting for the opacities and transparencies produced by variations in density.
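An illustrative, hedged sketch of the bottom-up idea (not the thesis's exact recursive or iterative formulations): the eight octants of a cubic volume are built first, then merged into a single leaf whenever their densities agree within a tolerance. The node layout and the homogeneity test are assumptions.

```python
import numpy as np

class OctNode:
    """Either a leaf holding a single density value, or an internal node
    holding eight children (illustrative layout)."""
    def __init__(self, value=None, children=None):
        self.value = value
        self.children = children

def build_octree(volume: np.ndarray, tol: float = 1e-3) -> OctNode:
    """Bottom-up construction for a cubic volume whose side is a power of two:
    if the whole sub-volume is homogeneous, collapse it into one leaf;
    otherwise recurse into the eight octants."""
    if volume.size == 1 or np.ptp(volume) <= tol:
        return OctNode(value=float(volume.mean()))
    h = volume.shape[0] // 2
    octants = [volume[x:x + h, y:y + h, z:z + h]
               for x in (0, h) for y in (0, h) for z in (0, h)]
    return OctNode(children=[build_octree(o, tol) for o in octants])

# usage on a hypothetical 16^3 CAT-scan sub-volume
tree = build_octree(np.zeros((16, 16, 16)))
print(tree.value)  # 0.0 -> the whole volume collapsed into one leaf
```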
Abstract:
Analog-to-digital converters (ADCs) have an important impact on the overall performance of a signal processing system. This research explores efficient techniques for the design of sigma-delta ADCs, especially for multi-standard wireless transceivers. In particular, the aim is to develop novel models and algorithms to address this problem and to implement software tools which are able to assist the designer's decisions in the system-level exploration phase. To this end, this thesis presents a framework of techniques for designing sigma-delta analog-to-digital converters. A 2-2-2 reconfigurable sigma-delta modulator is proposed which can meet the design specifications of three wireless communication standards, namely GSM, WCDMA and WLAN. A sigma-delta modulator design tool is developed using the Graphical User Interface Development Environment (GUIDE) in MATLAB. A Genetic Algorithm (GA) based search method is introduced to find the optimum values of the scaling coefficients and to maximize the dynamic range of the sigma-delta modulator.
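A minimal, hedged sketch of the GA search idea, written in Python rather than the MATLAB tool described above: candidate scaling-coefficient vectors are evolved against a fitness function standing in for the simulated dynamic range. The `simulated_dynamic_range` placeholder and all GA parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_dynamic_range(coeffs: np.ndarray) -> float:
    """Placeholder fitness: in practice this would run a behavioural simulation
    of the 2-2-2 modulator and return its dynamic range (dB)."""
    return -float(((coeffs - 0.5) ** 2).sum())   # toy surrogate, peak at 0.5

def ga_search(n_coeffs=6, pop=40, gens=100, mut=0.1) -> np.ndarray:
    """Evolve scaling-coefficient vectors in [0, 1] to maximize the fitness."""
    population = rng.random((pop, n_coeffs))
    for _ in range(gens):
        fitness = np.array([simulated_dynamic_range(c) for c in population])
        parents = population[np.argsort(fitness)[::-1][:pop // 2]]        # selection
        kids = (parents + parents[rng.permutation(len(parents))]) / 2     # crossover
        kids += mut * rng.standard_normal(kids.shape)                     # mutation
        population = np.clip(np.vstack([parents, kids]), 0.0, 1.0)
    return population[np.argmax([simulated_dynamic_range(c) for c in population])]

best_coeffs = ga_search()
```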
Abstract:
To ensure the quality of machined products at minimum machining cost and maximum machining effectiveness, it is very important to select optimum parameters when metal cutting machine tools are employed. Traditionally, the experience of the operator plays a major role in the selection of optimum metal cutting conditions; however, attaining optimum values each time is difficult even for a skilled operator. The non-linear nature of the machining process has compelled engineers to search for more effective methods of optimization. The design objective preceding most engineering design activities is simply to minimize the cost of production or to maximize production efficiency. The main aim of the research work reported here is to build robust optimization algorithms by exploiting ideas that nature has to offer and using them to solve real-world optimization problems in manufacturing processes. In this thesis, after conducting an exhaustive literature review, several optimization techniques used in various manufacturing processes were identified. The selection of optimal cutting parameters, such as depth of cut, feed and speed, is a very important issue for every machining process. Experiments were designed using the Taguchi technique, and dry turning of SS420 was performed on a Kirloskar Turn Master 35 lathe. Analyses using S/N ratios and ANOVA were performed to find the optimum level and the percentage contribution of each parameter; the optimum machining parameters were obtained from the experiments using S/N analysis. Optimization algorithms begin with one or more design solutions supplied by the user and then iteratively evaluate new design solutions within the search space in order to reach the true optimum. A mathematical model for surface roughness was developed using response surface analysis, and the model was validated against published results from the literature. Optimization methodologies such as Simulated Annealing (SA), Particle Swarm Optimization (PSO), the Conventional Genetic Algorithm (CGA) and an Improved Genetic Algorithm (IGA) are applied to optimize the machining parameters for dry turning of SS420 material. All these algorithms were tested for efficiency, robustness and accuracy, and it was observed how they often outperform conventional optimization methods applied to difficult real-world problems. The SA, PSO, CGA and IGA codes were developed using MATLAB. For each evolutionary algorithmic method, optimum cutting conditions are provided to achieve a better surface finish. The computational results using SA clearly demonstrated that the proposed solution procedure is capable of solving such complicated problems effectively and efficiently. Particle Swarm Optimization is a relatively recent heuristic search method whose mechanics are inspired by the swarming or collaborative behaviour of biological populations; the results show that PSO provides better results and is also more computationally efficient. Based on the results obtained using CGA and IGA for the optimization of the machining process, the proposed IGA provides better results than the conventional GA. The improved genetic algorithm, incorporating a stochastic crossover technique and an artificial initial population scheme, was developed to provide a faster search mechanism.
Finally, a comparison among these algorithms was made for the specific example of dry turning of SS420 material, arriving at the optimum machining parameters of feed, cutting speed, depth of cut and tool nose radius with minimum surface roughness as the criterion. To summarize, the research work fills in conspicuous gaps between research prototypes and industry requirements by simulating evolutionary procedures seen in nature that optimize its own systems.
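A hedged, minimal sketch of how simulated annealing can minimize surface roughness over cutting speed, feed and depth of cut; the quadratic response-surface-style model `roughness` below is a placeholder, not the model fitted in the thesis, and the parameter bounds and cooling schedule are assumptions.

```python
import math
import random

random.seed(1)

# assumed bounds: cutting speed v (m/min), feed f (mm/rev), depth of cut d (mm)
BOUNDS = {"v": (50.0, 200.0), "f": (0.05, 0.30), "d": (0.5, 2.0)}

def roughness(v, f, d):
    """Hypothetical response-surface model of surface roughness Ra;
    the real model would be fitted from the Taguchi-designed experiments."""
    return 2.0 + 0.004 * v + 12.0 * f + 0.3 * d - 0.02 * v * f + 8.0 * f * f

def neighbour(x):
    """Perturb one randomly chosen parameter within its bounds."""
    k = random.choice(list(BOUNDS))
    lo, hi = BOUNDS[k]
    y = dict(x)
    y[k] = min(hi, max(lo, x[k] + random.gauss(0, 0.05 * (hi - lo))))
    return y

def simulated_annealing(iters=5000, t0=1.0, alpha=0.999):
    x = {k: random.uniform(*b) for k, b in BOUNDS.items()}
    best, t = dict(x), t0
    for _ in range(iters):
        cand = neighbour(x)
        delta = roughness(**cand) - roughness(**x)
        if delta < 0 or random.random() < math.exp(-delta / t):  # Metropolis rule
            x = cand
            if roughness(**x) < roughness(**best):
                best = dict(x)
        t *= alpha                                                # geometric cooling
    return best, roughness(**best)

print(simulated_annealing())
```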
Abstract:
The theme of the thesis is centred around one important aspect of wireless sensor networks: energy efficiency. The limited energy source of the sensor nodes calls for the design of energy-efficient routing protocols, and protocol design schemes should try to minimize the number of communications among the nodes to save energy. Cluster-based techniques have been found to be energy efficient: clusters are formed, data from the nodes of each cluster are collected by its cluster head, and the data are then forwarded to the base station. An appropriate cluster head selection process and a desirable distribution of the clusters can reduce the energy consumption of the network and prolong the network lifetime. In this work, two such schemes were developed for static wireless sensor networks. The first scheme addresses the energy wastage due to cluster rebuilding involving all the nodes. A tree-based scheme is presented to alleviate this problem by rebuilding only sub-clusters of the network. An analytical model of the energy consumption of the proposed scheme is developed, and the scheme is compared with an existing cluster-based scheme; the simulation study demonstrates the energy savings achieved. The second scheme concentrates on building load-balanced, energy-efficient clusters to prolong the lifetime of the network. A voting-based approach that utilises neighbour-node information in the cluster head selection process is proposed. The number of nodes joining a cluster is restricted to obtain equal-sized, optimum clusters, and multi-hop communication among the cluster heads is introduced to reduce the energy consumption. The simulation study shows that the scheme results in balanced clusters and that the network achieves a reduction in energy consumption. The main conclusion from the study is that a routing scheme should pay attention to successful data delivery from node to base station in addition to energy efficiency. Cluster-based protocols have been extended from static to mobile scenarios by various authors, but none of the proposals addresses cluster head election appropriately in view of mobility. An elegant scheme for electing cluster heads is presented to meet the challenge of maintaining cluster durability when all the nodes in the network are moving; the scheme has been simulated and compared with a similar approach. The proliferation of sensor networks provides users with large sets of sensor information to utilise in various applications. Sensor network programming is inherently difficult for various reasons, and there must be an elegant way to collect the data gathered by sensor networks without worrying about the underlying structure of the network. The final work presented addresses a way to collect data from a sensor network and present it to users in a flexible way. A service-oriented-architecture-based application is built, and the data collection task is presented as a web service; this enables the composition of sensor data from different sensor networks to build interesting applications. The main objective of the thesis was to design energy-efficient routing schemes for both static and mobile sensor networks, and a progressive approach was followed to achieve this goal.
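A hedged sketch of the voting-based, size-capped cluster head selection idea described above (not the thesis's exact protocol): every node votes for its highest-energy neighbour, the most-voted nodes become heads, and members join a head only while it has capacity, which keeps clusters balanced. The node records, cluster-size cap and tie-breaking rule are assumptions.

```python
import math
import random

random.seed(2)

# hypothetical node records: residual energy and neighbour ids
nodes = {i: {"energy": random.uniform(0.2, 1.0),
             "neighbours": random.sample([j for j in range(20) if j != i], 4)}
         for i in range(20)}

MAX_CLUSTER_SIZE = 6  # assumed cap used to keep clusters equally sized

def elect_cluster_heads(nodes):
    """Voting-based election: each node votes for its highest-energy neighbour;
    the most-voted (then highest-energy) nodes become cluster heads, and members
    join the first head that still has capacity."""
    votes = {i: 0 for i in nodes}
    for rec in nodes.values():
        votes[max(rec["neighbours"], key=lambda j: nodes[j]["energy"])] += 1
    ranked = sorted(nodes, key=lambda i: (votes[i], nodes[i]["energy"]), reverse=True)
    heads = ranked[:math.ceil(len(nodes) / MAX_CLUSTER_SIZE)]
    members = {h: [] for h in heads}
    for i in nodes:
        if i in heads:
            continue
        for h in heads:                      # join the first head with capacity
            if len(members[h]) < MAX_CLUSTER_SIZE:
                members[h].append(i)
                break
    return heads, members

heads, clusters = elect_cluster_heads(nodes)
```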
Abstract:
One major component of power system operation is generation scheduling. The objective of this work is to develop efficient control strategies for power scheduling problems through Reinforcement Learning approaches. The three important active power scheduling problems are Unit Commitment, Economic Dispatch and Automatic Generation Control. Numerical solution methods proposed for power scheduling are insufficient for handling large and complex systems. Soft computing methods like Simulated Annealing, Evolutionary Programming, etc., are efficient in handling complex cost functions but are limited in handling the stochastic data existing in a practical system; also, the learning steps have to be repeated for each load demand, which increases the computation time. Reinforcement Learning (RL) is a method of learning through interactions with an environment. The main advantage of this approach is that it does not require a precise mathematical formulation: it can learn either by interacting with the environment or by interacting with a simulation model. Several optimization and control problems have been solved through Reinforcement Learning approaches, but applications of Reinforcement Learning in the field of power systems have been few. The objective here is to introduce and extend Reinforcement Learning approaches for the active power scheduling problems in an implementable manner. The main objectives can be enumerated as: (i) evolve Reinforcement Learning based solutions to the Unit Commitment problem; (ii) find suitable solution strategies through the Reinforcement Learning approach for Economic Dispatch; (iii) extend the Reinforcement Learning solution to Automatic Generation Control with a different perspective; (iv) check the suitability of the scheduling solutions for one of the existing power systems. The first part of the thesis is concerned with the Reinforcement Learning approach to the Unit Commitment problem. The Unit Commitment problem is formulated as a multi-stage decision process, and a Q-learning solution is developed to obtain the optimum commitment schedule. A method of state aggregation is used to formulate an efficient solution considering the minimum up-time / down-time constraints. The performance of the algorithms is evaluated for different systems and compared with other stochastic methods like Genetic Algorithms. The second stage of the work is concerned with solving the Economic Dispatch problem. A simple and straightforward decision-making strategy is first proposed in the Learning Automata algorithm. Then, to solve the scheduling task for systems with a large number of generating units, the problem is formulated as a multi-stage decision-making task, and the solution obtained is extended to incorporate the transmission losses in the system. To make the Reinforcement Learning solution more efficient and to handle continuous state spaces, a function approximation strategy is proposed. The performance of the developed algorithms is tested on several standard test cases, and the proposed method is compared with other recent methods like the Partition Approach Algorithm, Simulated Annealing, etc. As the final step in implementing the active power control loops in a power system, Automatic Generation Control is also taken into consideration. Reinforcement Learning has already been applied to the Automatic Generation Control loop; the RL solution is extended here to adopt the approach of a common frequency for all the interconnected areas, which is closer to practical systems.
The performance of the RL controller is also compared with that of the conventional integral controller. In order to prove the suitability of the proposed methods for practical systems, the second plant of the Neyveli Thermal Power Station (NTPS II) is taken as a case study. The performance of the Reinforcement Learning solution is found to be better than that of the other existing methods, which is a promising step towards RL-based control schemes for the practical power industry. Reinforcement Learning is applied to solve the scheduling problems in the power industry and is found to give satisfactory performance. The proposed solution provides scope for greater profit, as the economic schedule is obtained instantaneously. Since the Reinforcement Learning method can take in the stochastic cost data obtained from time to time from a plant, it provides an implementable method. As a further step, with suitable methods to interface with on-line data, economic scheduling can be achieved instantaneously in a generation control centre. Power scheduling of systems with different sources, such as hydro, thermal, etc., can also be investigated and Reinforcement Learning solutions achieved.
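A minimal, hedged sketch of a tabular Q-learning update for unit commitment cast as a multi-stage decision process: the state is the stage (hour) plus the current on/off status vector, the action is the next commitment vector, and the stage cost is fuel plus start-up cost (with a penalty when demand is not met). The unit data, cost function and learning parameters below are illustrative assumptions, not the thesis's formulation.

```python
import random
from collections import defaultdict

random.seed(3)

N_UNITS, N_STAGES = 3, 4
DEMAND = [150, 220, 300, 180]          # hypothetical hourly load (MW)
CAPACITY = [100, 150, 200]             # hypothetical unit capacities (MW)
RUN_COST = [20.0, 28.0, 35.0]          # cost per committed MW (illustrative)
START_COST = [50.0, 80.0, 120.0]

ACTIONS = [tuple(int(b) for b in format(a, f"0{N_UNITS}b")) for a in range(2 ** N_UNITS)]

def stage_cost(prev, status, t):
    cap = sum(c for c, on in zip(CAPACITY, status) if on)
    if cap < DEMAND[t]:
        return 1e4                                     # penalty: demand not met
    fuel = sum(r * c for r, c, on in zip(RUN_COST, CAPACITY, status) if on)
    startup = sum(s for s, p, q in zip(START_COST, prev, status) if q and not p)
    return fuel + startup

Q = defaultdict(float)
alpha, gamma, eps = 0.2, 1.0, 0.2

for episode in range(5000):
    status = (1, 0, 0)                                 # assumed initial commitment
    for t in range(N_STAGES):
        state = (t, status)
        action = (random.choice(ACTIONS) if random.random() < eps
                  else min(ACTIONS, key=lambda a: Q[(state, a)]))
        cost = stage_cost(status, action, t)
        future = 0.0 if t == N_STAGES - 1 else min(Q[((t + 1, action), a)] for a in ACTIONS)
        # cost-minimizing Q-learning update (min over next-stage actions)
        Q[(state, action)] += alpha * (cost + gamma * future - Q[(state, action)])
        status = action
```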
Abstract:
Demand for magnesium and its alloys has increased significantly in the automotive industry because of their great potential for reducing the weight of components, thus improving the fuel efficiency of the vehicle. To date, most Mg products have been fabricated by casting, especially die-casting, because of its high productivity, suitable strength and acceptable quality and dimensional accuracy; components produced through sand, gravity and low-pressure die casting form only a small fraction. In fact, a high solidification rate is possible only in high-pressure die casting, which results in a finer grain size. Achieving a high cooling rate in gravity casting using sand and permanent moulds is difficult; the result is a coarser grain structure and poor mechanical properties, which are an important aspect of performance in industrial applications. Grain refinement is technologically attractive because it generally does not adversely affect ductility and toughness, contrary to most other strengthening methods. Therefore, the formation of a fine grain structure in these castings is crucial in order to improve the mechanical properties of the cast components. The present investigation is therefore titled "GRAIN REFINEMENT STUDIES ON Mg AND Mg-Al BASED ALLOYS". The primary objective of this investigation is to study the effect of various grain-refining inoculants (Al-4B and Al-5TiB2 master alloys, Al4C3, and charcoal particles) on pure Mg and on Mg-Al alloys such as AZ31 and AZ91, and to study their grain-refining mechanisms. The second objective is to study the effect of the superheating process on the grain size of AZ31 and AZ91 Mg alloys with and without inoculant addition, and, in addition, to study the effect of grain refinement on the mechanical properties of Mg and Mg-Al alloys. The thesis is organized into seven chapters, and the details of the studies are given below.
Abstract:
Clustering schemes improve the energy efficiency of wireless sensor networks. The inclusion of mobility as a new criterion for cluster creation and maintenance adds new challenges for these clustering schemes. In most algorithms, cluster formation and cluster head selection are done on a stochastic basis. In this paper we introduce a cluster formation and routing algorithm based on a mobility factor. The proposed algorithm is compared with the LEACH-M protocol on metrics such as the number of cluster-head transitions, average residual energy, number of alive nodes and number of messages lost.
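A hedged sketch of the mobility-factor idea: each node derives a factor from its recent displacement history, and cluster-head candidates are scored so that slow-moving, energy-rich nodes are preferred. The score weights and node records below are illustrative assumptions, not the paper's exact formulation.

```python
import math

def mobility_factor(trace):
    """Average displacement per interval from a node's recent position history
    [(x0, y0), (x1, y1), ...]; lower means the node is more stable."""
    if len(trace) < 2:
        return 0.0
    steps = [math.dist(a, b) for a, b in zip(trace, trace[1:])]
    return sum(steps) / len(steps)

def head_score(node, w_energy=0.6, w_mobility=0.4):
    """Higher score -> better cluster-head candidate (weights are assumptions)."""
    return w_energy * node["energy"] - w_mobility * mobility_factor(node["trace"])

nodes = [
    {"id": 1, "energy": 0.9, "trace": [(0, 0), (0.1, 0.0), (0.2, 0.1)]},  # stable
    {"id": 2, "energy": 0.7, "trace": [(5, 5), (8, 9), (12, 14)]},        # fast mover
]
cluster_head = max(nodes, key=head_score)   # node 1 wins: high energy, low mobility
```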
Abstract:
Decision trees are very powerful tools for classification in data mining tasks that involves different types of attributes. When coming to handling numeric data sets, usually they are converted first to categorical types and then classified using information gain concepts. Information gain is a very popular and useful concept which tells you, whether any benefit occurs after splitting with a given attribute as far as information content is concerned. But this process is computationally intensive for large data sets. Also popular decision tree algorithms like ID3 cannot handle numeric data sets. This paper proposes statistical variance as an alternative to information gain as well as statistical mean to split attributes in completely numerical data sets. The new algorithm has been proved to be competent with respect to its information gain counterpart C4.5 and competent with many existing decision tree algorithms against the standard UCI benchmarking datasets using the ANOVA test in statistics. The specific advantages of this proposed new algorithm are that it avoids the computational overhead of information gain computation for large data sets with many attributes, as well as it avoids the conversion to categorical data from huge numeric data sets which also is a time consuming task. So as a summary, huge numeric datasets can be directly submitted to this algorithm without any attribute mappings or information gain computations. It also blends the two closely related fields statistics and data mining
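A hedged sketch of the split criterion as described: the attribute mean is used as the split point and the reduction in target variance replaces information gain. The variable names and toy data are illustrative; this is not the authors' exact implementation.

```python
import numpy as np

def variance_reduction(values: np.ndarray, target: np.ndarray):
    """Split a numeric attribute at its mean and return (split_point, reduction
    in target variance). No sorting or entropy computation is required, which
    is the claimed advantage for large numeric data sets."""
    split = values.mean()
    left, right = target[values <= split], target[values > split]
    if len(left) == 0 or len(right) == 0:
        return split, 0.0
    weighted = (len(left) * left.var() + len(right) * right.var()) / len(target)
    return split, float(target.var() - weighted)

def best_attribute(X: np.ndarray, y: np.ndarray) -> int:
    """Pick the column whose mean-split yields the largest variance reduction."""
    gains = [variance_reduction(X[:, j], y)[1] for j in range(X.shape[1])]
    return int(np.argmax(gains))

# toy numeric data set: 6 samples, 2 attributes, numeric class labels
X = np.array([[1.0, 10.0], [1.2, 11.0], [0.9, 9.5], [5.0, 10.2], [5.5, 9.8], [4.8, 10.1]])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
print(best_attribute(X, y))  # -> 0 (the first attribute separates the classes)
```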
Abstract:
Fingerprint-based authentication systems are among the most cost-effective biometric authentication techniques employed for personal identification. As the database population increases, fast identification/recognition algorithms with high accuracy are required. Accuracy can be increased using multimodal evidence collected from multiple biometric traits. In this work, consecutive fingerprint images are taken, global singularities are located using the directional field strength, and their local orientation vector is formulated with respect to the base line of the finger. Feature-level fusion is carried out and a 32-element feature template is obtained. A matching score is formulated for identification, and 100% accuracy was obtained for a database of 300 persons. The polygonal feature vector helps to reduce the size of the feature database from the present 70-100 minutiae features to just 32 features, and a lower matching threshold can be fixed compared to single-finger-based identification.
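A hedged sketch of feature-level fusion and matching on fixed-length templates: two fingers' feature vectors are concatenated into a 32-element template and a simple distance-based score is compared against a threshold. The template layout, score definition and threshold are illustrative assumptions, not the thesis's formulation.

```python
import numpy as np

def fuse_templates(finger1: np.ndarray, finger2: np.ndarray) -> np.ndarray:
    """Feature-level fusion: concatenate two 16-element orientation/singularity
    feature vectors into a single 32-element template (layout assumed)."""
    return np.concatenate([finger1, finger2])

def matching_score(enrolled: np.ndarray, probe: np.ndarray) -> float:
    """Similarity in [0, 1] derived from a normalized Euclidean distance."""
    d = np.linalg.norm(enrolled - probe) / (np.linalg.norm(enrolled) + 1e-9)
    return float(max(0.0, 1.0 - d))

THRESHOLD = 0.9  # assumed acceptance threshold
enrolled = fuse_templates(np.random.rand(16), np.random.rand(16))
probe = enrolled + np.random.normal(0, 0.01, 32)   # same person, slight noise
print(matching_score(enrolled, probe) >= THRESHOLD)
```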
Abstract:
The main objective of this thesis is to design and develop spectral-signature-based chipless RFID tags. Multiresonators are an essential component of spectral-signature-based chipless tags. Enhancing the data coding capacity in such tags requires a large number of resonances in a limited bandwidth, so the frequencies of the resonators have to be close to each other; to achieve this, the quality factor of each resonance needs to be high. The thesis discusses various types of multiresonators, their practical implementation and how they can be used in the design. Encoding data into the spectral domain is another challenge in chipless tag design. Here, the technique used is presence or absence encoding: the presence of a resonance encodes Logic 1 and the absence of a specific resonance encodes Logic 0. Different types of multiresonators, such as open stub multiresonators, coupled bunch hairpin resonators and a shorted slot ground ring resonator, are proposed in this thesis.
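A hedged sketch of presence/absence encoding: each bit position is mapped to a fixed resonant frequency, a 1 is encoded by including that resonator and a 0 by omitting it. The frequency plan and tolerance below are illustrative assumptions.

```python
# assumed frequency plan: 8 bit positions mapped to resonances in GHz
FREQ_PLAN = [2.0 + 0.25 * i for i in range(8)]   # 2.00, 2.25, ..., 3.75 GHz

def encode(bits: str) -> list:
    """Return the resonant frequencies that must be present on the tag
    (Logic 1 -> resonance present, Logic 0 -> resonance absent)."""
    return [f for bit, f in zip(bits, FREQ_PLAN) if bit == "1"]

def decode(detected: list, tol: float = 0.05) -> str:
    """Recover the bit string from the resonance dips detected in the spectrum."""
    return "".join("1" if any(abs(f - d) <= tol for d in detected) else "0"
                   for f in FREQ_PLAN)

tag_resonances = encode("10110010")
print(decode(tag_resonances))   # -> "10110010"
```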
Abstract:
From the early stages of the twentieth century, polyaniline (PANI), a well-known and extensively studied conducting polymer, has captured the attention of the scientific community owing to its interesting electrical and optical properties. From its structural properties to the currently pursued optical, electrical and electrochemical properties, extensive investigations on pure PANI and its composites are still highly relevant to explore its potential to the maximum extent. The synthesis of highly crystalline PANI films with ordered structure and high electrical conductivity has not yet been pursued in depth. Recently, nanostructured PANI and nanocomposites of PANI have attracted a great deal of research attention owing to possible applications in optical switching devices, optoelectronics and energy storage devices. The work presented in the thesis is centred around the realization of highly conducting and structurally ordered PANI and its composites for applications mainly in the areas of nonlinear optics and electrochemical energy storage. Out of the vast variety of application fields of PANI, these two areas were specifically selected for the present studies because of the following observations. The non-linear optical properties and the energy-storing properties of PANI depend quite sensitively on the extent of conjugation of the polymer structure, the type and concentration of the dopants added, and the type and size of the nanoparticles selected for making the nanocomposites. The first phase of the work is devoted to the synthesis of highly ordered and conducting films of PANI doped with various dopants and their structural, morphological and electrical characterization, followed by the synthesis of metal-nanoparticle-incorporated PANI samples and their detailed optical characterization in the linear and nonlinear regimes. The second phase of the work comprises investigations on the prospects of PANI for realizing polymer-based rechargeable lithium-ion cells with the inherent structural flexibility of polymer systems and environmental safety and stability. Secondary battery systems have become an inevitable part of daily life: they can be found in most portable electronic gadgets, and recently they have started powering automobiles, although the power generated is low. The efficient storage of electrical energy generated from solar cells is achieved by using suitable secondary battery systems. The development of rechargeable battery systems having excellent charge storage capacity, cyclability, environmental friendliness and flexibility has yet to be realized in practice. Rechargeable Li-ion cells employing cathode active materials like LiCoO2, LiMn2O4 and LiFePO4 have remarkable charge storage capacity with the least charge leakage when not in use. However, material toxicity, the chance of cell explosion and the lack of an effective cell recycling mechanism pose significant risk factors which have to be addressed seriously. These cells also lack flexibility in their design due to the structural characteristics of the electrode materials. Global research is directed towards identifying a new class of electrode materials with fewer risk factors and better structural stability and flexibility. Polymer-based electrode materials, with their inherent flexibility, stability and eco-friendliness, can be a suitable choice. One of the prime drawbacks of polymer-based cathode materials is their low electronic conductivity.
Hence the real task with this class of materials is to obtain better electronic conductivity together with good electrical storage capability. Electronic conductivity can be enhanced by using proper dopants. In designing rechargeable Li-ion cells with polymer-based cathode active materials, the key issue is to identify the optimum lithiation of the polymer cathode which can ensure the highest electronic conductivity and specific charge capacity possible. The development of conducting-polymer-based rechargeable Li-ion cells with high specific capacity and excellent cycling characteristics is a highly competitive area among research and development groups worldwide. Polymer-based rechargeable batteries are specifically attractive due to their environmentally benign nature and the possible constructional flexibility they offer. Among polymers having electrical transport properties suitable for rechargeable battery applications, polyaniline is the most favoured one due to its tunable electrical conducting properties and the availability of cost-effective precursor materials for its synthesis. The performance of a battery depends significantly on the characteristics of its integral parts, the cathode, anode and electrolyte, which in turn depend on the materials used. Many research groups are involved in developing new electrode and electrolyte materials to enhance the overall performance efficiency of the battery. Currently explored electrolytes for Li-ion battery applications are in liquid or gel form, which makes well-defined sealing essential. The use of solid electrolytes eliminates the need for containment of liquid electrolytes, which will certainly simplify the cell design and improve safety and durability. Other advantages of polymer electrolytes include dimensional stability, safety and the ability to prevent lithium dendrite formation. One of the ultimate aims of the present work is to realize all-solid-state, flexible and environment-friendly Li-ion cells with high specific capacity and excellent cycling stability. Part of the present work is hence focused on identifying good polymer-based solid electrolytes essential for realizing all-solid-state polymer-based Li-ion cells. The present work is an attempt to study the versatile roles of polyaniline in two different fields of technological application: nonlinear optics and energy storage. Conducting forms of doped PANI films with a good extent of crystallinity have been realized using a level-surface-assisted casting method in addition to the generally employed technique of spin coating. Metal-nanoparticle-embedded PANI offers a rich source for nonlinear optical studies, and hence gold and silver nanoparticles have been used for making the nanocomposites in bulk and thin-film forms. These PANI nanocomposites are found to exhibit quite dominant third-order optical non-linearity. The highlight of these studies is the observation of the interesting phenomenon of switching between saturable absorption (SA) and reverse saturable absorption (RSA) in the films of Ag/PANI and Au/PANI nanocomposites, which offers prospects for applications in optical switching. The investigations on the energy storage prospects of PANI were carried out on Li-enriched PANI, which was used as the cathode active material for assembling rechargeable Li-ion cells. For Li enrichment or Li doping of PANI, n-Butyllithium (n-BuLi) in hexanes was used. The Li doping as well as the Li-ion cell assembly were carried out in an argon-filled glove box.
Coin cells were assembled with Li-doped PANI of different doping concentrations as the cathode, LiPF6 as the electrolyte and Li metal as the anode. These coin cells are found to show a reasonably good specific capacity of around 22 mAh/g, excellent cycling stability and a coulombic efficiency of around 99%. To improve the specific capacity, composites of Li-doped PANI with inorganic cathode active materials like LiFePO4 and LiMn2O4 were synthesized, and coin cells were assembled as mentioned earlier to assess their electrochemical capability. The cells assembled using the composite cathodes are found to show a significant enhancement in specific capacity, to around 40 mAh/g. Another interesting observation is the complete blocking of the adverse effects of the Jahn-Teller distortion when the composite cathode PANI-LiMn2O4 is used for assembling the Li-ion cells. This distortion is generally observed near room temperature when LiMn2O4 is used as the cathode, and it significantly reduces the cycling stability of the cells.