946 results for Particle Swarm Optimization
Abstract:
A two-stage process, consisting of precursor preparation by thermal evaporation followed by chalcogenisation in the required atmosphere, was found to be a feasible technique for PV materials such as n-β-In2S3, p-CuInSe2, p-CuInS2 and p-CuIn(Se1-xSx)2. Growth parameters such as the chalcogenisation temperature and duration were optimised in the present study. Single-phase β-In2S3 thin films can be obtained by sulfurising indium films above 300°C for 45 minutes. Lower sulfurisation temperatures required prolonged annealing after sulfurisation to obtain single-phase β-In2S3, which resulted in high material loss. A maximum band gap of 2.58 eV was obtained for the nearly stoichiometric β-In2S3 film sulfurised at 350°C. This wider-band-gap, n-type β-In2S3 can be used as an alternative to toxic CdS as the window layer in photovoltaics. A systematic study of the structural, optical and electrical properties of CuInSe2 films, varying process parameters such as the selenization duration and temperature, led to the conclusion that for the growth of single-phase CuInSe2 the optimum selenization temperature is 350°C and the optimum duration is 3 hours. The presence of some binary phases in films selenized for shorter periods or at lower temperatures may be due to incomplete reaction and indium loss. An optical band gap energy of 1.05 eV was obtained for films grown under the optimum conditions. To obtain a closer match to the solar spectrum, it is desirable to increase the band gap of CuInSe2 by a few hundred meV. Further work was carried out to produce graded-band-gap CuIn(Se,S)2 absorber films by incorporating sulfur into CuInSe2. It was observed that when CuInSe2 prepared by the two-stage process was post-annealed in a sulfur atmosphere, the sulfur may occupy interstitial positions or form a CuInS2 phase alongside the CuInSe2 phase.
Sulfur treatment during the selenization of Cu11In9 precursors resulted in CuIn(Se,S)2 thin films, for which a band gap of 1.38 eV was obtained. The optimised thin films n-β-In2S3, p-CuInSe2 and p-CuIn(Se1-xSx)2 can be used for the fabrication of polycrystalline solar cells.
Abstract:
Controlling inorganic nitrogen by manipulating the carbon/nitrogen ratio is a method gaining importance in aquaculture systems. Nitrogen control is induced by feeding bacteria with carbohydrates, with the subsequent uptake of nitrogen from the water for the synthesis of microbial protein. The relationship between carbohydrate addition, ammonium reduction and microbial protein production depends on the microbial conversion coefficient, and the carbon/nitrogen ratio of the microbial biomass is related to the carbon content of the added material. The addition of carbonaceous substrate was found to reduce inorganic nitrogen in shrimp culture ponds, and the resulting microbial protein is taken up by the shrimp. Thus part of the feed protein is replaced and feeding costs are reduced in culture systems. The use of various locally available substrates for periphyton-based aquaculture increases production and profitability. However, these techniques have not so far been evaluated for extensive shrimp farming, nor has an evaluation been carried out of artificial substrates combined with a carbohydrate-source-based farming system for reducing inorganic nitrogen in culture systems. Furthermore, the variations in water and soil quality, periphyton production and shrimp production of the whole system have also not been determined. This thesis starts with a general introduction and a brief review of the most relevant literature, presents the results of various experiments, and concludes with a summary (Chapter 9). The chapters are organised according to the objectives of the study: to improve the sustainability of shrimp farming through carbohydrate addition and periphyton-substrate-based shrimp production, and to improve nutrient utilisation in aquaculture systems.
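The mechanism described above implies a simple dosing calculation: the carbohydrate needed to immobilise a given amount of ammonium-N follows from the carbon content of the substrate, the microbial conversion coefficient, and the C/N ratio of the microbial biomass. The sketch below illustrates this arithmetic; all parameter values are assumptions chosen for illustration, not figures from the thesis.

```python
# Hypothetical illustration of the carbohydrate dose needed to immobilise
# ammonium-N as microbial protein. Parameter values are assumptions for
# this sketch, not measurements from the thesis.

CARBON_FRACTION = 0.50   # assumed C content of the added carbohydrate (g C per g)
CONVERSION_EFF = 0.40    # assumed microbial conversion coefficient (C assimilated / C added)
MICROBIAL_CN = 4.0       # assumed C/N ratio of the microbial biomass

def carbohydrate_needed(ammonium_n_g: float) -> float:
    """Grams of carbohydrate needed to assimilate the given grams of ammonium-N."""
    n_assimilated_per_g_carb = CARBON_FRACTION * CONVERSION_EFF / MICROBIAL_CN
    return ammonium_n_g / n_assimilated_per_g_carb

print(carbohydrate_needed(1.0))  # grams of carbohydrate per gram of ammonium-N
```

Under these assumed parameters, roughly 20 g of carbohydrate immobilises 1 g of ammonium-N; the real coefficient depends on the substrate and the microbial community.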
Abstract:
The proliferation of wireless sensor networks across a large spectrum of applications has been spurred by rapid advances in MEMS (micro-electro-mechanical systems) based sensor technology, coupled with low-power, low-cost digital signal processors and radio-frequency circuits. A sensor network is composed of thousands of low-cost, portable devices with substantial sensing, computing and wireless communication capabilities. This large collection of tiny sensors can form a robust distributed data-computing and communication system for automated information gathering and distributed sensing. Its main attraction is that such a network can be deployed in remote areas. Since each sensor node is battery powered, all the nodes should collaborate to form a fault-tolerant network that makes efficient use of precious network resources such as the wireless channel, memory and battery capacity. The most crucial constraint is energy consumption, which has become the prime challenge in the design of long-lived sensor nodes.
Abstract:
Faculty of Marine Sciences, Cochin University of Science and Technology
Abstract:
Embedded systems are usually designed for a single task or a specified set of tasks. This specificity means that the system design, as well as its hardware/software development, can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints and rapid development. This necessitates the adoption of static machine-code analysis tools, running on a host machine, for the validation and optimization of embedded system code; such tools can help meet all of these goals. This can significantly augment software quality and is still a challenging field.
This dissertation contributes an architecture-oriented code validation, error localization and optimization technique that assists the embedded system designer in software debugging, making it more effective at early detection of software bugs that are otherwise hard to detect, using static analysis of machine code. The focus of this work is to develop methods that automatically localize faults and optimize the code, thus improving both the debugging process and the quality of the code. Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae, and their compliance is tested individually in all possible execution paths of the application program. An incorrect sequence of machine-code patterns is identified using slicing techniques on the control flow graph generated from the machine code. An algorithm is proposed to assist the compiler in eliminating redundant bank-switching code and deciding on the optimum data allocation to banked memory, resulting in the minimum number of bank-switching instructions in embedded system software.
A relation matrix and a state transition diagram, formed for the active-memory-bank state transitions corresponding to each bank-selection instruction, are used for the detection of redundant code. Instances of code redundancy are identified based on the stipulated rules for the target processor. This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of compiler and assembler, and applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly by machine-code patterns, which drastically reduces state-space creation, contributing to improved model checking. Though the technique described is general, the implementation is architecture oriented, and hence the feasibility study was conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices towards correct use of difficult microcontroller features when developing embedded systems.
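The core of redundant bank-switch detection can be pictured as tracking the active-bank state across a code sequence: a bank-select instruction whose target equals the already-active bank is a self-loop in the state transition diagram and can be removed. The sketch below is a minimal illustration of that idea, not the dissertation's implementation; the instruction names and the single straight-line (branch-free) sequence are hypothetical simplifications of PIC-style banking.

```python
# Minimal sketch: detect redundant bank-select instructions in a
# straight-line machine-code sequence by tracking the active memory bank.
# Mnemonics and the entry-bank assumption are illustrative only.

def find_redundant_bank_selects(program):
    """Return indices of bank-select instructions that re-select the active bank."""
    active_bank = 0          # assume bank 0 is active at program entry
    redundant = []
    for i, (op, arg) in enumerate(program):
        if op == "BANKSEL":
            if arg == active_bank:
                redundant.append(i)   # self-loop in the bank state diagram: removable
            active_bank = arg
    return redundant

code = [
    ("BANKSEL", 1), ("MOVWF", "TRISB"),
    ("BANKSEL", 1),                     # redundant: bank 1 is already active
    ("MOVWF", "TRISA"),
    ("BANKSEL", 0), ("MOVWF", "PORTB"),
]
print(find_redundant_bank_selects(code))  # [2]
```

A full tool must of course merge bank states across branches and calls (hence the control flow graph and relation matrix in the dissertation); this sketch covers only the basic-block case.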
Abstract:
In the early 19th century, the industrial revolution was fuelled mainly by the development of machine-based manufacturing and the increased use of coal. Later the focal point shifted to oil, thanks to mass-production technology, ease of transport and storage, and fewer environmental issues in comparison with coal. By the dawn of the 21st century, with the depletion of oil reserves and the pollution resulting from heavy oil usage, the demand for clean energy was on the rise. This ever-growing demand has propelled research on photovoltaics, which has emerged successfully and is currently looked to as a key route to meeting present-day energy requirements. The proven PV technology on a commercial scale is based on silicon, but the recent boom in demand for photovoltaic modules has in turn created a shortage in the supply of silicon, and the technology is still not accessible to the common man. This has set off research and development work on moderately efficient, eco-friendly and low-cost photovoltaic devices (solar cells). Thin-film photovoltaic modules have made a breakthrough entry into the PV market on these grounds. Thin films have the potential to revolutionize the present cost structure of solar cells by eliminating the use of expensive silicon wafers, which alone account for above 50% of total module manufacturing cost. Well-developed thin-film photovoltaic technologies are based on amorphous silicon, CdTe and CuInSe2. However, cell fabrication using amorphous silicon requires the handling of very toxic gases (such as phosphine, silane and borane) and costly fabrication technologies. In the case of the other materials too, there are difficulties such as maintaining stoichiometry (especially in large-area films), alleged environmental hazards and the high cost of indium. Hence there is an urgent need for the development of materials that are easy to prepare, eco-friendly and available in abundance.
The work presented in this thesis is an attempt towards the development of a cost-effective, eco-friendly material for thin-film solar cells using a simple, economically viable technique. Sn-based window and absorber layers deposited using the Chemical Spray Pyrolysis (CSP) technique have been chosen for this purpose.
Abstract:
We critically discuss relaxation experiments in magnetic systems that can be characterized in terms of an energy-barrier distribution, showing that proper normalization of the relaxation data is needed whenever curves corresponding to different temperatures are to be compared. We show how these normalization factors can be obtained from experimental data by using the T ln(t/t0) scaling method without making any assumptions about the nature of the energy-barrier distribution. The validity of the procedure is tested using a ferrofluid of Fe3O4 particles.
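The idea behind T ln(t/t0) scaling is that relaxation measured at time t and temperature T probes energy barriers of height E = kB T ln(t/t0), so curves taken at different temperatures collapse onto one master curve when replotted against E. The sketch below illustrates this scaling variable; the attempt time t0 and the toy logistic barrier distribution are assumptions for illustration, not the paper's data.

```python
# Minimal sketch of the T*ln(t/t0) scaling idea (not the paper's analysis).
# The attempt time t0 and the toy barrier distribution are assumed.

import math

T0 = 1e-9          # assumed attempt time t0, in seconds
KB = 8.617e-5      # Boltzmann constant, eV/K

def scaling_variable(t_seconds, temperature_k):
    """Barrier energy (eV) probed after time t at temperature T."""
    return KB * temperature_k * math.log(t_seconds / T0)

def relaxed_fraction(energy_ev, mean=0.3, width=0.05):
    """Toy master curve: fraction of barriers below energy_ev, for an
    assumed logistic energy-barrier distribution."""
    return 1.0 / (1.0 + math.exp(-(energy_ev - mean) / width))

# Different temperatures probe the same master curve at different depths:
for temp in (50.0, 100.0):
    e = scaling_variable(100.0, temp)      # after 100 s of relaxation
    print(f"T={temp:.0f} K  E={e:.3f} eV  relaxed fraction={relaxed_fraction(e):.3f}")
```

The normalization issue raised in the abstract enters when the measured quantity (e.g. magnetization) has a temperature-dependent equilibrium value: each curve must be normalized before the collapse is meaningful.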
Abstract:
Master's in Nanoscience and Nanotechnology
Abstract:
Bulk and single-particle properties of hot hyperonic matter are studied within the Brueckner-Hartree-Fock approximation extended to finite temperature. The bare interaction in the nucleon sector is the Argonne V18 potential, supplemented with an effective three-body force to reproduce the saturation properties of nuclear matter. The modern Nijmegen NSC97e potential is employed for the hyperon-nucleon and hyperon-hyperon interactions. The effect of temperature on the in-medium effective interaction is found to be, in general, very small, and the single-particle potentials differ by at most 25% for temperatures in the range from 0 to 60 MeV. The bulk properties of infinite baryonic matter, either isospin-symmetric nuclear matter or a β-stable composition that includes a nonzero fraction of hyperons, are obtained. It is found that the presence of hyperons can modify the thermodynamical properties of the system in a non-negligible way.
Abstract:
Aim: To develop a new medium for enhanced production of biomass of the aquaculture probiotic Pseudomonas MCCB 103 and of its antagonistic phenazine compound, pyocyanin. Methods and Results: Carbon and nitrogen sources and growth factors, such as amino acids and vitamins, were screened initially in a mineral medium for the biomass and antagonistic compound of Pseudomonas MCCB 103. The selected ingredients were further optimized using a full-factorial central composite design of response surface methodology. The medium optimized by the model for biomass contained mannitol (20 g l⁻¹), glycerol (20 g l⁻¹), sodium chloride (5 g l⁻¹), urea (3.3 g l⁻¹) and mineral salts solution (20 ml l⁻¹), and the one optimized for the antagonistic compound contained mannitol (2 g l⁻¹), glycerol (20 g l⁻¹), sodium chloride (5.1 g l⁻¹), urea (3.6 g l⁻¹) and mineral salts solution (20 ml l⁻¹). Subsequently, the model was validated experimentally, with a 19% increase in biomass and a fivefold increase in the antagonistic compound. Conclusion: A significant increase in biomass and antagonistic compound production could be obtained in the new media. Significance and Impact of the Study: Media formulation and optimization are the primary steps in bioprocess technology, an attempt not made so far in the production of aquaculture probiotics.
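The central composite design (CCD) used in studies like this one lays out experimental runs in coded units: a 2^k factorial core at ±1, 2k axial (star) points at ±α, and replicated center points. The sketch below generates such a layout; the rotatable choice of α and the number of center points are standard textbook conventions, not values taken from this abstract.

```python
# Minimal sketch of generating a central composite design (CCD) in coded
# units, as used in response surface methodology. Alpha (rotatable choice)
# and the number of center points are illustrative conventions.

from itertools import product

def central_composite_design(k, alpha=None, n_center=5):
    """Return CCD points (lists of coded levels) for k factors."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25              # rotatable design: alpha = (2^k)^(1/4)
    corners = [list(p) for p in product((-1.0, 1.0), repeat=k)]   # 2^k factorial points
    axial = []
    for i in range(k):                        # 2k axial (star) points
        for a in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = a
            axial.append(pt)
    centers = [[0.0] * k for _ in range(n_center)]
    return corners + axial + centers

design = central_composite_design(2)
print(len(design))   # 2^2 + 2*2 + 5 = 13 runs for two factors
```

A quadratic response surface is then fitted to the measured responses at these points, which is what lets the model locate the optimum ingredient concentrations reported above.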
Abstract:
A marine isolate of Micrococcus MCCB 104 has been identified as an aquaculture probiotic antagonistic to Vibrio. In the present study, different carbon and nitrogen sources and growth factors in a mineral base medium were optimized for enhanced biomass production and antagonistic activity against the target pathogen, Vibrio harveyi, following response surface methodology (RSM). Accordingly, the minimum and maximum limits of the selected variables were determined and a set of fifty experiments was programmed employing the central composite design (CCD) of RSM for the final optimization. The response surface plots of biomass showed a pattern similar to that of antagonistic activity, indicating a strong correlation between biomass and antagonism. The optimum concentrations of the carbon sources, nitrogen sources and growth factors for both biomass and antagonistic activity were glucose (17.4 g/L), lactose (17 g/L), sodium chloride (16.9 g/L), ammonium chloride (3.3 g/L) and mineral salts solution (18.3 mL/L). © KSBB
Squeezed Coherent State Representation of Scalar Field and Particle Production in the Early Universe
Abstract:
The present work is an attempt to explain particle production in the early universe. We argue that nonzero values of the stress-energy tensor evaluated in a squeezed vacuum state can be due to particle production, and this supports the concept of particle production from zero-point quantum fluctuations. In the present calculation we use the squeezed coherent state introduced by Fan and Xiao [7]. The vacuum expectation values of the stress-energy tensor, defined prior to any dynamics in the background gravitational field, give all information about particle production. Squeezing of the vacuum is achieved by means of the background gravitational field, which plays the role of a parametric amplifier [8]. The present calculation shows that the vacuum expectation values of the energy density and pressure contain terms in addition to the classical zero-point energy terms. The calculation of the particle production probability shows that the probability increases as the squeezing parameter increases, reaches a maximum value, and then decreases.
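For orientation, a standard textbook result (not taken from this abstract) makes the parametric-amplifier picture concrete: the ordinary squeezed vacuum with squeezing parameter r contains quanta only in pairs, with mean number

```latex
\langle 0,r \,|\, \hat{N} \,|\, 0,r \rangle = \sinh^{2} r ,
\qquad
P(2n) = \frac{(2n)!}{2^{2n}\,(n!)^{2}}\,\frac{\tanh^{2n} r}{\cosh r},
\qquad
P(2n+1) = 0 .
```

For fixed n, P(2n) first grows and then decays as r increases, which mirrors the rise-and-fall of the production probability described in the abstract.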
Abstract:
Coded OFDM is a transmission technique used in many practical communication systems. In a coded OFDM system, source data are coded, interleaved and multiplexed for transmission over many frequency sub-channels. In a conventional coded OFDM system, the transmission power of each subcarrier is the same regardless of the channel condition. However, some subcarriers can suffer deep fading due to multipath, and the power allocated to a faded subcarrier is likely to be wasted. In this paper, we compute FER and BER bounds of a coded OFDM system, given as convex functions for a given channel coder, interleaver and channel response. The power optimization is shown to be a convex optimization problem that can be solved numerically with great efficiency. With the proposed power optimization scheme, a near-optimum power allocation for a given coded OFDM system and channel response, minimizing the FER or BER bound under a constant total transmission power constraint, is obtained.
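The general shape of such a convex allocation problem can be illustrated with the classical water-filling solution, which distributes a fixed power budget across subcarriers according to their gains. This is a stand-in for illustration only: the paper's objective is an FER/BER bound, not the capacity objective that water-filling optimizes, and the gains and budget below are made-up numbers.

```python
# Generic illustration (not the paper's bound minimization) of convex
# per-subcarrier power allocation under a total-power constraint:
# classical water-filling over assumed subcarrier power gains g_i.

def water_filling(gains, total_power, tol=1e-12):
    """Allocate p_i = max(0, mu - 1/g_i) with sum(p_i) = total_power,
    finding the water level mu by bisection."""
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > total_power:
            hi = mu           # water level too high: tightening the budget
        else:
            lo = mu
    mu = 0.5 * (lo + hi)
    return [max(0.0, mu - 1.0 / g) for g in gains]

gains = [2.0, 1.0, 0.1]            # assumed gains; the last subcarrier is deeply faded
powers = water_filling(gains, total_power=3.0)
print([round(p, 3) for p in powers])  # faded subcarrier receives no power
```

The qualitative behaviour matches the abstract's motivation: power is steered away from deeply faded subcarriers, where it would otherwise be wasted.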