37 results for Array optimization
Abstract:
We consider an array of N Josephson junctions connected in parallel and explore the conditions for chaotic synchronization. It is found that the outer junctions can be synchronized while they remain uncorrelated with the inner ones when an external bias is applied. The stability of this solution within the synchronization manifold is determined for the outer junctions. Symmetry considerations lead to a situation wherein the inner junctions can synchronize for certain parameter values. In the presence of a phase difference between the applied fields, all the junctions exhibit phase synchronization. It is also found that chaotic motion changes to periodic motion in the presence of phase differences.
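A minimal numerical sketch of the kind of system described above, assuming a resistively-and-capacitively shunted junction (RCSJ) model with a simple nearest-neighbour coupling term and invented parameter values (the paper's actual array model and bias scheme may differ):

```python
# Sketch only: N RCSJ junctions in parallel with a crude nearest-neighbour coupling
# standing in for the shared bias line.  All parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

N      = 3       # junctions (outer: 0 and 2, inner: 1)
beta_c = 0.7     # McCumber parameter (assumed)
i_dc   = 0.3     # dc bias (assumed)
i_rf   = 0.7     # rf drive amplitude (assumed)
Omega  = 0.6     # rf drive frequency (assumed)
eps    = 0.05    # coupling strength (assumed)

def rhs(t, y):
    phi, v = y[:N], y[N:]                       # phases and their time derivatives
    lap = np.zeros(N)                           # nearest-neighbour coupling term
    lap[1:-1] = phi[2:] - 2 * phi[1:-1] + phi[:-2]
    lap[0]    = phi[1] - phi[0]
    lap[-1]   = phi[-2] - phi[-1]
    dv = (i_dc + i_rf * np.sin(Omega * t) - v - np.sin(phi) + eps * lap) / beta_c
    return np.concatenate([v, dv])

y0  = np.concatenate([np.random.uniform(0, 2 * np.pi, N), np.zeros(N)])
sol = solve_ivp(rhs, (0, 2000), y0, max_step=0.05)

# synchronization error between the two outer junctions after transients
err = np.abs(sol.y[0, -2000:] - sol.y[N - 1, -2000:])
print("mean |phi_1 - phi_N| over the tail:", err.mean())
```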
Abstract:
The proliferation of wireless sensor networks across a large spectrum of applications has been spurred by rapid advances in MEMS (micro-electro-mechanical systems) based sensor technology coupled with low-power, low-cost digital signal processors and radio-frequency circuits. A sensor network is composed of thousands of low-cost, portable devices with substantial sensing, computing and wireless communication capabilities. This large collection of tiny sensors can form a robust distributed computing and communication system for automated information gathering and distributed sensing. Its main attraction is that such a sensor network can be deployed in remote areas. Since each sensor node is battery powered, all the nodes should collaborate to form a fault-tolerant network and so provide efficient utilization of precious network resources such as the wireless channel, memory and battery capacity. The most crucial constraint is energy consumption, which has become the prime challenge in the design of long-lived sensor nodes.
Abstract:
Faculty of Marine Sciences, Cochin University of Science and Technology
Abstract:
Embedded systems are usually designed for a single task or a specified set of tasks. This specificity means that the system design, as well as its hardware/software development, can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints and rapid development. This necessitates the adoption of static machine-code analysis tools running on a host machine for the validation and optimization of embedded system code, which can help meet all of these goals; this could significantly improve software quality and remains a challenging field. This dissertation contributes an architecture-oriented code validation, error localization and optimization technique that assists the embedded system designer in software debugging, making it more effective at early detection of software bugs that are otherwise hard to detect, using static analysis of machine code. The focus of this work is to develop methods that automatically localize faults as well as optimize the code, and thus improve both the debugging process and the quality of the code. Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing the computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae and their compliance is tested individually in all possible execution paths of the application programs.
An incorrect sequence of machine-code patterns is identified using slicing techniques on the control flow graph generated from the machine code. An algorithm is proposed to assist the compiler in eliminating redundant bank-switching code and deciding on the optimum data allocation to banked memory, resulting in the minimum number of bank-switching instructions in embedded system software. A relation matrix and a state transition diagram formed for the active memory-bank state transitions corresponding to each bank-selection instruction are used for the detection of redundant code. Instances of code redundancy based on the stipulated rules for the target processor are identified. This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of compiler and assembler, applicable to a wide range of processors once the appropriate rules are formulated. Program states are identified mainly by machine-code patterns, which drastically reduces the state space created, contributing to improved model checking. Though the technique described is general, the implementation is architecture oriented, and hence the feasibility study was conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices towards correct use of difficult microcontroller features when developing embedded systems.
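A minimal sketch of the redundant bank-switch detection idea on PIC16F87X-style code, assuming a straight-line instruction list and tracking only the STATUS register bank-select bits RP0/RP1; the thesis operates on all execution paths of the control flow graph and uses a relation matrix, which this toy example does not:

```python
# Sketch only: flag bank-selection instructions that set a STATUS bank bit to the
# value it already holds.  The instruction list is an invented example.
def find_redundant_bank_switches(instructions):
    state = {"RP0": None, "RP1": None}          # bank bits unknown on entry
    redundant = []
    for idx, ins in enumerate(instructions):
        op, *operands = ins.split()
        if op in ("BSF", "BCF") and operands[0].startswith("STATUS,RP"):
            bit = operands[0].split(",")[1]      # "RP0" or "RP1"
            new = 1 if op == "BSF" else 0
            if state[bit] == new:                # bit already holds this value
                redundant.append((idx, ins))
            state[bit] = new
        elif op in ("CALL", "GOTO", "RETURN"):
            state = {"RP0": None, "RP1": None}   # conservatively forget the state
    return redundant

program = [
    "BSF STATUS,RP0",    # select bank 1
    "MOVWF TRISB",
    "BSF STATUS,RP0",    # redundant: bank 1 is already active
    "BCF STATUS,RP0",    # back to bank 0
    "MOVWF PORTB",
]
print(find_redundant_bank_switches(program))     # -> [(2, 'BSF STATUS,RP0')]
```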
Abstract:
A data centre is a centralized repository, either physical or virtual, for the storage, management and dissemination of data and information organized around a particular body, and it is a nerve centre of the present IT revolution. Data centres are expected to serve uninterruptedly round the year and, to perform these functions, they consume enormous energy in the present scenario. Tremendous growth in demand from the IT industry has made it customary to develop newer technologies for the better operation of data centres. Energy conservation activities in data centres mainly concentrate on the air conditioning system, since it is the major mechanical sub-system and consumes a considerable share of the total power consumption of the data centre. The data centre energy metric is best represented by the power utilization efficiency (PUE), which is defined as the ratio of the total facility power to the IT equipment power. Its value will be greater than one, and a large value of PUE indicates that the sub-systems draw more power from the facility and that the performance of the data centre will be poor from the standpoint of energy conservation. PUE values of 1.4 to 1.6 are achievable by proper design and management techniques. Optimizing the air conditioning system offers an enormous opportunity to bring down the PUE value. The air conditioning system can be optimized by two approaches, namely thermal management and air flow management. Thermal management systems have now been introduced by some companies, but they are highly sophisticated and costly and have not caught much attention beyond rules of thumb.
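A quick illustration of the PUE definition above with made-up numbers:

```python
# Illustrative only: PUE = total facility power / IT equipment power.
it_power_kw       = 500.0   # power drawn by IT equipment (assumed)
facility_power_kw = 750.0   # total facility power incl. cooling, UPS, lighting (assumed)

pue = facility_power_kw / it_power_kw
print(f"PUE = {pue:.2f}")   # 1.50, inside the 1.4-1.6 band cited above
```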
Abstract:
This work identifies the importance of plenum pressure for the performance of the data centre. The methodology presently followed in the industry treats the pressure drop across the tile as a dependent variable, but it is shown in this work that it is the single independent variable responsible for the entire flow dynamics in the data centre, and any design or assessment procedure must consider the pressure difference across the tile as the primary independent variable. This concept is further explained by studies on the effect of dampers on the flow characteristics. The dampers are found to introduce an additional pressure drop, thereby reducing the effective pressure drop across the tile. The effect of the damper is to change the flow in both quantitative and qualitative respects, but only the quantitative effect is considered when the damper is used as an aid for capacity control. Results from the present study suggest that the use of dampers should be avoided in data centres and that well-designed tiles giving the required flow rates should be used in the appropriate locations. In the present study the effect of hot-air recirculation is studied with suitable assumptions. It identifies that the pressure drop across the tile is a dominant parameter governing the recirculation. The rack suction pressure of the hardware, along with the pressure drop across the tile, determines the point of recirculation in the cold aisle. The positioning of hardware in the racks plays an important role in controlling the recirculation point. The present study is thus helpful in the design of data centre air flow based on the theory of jets. The air flow can be modelled both quantitatively and qualitatively based on the results.
Abstract:
In the early 19th century, the industrial revolution was fuelled mainly by the development of machine-based manufacturing and the increased use of coal. Later on, the focal point shifted to oil, thanks to mass-production technology, ease of transport and storage, and fewer environmental issues in comparison with coal. By the dawn of the 21st century, due to the depletion of oil reserves and the pollution resulting from heavy usage of oil, the demand for clean energy was rising. This ever-growing demand has propelled research on photovoltaics, which has emerged successful and is currently being looked to as the only solace for meeting our present-day energy requirements. The proven PV technology on a commercial scale is based on silicon, but the recent boom in demand for photovoltaic modules has in turn created a shortage in the supply of silicon. Also, the technology is still not accessible to the common man. This has set off research and development work on moderately efficient, eco-friendly and low-cost photovoltaic devices (solar cells). Thin film photovoltaic modules have made a breakthrough entry into the PV market on these grounds. Thin films have the potential to revolutionize the present cost structure of solar cells by eliminating the use of expensive silicon wafers, which alone account for above 50% of the total module manufacturing cost. Well-developed thin film photovoltaic technologies are based on amorphous silicon, CdTe and CuInSe2. However, the cell fabrication process using amorphous silicon requires the handling of very toxic gases (such as phosphine, silane and borane) and costly fabrication technologies. In the case of the other materials too, there are difficulties such as maintaining stoichiometry (especially in large-area films), alleged environmental hazards and the high cost of indium. Hence there is an urgent need for the development of materials that are easy to prepare, eco-friendly and available in abundance. The work presented in this thesis is an attempt towards the development of a cost-effective, eco-friendly material for thin film solar cells using a simple, economically viable technique. Sn-based window and absorber layers deposited using the Chemical Spray Pyrolysis (CSP) technique have been chosen for this purpose.
Abstract:
In the present study the effect of hot-air recirculation is studied with suitable assumptions. It identifies that the pressure drop across the tile is a dominant parameter governing the recirculation. The rack suction pressure of the hardware, along with the pressure drop across the tile, determines the point of recirculation in the cold aisle. The positioning of hardware in the racks plays an important role in controlling the recirculation point. The present study is thus helpful in the design of data centre air flow based on the theory of jets. The air flow can be modelled both quantitatively and qualitatively based on the results.
Abstract:
In this thesis, different techniques for image analysis of high-density microarrays have been investigated. Most existing image analysis techniques require prior knowledge of image-specific parameters and direct user intervention for microarray image quantification. The objective of this research work was to develop a fully automated image analysis method capable of accurately quantifying the intensity information from high-density microarray images. The method should be robust against noise and contaminations that commonly occur at different stages of microarray development.
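A minimal sketch of the sort of automation discussed above, assuming a synthetic image, a crude global threshold and connected-component labelling (a real pipeline also needs gridding and background correction, and this is not the thesis' method):

```python
# Sketch only: segment bright spots in a synthetic microarray image and report
# the mean foreground intensity of each spot.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img = rng.normal(100, 5, (128, 128))                       # background
yy, xx = np.mgrid[:128, :128]
for cy, cx in [(32, 32), (32, 96), (96, 32), (96, 96)]:
    img[(yy - cy) ** 2 + (xx - cx) ** 2 < 36] += 400       # four synthetic spots

mask = img > img.mean() + 3 * img.std()                    # crude foreground segmentation
labels, n_spots = ndimage.label(mask)
intensities = ndimage.mean(img, labels=labels, index=np.arange(1, n_spots + 1))

print(n_spots, "spots; mean intensities:", np.round(intensities, 1))
```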
Abstract:
Aim: To develop a new medium for enhanced production of biomass of the aquaculture probiotic Pseudomonas MCCB 103 and its antagonistic phenazine compound, pyocyanin. Methods and Results: Carbon and nitrogen sources and growth factors, such as amino acids and vitamins, were screened initially in a mineral medium for the biomass and antagonistic compound of Pseudomonas MCCB 103. The selected ingredients were further optimized using a full-factorial central composite design of the response surface methodology. The medium optimized as per the model for biomass contained mannitol (20 g l⁻¹), glycerol (20 g l⁻¹), sodium chloride (5 g l⁻¹), urea (3.3 g l⁻¹) and mineral salts solution (20 ml l⁻¹), and the one optimized for the antagonistic compound contained mannitol (2 g l⁻¹), glycerol (20 g l⁻¹), sodium chloride (5.1 g l⁻¹), urea (3.6 g l⁻¹) and mineral salts solution (20 ml l⁻¹). Subsequently, the model was validated experimentally, with a 19% increase in biomass and a fivefold increase in the antagonistic compound. Conclusion: A significant increase in biomass and antagonistic compound production could be obtained in the new media. Significance and Impact of the Study: Media formulation and optimization are the primary steps in bioprocess technology, an attempt not made so far in the production of aquaculture probiotics.
Abstract:
A marine isolate of Micrococcus MCCB 104 has been identified as an aquaculture probiotic antagonistic to Vibrio. In the present study different carbon and nitrogen sources and growth factors in a mineral base medium were optimized for enhanced biomass production and antagonistic activity against the target pathogen, Vibrio harveyi, following response surface methodology (RSM). Accordingly, the minimum and maximum limits of the selected variables were determined and a set of fifty experiments was programmed employing the central composite design (CCD) of RSM for the final optimization. The response surface plots of biomass showed a pattern similar to that of antagonistic activity, indicating a strong correlation between biomass and antagonism. The optimum concentrations of the carbon sources, nitrogen sources, and growth factors for both biomass and antagonistic activity were glucose (17.4 g/L), lactose (17 g/L), sodium chloride (16.9 g/L), ammonium chloride (3.3 g/L), and mineral salts solution (18.3 mL/L). © KSBB
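A minimal sketch of the response-surface step, assuming a two-factor coded central composite design and invented responses (the study itself optimized five factors over fifty runs):

```python
# Sketch only: fit a quadratic response surface to coded CCD points and locate the
# stationary point.  Design points and responses are made up for illustration.
import numpy as np

a = np.sqrt(2.0)                                   # axial distance for a 2-factor CCD
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-a, 0], [a, 0], [0, -a], [0, a],
              [0, 0], [0, 0], [0, 0]], dtype=float)
y = np.array([5.2, 6.8, 6.1, 7.9, 5.0, 7.4, 5.6, 7.0, 8.1, 8.0, 8.2])  # invented responses

x1, x2 = X[:, 0], X[:, 1]
A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
b, *_ = np.linalg.lstsq(A, y, rcond=None)          # quadratic model coefficients

# stationary point: solve  H @ x = -g  with H the Hessian, g the linear terms
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
g = np.array([b[1], b[2]])
x_opt = np.linalg.solve(H, -g)
print("coded optimum (x1, x2):", np.round(x_opt, 2))
```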
Abstract:
Coded OFDM is a transmission technique used in many practical communication systems. In a coded OFDM system, source data are coded, interleaved and multiplexed for transmission over many frequency sub-channels. In a conventional coded OFDM system, the transmission power of each subcarrier is the same regardless of the channel condition. However, some subcarriers can suffer deep fading with multipath, and the power allocated to a faded subcarrier is likely to be wasted. In this paper, we compute the FER and BER bounds of a coded OFDM system, given as convex functions, for a given channel coder, interleaver and channel response. The power optimization is shown to be a convex optimization problem that can be solved numerically with great efficiency. With the proposed power optimization scheme, a near-optimum power allocation for a given coded OFDM system and channel response that minimizes the FER or BER under a constant transmission power constraint is obtained.
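A minimal sketch of the power-allocation idea, assuming a generic convex stand-in for the error bound and invented channel gains (the paper's objective is the FER/BER bound for its specific code, interleaver and channel response):

```python
# Sketch only: minimize a convex surrogate sum_k exp(-g_k * p_k) over subcarrier
# powers p_k under a total-power constraint.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
g = rng.exponential(1.0, size=16)       # per-subcarrier effective channel gains (assumed)
P_total = 16.0                          # total power budget (assumed)

objective = lambda p: np.sum(np.exp(-g * p))
cons = ({"type": "eq", "fun": lambda p: np.sum(p) - P_total},)
bounds = [(0.0, None)] * g.size

res = minimize(objective, x0=np.full(g.size, P_total / g.size),
               bounds=bounds, constraints=cons, method="SLSQP")
print("allocated powers:", np.round(res.x, 2))
print("surrogate bound at optimum:", res.fun)
```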
Abstract:
This work proposes a parallel genetic algorithm for compressing scanned document images. A fitness function is designed with the Hausdorff distance, which determines the terminating condition. The algorithm helps to locate the text lines. A greater compression ratio has been achieved with less distortion.
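A minimal sketch of a Hausdorff-distance fitness term, assuming binary page images and an invented size-penalty weight (the paper's parallel GA and full fitness function are not reproduced here):

```python
# Sketch only: smaller symmetric Hausdorff distance between the foreground pixel sets
# (less shape distortion) and a shorter encoding give a higher fitness.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_fitness(original, reconstructed, encoded_bits, weight=1e-4):
    a = np.argwhere(original > 0).astype(float)       # foreground coordinates
    b = np.argwhere(reconstructed > 0).astype(float)
    if len(a) == 0 or len(b) == 0:
        return 0.0
    d = max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])
    return 1.0 / (1.0 + d + weight * encoded_bits)    # higher is better

orig = np.zeros((64, 64), dtype=np.uint8)
orig[20:24, 10:50] = 1                                # a fake text line
recon = np.roll(orig, 1, axis=0)                      # slightly shifted reconstruction
print(hausdorff_fitness(orig, recon, encoded_bits=800))
```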
Abstract:
This paper presents the design and analysis of a novel machine family: enclosed-rotor Halbach-array permanent-magnet brushless DC motors for spacecraft applications. The initial design, selection of major parameters, and air-gap magnetic flux density are estimated using the analytical model of the machine. The proportion of the Halbach array in the machine is optimized using finite element analysis to obtain a near-trapezoidal flux pattern. The machine is found to provide uniform air-gap flux density along the radius, thus avoiding circulating currents in stator conductors and thereby reducing torque ripple. Furthermore, the design is validated with experimental results on a fabricated machine and is found to suit the design requirements of critical spacecraft applications.
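As a rough first-cut estimate only (a textbook idealization, not the segmented, enclosed-rotor geometry analysed in the paper), the interior flux density of an infinitely long Halbach cylinder with continuously rotating magnetization is often taken as B ≈ B_rem ln(r_o / r_i), where B_rem is the magnet remanence and r_o, r_i are the outer and inner radii of the array; the finite element optimization reported above refines such estimates for the actual machine geometry.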
Abstract:
Short-term load forecasting is one of the key inputs for optimizing the management of a power system. Almost 60-65% of the revenue expenditure of a distribution company goes toward power purchase, and the cost of power depends on its source. Hence any optimization strategy involves optimizing the scheduling of power from the various sources. As the scheduling involves many technical and commercial considerations and constraints, the efficiency of scheduling depends on the accuracy of the load forecast. Load forecasting is a topic much visited in the research world, and a number of papers using different techniques have already been presented. The accuracy of the forecast for the purpose of merit-order dispatch decisions depends on the extent of the permissible variation in generation limits. For a system with a low load factor, the peak and the off-peak trough are prominent, and the forecast should identify these points accurately rather than merely minimizing the error in the energy content. In this paper an attempt is made to apply an Artificial Neural Network (ANN) with a supervised-learning-based approach to short-term load forecasting for a power system with a comparatively low load factor. Such power systems are common in tropical areas with a concentrated rainy season for a considerable part of the year.
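A minimal sketch of supervised ANN-based short-term load forecasting, assuming synthetic hourly load data, lagged-load and hour-of-day features, and an off-the-shelf MLP (not the paper's network, data or feature set):

```python
# Sketch only: train an MLP on 60 days of synthetic hourly load and predict the last day.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
hours = np.arange(24 * 60)                                   # 60 days of hourly samples
load = 100 + 40 * np.sin(2 * np.pi * (hours % 24 - 14) / 24) + rng.normal(0, 3, hours.size)

def make_features(load, hours, lag=24):
    X = np.column_stack([load[:-lag],                        # load 24 h earlier
                         np.sin(2 * np.pi * (hours[lag:] % 24) / 24),
                         np.cos(2 * np.pi * (hours[lag:] % 24) / 24)])
    return X, load[lag:]

X, y = make_features(load, hours)
split = -24                                                  # hold out the last day
model = MLPRegressor(hidden_layer_sizes=(24, 12), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
print("MAPE on held-out day: %.2f%%" % (100 * np.mean(np.abs(pred - y[split:]) / y[split:])))
```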