291 results for EFFICIENCY OPTIMIZATION

at Indian Institute of Science - Bangalore - India


Relevance: 30.00%

Abstract:

The theoretical optimization of the design parameters N_A, N_D and W_p has been carried out for efficient operation of an Au-p-n Si solar cell, including thermionic field emission, the dependence of lifetime and mobility on impurity concentrations, the dependence of the absorption coefficient on wavelength, and the variation of barrier height (and hence of the optimum thickness of the p region) with illumination. The optimized design parameters N_D = 5×10^20 m^-3, N_A = 3×10^24 m^-3 and W_p = 11.8 nm yield efficiencies η = 17.1% (AM0) and η = 19.6% (AM1). These are reduced to 14.9% and 17.1% respectively when the metal-layer series resistance and the transmittance of a ZnS antireflection coating are included. A practical value of W_p = 97.0 nm gives an efficiency of 12.2% (AM1).

Relevance: 30.00%

Abstract:

Fuel cells are emerging as alternative green power producers both for large-scale power production and for use in automobiles. Hydrogen is seen as the best fuel option; however, hydrogen fuel cells require recirculation of unspent hydrogen. A supersonic ejector is an apt device for recirculation in the operating regimes of a hydrogen fuel cell, and optimal ejectors have to be designed to achieve the best performance. The use of the vector evaluated particle swarm optimization technique to optimize supersonic ejectors, with a focus on hydrogen recirculation in fuel cells, is presented here. Two parameters, compression ratio and efficiency, have been identified as the objective functions to be optimized. Their relation to the operating and design parameters of the ejector is obtained by a control-volume-based analysis using a constant-area mixing approximation. The independent parameters considered are the area ratio and the exit Mach number of the nozzle. The optimization is carried out at a particular entrainment ratio and results in a set of nondominated solutions, the Pareto front. A set of such curves can be used for choosing the optimal design parameters of the ejector.
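The nondominated filtering that produces the Pareto front can be sketched in a few lines. This is an illustrative fragment, not the paper's vector evaluated PSO; the candidate design points below are hypothetical (compression ratio, efficiency) pairs, with both objectives to be maximized.

```python
def pareto_front(points):
    """Keep the nondominated (compression_ratio, efficiency) pairs,
    treating both objectives as maximized."""
    return [p for p in points
            if not any(q != p and q[0] >= p[0] and q[1] >= p[1]
                       for q in points)]


# Hypothetical candidate designs evaluated by an ejector model.
candidates = [(1.2, 0.30), (1.5, 0.25), (1.4, 0.28), (1.3, 0.26)]
front = pareto_front(candidates)
```

Here (1.3, 0.26) is dominated by (1.4, 0.28) on both objectives, so it drops out; the three remaining points trade compression ratio against efficiency along the front.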

Relevance: 30.00%

Abstract:

A systematic investigation has been carried out into the optimization of the diffraction efficiency (η) of methylene blue sensitized dichromated gelatin (MBDCG) holograms. The influence of the following parameters on η has been studied: prehardener concentration (C_H), the concentrations of ammonium dichromate (C_A) and methylene blue (C_M) as photosensitizers, and exposure (E). This study revealed that with C_H ≈ 0.5, C_A ≈ 30, C_M ≈ 0.3, and E ≈ 400–600, an optimum diffraction efficiency of over 80% can be easily achieved in MBDCG holograms.

Relevance: 30.00%

Abstract:

There are a number of large networks that occur in problems dealing with the flow of power, communication signals, water, gas, transportable goods, etc. Both the design and the planning of these networks involve optimization problems. The first part of this paper introduces the common characteristics of a nonlinear network (the network may be linear and the objective function nonlinear, or both may be nonlinear). The second part develops a mathematical model that brings together some important constraints based on the abstraction of a general network. The third part deals with solution procedures; it converts the network to a matrix-based system of equations, gives the characteristics of the matrix, and suggests two solution procedures, one of them new. The fourth part handles spatially distributed networks and evolves a number of decomposition techniques so that the problem can be solved with the help of a distributed computer system. Algorithms for parallel processors and spatially distributed systems are described.

A number of common features pertain to networks. A network consists of a set of nodes and arcs. In addition, at every node there may be an input (power, water, a message, goods, etc.), an output, or neither. Normally, the network equations describe the flows amongst nodes through the arcs. These network equations couple the variables associated with nodes. Invariably, the variables pertaining to arcs are constants; the result required is the flows through the arcs.
To solve the normal base problem, we are given the input flows at nodes, the output flows at nodes, and certain physical constraints on other variables at nodes, and we must find the flows through the network (variables at nodes are referred to as across variables). The optimization problem involves selecting inputs at nodes so as to optimize an objective function; the objective may be a cost function based on the inputs to be minimized, a loss function, or an efficiency function. The above mathematical model can be solved using the Lagrange multiplier technique, since the equalities are strong compared to the inequalities. The Lagrange multiplier technique divides the solution procedure into two stages per iteration: stage one calculates the problem variables, and stage two the multipliers λ. It is shown that the Jacobian matrix used in stage one (for solving a nonlinear system of necessary conditions) also occurs in stage two.

A second solution procedure, called the total residue approach, has been embedded into the first one. It changes the equality constraints so that faster convergence of the iterations is obtained. Both solution procedures are found to converge in 3 to 7 iterations for a sample network.

The availability of distributed computer systems, both LAN and WAN, suggests the need for algorithms to solve these optimization problems. Two types of algorithms have been proposed: one based on the physics of the network and the other on the properties of the Jacobian matrix. Three algorithms have been devised, one of them for the local area case. These are called the regional distributed algorithm, the hierarchical regional distributed algorithm (both using the physical properties of the network), and the locally distributed algorithm (a multiprocessor-based approach with a local area network configuration). The approach used was to define an algorithm that is fast and uses minimum communication.
These algorithms are found to converge at the same rate as the nondistributed (unitary) case.
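As a toy illustration of the Lagrange-multiplier idea (not the paper's two-stage algorithm), consider minimizing a quadratic flow cost subject to a single flow-balance equality; the multiplier and the node variables then have a closed form. The cost coefficients and demand below are hypothetical.

```python
def quadratic_dispatch(costs, demand):
    """Minimize sum(c_i * x_i**2) subject to sum(x_i) == demand.
    Stationarity of the Lagrangian gives 2*c_i*x_i = lam for every i;
    substituting into the balance equality yields the multiplier lam,
    and then the individual flows."""
    lam = demand / sum(1.0 / (2.0 * c) for c in costs)
    flows = [lam / (2.0 * c) for c in costs]
    return flows, lam


# Two arcs with hypothetical cost coefficients, total demand of 3 units.
flows, lam = quadratic_dispatch([1.0, 2.0], demand=3.0)
```

The cheaper arc (c = 1.0) carries twice the flow of the dearer one, and the flows sum exactly to the demand.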

Relevance: 30.00%

Abstract:

Determining the sequence of amino acid residues in a heteropolymer chain of a protein with a given conformation is a discrete combinatorial problem that is not generally amenable to gradient-based continuous optimization algorithms. In this paper we present a new approach to this problem using continuous models. In this modeling, continuous "state functions" are proposed to designate the type of each residue in the chain. Such a continuous model helps define a continuous sequence space in which a chosen criterion is optimized to find the most appropriate sequence. Searching a continuous sequence space using a deterministic optimization algorithm makes it possible to find optimal sequences with much less computation than many other approaches. The computational efficiency of this method is further improved by combining it with a graph spectral method, which explicitly takes into account the topology of the desired conformation and also helps make the combined method more robust. The continuous modeling used here appears to have additional advantages in mimicking the folding pathways and in creating the energy landscapes that help find sequences with high stability and kinetic accessibility. To illustrate the new approach, a widely used simplifying assumption is made by considering only two types of residues: hydrophobic (H) and polar (P). Self-avoiding compact lattice models are used to validate the method against known results in the literature and data that can be practically obtained by exhaustive enumeration on a desktop computer. We also present examples of sequence design for the HP models of some real proteins, which are solved in less than five minutes on a single-processor desktop computer. Some open issues and future extensions are noted.
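In the HP lattice setting mentioned above, the quantity evaluated during sequence search is typically the number of nonbonded H-H contacts. A minimal 2-D sketch follows; the square conformation is a hypothetical example, not taken from the paper.

```python
def hp_energy(coords, seq):
    """Energy of an HP chain on a 2-D lattice: -1 for each pair of 'H'
    residues that are lattice neighbours but not adjacent along the chain."""
    index = {c: i for i, c in enumerate(coords)}
    energy = 0
    for i, (x, y) in enumerate(coords):
        if seq[i] != 'H':
            continue
        # Check only the +x and +y neighbours so each pair is counted once.
        for nb in ((x + 1, y), (x, y + 1)):
            j = index.get(nb)
            if j is not None and seq[j] == 'H' and abs(i - j) > 1:
                energy -= 1
    return energy


# A 4-residue chain folded into a unit square: residues 0 and 3 are lattice
# neighbours but not chain neighbours, so "HHHH" has one H-H contact.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
```

A sequence-design search over this conformation would compare such energies across candidate H/P assignments.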

Relevance: 30.00%

Abstract:

Higher order LCL filters are essential for meeting interconnection standard requirements for grid-connected voltage source converters. LCL filters offer better harmonic attenuation and better efficiency at a smaller size than traditional L filters. The focus of this paper is to analyze the LCL filter design procedure from the point of view of power loss and efficiency. The IEEE 1547-2008 specifications for high-frequency current ripple are used as a major constraint early in the design to ensure that all subsequent optimizations remain compliant with the standard. Power loss in each individual filter component is calculated on a per-phase basis. The per-unit total inductance of the LCL filter is varied, and the LCL parameter values which give the highest efficiency while simultaneously meeting the stringent standard requirements are identified. The power loss and harmonic output spectrum of the grid-connected LCL filter are experimentally verified, and measurements confirm the predicted trends.

Relevance: 30.00%

Abstract:

Random Access Scan, which addresses individual flip-flops in a design using a memory-array-like row and column decoder architecture, has recently attracted widespread attention due to its potential for lower test application time, test data volume and test power dissipation compared to traditional Serial Scan. This is because typically only a very limited number of random "care" bits in a test response need be modified to create the next test vector; unlike traditional scan, most flip-flops need not be updated. Test application efficiency can be further improved by organizing the access by word instead of by bit. In this paper we present a new decoder structure that exploits basis vectors and linear algebra to further optimize test application in RAS by performing the write operations on multiple bits consecutively. Simulations performed on benchmark circuits show an average 2-3× speedup in test write time compared to conventional RAS.
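The basis-vector idea can be illustrated with a standard GF(2) span test (a generic sketch, not the decoder structure proposed in the paper): if test words are bitmasks, a target update is expressible as one combination of basis vectors exactly when it lies in their XOR span.

```python
def in_xor_span(vectors, target):
    """Return True iff `target` equals the XOR of some subset of `vectors`.
    Each vector is an int bitmask; Gaussian elimination over GF(2)."""
    basis = {}                          # highest set bit -> reduced vector
    for v in vectors:
        while v:
            top = v.bit_length() - 1
            if top not in basis:
                basis[top] = v          # new pivot
                break
            v ^= basis[top]             # eliminate the leading bit
    while target:
        top = target.bit_length() - 1
        if top not in basis:
            return False                # leading bit cannot be cancelled
        target ^= basis[top]
    return True
```

For example, with basis vectors 0b101 and 0b011, the word 0b110 is reachable (it is their XOR) while 0b001 is not.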

Relevance: 30.00%

Abstract:

The hot workability of an Al-Mg-Si alloy has been studied by conducting constant strain-rate compression tests. The temperature range and strain-rate regime selected for the present study were 300–550 °C and 0.001–1 s⁻¹, respectively. On the basis of the true stress data, the strain-rate sensitivity values were calculated and used for establishing processing maps following the dynamic materials model. These maps delineate characteristic domains of different dissipative mechanisms. Two domains of dynamic recrystallization (DRX) have been identified which are associated with the peak efficiency of power dissipation (34%) and complete reconstitution of the as-cast microstructure. As a result, optimum hot ductility is achieved in the DRX domains. The strain rates at which the DRX domains occur are determined by second-phase particles such as Mg₂Si precipitates and intermetallic compounds. The alloy also exhibits microstructural instability in the form of localized plastic deformation in the temperature range 300–350 °C at a strain rate of 1 s⁻¹.

Relevance: 30.00%

Abstract:

Performance improvement of a micromachined patch antenna operating at 30 GHz with a capacitively coupled feed arrangement is presented here. Such antennas are useful for monolithic integration with active components. Specifically, micromachining can be employed to achieve a low-dielectric-constant region under the patch, which causes (i) the suppression of surface waves, and hence an increase in radiation efficiency, and (ii) an increase in bandwidth. The performance of such a patch antenna can be significantly improved by selecting a coupled feed arrangement. We have optimized the dimensions and location of the capacitive feeding strip to obtain the maximum improvement in bandwidth. Since this is a totally planar arrangement and does not involve any stacked structures, the antenna is easy to fabricate using standard microfabrication techniques. The antenna element thus designed has a -10 dB bandwidth of 1600 MHz.

Relevance: 30.00%

Abstract:

This paper investigates a new Glowworm Swarm Optimization (GSO) clustering algorithm for hierarchical splitting and merging in automatic multi-spectral satellite image classification (the land cover mapping problem). Amongst the many benefits and uses of remote sensing, one of the most important has been its use in solving the land cover mapping problem, and image classification forms the core of the solution. No single classifier can classify all the basic land cover classes of an urban region satisfactorily, and unsupervised classification methods do not exploit the automatic generation of clusters over a huge database to its full potential. The proposed methodology searches for the best possible number of clusters and their centres using GSO; classification then proceeds by merging these clusters with a parametric method (the k-means technique). The performance of the proposed unsupervised classification technique is evaluated on a Landsat 7 Thematic Mapper image. Results are evaluated in terms of classification efficiency: individual, average and overall.
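The k-means merging stage can be sketched with plain Lloyd iterations. This is a generic fragment, not the paper's GSO pipeline; the initial centres below simply stand in for the cluster centres a GSO search would supply.

```python
def lloyd_kmeans(points, centers, iters=20):
    """Plain k-means on 2-D points: assign each point to its nearest centre,
    then move each centre to the mean of its assigned group."""
    for _ in range(iters):
        groups = [[] for _ in centers]
        for px, py in points:
            i = min(range(len(centers)),
                    key=lambda k: (px - centers[k][0]) ** 2
                                + (py - centers[k][1]) ** 2)
            groups[i].append((px, py))
        centers = [(sum(x for x, _ in g) / len(g),
                    sum(y for _, y in g) / len(g)) if g else c
                   for g, c in zip(groups, centers)]
    return centers


# Two hypothetical pixel clusters; initial centres play the role of GSO output.
data = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers = lloyd_kmeans(data, [(2, 2), (8, 8)])
```

The centres settle on the group means, after which nearest-centre assignment labels each pixel.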

Relevance: 30.00%

Abstract:

We show that the operation and the output power of a quantum heat engine that converts incoherent thermal energy into coherent cavity photons can be optimized by manipulating quantum coherences. The gain or loss in the efficiency at maximum power depends on the details of the output power optimization. Quantum effects tend to enhance the output power and the efficiency as the photon occupation in the cavity is decreased.

Relevance: 30.00%

Abstract:

Several operational aspects of thermal power plants in general are non-intuitive and involve the simultaneous optimization of a number of operational parameters. For solar operated power plants it is even more difficult, owing to the varying heat source temperatures induced by variability in insolation levels. This paper introduces a quantitative methodology for load regulation of a CO2-based Brayton cycle power plant using the `thermal efficiency and specific work output' coordinate system. The analysis shows that a transcritical CO2 cycle offers more flexibility under part-load performance than the supercritical cycle in non-solar power plants. However, for concentrated solar power, where efficiency is important, the supercritical CO2 cycle fares better than the transcritical cycle. A number of empirical equations relating heat source temperature and high-side pressure to efficiency and specific work output are proposed, which could assist in generating control algorithms.

Relevance: 30.00%

Abstract:

Production of high tip deflection in a piezoelectric bimorph laminar actuator by applying high voltage is limited by many physical constraints. A piezoelectric bimorph actuator with a rigid extension of non-piezoelectric material at its tip is therefore used to increase the tip deflection. Research on this type of piezoelectric bending actuator has either been limited to first-order constitutive relations, which do not capture the non-linear behavior of the piezoelectric element at high electric fields, or to curve-fitting techniques. This paper therefore considers high electric fields and analytically models a tapered piezoelectric bimorph actuator with a rigid extension of non-piezoelectric material at its tip. The stiffness, capacitance, effective tip deflection, block force, output strain energy, output energy density, input electrical energy and energy efficiency of the actuator are calculated analytically. The paper also discusses the multi-objective optimization of this type of actuator subject to mechanical and electrical constraints.

Relevance: 20.00%

Abstract:

This paper presents a chance-constrained linear programming formulation for reservoir operation of a multipurpose reservoir. The release policy is defined by a chance constraint that the probability of irrigation release in any period equalling or exceeding the irrigation demand is at least equal to a specified value P (called reliability level). The model determines the maximum annual hydropower produced while meeting the irrigation demand at a specified reliability level. The model considers variation in reservoir water level elevation and also the operating range within which the turbine operates. A linear approximation for nonlinear power production function is assumed and the solution obtained within a specified tolerance limit. The inflow into the reservoir is considered random. The chance constraint is converted into its deterministic equivalent using a linear decision rule and inflow probability distribution. The model application is demonstrated through a case study.
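The conversion of a chance constraint into its deterministic equivalent can be illustrated for a normally distributed inflow. This is a simplified, hypothetical fragment, not the paper's reservoir model: with a decision rule release = inflow − b for a storage target b, the constraint P(release ≥ demand) ≥ P rearranges to b ≤ q − demand, where q is the (1 − P) quantile of the inflow distribution.

```python
from statistics import NormalDist


def max_storage_target(demand, inflow_mean, inflow_sd, reliability):
    """Deterministic equivalent of P(inflow - b >= demand) >= reliability:
    b may be at most the (1 - reliability) inflow quantile minus demand."""
    q = NormalDist(inflow_mean, inflow_sd).inv_cdf(1.0 - reliability)
    return q - demand


# Hypothetical numbers: demand 50, inflow ~ N(100, 10^2), 90% reliability.
b_max = max_storage_target(demand=50.0, inflow_mean=100.0,
                           inflow_sd=10.0, reliability=0.9)
```

Raising the reliability level pushes the quantile q down and so tightens the bound on b, mirroring the trade-off the model explores between reliability and hydropower.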

Relevance: 20.00%

Abstract:

A fuzzy waste-load allocation model, FWLAM, is developed for water quality management of a river system using fuzzy multiple-objective optimization. An important feature of this model is its capability to incorporate the aspirations and conflicting objectives of the pollution control agency and dischargers. The vagueness associated with specifying the water quality criteria and fraction removal levels is modeled in a fuzzy framework. The goals related to the pollution control agency and dischargers are expressed as fuzzy sets. The membership functions of these fuzzy sets are considered to represent the variation of satisfaction levels of the pollution control agency and dischargers in attaining their respective goals. Two formulations—namely, the MAX-MIN and MAX-BIAS formulations—are proposed for FWLAM. The MAX-MIN formulation maximizes the minimum satisfaction level in the system. The MAX-BIAS formulation maximizes a bias measure, giving a solution that favors the dischargers. Maximization of the bias measure attempts to keep the satisfaction levels of the dischargers away from the minimum satisfaction level and that of the pollution control agency close to the minimum satisfaction level. Most of the conventional water quality management models use waste treatment cost curves that are uncertain and nonlinear. Unlike such models, FWLAM avoids the use of cost curves. Further, the model provides the flexibility for the pollution control agency and dischargers to specify their aspirations independently.
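The MAX-MIN formulation can be sketched for a single decision variable. This is a toy fragment with hypothetical linear membership functions, not the paper's multi-discharger model: as the fraction-removal level x rises, the agency's satisfaction grows while the discharger's falls, and the formulation picks the x that maximizes the smaller of the two.

```python
def max_min(mu_agency, mu_discharger, steps=1000):
    """Maximize lam = min(mu_agency(x), mu_discharger(x)) over x in [0, 1]
    by a simple grid search over the fraction-removal level."""
    best_x, best_lam = 0.0, -1.0
    for k in range(steps + 1):
        x = k / steps
        lam = min(mu_agency(x), mu_discharger(x))
        if lam > best_lam:
            best_x, best_lam = x, lam
    return best_x, best_lam


# Hypothetical linear memberships: agency wants high removal, discharger low.
x_star, lam_star = max_min(lambda x: x, lambda x: 1.0 - x)
```

With these symmetric memberships the compromise sits at x = 0.5 with a common satisfaction level of 0.5; skewed memberships would shift the balance, which is the asymmetry the MAX-BIAS variant then exploits.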