178 results for Objective functions


Relevance: 60.00%

Abstract:

The present study simulates a two-stage silica gel + water adsorption desalination (AD) and chiller system. The adsorber system thermally compresses the low-pressure steam generated in the evaporator to the condenser pressure in two stages. Unlike a standalone adsorption chiller unit, which operates in a closed cycle, the present system is an open cycle wherein the condensed desalinated water is not fed back to the evaporator. The mathematical relations formulated in the current study are based on conservation of mass and energy, along with the isotherm relation and kinetics for the RD-type silica gel + water pair. Constitutive relations for each component, namely the evaporator, adsorber and condenser, are integrated in the model. The heat exchanger dynamics are modeled using the LMTD (log mean temperature difference) method, and the LDF (linear driving force) model is used to predict the dynamic characteristics of the adsorber bed. The system performance indicators, namely specific cooling capacity (SCC), specific daily water production (SDWP) and coefficient of performance (COP), are used as objective functions to optimize the system. The novelty of the present work lies in the introduction of the inter-stage pressure as a new parameter for optimizing the two-stage operation of the AD chiller system.
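The LDF model mentioned above reduces intraparticle mass transfer to a first-order rate law, dq/dt = k(q* − q), where q* is the equilibrium uptake given by the isotherm. A minimal Python sketch of this kinetic step follows; the rate-constant form k = (15 D_so / R_p^2) exp(−E_a / RT) is the standard LDF expression, but every numerical value below is an illustrative assumption, not the paper's calibrated silica gel + water data.

import numpy as np

# Linear driving force (LDF) kinetics: dq/dt = k (q_eq - q),
# with q_eq the equilibrium uptake from the isotherm.
R = 8.314        # J/(mol K), universal gas constant
D_so = 2.54e-4   # m^2/s, pre-exponential surface diffusivity (assumed)
E_a = 4.2e4      # J/mol, activation energy (assumed)
R_p = 1.7e-4     # m, silica gel particle radius (assumed)

def ldf_rate(q, q_eq, T):
    """Uptake rate dq/dt at bed temperature T (K)."""
    k = (15.0 * D_so / R_p**2) * np.exp(-E_a / (R * T))
    return k * (q_eq - q)

# Explicit Euler march of the uptake toward equilibrium.
dt, q, T, q_eq = 0.5, 0.05, 303.0, 0.20   # s, kg/kg, K, kg/kg
for _ in range(2400):                     # 20 min of adsorption
    q += ldf_rate(q, q_eq, T) * dt
print(f"uptake after 20 min: {q:.4f} kg/kg")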

Relevance: 30.00%

Abstract:

We consider a scenario in which a wireless sensor network is formed by randomly deploying n sensors to measure some spatial function over a field, with the objective of computing a function of the measurements and communicating it to an operator station. We restrict ourselves to the class of type-threshold functions (as defined in the work of Giridhar and Kumar, 2005), of which max, min, and indicator functions are important examples; our discussion is couched in terms of the max function. We view the problem as one of message-passing distributed computation over a geometric random graph. The network is assumed to be synchronous; the sensors synchronously measure values and then collaborate to compute and deliver the function computed with these values to the operator station. Computation algorithms differ in (1) the communication topology assumed and (2) the messages that the nodes need to exchange in order to carry out the computation. The focus of our paper is to establish (in probability) scaling laws for the time and energy complexity of the distributed function computation over random wireless networks, under the assumption of centralized contention-free scheduling of packet transmissions. First, without any constraint on the computation algorithm, we establish scaling laws for the computation time and energy expenditure for one-time maximum computation. We show that for an optimal algorithm, the computation time and energy expenditure scale, respectively, as Θ(√(n/log n)) and Θ(n) asymptotically as the number of sensors n → ∞. Second, we analyze the performance of three specific computation algorithms that may be used in specific practical situations, namely the tree algorithm, multihop transmission, and the Ripple algorithm (a type of gossip algorithm), and obtain scaling laws for the computation time and energy expenditure as n → ∞. In particular, we show that the computation time for these algorithms scales as Θ(√(n/log n)), Θ(n), and Θ(√(n log n)), respectively, whereas the energy expended scales as Θ(n), Θ(√(n/log n)), and Θ(√(n log n)), respectively. Finally, simulation results are provided to show that our analysis indeed captures the correct scaling. The simulations also yield estimates of the constant multipliers in the scaling laws. Our analyses throughout assume a centralized optimal scheduler, and hence our results can be viewed as providing bounds for the performance with practical distributed schedulers.
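For reference, the scaling laws quoted above can be collected in one place (this simply restates the abstract's claims; T(n) is computation time, E(n) energy expenditure, both as n → ∞):

\begin{align*}
\text{optimal:}  \quad T(n) &= \Theta\bigl(\sqrt{n/\log n}\bigr), & E(n) &= \Theta(n),\\
\text{tree:}     \quad T(n) &= \Theta\bigl(\sqrt{n/\log n}\bigr), & E(n) &= \Theta(n),\\
\text{multihop:} \quad T(n) &= \Theta(n),                         & E(n) &= \Theta\bigl(\sqrt{n/\log n}\bigr),\\
\text{Ripple:}   \quad T(n) &= \Theta\bigl(\sqrt{n\log n}\bigr),  & E(n) &= \Theta\bigl(\sqrt{n\log n}\bigr).
\end{align*}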

Relevance: 30.00%

Abstract:

In this paper we consider the problem of learning an n × n kernel matrix from m (≥ 1) similarity matrices under a general convex loss. Past research has extensively studied the m = 1 case and has derived several algorithms which require sophisticated techniques like ACCP, SOCP, etc. The existing algorithms do not apply if one uses arbitrary losses and often cannot handle the m > 1 case. We present several provably convergent iterative algorithms, where each iteration requires either an SVM or a Multiple Kernel Learning (MKL) solver for the m > 1 case. One of the major contributions of the paper is to extend the well known Mirror Descent (MD) framework to handle Cartesian products of psd matrices. This novel extension leads to an algorithm, called EMKL, which solves the problem in O(m² log n²) iterations; each iteration involves solving an MKL problem with m kernels and m eigen-decompositions of n × n matrices. By suitably defining a restriction on the objective function, a faster version of EMKL is proposed, called REKL, which avoids the eigen-decomposition. An alternative to both EMKL and REKL is also suggested, which requires only an SVM solver. Experimental results on a real-world protein data set involving several similarity matrices illustrate the efficacy of the proposed algorithms.
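The key ingredient, mirror descent over psd matrices, uses the von Neumann entropy as the mirror map, which turns the update into an exponentiated-gradient step on the psd cone. The following Python sketch shows that single-matrix update under a toy squared-Frobenius loss; the loss, step size and unit-trace normalization are illustrative assumptions, not the paper's EMKL formulation.

import numpy as np
from scipy.linalg import expm, logm

def md_psd_step(K, grad, eta=0.1):
    """One mirror descent step on the psd cone with the von Neumann
    entropy as mirror map: K <- exp(log K - eta * grad), then
    renormalized to unit trace."""
    K_new = expm(logm(K) - eta * grad)
    return K_new / np.trace(K_new)

# Toy example: drive K toward a unit-trace target S under the loss
# 0.5 * ||K - S||_F^2, whose gradient is K - S.
rng = np.random.default_rng(0)
n = 5
S = rng.standard_normal((n, n))
S = S @ S.T + n * np.eye(n)
S /= np.trace(S)
K = np.eye(n) / n                 # feasible unit-trace start
for _ in range(200):
    K = md_psd_step(K, K - S)
print("final loss:", 0.5 * np.linalg.norm(K - S)**2)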

Relevance: 30.00%

Abstract:

Establishing functional relationships between multi-domain protein sequences is a non-trivial task. Traditionally, delineating functional assignments and relationships of proteins requires domain assignments as a prerequisite. This process is sensitive to alignment quality and domain definitions. In multi-domain proteins, the quality of alignments is often poor for multiple reasons. We report the correspondence between the classification of proteins represented as full-length gene products and their functions. Our approach differs fundamentally from traditional methods in not performing the classification at the level of domains. Our method is based on an alignment-free local matching score (LMS) computation at the amino-acid sequence level, followed by hierarchical clustering. As there are no gold standards for full-length protein sequence classification, we resorted to Gene Ontology and domain-architecture based similarity measures to assess our classification. The final clusters obtained using LMS show high functional and domain-architectural similarities. Comparison of the current method with alignment-based approaches at both the domain and full-length protein levels showed the superiority of the LMS scores. Using this method we have recreated objective relationships among different protein kinase sub-families and also classified immunoglobulin-containing proteins, for which sub-family definitions do not currently exist. This method can be applied to any set of protein sequences and hence will be instrumental in the analysis of large numbers of full-length protein sequences.
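The pipeline described, alignment-free pairwise scoring followed by hierarchical clustering, can be sketched compactly. In the Python sketch below a Jaccard overlap of shared k-mers stands in for the paper's LMS score; that substitution, the toy sequences and the parameter choices are all assumptions for illustration only.

from itertools import combinations
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def kmer_set(seq, k=3):
    """All overlapping k-mers of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def similarity(a, b, k=3):
    """Stand-in for the LMS score: Jaccard overlap of k-mer sets."""
    A, B = kmer_set(a, k), kmer_set(b, k)
    return len(A & B) / len(A | B)

seqs = ["MKVLAAGITGQ", "MKVLAAGVSGQ", "GHHEAELKPLA", "GHHEAQLKPLA"]
n = len(seqs)

# Condensed distance vector (1 - similarity) for hierarchical clustering.
dists = [1.0 - similarity(seqs[i], seqs[j])
         for i, j in combinations(range(n), 2)]
tree = linkage(np.array(dists), method="average")
print(fcluster(tree, t=2, criterion="maxclust"))  # expect two clusters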

Relevance: 20.00%

Abstract:

We present a generalization of the finite volume evolution Galerkin scheme [M. Lukacova-Medvid'ova, J. Saibertova, G. Warnecke, Finite volume evolution Galerkin methods for nonlinear hyperbolic systems, J. Comput. Phys. 183 (2002) 533-562; M. Lukacova-Medvid'ova, K.W. Morton, G. Warnecke, Finite volume evolution Galerkin (FVEG) methods for hyperbolic problems, SIAM J. Sci. Comput. 26 (2004) 1-30] for hyperbolic systems with spatially varying flux functions. Our goal is to develop a genuinely multi-dimensional numerical scheme for wave propagation problems in heterogeneous media. We illustrate our methodology for acoustic waves in a heterogeneous medium, but the results can be generalized to more complex systems. The finite volume evolution Galerkin (FVEG) method is a predictor-corrector method combining a finite volume corrector step with an evolutionary predictor step. In order to evolve fluxes along the cell interfaces, we use a multi-dimensional approximate evolution operator. The latter is constructed using the theory of bicharacteristics under the assumption of spatially dependent wave speeds. To approximate the heterogeneous medium, a staggered grid approach is used. Several numerical experiments for wave propagation with continuous as well as discontinuous wave speeds confirm the robustness and reliability of the new FVEG scheme.
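The staggered-grid treatment of a heterogeneous medium is easiest to see in one space dimension. The Python sketch below is a standard first-order staggered scheme for the 1D acoustic system p_t + ρc²(x) u_x = 0, ρ u_t + p_x = 0 with a discontinuous wave speed; it illustrates the staggering idea only and is not the multi-dimensional FVEG evolution operator itself. All parameter values are assumptions.

import numpy as np

# 1D linear acoustics on a staggered grid: pressure p at cell centers,
# velocity u at cell faces; the sound speed c(x) varies in space.
nx, L, T = 400, 1.0, 0.3
dx = L / nx
x = (np.arange(nx) + 0.5) * dx
rho = 1.0
c = np.where(x < 0.5, 1.0, 2.0)       # discontinuous wave speed
dt = 0.4 * dx / c.max()               # CFL-limited time step

p = np.exp(-200.0 * (x - 0.25)**2)    # Gaussian pressure pulse
u = np.zeros(nx + 1)                  # face velocities; rigid ends stay 0

t = 0.0
while t < T:
    # velocity update from the pressure gradient (interior faces)
    u[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])
    # pressure update from the velocity divergence
    p -= dt * rho * c**2 / dx * (u[1:] - u[:-1])
    t += dt
print(f"max |p| at t = {t:.2f}: {np.abs(p).max():.3f}")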

Relevance: 20.00%

Abstract:

The hydrodynamic modes and the velocity autocorrelation functions for a dilute sheared inelastic fluid are analyzed using an expansion in the parameter ε = (1 − e)^(1/2), where e is the coefficient of restitution. It is shown that the hydrodynamic modes for a sheared inelastic fluid are very different from those for an elastic fluid in the long-wave limit, since energy is not a conserved variable when the wavelength of perturbations is larger than the "conduction length". In an inelastic fluid under shear, there are three coupled modes, the mass and the momenta in the plane of shear, which have a decay rate proportional to k^(2/3) in the limit k → 0, if the wave vector has a component along the flow direction. When the wave vector is aligned along the gradient-vorticity plane, we find that the scaling of the growth rate is similar to that for an elastic fluid. The Fourier transforms of the velocity autocorrelation functions are calculated for a steady shear flow, correct to leading order in an expansion in ε. The time dependence of the autocorrelation function in the long-time limit is obtained by estimating the integral of the Fourier transform over wave-number space. It is found that the autocorrelation functions for the velocity in the flow and gradient directions decay proportional to t^(−5/2) in two dimensions and t^(−15/4) in three dimensions. In the vorticity direction, the decay of the autocorrelation function is proportional to t^(−3) in two dimensions and t^(−7/2) in three dimensions.
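Collecting the long-time tails quoted above in one place (this restates the abstract's results; C_par denotes the flow and gradient directions, C_perp the vorticity direction):

\begin{align*}
C_{\parallel}(t) &\sim t^{-5/2} \quad (d = 2), & C_{\parallel}(t) &\sim t^{-15/4} \quad (d = 3),\\
C_{\perp}(t) &\sim t^{-3} \quad (d = 2), & C_{\perp}(t) &\sim t^{-7/2} \quad (d = 3).
\end{align*}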

Relevance: 20.00%

Abstract:

A new finite element is developed for free vibration analysis of high-speed rotating beams, using basis functions formed from a linear combination of the solution of the governing static differential equation of a stiff string and a cubic polynomial. These new shape functions depend on the rotation speed and the element position along the beam, and account for the centrifugal stiffening effect. The natural frequencies predicted by the proposed element are compared with those of elements using stiff-string, cubic polynomial and quintic polynomial shape functions. It is found that the new element exhibits superior convergence compared to the other basis functions.
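The stiff-string part of the basis comes from the static equation EI w'''' − T w'' = 0, whose general solution mixes polynomial and hyperbolic terms, 1, x, cosh(λx) and sinh(λx) with λ = √(T/EI). The Python sketch below assembles such a basis with the centrifugal tension evaluated at the element location; all numerical values are illustrative assumptions, not the paper's test cases.

import numpy as np

EI = 50.0        # N m^2, bending stiffness (assumed)
Omega = 100.0    # rad/s, rotation speed (assumed)
m = 1.0          # kg/m, mass per unit length (assumed)
R_elem, L_beam = 0.4, 1.0   # element position and beam length, m

# Centrifugal tension at the element, T = int_{R}^{L} m Omega^2 r dr,
# treated as constant over the element.
T = 0.5 * m * Omega**2 * (L_beam**2 - R_elem**2)
lam = np.sqrt(T / EI)       # stiff-string parameter

def stiff_string_basis(xi, le=0.1):
    """Solution basis of EI w'''' - T w'' = 0 on the element (first
    four entries), augmented with the quadratic and cubic monomials
    contributed by the cubic-polynomial part."""
    x = xi * le
    return np.array([1.0, x, np.cosh(lam * x), np.sinh(lam * x),
                     x**2, x**3])

print(stiff_string_basis(0.5))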

Relevance: 20.00%

Abstract:

A fuzzy waste-load allocation model, FWLAM, is developed for water quality management of a river system using fuzzy multiple-objective optimization. An important feature of this model is its capability to incorporate the aspirations and conflicting objectives of the pollution control agency and dischargers. The vagueness associated with specifying the water quality criteria and fraction removal levels is modeled in a fuzzy framework. The goals related to the pollution control agency and dischargers are expressed as fuzzy sets. The membership functions of these fuzzy sets are considered to represent the variation of satisfaction levels of the pollution control agency and dischargers in attaining their respective goals. Two formulations, namely the MAX-MIN and MAX-BIAS formulations, are proposed for FWLAM. The MAX-MIN formulation maximizes the minimum satisfaction level in the system. The MAX-BIAS formulation maximizes a bias measure, giving a solution that favors the dischargers. Maximization of the bias measure attempts to keep the satisfaction levels of the dischargers away from the minimum satisfaction level and that of the pollution control agency close to the minimum satisfaction level. Most of the conventional water quality management models use waste treatment cost curves that are uncertain and nonlinear. Unlike such models, FWLAM avoids the use of cost curves. Further, the model provides the flexibility for the pollution control agency and dischargers to specify their aspirations independently.
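The MAX-MIN formulation has the classic fuzzy-LP structure: maximize the minimum satisfaction level λ subject to every membership function being at least λ. A minimal Python sketch with one decision variable and linear membership functions follows; the two goals, their bounds and the numbers are invented for illustration and are not the river-system model.

from scipy.optimize import linprog

# Decision variable: fraction removal level x in [0, 1].
# Agency goal (higher removal):    mu_A(x) = (x - 0.3) / 0.6
# Discharger goal (lower removal): mu_D(x) = (0.95 - x) / 0.6
# MAX-MIN: maximize lam s.t. mu_A(x) >= lam and mu_D(x) >= lam.

# Variables [x, lam]; linprog minimizes, so the objective is -lam.
c = [0.0, -1.0]
A_ub = [
    [-1.0 / 0.6, 1.0],   # lam - (x - 0.3)/0.6  <= 0
    [ 1.0 / 0.6, 1.0],   # lam - (0.95 - x)/0.6 <= 0
]
b_ub = [-0.3 / 0.6, 0.95 / 0.6]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1), (0, 1)])
x_opt, lam_opt = res.x
print(f"removal fraction: {x_opt:.3f}, min satisfaction: {lam_opt:.3f}")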

Relevance: 20.00%

Abstract:

The literature contains many examples of digital procedures for the analytical treatment of electroencephalograms, but there is as yet no standard by which those techniques may be judged or compared. This paper proposes one method of generating an EEG, based on a computer program implementing Zetterberg's simulation. It is assumed that the statistical properties of an EEG may be represented by stationary processes having rational transfer functions, realized by a system of software filters and random number generators. The model represents neither the neurological mechanism responsible for generating the EEG, nor any particular type of EEG record; transient phenomena such as spikes, sharp waves and alpha bursts are also excluded. The basis of the program is a valid 'partial' statistical description of the EEG; that description is then used to produce a digital representation of a signal which, if plotted sequentially, might or might not by chance resemble an EEG; that is unimportant. What is important is that the statistical properties of the series remain those of a real EEG; it is in this sense that the output is a simulation of the EEG. There is considerable flexibility in the form of the output, i.e. its alpha, beta and delta content, which may be selected by the user, the same selected parameters always producing the same statistical output. The filtered outputs from the random number sequences may be scaled to provide realistic power distributions in the accepted EEG frequency bands and then summed to create a digital output signal, the 'stationary EEG'. It is suggested that the simulator might act as a test input to digital analytical techniques for the EEG, enabling at least a substantial part of those techniques to be compared and assessed in an objective manner. The equations necessary to implement the model are given. The program has been run on a DEC1090 computer but is suitable for any microcomputer having more than 32 kBytes of memory; the execution time required to generate a 25 s simulated EEG is in the region of 15 s.
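The filtered-noise construction described above translates almost line for line into code: drive a rational (IIR) filter with white noise for each EEG band, scale each band to its target power, and sum. The Python sketch below uses Butterworth band-pass filters and made-up band weights; Zetterberg's actual model parameters are not reproduced here.

import numpy as np
from scipy import signal

fs = 128               # Hz, sampling rate (assumed)
n = 25 * fs            # 25 s record, as in the paper's timing example
rng = np.random.default_rng(42)

# (band, low Hz, high Hz, relative amplitude) -- illustrative values.
bands = [("delta", 0.5, 4.0, 1.0),
         ("alpha", 8.0, 12.0, 2.0),
         ("beta", 13.0, 30.0, 0.5)]

eeg = np.zeros(n)
for name, lo, hi, weight in bands:
    # Rational transfer function: 4th-order Butterworth band-pass.
    b, a = signal.butter(4, [lo, hi], btype="bandpass", fs=fs)
    x = signal.lfilter(b, a, rng.standard_normal(n))
    eeg += weight * x / x.std()   # scale the band to its target power

print("first samples of the simulated 'stationary EEG':", eeg[:5])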

Relevance: 20.00%

Abstract:

Following Ioffe's method of QCD sum rules, the structure functions F2(x) for deep inelastic ep and en scattering are calculated. Valence u-quark and d-quark distributions are obtained in the range 0.1 ≲ x < 0.4 and compared with data. In the case of polarized targets, the structure function g1(x) and the asymmetry are calculated. The latter is in satisfactory agreement in sign and magnitude with experiments for x in the range 0.1 < x < 0.4.

Relevance: 20.00%

Abstract:

An adaptive learning scheme, based on a fuzzy approximation to the gradient descent method, is described for training a pattern classifier using unlabeled samples. The objective function defined for the fuzzy ISODATA clustering procedure is used as the loss function for computing the gradient. Learning is based on simultaneous fuzzy decision making and estimation, and uses conditional fuzzy measures on unlabeled samples. An exponential membership function is assumed for each class, and the parameters constituting these membership functions are estimated, using the gradient, in a recursive fashion. The induced possibility of occurrence of each class is useful for estimation and is computed using (1) the membership of the new sample in that class and (2) the previously computed average possibility of occurrence of the same class. An inductive entropy measure is defined in terms of the induced possibility distribution to measure the extent of learning. The method is illustrated with relevant examples.
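A minimal sketch of the flavor of such a scheme: an exponential membership per class, a fuzzy-ISODATA-style loss J = Σ_k μ_k^q ||x − m_k||², and one recursive gradient step on the class prototypes per unlabeled sample. The membership form, fuzzifier q, spread and step size below are illustrative assumptions, and the induced-possibility bookkeeping of the paper is omitted.

import numpy as np

rng = np.random.default_rng(1)
means = np.array([[0.0, 0.0], [3.0, 3.0]])   # initial class prototypes
sigma2, q, eta = 1.0, 2.0, 0.05              # spread, fuzzifier, step size

def memberships(x, means):
    """Exponential membership in each class, normalized to sum to 1."""
    d2 = ((x - means)**2).sum(axis=1)
    mu = np.exp(-d2 / (2.0 * sigma2))
    return mu / mu.sum()

# Stream of unlabeled samples from two hypothetical classes.
data = np.vstack([rng.normal(0.2, 0.5, (100, 2)),
                  rng.normal(2.8, 0.5, (100, 2))])
rng.shuffle(data)

for x in data:
    mu = memberships(x, means)
    # Gradient of sum_k mu_k^q ||x - m_k||^2 w.r.t. m_k is
    # -2 mu_k^q (x - m_k); take one descent step per sample.
    means += eta * 2.0 * (mu**q)[:, None] * (x - means)
print("learned prototypes:\n", means)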

Relevance: 20.00%

Abstract:

Recent work on the violent relaxation of collisionless stellar systems has been based on the notion of a wide class of entropy functions. A theorem concerning entropy increase has been proved. We draw attention to some underlying assumptions that have been ignored in the applications of this theorem to stellar dynamical problems. Once these are taken into account, the use of this theorem is at best heuristic. We present a simple counter-example.

Relevance: 20.00%

Abstract:

A geometrical structure called the implied minterm structure (IMS) has been developed from the properties of minterms of a threshold function. The IMS is useful for the manual testing of linear separability of switching functions of up to six variables. This testing is done just by inspection of the plot of the function on the IMS.
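Alongside inspection on the IMS, linear separability can be checked mechanically as a linear feasibility problem: find weights w and threshold t with w·x ≥ t on true minterms and w·x ≤ t − 1 on false ones (the unit margin is an arbitrary but standard way to enforce strict separation). A small Python sketch, offered as a brute-force complement to the IMS rather than the IMS itself:

import numpy as np
from itertools import product
from scipy.optimize import linprog

def is_threshold_function(f, n):
    """Linear separability of a switching function f on n variables,
    via LP feasibility: w.x >= t on true minterms, w.x <= t - 1 else."""
    rows, rhs = [], []
    for point in product([0, 1], repeat=n):
        x = np.array(point, dtype=float)
        if f(x):
            rows.append(np.append(-x, 1.0))   # t - w.x <= 0
            rhs.append(0.0)
        else:
            rows.append(np.append(x, -1.0))   # w.x - t <= -1
            rhs.append(-1.0)
    res = linprog(np.zeros(n + 1), A_ub=np.array(rows),
                  b_ub=np.array(rhs), bounds=[(None, None)] * (n + 1))
    return res.success

maj = lambda x: x.sum() >= 2            # 3-input majority: threshold
xor = lambda x: int(x[0]) ^ int(x[1])   # XOR: not linearly separable
print(is_threshold_function(maj, 3))    # True
print(is_threshold_function(xor, 2))    # False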

Relevance: 20.00%

Abstract:

The transmission loss of a rectangular expansion chamber, the inlet and outlet of which may be situated at arbitrary locations of the chamber, i.e., on the side wall or the face of the chamber, is analyzed here based on the Green's function of a rectangular cavity with homogeneous boundary conditions. The rectangular chamber Green's function is expressed in terms of a finite number of rigid rectangular cavity mode shapes. The inlet and outlet ports are modeled as uniform velocity pistons. If the size of the piston is small compared to the wavelength, then the plane wave excitation is a valid assumption. The velocity potential inside the chamber is expressed by superimposing the velocity potentials of two different configurations. The first configuration is a piston source at the inlet port with a rigid termination at the outlet, and the second is a piston at the outlet with a rigid termination at the inlet. Pressure inside the chamber is derived from the velocity potentials using the linear momentum equation. The average pressure acting on the pistons at the inlet and outlet locations is estimated by integrating the acoustic pressure over the piston area in the two constituent configurations. The transfer matrix is derived from the average pressure values, and thence the transmission loss is calculated. The results are verified against those in the literature where use has been made of modal expansions, and also against numerical (FEM fluid) models. The transfer matrix formulation for rectangular chambers with yielding walls has also been derived, incorporating structural-acoustic coupling. Parametric studies are conducted for different inlet and outlet configurations, and the various phenomena occurring in the TL curves that cannot be explained by classical plane wave theory are discussed.
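The rigid rectangular cavity modes underlying the Green's function are cos(lπx/Lx) cos(mπy/Ly) cos(nπz/Lz), with natural frequencies f_lmn = (c/2) √((l/Lx)² + (m/Ly)² + (n/Lz)²). The short Python sketch below enumerates the lowest modes that such a finite modal expansion would retain; the chamber dimensions are illustrative assumptions.

import numpy as np
from itertools import product

c = 343.0                    # m/s, speed of sound in air
Lx, Ly, Lz = 0.3, 0.2, 0.5   # m, chamber dimensions (assumed)

def cavity_mode_freqs(n_max=3):
    """Natural frequencies of the rigid rectangular cavity modes
    cos(l pi x / Lx) cos(m pi y / Ly) cos(n pi z / Lz)."""
    freqs = []
    for l, m, n in product(range(n_max + 1), repeat=3):
        f = 0.5 * c * np.sqrt((l / Lx)**2 + (m / Ly)**2 + (n / Lz)**2)
        freqs.append(((l, m, n), f))
    return sorted(freqs, key=lambda item: item[1])

for mode, f in cavity_mode_freqs()[:6]:   # (0,0,0) is the zero mode
    print(f"mode {mode}: {f:7.1f} Hz")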

Relevance: 20.00%

Abstract:

We present a new, generic method/model for the multi-objective design optimization of laminated composite components, using a novel multi-objective optimization algorithm developed on the basis of the Quantum-behaved Particle Swarm Optimization (QPSO) paradigm. QPSO is a variant of the popular Particle Swarm Optimization (PSO) and has been developed and implemented successfully for the multi-objective design optimization of composites. The problem is formulated with the multiple objectives of minimizing the weight and the total cost of the composite component while achieving a specified strength. The primary optimization variables are the number of layers, the stacking sequence (the orientation of the layers) and the thickness of each layer. Classical lamination theory is utilized to determine the stresses in the component, and the design is evaluated based on three failure criteria: the failure-mechanism-based criterion, the maximum stress criterion and the Tsai-Wu criterion. The optimization method is validated for a number of different loading configurations: uniaxial, biaxial and bending loads. The design optimization has been carried out for both variable stacking sequences and fixed standard stacking schemes, and a comparative study of the different design configurations evolved is presented. The performance of QPSO is also compared with that of conventional PSO.
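What distinguishes the QPSO update from standard PSO is that it is velocity-free: each particle jumps around a stochastic attractor built from its personal best, the global best and the mean of all personal bests (mbest). The Python sketch below shows that update on a toy single-objective surrogate for a scalarized weight-plus-cost measure; the objective, the contraction coefficient beta and the paper's laminate encoding and multi-objective machinery are not reproduced, so everything numerical here is an assumption.

import numpy as np

rng = np.random.default_rng(7)

def objective(x):
    """Toy stand-in for a scalarized weight + cost measure."""
    return np.sum((x - 1.5)**2, axis=-1)

n_part, dim, iters, beta = 20, 4, 200, 0.75
X = rng.uniform(0, 3, (n_part, dim))
pbest = X.copy()
pcost = objective(pbest)

for _ in range(iters):
    gbest = pbest[pcost.argmin()]
    mbest = pbest.mean(axis=0)              # mean of personal bests
    phi = rng.uniform(size=(n_part, dim))
    p = phi * pbest + (1 - phi) * gbest     # stochastic attractor
    u = rng.uniform(1e-12, 1.0, size=(n_part, dim))
    sign = np.where(rng.uniform(size=(n_part, dim)) < 0.5, -1.0, 1.0)
    # QPSO position update: no velocity term, 'quantum delta-well' jump.
    X = p + sign * beta * np.abs(mbest - X) * np.log(1.0 / u)
    cost = objective(X)
    better = cost < pcost
    pbest[better], pcost[better] = X[better], cost[better]

print("best cost:", pcost.min(), "at", pbest[pcost.argmin()])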