849 results for Energy based approach
Abstract:
Three contact conditions may prevail at a contact interface depending on the magnitude of the normal and tangential loads: the stick condition, the partial slip condition, or the gross sliding condition. Numerical techniques have been used to evaluate the stress field under partial slip and gross sliding conditions. The Cattaneo–Mindlin approach has been adapted to model the partial slip condition. Shear strain energy density and normalized strain energy release rate have been evaluated at the surface and in the subsurface region. The present study indicates that the shear strain energy density gives a fair prediction of damage nucleation, whereas crack propagation is controlled by the normalized strain energy release rate. Further, it has been observed that the intensity of damage depends strongly on the coefficient of friction and on the contact conditions prevailing at the contact interface.
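For a spherical Hertzian contact, the classical Cattaneo–Mindlin result gives the stick-zone radius c as a fraction of the contact radius a in the partial slip regime. A minimal sketch of that relation (the function name and argument order are ours, not the paper's):

```python
def stick_zone_radius_ratio(Q, mu, P):
    """Cattaneo-Mindlin stick-zone radius ratio c/a for a spherical
    Hertzian contact under normal load P and tangential load Q.

    Partial slip requires 0 <= Q < mu*P; at Q = mu*P the stick zone
    vanishes and gross sliding begins.
    """
    if not (0.0 <= Q <= mu * P):
        raise ValueError("Q must satisfy 0 <= Q <= mu*P (partial slip regime)")
    # c/a = (1 - Q/(mu*P))^(1/3): the stick zone shrinks as the
    # tangential load approaches the friction limit
    return (1.0 - Q / (mu * P)) ** (1.0 / 3.0)
```

At Q = 0 the whole contact sticks (c/a = 1); as Q approaches the friction limit mu*P, the stick zone collapses and gross sliding takes over.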
Abstract:
Structural fire safety has become one of the key considerations in the design and maintenance of the built infrastructure. Conventionally, the fire resistance rating of load-bearing Light gauge Steel Frame (LSF) walls is determined based on the standard time-temperature curve given in ISO 834. Recent research has shown that the true fire resistance of building elements exposed to building fires can be less than the ratings determined from standard fire tests, and it is questionable whether the standard time-temperature curve truly represents the fuel loads in modern buildings. Therefore, an equivalent fire severity approach has been used in the past to obtain fire resistance ratings, based on comparing the performance of a structural member exposed to a realistic design fire curve with its performance under the standard time-temperature curve. This paper presents the details of research undertaken to develop an energy-based time equivalent approach for obtaining the fire resistance ratings of LSF walls exposed to realistic design fire curves with respect to standard fire exposure. The approach relates the two exposures through the amount of energy transferred to the member. The proposed method was used to predict the fire resistance ratings of single and double layer plasterboard lined and externally insulated LSF walls. The predicted fire ratings were compared with the results from finite element analyses and fire design rules for three different wall configurations exposed to both rapid and prolonged fires. The comparison shows that the proposed energy method can be used to obtain the fire resistance ratings of LSF walls in the case of prolonged fires.
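The core idea of an energy-based time equivalence can be sketched numerically: find the standard-fire exposure time that delivers the same energy to the member as a given design fire. The sketch below uses black-body radiant energy as a deliberately simplified proxy; the published method also accounts for convection and the wall's thermal response, and all function names here are illustrative:

```python
import numpy as np

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def iso834_temp_C(t_min):
    """ISO 834 standard time-temperature curve; t in minutes, T in deg C."""
    t_min = np.asarray(t_min, dtype=float)
    return 20.0 + 345.0 * np.log10(8.0 * t_min + 1.0)

def cumulative_radiant_energy(t_min, T_C):
    """Cumulative black-body radiant energy per unit area (J/m^2) of a fire
    at gas temperature T_C(t); a simplified proxy for energy transfer."""
    t_s = np.asarray(t_min, dtype=float) * 60.0
    q = SIGMA * (np.asarray(T_C, dtype=float) + 273.15) ** 4   # W/m^2
    dE = 0.5 * (q[1:] + q[:-1]) * np.diff(t_s)                 # trapezoid rule
    return np.concatenate(([0.0], np.cumsum(dE)))

def equivalent_time(design_t_min, design_T_C, std_grid_min):
    """Standard-fire exposure time (minutes) delivering the same radiant
    energy as the given design fire, by interpolation on std_grid_min."""
    E_design = cumulative_radiant_energy(design_t_min, design_T_C)[-1]
    E_std = cumulative_radiant_energy(std_grid_min, iso834_temp_C(std_grid_min))
    return float(np.interp(E_design, E_std, std_grid_min))
```

As a sanity check, a design fire identical to the ISO 834 curve over 60 minutes should return an equivalent time of 60 minutes.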
Abstract:
We have recently suggested a method (Pallavi Bhattacharyya and K. L. Sebastian, Physical Review E 2013, 87, 062712) for the analysis of coherence in finite-level systems that are coupled to their surroundings and used it to study the process of energy transfer in the Fenna-Matthews-Olson (FMO) complex. The method makes use of adiabatic eigenstates of the Hamiltonian, with a subsequent transformation of the Hamiltonian into a form in which the terms responsible for decoherence and population relaxation can be separated out at the lowest order. Thus one can account for decoherence nonperturbatively, while a Markovian master equation can be used for evaluating the population relaxation. In this paper, we apply this method to a two-level system as well as to a seven-level system. Comparisons with exact numerical results show that the method works quite well, and the technique can be applied with ease to systems with larger numbers of levels. We also investigate how the presence of correlations among the bath degrees of freedom of the different bacteriochlorophyll a molecules of the FMO complex affects the rate of energy transfer. Surprisingly, in the cases that we studied, our calculations suggest that the presence of anticorrelations, in contrast to correlations, makes the excitation transfer more facile.
Abstract:
Exergetic analysis can provide useful information, as it enables the identification of the irreversible phenomena that bring about entropy generation and, therefore, exergy losses (also referred to as irreversibilities). As far as human thermal comfort is concerned, irreversibilities can be evaluated based on parameters related to both the occupant and the surroundings. In an attempt to provide further insight into the exergetic analysis of thermal comfort, this paper calculates irreversibility rates for a sitting person wearing fairly light clothes and subjected to combinations of ambient air and mean radiant temperatures. The thermodynamic model framework relies on the so-called conceptual energy balance equation together with empirical correlations for the invoked thermoregulatory heat transfer rates, adapted for a clothed body. Results suggest that a minimum irreversibility rate may exist for particular combinations of the aforesaid surrounding temperatures. By separately considering the contribution of each thermoregulatory mechanism, the total irreversibility rate proved most responsive to the convective and radiative clothing-influenced heat transfers, with exergy losses becoming lower if the body is able to transfer more heat to the ambient via convection.
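The basic quantity behind such an analysis is the exergy destroyed when heat flows across a finite temperature difference. A minimal sketch via the Gouy-Stodola theorem, covering a single heat-transfer channel only (the paper's full thermoregulatory model has several; variable names are ours):

```python
def irreversibility_rate(Q_dot, T_body, T_env, T0=None):
    """Gouy-Stodola irreversibility rate for heat Q_dot (W) flowing from
    a surface at T_body to surroundings at T_env (temperatures in K).
    The dead-state temperature T0 defaults to the environment temperature."""
    if T0 is None:
        T0 = T_env
    # entropy generation rate of heat transfer across a finite Delta-T
    S_gen = Q_dot * (1.0 / T_env - 1.0 / T_body)   # W/K
    # exergy destruction rate (irreversibility rate)
    return T0 * S_gen                              # W
```

The irreversibility vanishes when the temperature difference vanishes and grows with it, which is why the total rate can exhibit a minimum over combinations of surrounding temperatures.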
Abstract:
This paper reviews some recent results in motion control of marine vehicles using a technique called Interconnection and Damping Assignment Passivity-based Control (IDA-PBC). This approach to motion control exploits the fact that vehicle dynamics can be described in terms of energy storage, distribution, and dissipation, and that the stable equilibrium points of mechanical systems are those at which the potential energy attains a minimum. The control forces are used to transform the closed-loop dynamics into a port-controlled Hamiltonian system with dissipation. This is achieved by shaping the energy-storing characteristics of the system, modifying its interconnection structure (how the energy is distributed), and injecting damping. The end result is that the closed-loop system has a stable equilibrium (hopefully global) at the desired operating point. By forcing the closed-loop dynamics into a Hamiltonian form, the resulting total energy function of the system serves as a Lyapunov function that can be used to demonstrate stability. We consider the tracking and regulation of fully actuated unmanned underwater vehicles, the extension to under-actuated slender vehicles, and also manifold regulation of under-actuated surface vessels. The paper is concluded with an outlook on future research.
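The energy-shaping-plus-damping-injection idea can be illustrated on a one-degree-of-freedom pendulum, a special fully actuated case of IDA-PBC: the control cancels the original potential, places the shaped potential minimum at the setpoint, and injects damping so the shaped energy decays. Gains and model parameters below are illustrative, not from the paper:

```python
import math

def simulate_energy_shaping(q0, qd0, q_star, m=2.0, kp=4.0, kd=3.0,
                            g=9.81, l=1.0, dt=1e-3, steps=20000):
    """Regulate a pendulum (m*l^2*qdd = u - m*g*l*sin(q)) to q_star via
    potential-energy shaping plus damping injection:
        u = m*g*l*sin(q) - kp*(q - q_star) - kd*qd
    The shaped energy H_d = 0.5*m*l^2*qd^2 + 0.5*kp*(q - q_star)^2 is a
    Lyapunov function for the closed loop; kd*qd injects dissipation."""
    q, qd = q0, qd0
    for _ in range(steps):
        u = m * g * l * math.sin(q) - kp * (q - q_star) - kd * qd
        qdd = (u - m * g * l * math.sin(q)) / (m * l ** 2)
        qd += dt * qdd          # semi-implicit Euler step
        q += dt * qd
    return q, qd
```

Simulating from an initial offset, the state converges to the desired equilibrium with zero velocity, as the Lyapunov argument predicts.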
Abstract:
In this paper, we consider a passivity-based approach to the design of a control law for multiple ship-roll gyro-stabiliser units. We extend previous work on control of ship roll gyro-stabilisation by considering the problem within a nonlinear framework. In particular, we derive an energy-based model using port-Hamiltonian theory and then design an active precession controller using interconnection and damping assignment passivity-based control. The design considers the possibility of having multiple gyro-stabiliser units, and the desired potential energy of the closed-loop system is chosen to behave like a barrier function, which allows us to enforce constraints on the precession angle of the gyros.
Abstract:
Our study concerns an important current problem, that of diffusion of information in social networks. This problem has received significant attention from the Internet research community in recent times, driven by many potential applications such as viral marketing and sales promotions. In this paper, we focus on the target set selection problem, which involves discovering a small subset of influential players in a given social network to perform a certain task of information diffusion. The target set selection problem manifests in two forms: 1) the top-k nodes problem and 2) the lambda-coverage problem. In the top-k nodes problem, we are required to find a set of k key nodes that would maximize the number of nodes being influenced in the network. The lambda-coverage problem is concerned with finding a minimal-size set of key nodes that can influence a given percentage lambda of the nodes in the entire network. We propose a new way of solving these problems using the concept of the Shapley value, a well-known solution concept in cooperative game theory. Our approach leads to algorithms which we call the Shapley value-based Influential Nodes (SPIN) algorithms for solving the top-k nodes problem and the lambda-coverage problem. We compare the performance of the proposed SPIN algorithms with well-known algorithms in the literature. Through extensive experimentation on four synthetically generated random graphs and six real-world data sets (Celegans, Jazz, the NIPS coauthorship data set, the Netscience data set, the High-Energy Physics data set, and the Political Books data set), we show that the proposed SPIN approach is more powerful and computationally efficient. Note to Practitioners: In recent times, social networks have received a high level of attention due to their proven ability to improve the performance of web search, recommendations in collaborative filtering systems, the spread of a technology in the market using viral marketing techniques, etc.
It is well known that the interpersonal relationships (or ties or links) between individuals cause change or improvement in the social system, because the decisions made by individuals are influenced heavily by the behavior of their neighbors. An interesting and key problem in social networks is to discover the nodes that can influence other nodes in the network most strongly and deeply. This problem is called the target set selection problem and has two variants: 1) the top-k nodes problem, where we are required to identify a set of k influential nodes that maximize the number of nodes being influenced in the network, and 2) the lambda-coverage problem, which involves finding a minimum-size set of influential nodes that can influence a given percentage lambda of the nodes in the entire network. There are many existing algorithms in the literature for solving these problems. In this paper, we propose a new algorithm based on a novel interpretation of information diffusion in a social network as a cooperative game. Using this analogy, we develop an algorithm based on the Shapley value of the underlying cooperative game. The proposed algorithm outperforms the existing algorithms in generality, computational complexity, or both. Our results are validated through extensive experimentation on both synthetically generated and real-world data sets.
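The Shapley value of a node can be estimated by sampling random permutations and averaging marginal contributions. A minimal sketch for the top-k problem, using a simple one-hop coverage game as a stand-in for the diffusion-model payoffs the SPIN work uses (the characteristic function and parameters here are illustrative):

```python
import random

def shapley_top_k(adj, k, samples=2000, seed=0):
    """Rank nodes by Monte Carlo Shapley value of a coverage game in which
    the value of a coalition is the number of nodes it reaches in one hop,
    then return the top-k nodes. adj maps each node to its neighbor list."""
    rng = random.Random(seed)
    nodes = list(adj)
    # one-hop reach of each node, including itself
    reach = {v: {v} | set(adj[v]) for v in nodes}

    shapley = {v: 0.0 for v in nodes}
    for _ in range(samples):
        rng.shuffle(nodes)          # a random ordering of players
        covered = set()
        for v in nodes:
            # marginal contribution: newly covered nodes when v joins
            gain = len(reach[v] - covered)
            shapley[v] += gain / samples
            covered |= reach[v]
    return sorted(shapley, key=shapley.get, reverse=True)[:k]
```

On a star graph, for example, the hub's average marginal contribution dominates, so it is returned as the single most influential node.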
Abstract:
Artificial Neural Networks (ANNs) have recently been proposed as an alternative method for solving certain traditional problems in power systems where conventional techniques have not achieved the desired speed, accuracy or efficiency. This paper presents an application of ANNs whose aim is fast voltage stability margin assessment of a power network in an energy control centre (ECC), with a reduced number of appropriate inputs. The L-index has been used for assessing the voltage stability margin. Investigations are carried out on the influence of the information encompassed in the input vector and target output vector on the learning time and test performance of a multilayer perceptron (MLP) based ANN model. An LP-based algorithm for voltage stability improvement is used for generating meaningful training patterns in the normal operating range of the system. From the generated set of training patterns, appropriate patterns are selected based on a statistical correlation process, a sensitivity matrix approach, a contingency ranking approach and the concentric relaxation method. Simulation results on a 24-bus EHV system, a 30-bus modified IEEE system, and an 82-bus Indian power network are presented for illustration purposes.
Abstract:
Ampcalculator (AMPC) is a Mathematica-based program that was made publicly available some time ago by Unterdorfer and Ecker. It enables the user to compute several processes at one loop (up to O(p^4)) in SU(3) chiral perturbation theory, including matrix elements and form factors for strong and non-leptonic weak processes with at most six external states. It was used to compute some novel processes and was tested against well-known results by the original authors. Here we present the results of several thorough checks of the package; exhaustive checks performed by the original authors are not publicly available, hence the present effort. Some new results are obtained from the software, especially in the kaon odd-intrinsic-parity non-leptonic decay sector involving the coupling G_27. Another illustrative set of amplitudes we provide, at tree level, is in the context of tau decays with several mesons, including quark mass effects, of use to the BELLE experiment. All eight meson-meson scattering amplitudes have been checked. The kaon Compton amplitude has been checked, and a minor error in the published results has been pointed out. This exercise is a tutorial-based one, wherein several input and output notebooks are also being made available as ancillary files on the arXiv. Some of the additional notebooks contain the explicit expressions that we have used for comparison with established results. The purpose is to encourage users to apply the software to suit their specific needs. An automatic amplitude generator of this type can provide error-free outputs that could be used as inputs for further simplification, and in varied scenarios such as applications of chiral perturbation theory at finite temperature, density and volume. It can also be used by students as a learning aid in low-energy hadron dynamics.
Abstract:
Mobile ad hoc networks (MANETs) are one of the successful wireless network paradigms, offering unrestricted mobility without depending on any underlying infrastructure. MANETs have become an exciting and important technology in recent years because of the rapid proliferation of a variety of wireless devices and the increased use of ad hoc networks in various applications. Like any other network, MANETs are prone to a variety of attacks, mainly on the routing side. Most of the proposed secure routing solutions based on cryptography and authentication methods have high overhead, which results in latency problems and resource crunches, especially in energy. The successful working of these mechanisms also depends on secure key management involving a trusted third authority, which is generally difficult to implement in a MANET environment due to its volatile topology. Designing a secure routing algorithm for MANETs that incorporates the notion of trust without maintaining any trusted third entity has been an interesting research problem in recent years. This paper proposes a new trust model based on cognitive reasoning, which associates the notion of trust with all the member nodes of a MANET using a novel Behaviors-Observations-Beliefs (BOB) model. These trust values are used for the detection and prevention of malicious and dishonest nodes while routing the data. The proposed trust model works with the DTM-DSR protocol, which involves computation of direct trust between any two nodes using cognitive knowledge. We have taken care of trust fading over time, rewards, and penalties while computing the trustworthiness of a node and also of a route. A simulator was developed for testing the proposed algorithm; the results of the experiments show that incorporating cognitive reasoning into the computation of trust in routing effectively detects intrusions in the MANET environment and generates more reliable routes for secure routing of data.
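The ingredients mentioned in the abstract (trust fading over time, rewards, penalties) can be sketched as a single scalar update rule. This is an illustration in the spirit of such models, not the paper's BOB formulation; all constants are illustrative:

```python
def update_trust(trust, cooperated, fade=0.99, reward=0.05, penalty=0.15):
    """One step of a fading trust update: trust decays toward a neutral
    value (0.5) over time, grows on observed cooperative behaviour and
    drops faster on misbehaviour (penalty outweighs reward), then is
    clamped to the [0, 1] range."""
    trust = 0.5 + fade * (trust - 0.5)            # fading toward neutrality
    trust += reward if cooperated else -penalty   # reward or penalty
    return min(1.0, max(0.0, trust))
```

Because the penalty exceeds the reward, a node that alternates good and bad behaviour still loses trust on balance, which is the usual design choice for intrusion-sensitive routing.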
Abstract:
Nanocrystalline titania is a robust candidate for various functional applications owing to its non-toxicity, cheap availability, ease of preparation and exceptional photochemical as well as thermal stability. The uniqueness of each lattice structure of titania leads to multifaceted physico-chemical and opto-electronic properties, which yield different functionalities and thus influence performance in various green energy applications. High-temperature treatment for crystallizing titania triggers inevitable particle growth and the destruction of delicate nanostructural features. Thus, the preparation of crystalline titania with tunable phase/particle size/morphology at low to moderate temperatures using a solution-based approach has paved the way for further exciting areas of research. In this focused review, titania synthesis by the hydrothermal/solvothermal method, the conventional sol-gel method and sol-gel-assisted methods via ultrasonication, photoillumination and ionic liquids (ILs), as well as thermolysis and microemulsion routes, are discussed. These wet chemical methods have broad applicability, since multiple reaction parameters, such as precursor chemistry, surfactants, chelating agents, solvents, mineralizer, pH of the solution, aging time, reaction temperature/time and inorganic electrolytes, can be easily manipulated to tune the final physical structure. This review sheds light on the stabilization/phase transformation pathways of titania polymorphs like anatase, rutile, brookite and TiO2(B) under a variety of reaction conditions. The driving forces for crystallization, arising from complex species in solution coupled with the pH of the solution and the ion species facilitating the orientation of octahedra into a crystalline phase, are reviewed in detail. In addition to titanium halide/alkoxide, the nucleation of titania from other precursors, such as peroxo species and layered titanates, is also discussed. The nonaqueous route and ball-milling-induced titania transformation are briefly outlined; moreover, the lacunae in current understanding and the future prospects of this exciting field are indicated.
Abstract:
This paper deals with the economics of gasification facilities in general and IGCC power plants in particular. Regarding the prospects of these systems, passing the technological test is one thing, passing the economic test can be quite another. In this respect, traditional valuations assume constant input and/or output prices. Since this is hardly realistic, we allow for uncertainty in prices. We naturally look at the markets where many of the products involved are regularly traded. Futures markets on commodities are particularly useful for valuing uncertain future cash flows. Thus, revenues and variable costs can be assessed by means of sound financial concepts and actual market data. On the other hand, these complex systems provide a number of flexibility options (e.g., to choose among several inputs, outputs, modes of operation, etc.). Typically, flexibility contributes significantly to the overall value of real assets. Indeed, maximization of the asset value requires the optimal exercise of any flexibility option available. Yet the economic value of flexibility is elusive, the more so under (price) uncertainty. And the right choice of input fuels and/or output products is a main concern for the facility managers. As a particular application, we deal with the valuation of input flexibility. We follow the Real Options approach. In addition to economic variables, we also address technical and environmental issues such as energy efficiency, utility performance characteristics and emissions (note that carbon constraints are looming). Lastly, a brief introduction to some stochastic processes suitable for valuation purposes is provided.
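The value of input flexibility described above can be illustrated with a small Monte Carlo experiment: simulate two fuel prices under the risk-neutral measure and compare the discounted cost of always burning one fuel against burning whichever is cheaper each period. All parameters below are illustrative, not calibrated to any market, and the model (independent geometric Brownian motions) is a textbook simplification of the futures-based dynamics the paper works with:

```python
import math
import random

def flexibility_value(p_gas0=3.0, p_coal0=2.0, sigma_g=0.4, sigma_c=0.2,
                      r=0.05, periods=12, dt=1.0 / 12, paths=4000, seed=1):
    """Expected discounted cost saving from fuel-switching flexibility:
    each month the plant burns one unit of the cheaper of two fuels whose
    prices follow independent GBMs with risk-neutral drift r."""
    rng = random.Random(seed)
    cost_flex = cost_gas_only = 0.0
    for _ in range(paths):
        g, c = p_gas0, p_coal0
        for t in range(1, periods + 1):
            # one GBM step for each fuel price
            g *= math.exp((r - 0.5 * sigma_g ** 2) * dt
                          + sigma_g * math.sqrt(dt) * rng.gauss(0.0, 1.0))
            c *= math.exp((r - 0.5 * sigma_c ** 2) * dt
                          + sigma_c * math.sqrt(dt) * rng.gauss(0.0, 1.0))
            disc = math.exp(-r * t * dt)
            cost_flex += disc * min(g, c) / paths      # switch to cheaper fuel
            cost_gas_only += disc * g / paths          # locked into gas
    return cost_gas_only - cost_flex                   # value of flexibility
```

Since the flexible plant never pays more than the inflexible one in any period, the option value is non-negative by construction, and it grows with price volatility, mirroring the Real Options intuition in the abstract.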
Abstract:
An ab initio approach has been applied to study multiphoton detachment rates for the negative hydrogen ion in the lowest nonvanishing order of perturbation theory. The approach is based on the use of B splines allowing an accurate treatment of the electronic repulsion. Total detachment rates have been determined for two- to six-photon processes as well as partial rates for detachment into the different final symmetries. It is shown that B-spline expansions can yield accurate continuum and bound-state wave functions in a very simple manner. The calculated total rates for two- and three-photon detachment are in good agreement with other perturbative calculations. For more than three-photon detachment little information has been available before now. While the total cross sections show little structure, a fair amount of structure is predicted in the partial cross sections. In the two-photon process, it is shown that the detached electrons mainly have s character. For four- and six-photon processes, the contribution from the d channel is the most important. For three- and five-photon processes p electrons dominate the electron emission spectrum. Detachment rates for s and p electrons show minima as a function of photon energy.
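The B-spline basis the abstract relies on is simple to construct via the Cox-de Boor recursion; on a clamped knot vector the basis functions form a partition of unity on the box, which is one reason they represent bound and continuum wave functions so conveniently. A minimal sketch (a straightforward textbook implementation, not the paper's code):

```python
import numpy as np

def bspline_basis(i, k, t, x):
    """Cox-de Boor recursion: value of the i-th B spline of order k
    (degree k-1) on knot vector t, evaluated at points x."""
    if k == 1:
        # base case: indicator of the half-open knot span [t_i, t_{i+1})
        return np.where((t[i] <= x) & (x < t[i + 1]), 1.0, 0.0)
    out = np.zeros_like(np.asarray(x, dtype=float))
    d1 = t[i + k - 1] - t[i]
    d2 = t[i + k] - t[i + 1]
    if d1 > 0:   # skip zero-width spans from repeated knots
        out += (x - t[i]) / d1 * bspline_basis(i, k - 1, t, x)
    if d2 > 0:
        out += (t[i + k] - x) / d2 * bspline_basis(i + 1, k - 1, t, x)
    return out
```

On a clamped knot vector with n + k knots there are n basis functions, and their sum is identically 1 on the interior of the box.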
Abstract:
Today there is a growing interest in the integration of health monitoring applications in portable devices, necessitating the development of methods that improve the energy efficiency of such systems. In this paper, we present a systematic approach that enables energy-quality trade-offs in spectral analysis systems for bio-signals, which are useful in monitoring various health conditions such as those associated with the heart rate. To enable such trade-offs, the processed signals are expressed initially in a basis in which the significant components that carry most of the relevant information can easily be distinguished from the parts that influence the output to a lesser extent. Such a classification allows the pruning of operations associated with the less significant signal components, leading to power savings with minor quality loss, since only the less useful parts are pruned under the given requirements. To exploit the attributes of the modified spectral analysis system, thresholding rules are determined and adopted at design time and run time, allowing the static or dynamic pruning of less useful operations based on the accuracy and energy requirements. The proposed algorithm is implemented on a typical sensor node simulator, and results show up to 82% energy savings when static pruning is combined with voltage and frequency scaling, compared to the conventional algorithm in which such trade-offs were not available. In addition, experiments with numerous cardiac samples from various patients show that these energy savings come with a 4.9% average accuracy loss, which does not affect the system's ability to detect sinus arrhythmia, the condition used as a test case.
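The static-pruning idea can be sketched with an FFT: keep only the largest-magnitude spectral coefficients (the "significant" components) and zero the rest, trading a small reconstruction error for less downstream work. The fraction kept stands in for a design-time threshold; the paper's run-time thresholding is omitted, and the function names are ours:

```python
import numpy as np

def pruned_spectrum(x, keep_fraction=0.1):
    """Zero out all but the largest-magnitude real-FFT coefficients of x,
    a design-time pruning of the less significant spectral components."""
    X = np.fft.rfft(x)
    n_keep = max(1, int(keep_fraction * len(X)))
    small = np.argsort(np.abs(X))[:-n_keep]   # indices of small coefficients
    Xp = X.copy()
    Xp[small] = 0.0
    return Xp

def reconstruction_error(x, keep_fraction=0.1):
    """Relative L2 error between x and its reconstruction from the
    pruned spectrum, a proxy for the quality loss of pruning."""
    xr = np.fft.irfft(pruned_spectrum(x, keep_fraction), n=len(x))
    return np.linalg.norm(x - xr) / np.linalg.norm(x)
```

For a signal whose energy is concentrated in a few tones, keeping even 10% of the coefficients reconstructs it almost exactly, which is the regime in which the energy-quality trade-off is most favourable.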
Abstract:
This paper presents a complete, quadratic programming formulation of the standard thermal unit commitment problem in power generation planning, together with a novel iterative optimisation algorithm for its solution. The algorithm, based on a mixed-integer formulation of the problem, considers piecewise linear approximations of the quadratic fuel cost function that are dynamically updated in an iterative way, converging to the optimum; this avoids the need to resort to quadratic programming, making the solution process much quicker. Extensive computational tests on a broad set of benchmark instances of this problem showed the algorithm to be flexible and capable of easily incorporating different problem constraints. Indeed, it is able to tackle ramp constraints, which, although very important in practice, were rarely considered in previous publications. Most importantly, optimal solutions were obtained for several well-known benchmark instances, including instances of practical relevance, that are not known to have been solved to optimality before. Computational experiments and their results showed that the method proposed is both simple and extremely effective.
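The mechanism of dynamically refined piecewise linear approximations of a convex quadratic cost can be sketched in one dimension with a Kelley-style cutting-plane loop: tangent lines at visited points lower-bound the quadratic, the piecewise linear model is minimised, and a new tangent is added at the minimiser until the gap closes. This is an illustration of the refinement idea on a single generator's cost curve, not the paper's full mixed-integer algorithm:

```python
def pw_linear_min(a, b, c, lo, hi, tol=1e-6, max_iter=50):
    """Minimise the convex fuel cost f(p) = a*p^2 + b*p + c over [lo, hi]
    using iteratively refined tangent (piecewise linear) under-estimators.
    Returns the near-optimal point and its true cost."""
    f = lambda p: a * p * p + b * p + c
    df = lambda p: 2 * a * p + b
    cuts = []                       # tangents as (slope, intercept) pairs
    p = lo
    for _ in range(max_iter):
        cuts.append((df(p), f(p) - df(p) * p))   # add a cut at p
        # minimise the piecewise-linear lower bound max_j(s_j*q + i_j):
        # the minimum lies at an interval end or a pairwise cut intersection
        cand = [lo, hi] + [(i2 - i1) / (s1 - s2)
                           for (s1, i1) in cuts for (s2, i2) in cuts
                           if abs(s1 - s2) > 1e-12]
        best_p, best_v = lo, max(s * lo + i for s, i in cuts)
        for q in cand:
            if lo <= q <= hi:
                v = max(s * q + i for s, i in cuts)
                if v < best_v:
                    best_p, best_v = q, v
        if f(best_p) - best_v < tol:    # model gap closed: near-optimal
            return best_p, f(best_p)
        p = best_p
    return p, f(p)
```

Because every tangent under-estimates the convex quadratic, the model minimum is a valid lower bound at each iteration, so the gap between the true cost at the incumbent and the model value certifies optimality; in the paper this refinement is embedded in the mixed-integer master problem so that it stays linear.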