864 results for GFRP reinforcement


Relevance:

20.00%

Publisher:

Abstract:

Unit commitment is an optimization task in the electric power generation control sector. It involves scheduling the ON/OFF status of the generating units to meet the load demand at minimum generation cost while satisfying the various constraints existing in the system. Numerical solutions developed so far are limited to small systems, and heuristic methodologies have difficulty handling the stochastic cost functions associated with practical systems. This paper models unit commitment as a multi-stage decision task, and a reinforcement learning solution is formulated using an efficient exploration strategy, the Pursuit method. The correctness and efficiency of the developed solutions are verified on standard test systems.
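
As a rough illustration of the Pursuit exploration strategy mentioned above (not the paper's actual implementation), the sketch below keeps a value estimate and a selection probability for each candidate commitment decision and, after every reward, nudges the probabilities toward the currently greedy decision; the class name, learning rates and reward signal are all assumptions.

    # Minimal sketch of a pursuit-method learner for a single scheduling stage.
    # Generic illustration only; alpha, beta and the action set are placeholders.
    import random

    class PursuitLearner:
        def __init__(self, n_actions, alpha=0.1, beta=0.01):
            self.q = [0.0] * n_actions               # estimated value of each ON/OFF decision
            self.p = [1.0 / n_actions] * n_actions   # action-selection probabilities
            self.alpha, self.beta = alpha, beta

        def select_action(self):
            r, acc = random.random(), 0.0
            for a, prob in enumerate(self.p):
                acc += prob
                if r <= acc:
                    return a
            return len(self.p) - 1

        def update(self, action, reward):
            # Update the value estimate of the chosen action.
            self.q[action] += self.alpha * (reward - self.q[action])
            # Pursuit step: push probabilities toward the current greedy action.
            greedy = max(range(len(self.q)), key=self.q.__getitem__)
            for a in range(len(self.p)):
                target = 1.0 if a == greedy else 0.0
                self.p[a] += self.beta * (target - self.p[a])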

Relevance:

20.00%

Publisher:

Abstract:

Glass fiber reinforced polymer (GFRP) rebars have been identified over the last decade as an alternative material for reinforcing concrete, primarily because of their strength and durability characteristics. These materials have a higher strength than steel but exhibit a linear stress–strain response up to failure. Furthermore, the modulus of elasticity of GFRP is significantly lower than that of steel, and this reduced stiffness often controls the design of GFRP reinforced concrete elements. In the present investigation, GFRP reinforced beams designed on limit state principles have been examined to understand their strength and serviceability performance. A block-type rotation failure was observed for the GFRP reinforced beams, while flexural failure was observed in geometrically similar control beams reinforced with steel rebars. An analytical model is proposed for strength assessment that accounts for the failure pattern observed in GFRP reinforced beams. The serviceability criterion governing the design of GFRP reinforced beams appears to be the maximum crack width, and an empirical model is proposed for predicting it. Deflection of these GFRP reinforced beams has been predicted using an earlier model available in the literature. The results predicted by the analytical model compare well with the experimental data.

Relevance:

20.00%

Publisher:

Abstract:

This paper presents the results of a study on the use of rice husk ash (RHA) for property modification of high-density polyethylene (HDPE). Rice husk is a waste product of the rice processing industry; it is widely used as a fuel, which results in large quantities of RHA. Here, the RHA has been characterized by X-ray diffraction (XRD), inductively coupled plasma atomic emission spectroscopy (ICP-AES), light-scattering-based particle size analysis, Fourier transform infrared spectroscopy (FTIR) and scanning electron microscopy (SEM). Most reports suggest that RHA blended directly with polymers lacking polar groups does not improve the properties of the polymer substantially. In this study, RHA is blended with HDPE in the presence of a compatibilizer. The compatibilized HDPE-RHA blend has a tensile strength about 18% higher than that of virgin HDPE, and its elongation at break is also higher. TGA studies reveal that both the uncompatibilized and the compatibilized HDPE-RHA composites have excellent thermal stability. The results show that RHA is a valuable reinforcing material for HDPE and that the environmental pollution arising from RHA can be eliminated in a profitable way by this technique.

Relevance:

20.00%

Publisher:

Abstract:

A large share of the damage, as well as of the losses of health and life, in an earthquake is related to the early failure of masonry buildings. Unreinforced masonry, as is common in many countries, naturally has a limited earthquake resistance, since tensile stresses and tensile forces cannot be carried as they are in reinforced concrete or steel structures. For this reason, various methods have already been tried to improve the load-bearing capacity of masonry under earthquake loading. Modern masonry can also be built as reinforced or confined masonry. In reinforced masonry, the reinforcement improves the resistance under both in-plane (shear wall) and out-of-plane (plate) loading, whereas confinement with reinforced concrete elements primarily improves the in-plane capacity and the connection to adjacent structural members. Another interesting option is the application of textile masonry strengthening or high-strength laminates. This work takes an entirely different route: soft bed joints reduce stress peaks and provide greater deformability. This is very helpful in the event of an earthquake, since the resistance of a structure or structural member is ultimately determined by its energy absorption capacity, that is, the product of load-bearing capacity and deformability. If, at the same time, the soft joints cause no weakening, or even increase the load-bearing capacity, the earthquake resistance can be raised. The core of this dissertation is the development of the structural design of a masonry structure with a novel type of bed joint, namely elastomeric bearings and epoxy resin adhesive instead of the usual thin-bed mortar. The elastomeric bearing is inserted between the courses of a masonry wall and bonded to them. The effect of this approach on the behavior of the masonry structure is investigated numerically and experimentally under dynamic and quasi-static loading, and the results are presented.

Relevance:

20.00%

Publisher:

Abstract:

We describe an adaptive, mid-level approach to the wireless device power management problem. Our approach is based on reinforcement learning, a machine learning framework for autonomous agents. We describe how our framework can be applied to the power management problem in both infrastructure and ad hoc wireless networks. From this thesis we conclude that mid-level power management policies can outperform low-level policies and are more convenient to implement than high-level policies. We also conclude that power management policies need to adapt to the user and network, and that a mid-level power management framework based on reinforcement learning fulfills these requirements.
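
Purely as a hedged illustration of how a mid-level, reinforcement-learning-based power manager might look, the tabular Q-learning sketch below chooses between sleeping and listening from a discretized recent-traffic observation; the state encoding, actions, reward and hyperparameters are assumptions, not the thesis' actual formulation.

    # Illustrative tabular Q-learning sketch for a mid-level power-management policy.
    import random
    from collections import defaultdict

    ACTIONS = ("sleep", "listen")
    q_table = defaultdict(float)              # (traffic_level, action) -> estimated value
    alpha, gamma, epsilon = 0.1, 0.9, 0.1

    def choose_action(traffic_level):
        if random.random() < epsilon:         # occasional exploration
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: q_table[(traffic_level, a)])

    def update(traffic_level, action, reward, next_traffic_level):
        # The reward would trade off energy saved against added packet latency (assumed).
        best_next = max(q_table[(next_traffic_level, a)] for a in ACTIONS)
        key = (traffic_level, action)
        q_table[key] += alpha * (reward + gamma * best_next - q_table[key])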

Relevance:

20.00%

Publisher:

Abstract:

One objective of artificial intelligence is to model the behavior of an intelligent agent interacting with its environment. The environment's transformations can be modeled as a Markov chain, whose state is partially observable to the agent and affected by its actions; such processes are known as partially observable Markov decision processes (POMDPs). While the environment's dynamics are assumed to obey certain rules, the agent does not know them and must learn. In this dissertation we focus on the agent's adaptation as captured by the reinforcement learning framework. This means learning a policy (a mapping of observations into actions) based on feedback from the environment. The learning can be viewed as browsing a set of policies while evaluating them by trial through interaction with the environment. The set of policies is constrained by the architecture of the agent's controller. POMDPs require a controller to have a memory. We investigate controllers with memory, including controllers with external memory, finite state controllers and distributed controllers for multi-agent systems. For these various controllers we work out the details of the algorithms which learn by ascending the gradient of expected cumulative reinforcement. Building on statistical learning theory and experiment design theory, a policy evaluation algorithm is developed for the case of experience re-use. We address the question of sufficient experience for uniform convergence of policy evaluation and obtain sample complexity bounds for various estimators. Finally, we demonstrate the performance of the proposed algorithms on several domains, the most complex of which is simulated adaptive packet routing in a telecommunication network.
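
A minimal sketch of the kind of gradient-ascent learning described above, for a stochastic finite state controller, is given below. It uses a single-episode likelihood-ratio (REINFORCE-style) estimate of the gradient of expected discounted reinforcement; the environment interface, memory size and step size are assumptions rather than details taken from the dissertation.

    # Sketch of likelihood-ratio gradient ascent for a stochastic finite-state controller.
    import numpy as np

    def softmax(x):
        z = np.exp(x - x.max())
        return z / z.sum()

    class FiniteStateController:
        def __init__(self, n_obs, n_mem, n_act, lr=0.01):
            # Logits for P(action | memory, observation) and P(next memory | memory, observation).
            self.act_logits = np.zeros((n_mem, n_obs, n_act))
            self.mem_logits = np.zeros((n_mem, n_obs, n_mem))
            self.lr = lr

        def episode_update(self, env, horizon=100, gamma=0.99):
            # env.reset() -> observation index; env.step(a) -> (observation, reward, done). Assumed interface.
            grads_a = np.zeros_like(self.act_logits)
            grads_m = np.zeros_like(self.mem_logits)
            mem, obs, ret, discount = 0, env.reset(), 0.0, 1.0
            for _ in range(horizon):
                pa = softmax(self.act_logits[mem, obs])
                act = np.random.choice(len(pa), p=pa)
                pm = softmax(self.mem_logits[mem, obs])
                nxt = np.random.choice(len(pm), p=pm)
                # d log pi / d logits for a softmax: one-hot(sample) - probabilities.
                grads_a[mem, obs] += np.eye(len(pa))[act] - pa
                grads_m[mem, obs] += np.eye(len(pm))[nxt] - pm
                obs, reward, done = env.step(act)
                ret += discount * reward
                discount *= gamma
                mem = nxt
                if done:
                    break
            # Ascend a single-sample estimate of the gradient of expected discounted return.
            self.act_logits += self.lr * ret * grads_a
            self.mem_logits += self.lr * ret * grads_m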

Relevance:

20.00%

Publisher:

Abstract:

This paper proposes a hybrid coordination method for behavior-based control architectures. The hybrid method takes advantage of the robustness and modularity of competitive approaches as well as the optimized trajectories of cooperative ones. This paper shows the feasibility of applying this hybrid method, with 3D navigation, to an autonomous underwater vehicle (AUV). The behaviors are learnt online by means of reinforcement learning; a continuous Q-learning algorithm implemented with a feed-forward neural network is employed. Realistic simulations were carried out, and the results obtained show the good performance of the hybrid method for behavior coordination as well as the convergence of the behaviors.
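
The following is a hedged sketch of one way such a hybrid competitive/cooperative coordinator can be arranged: behaviors are visited in priority order, a fully activated behavior dominates (competitive), and partially activated behaviors are blended with whatever control authority remains (cooperative). The behavior interface shown is an assumption, not the paper's implementation.

    # Hybrid behavior coordination sketch: priority-based suppression plus weighted blending.
    import numpy as np

    def coordinate(behaviors, observation):
        """behaviors: priority-ordered objects whose respond() method returns
        (activation in [0, 1], desired velocity as an np.ndarray)."""
        combined = None
        remaining = 1.0                      # share of control authority still available
        for behavior in behaviors:           # highest priority first
            activation, velocity = behavior.respond(observation)
            weight = remaining * activation
            combined = velocity * weight if combined is None else combined + velocity * weight
            remaining *= (1.0 - activation)  # a fully activated behavior leaves nothing for the rest
            if remaining <= 0.0:
                break
        return combined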

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a hybrid behavior-based scheme using reinforcement learning for high-level control of autonomous underwater vehicles (AUVs). The two main features of the presented approach are hybrid behavior coordination and semi-online neural Q-learning (SONQL). Hybrid behavior coordination takes advantage of the robustness and modularity of the competitive approach as well as the efficient trajectories of the cooperative approach. SONQL, a new continuous variant of the Q-learning algorithm based on a multilayer neural network, is used to learn the behavior state/action mapping online. Experimental results show the feasibility of the presented approach for AUVs.
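
As a loose sketch of a neural Q-learning update with a small sample database, in the spirit of (but not identical to) the SONQL idea described above, one might write something like the following; the network size, discretized actions and the use of PyTorch are illustrative assumptions.

    # Neural Q-learning update with a small replay buffer (illustrative sketch).
    import random
    import torch
    import torch.nn as nn

    q_net = nn.Sequential(nn.Linear(4 + 1, 32), nn.ReLU(), nn.Linear(32, 1))  # Q(state, action)
    optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
    buffer, gamma = [], 0.95
    actions = [-1.0, 0.0, 1.0]            # discretized behavior actions (assumed)

    def q_value(state, action):           # state: 1-D float tensor of size 4
        return q_net(torch.cat([state, torch.tensor([action])]))

    def learn_step(state, action, reward, next_state):
        buffer.append((state, action, reward, next_state))
        batch = random.sample(buffer, min(len(buffer), 16))
        loss = 0.0
        for s, a, r, s2 in batch:
            with torch.no_grad():
                target = r + gamma * max(q_value(s2, a2) for a2 in actions)
            loss = loss + (q_value(s, a) - target) ** 2
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()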

Relevance:

20.00%

Publisher:

Abstract:

This paper proposes a field application of a high-level reinforcement learning (RL) control system for solving the action selection problem of an autonomous robot in a cable tracking task. The learning system is characterized by the use of a direct policy search method for learning the internal state/action mapping. Policy-only algorithms may suffer from long convergence times when dealing with real robots. In order to speed up the process, the learning phase was carried out in a simulated environment and, in a second step, the policy was transferred to and tested successfully on a real robot. Future work will continue the learning process online on the real robot while it performs the task. We demonstrate the feasibility of the approach with real experiments on the underwater robot ICTINEU AUV.
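
Schematically, the simulate-then-transfer workflow described above could be organized as below; the agent and environment objects and their methods are hypothetical placeholders, not classes from the actual project.

    # Learn in simulation, transfer the policy, then optionally keep learning online.
    def train_in_simulation(agent, sim_env, episodes=5000):
        for _ in range(episodes):
            agent.run_episode(sim_env)               # direct policy search updates happen here
        return agent.policy_parameters()

    def deploy_on_robot(agent, robot_env, parameters):
        agent.load_policy_parameters(parameters)     # transfer the policy learned in simulation
        while robot_env.mission_active():
            agent.run_episode(robot_env)             # continue learning online on the real robot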

Relevance:

20.00%

Publisher:

Abstract:

Autonomous underwater vehicles (AUVs) represent a challenging control problem with complex, noisy dynamics. Nowadays, not only the continuous scientific advances in underwater robotics but also the increasing number and complexity of subsea missions call for the automation of underwater processes. This paper proposes a high-level control system for solving the action selection problem of an autonomous robot. The system is characterized by the use of reinforcement learning direct policy search methods (RLDPS) for learning the internal state/action mapping of some behaviors. We demonstrate its feasibility with simulated experiments using the model of our underwater robot URIS in a target following task.

Relevance:

20.00%

Publisher:

Abstract:

This paper proposes a high-level reinforcement learning (RL) control system for solving the action selection problem of an autonomous robot. Although the dominant approach when using RL has been to apply value-function-based algorithms, the system detailed here is characterized by the use of direct policy search methods. Rather than approximating a value function, these methodologies approximate a policy using an independent function approximator with its own parameters, trying to maximize the future expected reward. The policy-based algorithm presented in this paper is used for learning the internal state/action mapping of a behavior. In this preliminary work, we demonstrate its feasibility with simulated experiments using the underwater robot GARBI in a target reaching task.
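
For reference, a standard formulation of the objective maximized by direct policy search, and the likelihood-ratio form of its gradient (not necessarily the exact estimator used by the authors), is:

    J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\!\left[ \sum_{t=0}^{T} \gamma^{t} r_t \right],
    \qquad
    \nabla_\theta J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\!\left[ \left( \sum_{t=0}^{T} \nabla_\theta \log \pi_\theta(a_t \mid s_t) \right) \left( \sum_{t=0}^{T} \gamma^{t} r_t \right) \right],

with the policy parameters updated by gradient ascent, \theta \leftarrow \theta + \alpha \nabla_\theta J(\theta).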

Relevance:

20.00%

Publisher:

Abstract:

Research Skills Presentation

Relevance:

20.00%

Publisher:

Abstract:

The reinforcement omission effects have traditionally been interpreted in terms of either behavioral facilitation after reinforcement omission, induced by primary frustration, or behavioral suppression after reinforcement delivery, induced by post-consummatory states. The studies reviewed here indicate that the amygdala is involved in the modulation of these effects. However, the fact that amygdala lesions, whether extensive or selective, can eliminate, reduce, or enhance the omission effects makes the exact nature of its involvement difficult to establish. The amygdala is related to several functions that depend on its connections with other brain systems. Thus, it is necessary to consider the involvement of a more complex neural network in the modulation of the reinforcement omission effects. The connections of amygdala subareas to cortical and subcortical structures may be involved in this modulation, since these structures are also linked to processes related to reward and expectancy.

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we employ techniques from artificial intelligence, such as reinforcement learning and agent-based modeling, as building blocks of a computational model for an economy based on conventions. First we model the interaction among firms in the private sector. These firms behave in an information environment based on conventions, meaning that a firm is likely to behave like its neighbors if it observes that their actions lead to a good payoff. On the other hand, we propose the use of reinforcement learning as a computational model for the role of the government in the economy, the agent that determines fiscal policy and whose objective is to maximize the growth of the economy. We present the implementation of a simulator of the proposed model, based on SWARM, that employs the SARSA(λ) algorithm combined with a multilayer perceptron as the function approximator for the action-value function.
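
To make the learning component concrete, here is a hedged sketch of SARSA(λ) with a multilayer perceptron as the action-value approximator, in the spirit of the setup described above; the network architecture, state/action encoding and hyperparameters are assumptions, and PyTorch is used only for illustration.

    # SARSA(lambda) with an MLP action-value approximator and eligibility traces over weights.
    import torch
    import torch.nn as nn

    q_net = nn.Sequential(nn.Linear(6, 32), nn.Tanh(), nn.Linear(32, 1))  # Q(state, action) -> scalar
    traces = [torch.zeros_like(p) for p in q_net.parameters()]
    alpha, gamma, lam = 0.01, 0.95, 0.8

    def q_value(state_action):              # state_action: 1-D float tensor of size 6
        return q_net(state_action)

    def sarsa_lambda_step(sa, reward, next_sa, done):
        """One on-policy update; sa and next_sa encode (state, chosen action)."""
        q = q_value(sa)
        q_net.zero_grad()
        q.backward()                         # gradients of Q(s, a) with respect to the weights
        with torch.no_grad():
            q_next = 0.0 if done else q_value(next_sa).item()
            delta = reward + gamma * q_next - q.item()     # TD error
            for p, e in zip(q_net.parameters(), traces):
                e.mul_(gamma * lam).add_(p.grad)           # decay and accumulate eligibility traces
                p.add_(alpha * delta * e)                  # semi-gradient parameter update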