124 results for GFRP reinforcement


Relevance: 10.00%

Abstract:

While underactuated robotic systems are capable of energy-efficient and rapid dynamic behavior, we still do not fully understand how body dynamics can be actively used for adaptive behavior in complex unstructured environments. In particular, we can expect robotic systems to achieve high maneuverability by flexibly storing and releasing energy through motor control of the physical interaction between the body and the environment. This paper presents a minimalistic strategy for optimizing the motor control policy of underactuated legged robotic systems. Based on a reinforcement learning algorithm, we propose an optimization scheme with which the robot can exploit passive elasticity to hop forward while maintaining stable locomotion in an environment with a series of large changes in the ground surface. We present a case study of a simple one-legged robot consisting of a servomotor and a passive elastic joint. The dynamics and learning performance of the robot model are tested in simulation, and the results are then transferred to the real-world robot. ©2007 IEEE.
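
The abstract describes the optimization scheme only at a high level, so the sketch below is purely illustrative: an episodic, reward-driven search over the parameters of a periodic servomotor command, in the spirit of exploiting passive elasticity through trial rollouts. The sinusoidal parameterization, the hill-climbing update and the rollout stub are assumptions made for this example, not the authors' algorithm.

    # Illustrative sketch only (not the authors' implementation): episodic,
    # reward-driven tuning of a periodic servomotor command for a one-legged
    # hopper with a passive elastic joint.  The parameterization and the
    # rollout stub are assumptions made for this example.
    import random

    def rollout(params):
        """Placeholder for one simulated hopping episode.

        params = (amplitude, frequency, offset) of a sinusoidal servomotor
        command.  The surrogate score below is peaked at an arbitrary
        'resonant' frequency purely so the example runs; the real reward
        would come from the hopping simulation (e.g. forward distance,
        with a low value when the robot loses stability).
        """
        amplitude, frequency, offset = params
        return (-(frequency - 2.5) ** 2
                - 0.1 * (amplitude - 0.6) ** 2
                - 0.05 * offset ** 2)

    def optimize(episodes=200, step=0.1, seed=0):
        """Stochastic hill-climbing over the policy parameters: perturb,
        run an episode, keep the perturbation only if the reward improves."""
        rng = random.Random(seed)
        params = [0.3, 1.0, 0.0]          # initial amplitude, frequency, offset
        best_reward = rollout(params)
        for _ in range(episodes):
            candidate = [p + rng.gauss(0.0, step) for p in params]
            reward = rollout(candidate)
            if reward > best_reward:
                params, best_reward = candidate, reward
        return params, best_reward

    if __name__ == "__main__":
        best_params, best_reward = optimize()
        print("best parameters:", best_params, "episodic reward:", best_reward)

In the paper's setting, rollout would execute the hopping simulation (or the real robot) and return, for example, the forward distance covered, with a low reward whenever stability is lost.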

Relevance: 10.00%

Abstract:

As observed in nature, complex locomotion can be generated from an adequate combination of motor primitives. In this context, this paper focuses on experiments that result in the development of a quality criterion for the design and analysis of motor primitives. First, the impact of different vocabularies on behavioural diversity, the robustness of pre-learned behaviours and the learning process is examined. The experiments are performed with the quadruped robot MiniDog6M, for which running and standing-up behaviours are implemented. Further, a reinforcement learning approach based on Q-learning is introduced, which is used to select an adequate sequence of motor primitives. © 2006 Springer-Verlag Berlin Heidelberg.
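
Since the abstract names Q-learning over a vocabulary of motor primitives, the toy sketch below shows that selection mechanism in tabular form. The two-state environment, the primitive names and the reward values are invented stand-ins, not the MiniDog6M setup.

    # Illustrative sketch only: tabular Q-learning that selects a sequence of
    # motor primitives.  The two-state toy environment, the primitive names and
    # the reward values are invented stand-ins, not the MiniDog6M setup.
    import random
    from collections import defaultdict

    STATES = ["fallen", "standing"]
    PRIMITIVES = ["stand_up", "run"]       # assumed motor-primitive vocabulary

    def step(state, primitive, rng):
        """Toy transition model: returns (next_state, reward)."""
        if state == "fallen":
            if primitive == "stand_up" and rng.random() < 0.9:
                return "standing", 0.0
            return "fallen", -1.0          # trying to run while fallen fails
        if primitive == "run":
            if rng.random() < 0.8:
                return "standing", 1.0     # successful running step
            return "fallen", -1.0          # occasional fall
        return "standing", 0.0             # redundant stand_up

    def q_learning(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
        rng = random.Random(seed)
        q = defaultdict(float)             # Q[(state, primitive)]
        for _ in range(episodes):
            state = "fallen"
            for _ in range(20):            # fixed-length episode
                if rng.random() < epsilon: # epsilon-greedy primitive selection
                    a = rng.choice(PRIMITIVES)
                else:
                    a = max(PRIMITIVES, key=lambda p: q[(state, p)])
                nxt, r = step(state, a, rng)
                target = r + gamma * max(q[(nxt, p)] for p in PRIMITIVES)
                q[(state, a)] += alpha * (target - q[(state, a)])
                state = nxt
        return q

    if __name__ == "__main__":
        q = q_learning()
        for s in STATES:
            best = max(PRIMITIVES, key=lambda p: q[(s, p)])
            print(f"in state '{s}' select primitive '{best}'")

The learned table maps each abstract robot state to the primitive with the highest action value, which is the sense in which Q-learning selects an adequate sequence of primitives.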

Relevance: 10.00%

Abstract:

Computer simulation experiments were performed to examine the effectiveness of OR- and comparative-reinforcement learning algorithms. In the simulation, human rewards were given as +1 or -1. Two models of human instruction, which determine which reward is given at each step of instruction, were used. The results show that human instruction may include both model-A and model-B characteristics, and suggest that the comparative-reinforcement learning algorithm is more effective for learning from human instruction.
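
The abstract does not spell out the OR- and comparative-reinforcement update rules, nor the model-A/model-B instruction models, so the toy below only illustrates the general setting: an action-value learner driven by simulated +1/-1 human rewards under two assumed trainer models, one rewarding absolute correctness and one rewarding improvement over the previous attempt. All names, rules and parameters here are illustrative assumptions.

    # Illustrative sketch only: the OR- and comparative-reinforcement update
    # rules and the model-A/model-B instruction models are not defined in the
    # abstract, so everything below (trainer models, task, parameters) is an
    # assumed stand-in for learning from +1/-1 human rewards.
    import random

    ACTIONS = list(range(5))
    TARGET = 3                             # action the simulated 'human' wants taught

    def human_reward(action, prev_action, model):
        """Simulated human instruction giving +1 or -1 per step."""
        if model == "A":                   # assumed model A: reward absolute correctness
            return 1 if action == TARGET else -1
        # assumed model B: reward improvement over the previous attempt
        return 1 if abs(action - TARGET) < abs(prev_action - TARGET) else -1

    def train(model, steps=300, alpha=0.2, epsilon=0.2, seed=0):
        """Incremental action-value learning driven by the human rewards."""
        rng = random.Random(seed)
        values = {a: 0.0 for a in ACTIONS}
        prev = rng.choice(ACTIONS)
        for _ in range(steps):
            if rng.random() < epsilon:     # epsilon-greedy action selection
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=values.get)
            r = human_reward(a, prev, model)
            values[a] += alpha * (r - values[a])
            prev = a
        return max(ACTIONS, key=values.get)

    if __name__ == "__main__":
        for model in ("A", "B"):
            print(f"trainer model {model}: learner settles on action {train(model)}")

Comparing how the learner behaves under the two trainer models mirrors the abstract's question of which interpretation of +1/-1 human rewards a learning rule should assume.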

Relevance: 10.00%

Abstract:

The ability of large-grain (RE)Ba2Cu3O7-δ ((RE)BCO; RE = rare earth) bulk superconductors to trap magnetic fields is determined by their critical current. With high trapped fields, however, bulk samples are subject to a relatively large Lorentz force, and their performance is limited primarily by their tensile strength. Consequently, sample reinforcement is the key to performance improvement in these technologically important materials. In this work, we report a trapped field of 17.6 T, the largest reported to date, in a stack of two silver-doped GdBCO superconducting bulk samples, each 25 mm in diameter, fabricated by top-seeded melt growth and reinforced with shrink-fit stainless steel. This sample preparation technique has the advantage of being relatively straightforward and inexpensive to implement, and offers the prospect of easy access to portable, high magnetic fields without any requirement for a sustaining current source. © 2014 IOP Publishing Ltd.
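
A rough estimate not given in the abstract helps make the tensile-strength limitation concrete: the outward magnetic pressure on a bulk trapping a field B scales as B^2/(2·μ0), so at 17.6 T

    B^2/(2·μ0) = (17.6 T)^2 / (2 × 4π × 10^-7 T·m/A) ≈ 1.2 × 10^8 Pa ≈ 120 MPa,

which is comparable to, or larger than, the tensile strength typically reported in the literature for unreinforced (RE)BCO bulks (a few tens of MPa). This is why mechanical reinforcement such as the shrink-fit stainless steel described above is needed to sustain record trapped fields.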