985 results for Memory Structures


Relevance: 30.00%

Abstract:

This dissertation concerns active fibre-reinforced composites with embedded shape memory alloy wires. The structural application of active materials makes it possible to develop adaptive structures which actively respond to changes in the environment, such as morphing structures, self-healing structures and power-harvesting devices. In particular, shape memory alloy actuators integrated within a composite actively control the structural shape or stiffness, thus influencing the composite's static and dynamic properties. Envisaged applications include, among others, the prevention of thermal buckling of the outer skin of air vehicles, shape changes in panels for improved aerodynamic characteristics and the deployment of large space structures. The study and design of active composites is a complex and multidisciplinary topic, requiring an in-depth understanding of both the coupled behaviour of active materials and the interaction between the different composite constituents. Both fibre-reinforced composites and shape memory alloys are extremely active research topics, whose modelling and experimental characterisation still present a number of open problems. Thus, while this dissertation focuses on active composites, some of the research results presented here can be usefully applied to traditional fibre-reinforced composites or other shape memory alloy applications. The dissertation is composed of four chapters. In the first chapter, active fibre-reinforced composites are introduced by giving an overview of the most common choices available for the reinforcement, matrix and production process, together with a brief introduction to and classification of active materials. The second chapter presents a number of original contributions regarding the modelling of fibre-reinforced composites. Different two-dimensional laminate theories are derived from a parent three-dimensional theory, introducing a procedure for the a posteriori reconstruction of transverse stresses along the laminate thickness. Accurate through-the-thickness stresses are crucial for composite modelling, as they are responsible for some common failure mechanisms. A new finite element based on First-order Shear Deformation Theory and a hybrid stress approach is proposed for the numerical solution of the two-dimensional laminate problem. The element is simple and computationally efficient. The transverse stresses through the laminate thickness are reconstructed starting from a general finite element solution. A two-stage procedure is devised, based on Recovery by Compatibility in Patches and three-dimensional equilibrium. Finally, the determination of the elastic parameters of laminated structures via numerical-experimental Bayesian techniques is investigated. Two different estimators are analysed and compared, leading to the definition of an alternative procedure to improve convergence of the estimation process. The third chapter focuses on shape memory alloys, describing their properties and applications. A number of constitutive models proposed in the literature, both one-dimensional and three-dimensional, are critically discussed and compared, underlining their potential and limitations, which are mainly related to the definition of the phase diagram and the choice of internal variables. Some new experimental results on shape memory alloy material characterisation are also presented. These experimental observations display some features of shape memory alloy behaviour which are generally not included in current models, and some ideas are therefore proposed for the development of a new constitutive model. The fourth chapter, finally, focuses on active composite plates with embedded shape memory alloy wires. A number of different approaches can be used to predict the behaviour of such structures, each model presenting different advantages and drawbacks related to complexity and versatility. A simple model able to describe both shape and stiffness control configurations within the same context is proposed and implemented. The model is then validated considering the shape control configuration, which is the most sensitive to model parameters. The experimental work is divided into two parts. In the first part, an active composite is built by gluing prestrained shape memory alloy wires onto a carbon fibre laminate strip. This structure is relatively simple to build, yet it is useful for experimentally demonstrating the feasibility of the concept proposed in the first part of the chapter. In the second part, the manufacture of a fibre-reinforced composite with embedded shape memory alloy wires is investigated, considering different possible choices of materials and manufacturing processes. Although a number of technological issues still need to be addressed, the experimental results demonstrate the mechanism of shape control via embedded shape memory alloy wires and show good agreement with the predictions of the proposed model.
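As a point of reference for the stress-recovery step described above: a minimal sketch of equilibrium-based reconstruction (assuming no body forces and a traction-free bottom surface at z = -h/2, with the in-plane stresses taken from the two-dimensional solution) recovers the transverse shear stresses as

\sigma_{xz}(x,y,z) = -\int_{-h/2}^{z}\left(\frac{\partial \sigma_{xx}}{\partial x}+\frac{\partial \sigma_{xy}}{\partial y}\right)\mathrm{d}\zeta,
\qquad
\sigma_{yz}(x,y,z) = -\int_{-h/2}^{z}\left(\frac{\partial \sigma_{xy}}{\partial x}+\frac{\partial \sigma_{yy}}{\partial y}\right)\mathrm{d}\zeta .

In the two-stage procedure mentioned above, Recovery by Compatibility in Patches is presumably applied first to improve the in-plane stress field, so that the derivatives entering these integrals are accurate enough even for a low-order element.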

Relevance: 30.00%

Abstract:

The present study was performed to validate a spatial working memory task using pharmacological manipulations. The water escape T-maze, which combines the advantages of the Morris water maze and the T-maze while minimizing the disadvantages of both, was used. Scopolamine, a drug that affects cognitive function in spatial working memory tasks, significantly decreased rat performance in the present delayed alternation task. Since glutamate neurotransmission plays an important role in the maintenance of working memory, we evaluated the effect of ionotropic and metabotropic glutamatergic receptor antagonists, administered alone or in combination, on rat behaviour. As the acquisition and performance of memory tasks have been linked to the expression of the immediate early gene c-Fos, a marker of neuronal activation, we also investigated the neurochemical correlates of the water escape T-maze after pharmacological treatment with glutamatergic antagonists in various brain areas. Moreover, we focused our attention on the involvement of perirhinal cortex glutamatergic neurotransmission in the acquisition and/or consolidation of this particular task. The perirhinal cortex has strong and reciprocal connections with both specific cortical sensory areas and some memory-related structures, including the hippocampal formation and amygdala. Owing to this position, the perirhinal cortex has recently been regarded as a key region in working memory processes, in particular in providing temporary maintenance of information. The effect of perirhinal cortex lesions with ibotenic acid on the acquisition and consolidation of the water escape T-maze task was evaluated. In conclusion, our data suggest that the water escape T-maze can be considered a valid, simple and quite fast method to assess spatial working memory, sensitive to pharmacological manipulations. Following execution of the task, we observed c-Fos expression in several brain regions. Furthermore, in accordance with the literature, our results suggest that glutamatergic neurotransmission plays an important role in the acquisition and consolidation of working memory processes.

Relevance: 30.00%

Abstract:

In many industries, for example the automotive industry, digital mock-ups are used to verify the design and the function of a product on a virtual prototype. One use case is checking the safety clearances of individual components, the so-called clearance analysis. For selected components, engineers determine whether they maintain a prescribed safety distance to the surrounding components, both at rest and during motion. If components fall below the safety distance, their shape or position must be changed. To do this, it is important to know exactly which regions of the components violate the safety distance.

In this thesis we present a solution for computing, in real time, all regions between two geometric objects that fall below the safety distance. Each object is given as a set of primitives (e.g. triangles). For every instant at which a transformation is applied to one of the objects, we compute the set of all primitives that fall below the safety distance and call it the set of tolerance-violating primitives. We present a complete solution that can be divided into the following three major topics.

In the first part of this work we investigate algorithms that check whether two triangles violate the tolerance. We present several approaches for triangle-triangle tolerance tests and show that dedicated tolerance tests are considerably faster than the distance computations used so far. The focus of our work is the development of a novel tolerance test that operates in dual space. In all our benchmarks for computing all tolerance-violating primitives, our dual-space approach always proves to be the fastest.

The second part of this work deals with data structures and algorithms for the real-time computation of all tolerance-violating primitives between two geometric objects. We develop a combined data structure consisting of a flat hierarchical data structure and several uniform grids. To guarantee efficient running times, it is particularly important to take the required safety distance into account in the design of the data structures and the query algorithms. We present solutions that quickly determine the set of primitive pairs to be tested. In addition, we develop strategies for recognising primitives as tolerance-violating without computing an expensive primitive-primitive tolerance test. In our benchmarks we show that our solutions are able to compute, in real time, all tolerance-violating primitives between two complex geometric objects, each consisting of many hundreds of thousands of primitives.

In the third part we present a novel, memory-optimised data structure, which we call Shrubs, for managing the cell contents of the uniform grids used before. Previous approaches to reducing the memory footprint of uniform grids mostly rely on hashing methods, which, however, do not reduce the memory consumed by the cell contents. In our application, neighbouring cells often have similar contents. Our approach exploits this redundancy to losslessly compress the cell contents of a uniform grid to one fifth of their original size and to decompress them at run time.

Finally, we show how our solution for computing all tolerance-violating primitives can be applied in practice. Beyond pure clearance analysis, we present applications to various path planning problems.
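To make the broad-phase idea above concrete, here is a small Python sketch (a hypothetical illustration, not the thesis implementation) of a uniform-grid query that collects candidate primitive pairs whose bounding boxes, inflated by the safety distance, share a cell; only these pairs then need an exact primitive-primitive tolerance test:

from collections import defaultdict
from math import floor

def build_grid(prims, cell):
    """Hash every primitive id of one object into each grid cell its AABB overlaps.
    prims is a list of axis-aligned bounding boxes given as (lo, hi) 3-tuples."""
    grid = defaultdict(list)
    for pid, (lo, hi) in enumerate(prims):
        cmin = [floor(lo[k] / cell) for k in range(3)]
        cmax = [floor(hi[k] / cell) for k in range(3)]
        for i in range(cmin[0], cmax[0] + 1):
            for j in range(cmin[1], cmax[1] + 1):
                for k in range(cmin[2], cmax[2] + 1):
                    grid[(i, j, k)].append(pid)
    return grid

def candidate_pairs(prims_a, grid_b, cell, safety_dist):
    """Return the (a, b) index pairs whose cells may violate the safety distance."""
    pairs = set()
    for aid, (lo, hi) in enumerate(prims_a):
        # inflate the box of primitive a by the required safety distance
        lo = [lo[k] - safety_dist for k in range(3)]
        hi = [hi[k] + safety_dist for k in range(3)]
        cmin = [floor(lo[k] / cell) for k in range(3)]
        cmax = [floor(hi[k] / cell) for k in range(3)]
        for i in range(cmin[0], cmax[0] + 1):
            for j in range(cmin[1], cmax[1] + 1):
                for k in range(cmin[2], cmax[2] + 1):
                    for bid in grid_b.get((i, j, k), ()):
                        pairs.add((aid, bid))
    return pairs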

Relevance: 30.00%

Abstract:

This project addresses the unreliability of operating system code, in particular in device drivers. Device driver software is the interface between the operating system and the device's hardware. Device drivers are written in low-level code, making them difficult to understand. Almost all device drivers are written in the programming language C, which allows direct manipulation of memory. Due to the complexity of this manual handling of data, most mistakes in operating systems occur in device driver code. The programming language Clay can be used to check device driver code at compile time. Clay does most of its error checking statically to minimize the overhead of run-time checks and stay competitive with C's performance. The Clay compiler can detect many more kinds of errors than the C compiler, such as buffer overflows, kernel stack overflows, NULL pointer uses, uses of freed memory, and aliasing errors. Clay code that successfully compiles is guaranteed to run without failing on errors that Clay can detect. Even though C is unsafe, most device drivers are currently written in it. Not only are device drivers the part of the operating system most likely to fail, they are also the largest part of the operating system. As rewriting every existing device driver in Clay by hand would be impractical, this thesis is part of a project to automate the translation of existing drivers from C to Clay. Although C and Clay both allow low-level manipulation of data and fill the same niche for developing low-level code, they have different syntax, type systems, and paradigms. This paper explores how C can be translated into Clay. It identifies which parts of C device drivers cannot be translated into Clay and what information drivers in Clay will require that C cannot provide. It also explains how these translations are carried out, by describing how each C structure is represented in the compiler and how these representations are changed to represent a Clay structure.
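To give a concrete flavour of the information gap discussed above, the following Python sketch (purely hypothetical: the names and the target annotation are illustrative and are not actual Clay syntax or the thesis's translation rules) shows the kind of type-level information a C-to-safe-language translator has to add, e.g. attaching an explicit bound to a raw pointer so that buffer-overflow checks become expressible:

from dataclasses import dataclass

# Hypothetical intermediate representation of C parameter types as a translator sees them.
@dataclass
class CPointer:
    pointee: str          # e.g. "char" for a char*

@dataclass
class BoundedPointer:
    pointee: str
    length_expr: str      # symbolic expression for the buffer length

def translate_param(c_type, length_hint=None):
    """Translate one C parameter type into an annotated, bounds-aware form.
    A raw C pointer carries no bound, so the translator must find a length
    expression elsewhere (another parameter, a struct field) or report that
    the driver cannot be translated automatically."""
    if isinstance(c_type, CPointer):
        if length_hint is None:
            raise ValueError("no bound known for pointer to " + c_type.pointee)
        return BoundedPointer(c_type.pointee, length_hint)
    return c_type

# Example: the C parameter list (char *buf, size_t n) yields a pointer whose
# bound is tied to the parameter n.
print(translate_param(CPointer("char"), length_hint="n"))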

Relevance: 30.00%

Abstract:

Concurrency control is mostly based on locks and is therefore notoriously difficult to use. Even though some programming languages provide high-level constructs, these add complexity and potentially hard-to-detect bugs to the application. Transactional memory is an attractive mechanism that does not have the drawbacks of locks; however, the underlying implementation is often difficult to integrate into an existing language. In this paper we show how we have introduced transactional semantics into Smalltalk by using the reflective facilities of the language. Our approach is based on method annotations, incremental parse tree transformations and an optimistic commit protocol. The implementation does not depend on modifications to the virtual machine and therefore can be changed at the language level. We report on a practical case study, benchmarks, and ongoing and future work.
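The optimistic commit protocol mentioned above can be summarised with a short sketch; the Python version below is only an analogy (the paper's implementation is in Smalltalk and works through method annotations and parse tree transformation): each transaction records a read set and a write set, validates the read set against the shared store at commit time, and retries on conflict.

import threading

store = {}                      # shared memory: key -> (version, value)
commit_lock = threading.Lock()  # commits are serialized; reads run optimistically

class Transaction:
    def __init__(self):
        self.reads = {}         # key -> version observed at first read
        self.writes = {}        # key -> new value (buffered until commit)

    def read(self, key):
        if key in self.writes:
            return self.writes[key]
        version, value = store.get(key, (0, None))
        self.reads[key] = version
        return value

    def write(self, key, value):
        self.writes[key] = value

    def commit(self):
        with commit_lock:
            # validation: everything we read must still carry the version we saw
            for key, version in self.reads.items():
                if store.get(key, (0, None))[0] != version:
                    return False        # conflict detected, caller retries
            for key, value in self.writes.items():
                store[key] = (store.get(key, (0, None))[0] + 1, value)
            return True

def atomic(block):
    """Run block(txn) until its optimistic commit succeeds."""
    while True:
        txn = Transaction()
        block(txn)
        if txn.commit():
            return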

Relevance: 30.00%

Abstract:

Visual working memory (VWM) involves maintaining and processing visual information, often for the purpose of making immediate decisions. Neuroimaging experiments on VWM provide evidence in support of a neural system mainly involving a fronto-parietal neuronal network, but the role of specific brain areas is less clear. A proposal that has recently generated considerable debate suggests that a dissociation of object and location VWM occurs within the prefrontal cortex, in dorsal and ventral regions, respectively. However, re-examination of the relevant literature suggests a more robust distribution: a general caudal-rostral dissociation from occipital and parietal structures, caudally, to prefrontal regions, rostrally, corresponding to location and object memory, respectively. The purpose of the present study was to identify a dissociation of location and object VWM across two imaging methods, magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI). These two techniques provide complementary results due to the high temporal resolution of MEG and the high spatial resolution of fMRI. Identical location and object change detection tasks were used across techniques and are reported here for the first time. Moreover, this study is the first to use matched stimulus displays across location and object VWM conditions. The results from these two imaging methods provided convergent evidence of a location and object VWM dissociation favoring a general caudal-rostral rather than the more common prefrontal dorsal-ventral view. Moreover, neural activity across techniques was correlated with behavioral performance for the first time and provided convergent results. This approach of combining imaging tools to study memory yielded robust evidence suggesting a new interpretation of location and object memory. Accordingly, this study presents a novel context within which to explore the neural substrates of WM across imaging techniques and populations.

Relevance: 30.00%

Abstract:

Performing a prospective memory task repeatedly changes the nature of the task from episodic to habitual. The goal of the present study was to investigate the neural basis of this transition. In two experiments, we contrasted event-related potentials (ERPs) evoked by correct responses to prospective memory targets in the first, more episodic part of the experiment with those of the second, more habitual part of the experiment. Specifically, we tested whether the early, middle, or late ERP components, which are thought to reflect cue detection, retrieval of the intention, and post-retrieval processes, respectively, would be changed by routinely performing the prospective memory task. The results showed a differential ERP effect in the middle time window (450-650 ms post-stimulus). Source localization using low-resolution brain electromagnetic tomography (LORETA) suggests that the transition was accompanied by an increase of activation in the posterior parietal and occipital cortex. These findings indicate that habitual prospective memory involves retrieval processes guided more strongly by parietal brain structures. In brief, the study demonstrates that episodic and habitual prospective memory tasks recruit different brain areas.

Relevance: 30.00%

Abstract:

In this paper, we describe our research on bio-inspired locomotion systems using deformable structures and smart materials, specifically shape memory alloys (SMAs). These types of materials allow us to explore the possibility of building motor-less and gear-less robots. A swimming underwater fish-like robot has been developed whose movements are generated using SMAs. These actuators are suitable for bending the continuous backbone of the fish, which in turn causes a change in the curvature of the body. This type of structural arrangement is inspired by fish red muscles, which are mainly recruited during steady swimming to bend a flexible but nearly incompressible structure such as the fishbone. This paper reviews the design process of these bio-inspired structures, from the motivations and physiological inspiration to the mechatronic design, control and simulations, leading to actual experimental trials and results. The focus of this work is to present the mechanisms by which standard swimming patterns can be reproduced with the proposed design. Moreover, the performance of the control of the SMA-based actuators in terms of actuation speed and position accuracy is also addressed.
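The standard swimming patterns mentioned above are typically generated as a travelling wave of lateral body displacement; the Python sketch below (an illustration with made-up parameter values, not the actual controller of the prototype) samples such a wave at segment midpoints to obtain bending commands for antagonistic SMA actuator pairs:

import math

def body_wave(x, t, amplitude=0.1, wavelength=1.0, frequency=1.0):
    """Lateral displacement of the backbone at normalised position x (0 = head,
    1 = tail) and time t, using the usual travelling-wave model y = A(x) sin(kx - wt)."""
    envelope = amplitude * (0.2 + 0.8 * x)       # amplitude grows toward the tail
    k = 2.0 * math.pi / wavelength
    w = 2.0 * math.pi * frequency
    return envelope * math.sin(k * x - w * t)

def segment_commands(n_segments, t, peak=0.1):
    """Sample the wave at segment midpoints and return activations in [-1, 1];
    the sign selects which wire of an antagonistic SMA pair is heated."""
    commands = []
    for s in range(n_segments):
        x = (s + 0.5) / n_segments
        commands.append(max(-1.0, min(1.0, body_wave(x, t) / peak)))
    return commands

print(segment_commands(n_segments=4, t=0.25))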

Relevance: 30.00%

Abstract:

Precise modeling of the program heap is fundamental for understanding the behavior of a program, and is thus of significant interest for many optimization applications. One of the fundamental properties of the heap that can be used in a range of optimization techniques is the sharing relationship between the elements of an array or collection. If an analysis can determine that the memory locations pointed to by different entries of an array (or collection) are disjoint, then in many cases loops that traverse the array can be vectorized or transformed into a thread-parallel version. This paper introduces several novel sharing properties over the concrete heap and corresponding abstractions to represent them. In conjunction with an existing shape analysis technique, these abstractions allow us to precisely resolve the sharing relations in a wide range of heap structures (arrays, collections, recursive data structures, composite heap structures) in a computationally efficient manner. The effectiveness of the approach is evaluated on a set of challenge problems from the JOlden and SPECjvm98 suites. Sharing information obtained from the analysis is used to achieve substantial thread-level parallel speedups.
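As a simplified picture of why the disjointness property matters for parallelization, the Python sketch below (an illustration only, not the analysis described in the paper) computes the heap region reachable from each array entry in a points-to graph and dispatches the per-entry work to threads only when those regions are pairwise disjoint:

from concurrent.futures import ThreadPoolExecutor

def reachable(node, graph):
    """All heap nodes reachable from node in a points-to graph {node: [successors]}."""
    seen, stack = set(), [node]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(graph.get(n, ()))
    return seen

def entries_disjoint(entries, graph):
    regions = [reachable(e, graph) for e in entries]
    return all(regions[i].isdisjoint(regions[j])
               for i in range(len(regions)) for j in range(i + 1, len(regions)))

def process_all(entries, graph, work):
    """Apply work to every array entry, in parallel only if no two entries
    can reach the same heap node (i.e. they do not share)."""
    if entries_disjoint(entries, graph):
        with ThreadPoolExecutor() as pool:
            list(pool.map(work, entries))
    else:
        for e in entries:
            work(e)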

Relevance: 30.00%

Abstract:

Modeling the evolution of the state of program memory during program execution is critical to many parallelization techniques. Current memory analysis techniques either provide very accurate information but run prohibitively slowly, or produce very conservative results. An approach based on abstract interpretation is presented for analyzing programs at compile time, which can accurately determine many important program properties such as aliasing, logical data structures and shape. These properties are known to be critical for transforming a single-threaded program into a version that can be run on multiple execution units in parallel. The analysis is shown to be of polynomial complexity in the size of the memory heap. Experimental results for benchmarks in the JOlden suite are given. These results show that in practice the analysis method is efficient and is capable of accurately determining shape information in programs that create and manipulate complex data structures.
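To give a flavour of the kind of abstraction such a compile-time analysis maintains, the toy Python sketch below (an assumption for illustration, not the abstract domain used in the paper) tracks may-point-to sets for variables and fields of an abstract heap and answers a simple aliasing query:

class AbstractHeap:
    """A toy abstract heap: variables and (node, field) pairs map to sets of
    abstract node names; stores are interpreted as weak updates."""
    def __init__(self):
        self.var = {}       # variable name -> set of abstract nodes
        self.field = {}     # (abstract node, field name) -> set of abstract nodes

    def new(self, x, node):                 # x = new Node()
        self.var[x] = {node}

    def store(self, x, f, y):               # x.f = y
        for n in self.var.get(x, set()):
            self.field.setdefault((n, f), set()).update(self.var.get(y, set()))

    def load(self, x, y, f):                # x = y.f
        targets = set()
        for n in self.var.get(y, set()):
            targets |= self.field.get((n, f), set())
        self.var[x] = targets

    def may_alias(self, x, y):
        return bool(self.var.get(x, set()) & self.var.get(y, set()))

h = AbstractHeap()
h.new("a", "n1"); h.new("b", "n2")
h.store("a", "next", "b"); h.load("c", "a", "next")
print(h.may_alias("b", "c"))    # True: c may point to the node b points to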

Relevance: 30.00%

Abstract:

Concurrent data types are concurrent implementations of classical data abstractions, specifically designed to exploit the great deal of parallelism available in modern multiprocessor and multi-core architectures. The correct manipulation of concurrent data types is essential for the overall correctness of the software systems built using them. A major difficulty in designing and verifying concurrent data types arises from the need to reason about any number of threads invoking the data type simultaneously, which requires considering parametrized systems. In this work we study the formal verification of temporal properties of parametrized concurrent systems, with a special focus on programs that manipulate concurrent data structures. The main difficulty in reasoning about concurrent parametrized systems comes from the combination of their inherently high concurrency and the manipulation of dynamic memory. This parametrized verification problem is very challenging, because it requires reasoning about complex concurrent data structures being accessed and modified by threads which simultaneously manipulate the heap using unstructured synchronization methods. In this work we present a formal framework based on deductive methods which is capable of dealing with the verification of safety and liveness properties of concurrent parametrized systems that manipulate complex data structures. Our framework includes special proof rules and techniques adapted for parametrized systems, which work in collaboration with specialized decision procedures for complex data structures. A novel aspect of our framework is that it cleanly separates the analysis of the program control flow from the analysis of the data being manipulated. The program control flow is analyzed using deductive proof rules and verification techniques specifically designed for coping with parametrized systems. Starting from a concurrent program and a temporal specification, our techniques generate a finite collection of verification conditions whose validity entails the satisfaction of the temporal specification by any client system, regardless of the number of threads. The verification conditions correspond to the data manipulation. We study the design of specialized decision procedures to deal with these verification conditions fully automatically. We investigate decidable theories capable of describing rich properties of complex pointer-based data types such as stacks, queues, lists and skiplists. For each of these theories we present a decision procedure and its practical implementation on top of existing SMT solvers. These decision procedures are ultimately used for automatically verifying the verification conditions generated by our parametrized verification techniques. Finally, we show how, using our framework, it is possible to prove not only safety but also liveness properties of concurrent versions of some mutual exclusion protocols and of programs that manipulate concurrent data structures. The approach we present in this work is very general and can be applied to verify a wide range of similar concurrent data types.
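As a small illustration of the last step above, a verification condition can be discharged by checking that its negation is unsatisfiable with an off-the-shelf SMT solver; the toy condition below is invented for illustration (it is not one generated by the framework) and uses the z3 Python bindings, assuming the z3-solver package is installed:

# Toy verification condition: if x is positive and the transition increments it,
# then x stays positive. The condition is valid iff its negation is unsatisfiable.
from z3 import Int, Implies, Not, Solver, unsat

x, x_next = Int("x"), Int("x_next")
vc = Implies(x > 0, Implies(x_next == x + 1, x_next > 0))

s = Solver()
s.add(Not(vc))
print("VC valid:", s.check() == unsat)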

Relevance: 30.00%

Abstract:

In humans, declarative or explicit memory is supported by the hippocampus and related structures of the medial temporal lobe working in concert with the cerebral cortex. This paper reviews our progress in developing an animal model for studies of cortical–hippocampal interactions in memory processing. Our findings support the view that the cortex maintains various forms of memory representation and that hippocampal structures extend the persistence and mediate the organization of these codings. Specifically, the parahippocampal region, through direct and reciprocal interconnections with the cortex, is sufficient to support the convergence and extended persistence of cortical codings. The hippocampus itself is critical to the organization of cortical representations in terms of relationships among items in memory and to the flexible memory expression that is the hallmark of declarative memory.

Relevance: 30.00%

Abstract:

A fundamental question about memory and cognition concerns how information is acquired about categories and concepts as the result of encounters with specific instances. We describe a profoundly amnesic patient (E.P.) who cannot learn and remember specific instances--i.e., he has no detectable declarative memory. Yet after inspecting a series of 40 training stimuli, he was normal at classifying novel stimuli according to whether they did or did not belong to the same category as the training stimuli. In contrast, he was unable to recognize a single stimulus after it was presented 40 times in succession. These findings demonstrate that the ability to classify novel items, after experience with other items in the same category, is a separate and parallel memory function of the brain, independent of the limbic and diencephalic structures essential for remembering individual stimulus items (declarative memory). Category-level knowledge can be acquired implicitly by cumulating information from multiple training examples in the absence of detectable conscious memory for the examples themselves.

Relevance: 30.00%

Abstract:

The present study has assessed the replicative history and the residual replicative potential of human naive and memory T cells. Telomeres are unique terminal chromosomal structures whose length has been shown to decrease with cell division in vitro and with increasing age in vivo for human somatic cells. We therefore assessed telomere length as a measure of the in vivo replicative history of naive and memory human T cells. Telomeric terminal restriction fragments were found to be 1.4 ± 0.1 kb longer in CD4+ naive T cells than in memory cells from the same donors, a relationship that remained constant over a wide range of donor ages. These findings suggest that the differentiation of memory cells from naive precursors occurs with substantial clonal expansion and that the magnitude of this expansion is, on average, similar over a wide range of ages. In addition, when replicative potential was assessed in vitro, the capacity of naive cells for cell division was found to be 128-fold greater, as measured in mean population doublings, than the capacity of memory cells from the same individuals. Human CD4+ naive and memory cells thus differ in their in vivo replicative history, as reflected in telomeric length, and in their residual replicative capacity.

Relevance: 30.00%

Abstract:

We tested amnesic patients, patients with frontal lobe lesions, and control subjects with the deferred imitation task, a nonverbal test used to demonstrate memory abilities in human infants. On day 1, subjects were given sets of objects to obtain a baseline measure of their spontaneous performance of target actions. Then different event sequences were modeled with the object sets. On day 2, the objects were given to the subjects again, first without any instructions to imitate the sequences, and then with explicit instructions to imitate the actions exactly as they had been modeled. Control subjects and frontal lobe patients reproduced the events under both uninstructed and instructed conditions. In contrast, performance by the amnesic patients did not significantly differ from that of a second control group who had the same opportunities to handle the objects but were not shown the modeled actions. These findings suggest that deferred imitation is dependent on the brain structures essential for declarative memory that are damaged in amnesia, and they support the view that infants who imitate actions after long delays have an early capacity for long-term declarative memory.