943 results for MEMORY SYSTEMS INTERACTION


Relevance:

80.00%

Publisher:

Abstract:

J.L., then a 25-year-old physiotherapist, became densely amnesic following herpes simplex encephalitis. She displayed severe retrograde amnesia, category-specific semantic memory loss, and a profound anterograde amnesia affecting both verbal and visual memory. Her working memory systems were relatively spared, as were most of her cognitive problem-solving abilities, but her social functioning was grossly impaired. She was able to demonstrate several previously learned physiotherapy skills, but was unable to modify her application of these procedures in accordance with patient response. She showed no memory of theoretical or propositional knowledge, and could neither plan treatment nor reason clinically. Three years later, J.L. had profound impairment of anterograde and retrograde declarative memory, with relative sparing of working memory for problem solving and long-term memory of procedural skills. The theoretical and practical implications of her amnesic syndrome are discussed.

Relevance:

80.00%

Publisher:

Abstract:

Unstructured mesh based codes for the modelling of continuum physics phenomena have evolved to provide the facility to model complex interacting systems. Such codes have the potential to provide high performance on parallel platforms for a small investment in programming. The critical parameters for success are to minimise changes to the code to allow for maintenance, while providing high parallel efficiency, scalability to large numbers of processors and portability to a wide range of platforms. The paradigm of domain decomposition with message passing has for some time been demonstrated to provide a high level of efficiency, scalability and portability across shared and distributed memory systems without the need to re-author the code into a new language. This paper addresses these issues in the parallelisation of a complex three-dimensional unstructured mesh Finite Volume multiphysics code and discusses the implications of automating the parallelisation process.
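
The domain decomposition with message passing paradigm mentioned in this abstract — each processor owning a subdomain of the mesh and exchanging boundary ("halo") values with its neighbours every iteration — can be illustrated with a minimal, hypothetical sketch. The code below is not the paper's code: it shows a generic 1D halo exchange with MPI, and the array `u`, the subdomain size `nlocal` and the smoothing update are illustrative assumptions only.

```c
/* Minimal, hypothetical sketch of SPMD domain decomposition with message
 * passing: each rank owns a 1D slice of the mesh plus one halo cell on
 * each side, exchanged with its neighbours every step.
 * Not the code described in the paper; names and sizes are illustrative. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int nlocal = 100;                      /* cells owned by this rank */
    double *u = calloc(nlocal + 2, sizeof *u);   /* +2 halo cells            */

    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    for (int step = 0; step < 10; ++step) {
        /* Exchange halos: send my boundary cells, receive the neighbours'. */
        MPI_Sendrecv(&u[1],          1, MPI_DOUBLE, left,  0,
                     &u[nlocal + 1], 1, MPI_DOUBLE, right, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&u[nlocal],     1, MPI_DOUBLE, right, 1,
                     &u[0],          1, MPI_DOUBLE, left,  1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* Local update (a simple smoothing sweep) on the owned cells. */
        for (int i = 1; i <= nlocal; ++i)
            u[i] = 0.5 * u[i] + 0.25 * (u[i - 1] + u[i + 1]);
    }

    free(u);
    MPI_Finalize();
    return 0;
}
```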

Relevance:

80.00%

Publisher:

Abstract:

The difficulties encountered in implementing large-scale CM codes on multiprocessor systems are now fairly well understood. Despite the claims of shared memory architecture manufacturers to provide effective parallelizing compilers, these have not proved to be adequate for large or complex programs. Significant programmer effort is usually required to achieve reasonable parallel efficiencies on significant numbers of processors. The paradigm of Single Program Multiple Data (SPMD) domain decomposition with message passing, where each processor runs the same code on a subdomain of the problem, communicating through exchange of messages, has for some time been demonstrated to provide the required level of efficiency, scalability, and portability across both shared and distributed memory systems, without the need to re-author the code into a new language or even to support differing message passing implementations. Extension of the methods into three dimensions has been enabled through the engineering of PHYSICA, a framework for supporting 3D, unstructured mesh and continuum mechanics modeling. In PHYSICA, six inspectors are used. Part of the challenge for automation of parallelization is being able to prove the equivalence of inspectors so that they can be merged into as few as possible.
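
The inspectors referred to above follow the classic inspector-executor idea: scan the mesh's indirection arrays once, record which referenced entities are owned by other processors, and build a reusable communication schedule for the executor loops. The sketch below is a generic, hypothetical inspector of this kind; the arrays `conn` and `owner` and the struct `schedule_t` are illustrative and are not PHYSICA's actual data structures.

```c
/* Hypothetical inspector: scan an element-to-node indirection array once,
 * record every node referenced but owned by another processor, and build a
 * deduplicated fetch list that executor loops can reuse each iteration.
 * Illustrative only -- not PHYSICA's actual data structures. */
#include <stdlib.h>

typedef struct {
    int  nfetch;     /* number of off-processor nodes to receive          */
    int *fetch_node; /* global ids of those nodes, sorted, no duplicates  */
} schedule_t;

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* conn[e*nodes_per_elem + k] = global id of node k of local element e;
 * owner[g] = rank owning global node g; my_rank = this processor.      */
schedule_t inspect(const int *conn, int nelem, int nodes_per_elem,
                   const int *owner, int my_rank)
{
    int  cap = 0, n = 0;
    int *list = NULL;

    for (int i = 0; i < nelem * nodes_per_elem; ++i) {
        int g = conn[i];
        if (owner[g] != my_rank) {              /* off-processor reference */
            if (n == cap) {
                cap = cap ? 2 * cap : 64;
                list = realloc(list, cap * sizeof *list);
            }
            list[n++] = g;
        }
    }

    /* Sort and remove duplicates so each node is fetched only once. */
    if (n > 0)
        qsort(list, n, sizeof *list, cmp_int);
    int m = 0;
    for (int i = 0; i < n; ++i)
        if (i == 0 || list[i] != list[i - 1])
            list[m++] = list[i];

    schedule_t s = { m, list };
    return s;
}
```

Roughly speaking, two inspectors that produce identical schedules for the same partition are equivalent and can be merged, which is the automation challenge the abstract refers to.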

Relevance:

80.00%

Publisher:

Abstract:

A dissertation submitted in fulfillment of the requirements for the degree of Master in Computer Science and Computer Engineering

Relevance:

80.00%

Publisher:

Abstract:

The performance, energy efficiency and cost improvements due to traditional technology scaling have begun to slow down and present diminishing returns. Underlying reasons for this trend include fundamental physical limits of transistor scaling, the growing significance of quantum effects as transistors shrink, and a growing mismatch between transistors and interconnects regarding size, speed and power. Continued Moore's Law scaling will not come from technology scaling alone, and must involve improvements to design tools and the development of new disruptive technologies such as 3D integration. 3D integration presents potential improvements to interconnect power and delay by translating the routing problem into the third dimension, and facilitates transistor density scaling independent of technology node. Furthermore, 3D IC technology opens up a new architectural design space of heterogeneously integrated high-bandwidth CPUs. Vertical integration promises to provide the CPU architectures of the future by integrating high-performance processors with on-chip high-bandwidth memory systems and highly connected network-on-chip structures. Such techniques can overcome the well-known CPU performance bottlenecks referred to as the memory and communication walls. However, the promising improvements to performance and energy efficiency offered by 3D CPUs do not come without cost, both in the financial investment required to develop the technology and in the increased complexity of design. Two main limitations of 3D IC technology have been heat removal and TSV reliability. Transistor stacking increases power density, current density and thermal resistance in air-cooled packages. Furthermore, the technology introduces vertical through-silicon vias (TSVs) that create new points of failure in the chip and require the development of new BEOL technologies. Although these issues can be controlled to some extent using thermal- and reliability-aware physical and architectural 3D design techniques, high-performance embedded cooling schemes, such as micro-fluidic (MF) cooling, are fundamentally necessary to unlock the true potential of 3D ICs. A new paradigm is being put forth which integrates the computational, electrical, physical, thermal and reliability views of a system. The unification of these diverse aspects of integrated circuits is called Co-Design. Independent design and optimization of each aspect leads to sub-optimal designs due to a lack of understanding of cross-domain interactions and their impacts on the feasibility region of the architectural design space. Co-Design enables optimization across layers with a multi-domain view and thus unlocks new high-performance and energy-efficient configurations. Although the co-design paradigm is becoming increasingly necessary in all fields of IC design, it is even more critical in 3D ICs where, as we show, the inter-layer coupling and higher degree of connectivity between components exacerbate the interdependence between architectural parameters, physical design parameters and the multitude of metrics of interest to the designer (i.e. power, performance, temperature and reliability). In this dissertation we present a framework for multi-domain co-simulation and co-optimization of 3D CPU architectures with both air and MF cooling solutions. Finally, we propose an approach for design space exploration and modeling within the new Co-Design paradigm, and discuss possible avenues for improvement of this work in the future.
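
As a purely schematic illustration of the co-design idea discussed above — pruning an architectural design space with a constraint that comes from another domain — the sketch below sweeps hypothetical (core count, stack height, cooling) configurations, discards those whose estimated temperature exceeds a junction limit, and ranks the survivors by performance. All of the models, names and numbers are invented placeholders and are not the dissertation's framework.

```c
/* Purely schematic co-design sweep: enumerate hypothetical 3D CPU
 * configurations, keep only those meeting a thermal constraint, then pick
 * the best performer. Models and numbers are invented placeholders. */
#include <stdio.h>

typedef struct { int cores; int layers; int mf_cooling; } config_t;

/* Toy cross-domain models (placeholders only). */
static double perf(config_t c)  { return c.cores * (1.0 + 0.3 * (c.layers - 1)); }
static double power(config_t c) { return 2.0 * c.cores + 1.0 * c.layers; }
static double temp(config_t c)  /* stacking raises thermal resistance; MF cooling lowers it */
{
    double rth = c.mf_cooling ? 0.15 : 0.45;
    return 45.0 + rth * power(c) * c.layers;
}

int main(void)
{
    config_t best = {0};
    double best_perf = -1.0;
    const double TMAX = 85.0;                   /* junction temperature limit */

    for (int cores = 4; cores <= 32; cores *= 2)
        for (int layers = 1; layers <= 4; ++layers)
            for (int mf = 0; mf <= 1; ++mf) {
                config_t c = { cores, layers, mf };
                if (temp(c) > TMAX) continue;   /* thermally infeasible point */
                if (perf(c) > best_perf) { best_perf = perf(c); best = c; }
            }

    printf("best feasible config: %d cores, %d layers, MF=%d (perf %.1f)\n",
           best.cores, best.layers, best.mf_cooling, best_perf);
    return 0;
}
```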

Relevance:

80.00%

Publisher:

Abstract:

A cross-sectional study was carried out to examine the pattern of changes in the capacity to coordinate attention between two simultaneously performed tasks in a group of 570 volunteers aged 5 to 17 years. Method: The results revealed that the ability to coordinate attention increases with age, reaching adult values by age 15 years. These results were also compared with performance on the same dual task by healthy elderly people and Alzheimer's disease (AD) patients from a previous study. Results: The analysis indicated that AD patients showed a lower dual-tasking capacity than 5-year-old children, whereas the elderly presented a significantly higher ability than 5-year-old children and no significant differences with respect to young adults. Conclusion: These findings may suggest the presence of a working memory mechanism that enables the division of attention, which is strengthened by the maturation of the prefrontal cortex and impaired in AD. (J. of Att. Dis. 2016; 20(2): 87-95)

Relevance:

80.00%

Publisher:

Abstract:

CDKL5 (cyclin-dependent kinase-like 5) deficiency disorder (CDD) is a rare and severe neurodevelopmental disease that mostly affects girls who are heterozygous for mutations in the X-linked CDKL5 gene. The lack of CDKL5 protein expression or function leads to the appearance of numerous clinical features, including early-onset seizures, marked hypotonia, autistic features, and severe neurodevelopmental impairment. Mouse models of CDD, Cdkl5 KO mice, exhibit several behavioral phenotypes that mimic CDD features, such as impaired learning and memory, social interaction, and motor coordination. CDD symptomatology, along with the high CDKL5 expression levels in the brain, underscores the critical role that CDKL5 plays in proper brain development and function. Nevertheless, the improvement of the clinical overview of CDD in the past few years has defined a more detailed phenotypic spectrum; this includes very common alterations in peripheral organ and tissue function, such as gastrointestinal problems, irregular breathing, hypotonia, and scoliosis, suggesting that CDKL5 deficiency compromises not only CNS function but also that of other organs/tissues. Here we report, for the first time, that a mouse model of CDD, the heterozygous Cdkl5 KO (Cdkl5 +/-) female mouse, exhibits cardiac functional and structural abnormalities. The mice also showed QTc prolongation and increased heart rate. These changes correlate with a marked decrease in parasympathetic activity to the heart and in the expression of the Scn5a and Hcn4 voltage-gated channels. Moreover, the Cdkl5 +/- heart shows typical signs of heart aging, including increased fibrosis, mitochondrial dysfunction, and increased ROS production. Overall, our study not only contributes to the understanding of the role of CDKL5 in heart structure/function but also documents a novel preclinical phenotype for future therapeutic investigation.

Relevance:

50.00%

Publisher:

Abstract:

Adrenocorticotropin (ACTH) and alpha-melanocyte stimulating hormone (alpha-MSH) are peptides which present many physiological effects related to pigmentation, motor and sexual behavior, learning and memory, analgesia, and anti-inflammatory and antipyretic processes. The 13 amino acid residues of alpha-MSH are the same as the initial sequence of ACTH and, due to the presence of a tryptophan residue in position 9 of the peptide chain, fluorescence techniques could be used to investigate the conformational properties of the hormones in different environments and the mechanisms of interaction with biomimetic systems such as sodium dodecyl sulphate (SDS) micelles, sodium dodecyl sulphate-poly(ethylene oxide) (SDS-PEO) aggregates and neutral polymeric micelles. In buffer solution, fluorescence parameters were typical of peptides containing tryptophan exposed to the aqueous medium, and upon addition of surfactant and polymer molecules the gradual change of those parameters demonstrated the interaction of the peptides with the microheterogeneous systems. Time-resolved experiments showed that the interaction proceeded with conformational changes in both peptides, and further information was obtained from quenching of Trp fluorescence by a family of N-alkylpyridinium ions, whose affinity for the microheterogeneous systems depends on the length of the alkyl chain. The quenching of Trp fluorescence was enhanced in the presence of charged micelles, compared to the buffer solution, and the accessibility of the fluorophore to the quencher depended on the peptide and the alkylpyridinium: in ACTH(1-21) the highest collisional constants were obtained using ethylpyridinium as quencher, indicating a location of the residue at the surface of the micelle, while in alpha-MSH the best quencher was hexylpyridinium, indicating insertion of the residue into the non-polar region of the micelles. The results showed that the interaction between the peptides and the biomimetic systems was driven by combined electrostatic and hydrophobic effects: in ACTH(1-24), the electrostatic interaction between the highly positively charged C-terminal and the negatively charged surface of micelles and aggregates predominates over hydrophobic interactions involving residues in the central region of the peptide; in alpha-MSH, which presents one residual positive charge, the hydrophobic interactions are relevant to position the Trp residue in the non-polar region of the microheterogeneous systems. (C) 2008 Elsevier B.V. All rights reserved.

Relevance:

50.00%

Publisher:

Abstract:

We consider two weakly coupled systems and adopt a perturbative approach based on the Ruelle response theory to study their interaction. We propose a systematic way of parameterizing the effect of the coupling as a function of only the variables of a system of interest. Our focus is on describing the impacts of the coupling on the long term statistics rather than on the finite-time behavior. By direct calculation, we find that, at first order, the coupling can be surrogated by adding a deterministic perturbation to the autonomous dynamics of the system of interest. At second order, there are additionally two separate and very different contributions. One is a term taking into account the second-order contributions of the fluctuations in the coupling, which can be parameterized as a stochastic forcing with given spectral properties. The other one is a memory term, coupling the system of interest to its previous history, through the correlations of the second system. If these correlations are known, this effect can be implemented as a perturbation with memory on the single system. In order to treat this case, we present an extension to Ruelle's response theory able to deal with integral operators. We discuss our results in the context of other methods previously proposed for disentangling the dynamics of two coupled systems. We emphasize that our results do not rely on assuming a time scale separation, and, if such a separation exists, can be used equally well to study the statistics of the slow variables and that of the fast variables. By recursively applying the technique proposed here, we can treat the general case of multi-level systems.
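
The structure of the expansion described in this abstract can be written schematically as follows; the notation is illustrative and not necessarily the paper's. For a system of interest x weakly coupled, with strength epsilon, to a second system y, the parameterized dynamics take the form

```latex
\dot{x} \;=\; F(x)
\;+\; \varepsilon\, M(x)
\;+\; \varepsilon^{2} \Big[\, \sigma(x, t)
\;+\; \int_{0}^{\infty} h\big(x(t-s), s\big)\, \mathrm{d}s \,\Big],
```

where F is the autonomous dynamics of the system of interest, M is the first-order deterministic correction, sigma is a stochastic forcing whose spectral properties match the fluctuations of the coupling, and the integral is the memory term built from the correlations of the second system.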

Relevance:

40.00%

Publisher:

Abstract:

In a local production system (LPS), besides external economies, interaction, cooperation, and learning are indicated by the literature as complementary ways of enhancing the LPS's competitiveness and gains. In Brazil, the greater part of LPSs, mostly composed of small enterprises, displays incipient relationships and low levels of interaction and cooperation among their actors. The size of the participating enterprises itself accounts for specificities that engender organizational constraints, which, in turn, can have a considerable impact on their relationships and learning dynamics. For that reason, the purpose of this article is to present an analysis of interaction, cooperation, and learning relationships among several types of actors pertaining to an LPS in the farming equipment and machinery sector, bearing in mind the specificities of small enterprises. To this end, the fieldwork carried out in this study aimed at: (i) investigating external and internal knowledge sources conducive to learning and (ii) identifying and analyzing motivating and inhibiting factors related to the specificities of small enterprises, in order to bring the LPS members closer together and increase their cooperation and interaction. Empirical evidence shows that internal aspects of the enterprises, related to management and infrastructure, can have a strong bearing on their joint actions, interaction and learning processes.

Relevance:

40.00%

Publisher:

Abstract:

While some recent frameworks on cognitive agents have addressed the combination of mental attitudes with deontic concepts, they commonly ignore the representation of time. An exception is [1], which also manages some temporal aspects, both with respect to cognition and to normative provisions. In this paper we propose an extension of the logic presented in [1] with temporal intervals.

Relevance:

40.00%

Publisher:

Abstract:

A number of theoretical and experimental investigations have been made into the nature of purlin-sheeting systems over the past 30 years. These systems commonly consist of cold-formed zed or channel section purlins connected to corrugated sheeting. They have proven difficult to model due to the complexity of both the purlin deformation and the restraint provided to the purlin by the sheeting. Part 1 of this paper presented a non-linear elasto-plastic finite element model which, by incorporating both the purlin and the sheeting in the analysis, allowed the interaction between the two components of the system to be modelled. This paper presents a simplified version of the first model which has considerably decreased requirements in terms of computer memory, running time and data preparation. The Simplified Model includes only the purlin but allows for the sheeting's shear and rotational restraints by modelling these effects as springs located at the purlin-sheeting connections. Two accompanying programs determine the stiffness of these springs numerically. As in the Full Model, the Simplified Model is able to account for the cross-sectional distortion of the purlin, the shear and rotational restraining effects of the sheeting, and failure of the purlin by local buckling or yielding. The model requires no experimental or empirical input and its validity is shown by its good correlation with experimental results. (C) 1997 Elsevier Science Ltd.
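
In a simplified model of this kind, the sheeting's restraint is typically represented by adding a translational (shear) spring and a rotational spring to the global stiffness matrix at the degrees of freedom of each purlin-sheeting connection. The sketch below shows only that assembly step, with hypothetical names and a dense matrix for brevity; it is not the paper's model or its accompanying programs.

```c
/* Hypothetical sketch of how sheeting restraint can enter a simplified
 * purlin model: add a shear (translational) and a rotational spring
 * stiffness to the diagonal of the global stiffness matrix at the DOFs of
 * each purlin-sheeting connection. Illustrative only. */
#include <stddef.h>

/* K is a dense ndof x ndof global stiffness matrix stored row-major. */
void add_sheeting_springs(double *K, size_t ndof,
                          const size_t *trans_dof, /* lateral DOF per connection    */
                          const size_t *rot_dof,   /* rotational DOF per connection */
                          size_t nconn,
                          double k_shear,          /* spring stiffnesses obtained   */
                          double k_rot)            /* numerically or from tests     */
{
    for (size_t c = 0; c < nconn; ++c) {
        K[trans_dof[c] * ndof + trans_dof[c]] += k_shear;
        K[rot_dof[c]   * ndof + rot_dof[c]]   += k_rot;
    }
}
```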

Relevance:

40.00%

Publisher:

Abstract:

This article presents the results of research aimed at understanding the conditions of interaction between work and three specific information systems (ISs) used in the Brazilian banking sector. We sought to understand how systems are redesigned in work practices, and how work is modified by the insertion of new systems. Data gathering included 46 semi-structured interviews, together with an analysis of system-related documents. We tried to identify what lies behind the practices that modify the ISs and the work. The data analysis revealed an operating structure: a combination of different practices ensuring that the interaction between agents and systems takes place. We found a structure of reciprocal conversion brought about by the increased technical skills of the agents and the humanization of the systems. It is through ongoing adjustment between work and ISs that technology is tailored to the context and people become better prepared to deal with technology.

Relevance:

40.00%

Publisher:

Abstract:

We are working at the confluence of knowledge management, organizational memory and emergent knowledge, viewed through the lens of complex adaptive systems. To be fundamentally sustainable, organizations need to adaptively manage the ambidexterity of day-to-day work and innovation. An organization is an entity of a systemic nature, composed of groups of people who interact to achieve common objectives, which makes it necessary to capture, store and share the knowledge arising from these interactions; such knowledge can be generated at the intra-organizational or inter-organizational level. Organizations maintain an organizational memory of knowledge supported by information technology and systems. Each organization, especially in times of uncertainty and radical change, needs timely and appropriately sized knowledge, both tacit and explicit, to meet the demands of its environment. This sizing is a learning process resulting from the interaction that emerges from the relationship between tacit and explicit knowledge, which we frame within an approach of complex adaptive systems. Using complex adaptive systems to build these emerging interdependent relationships will produce emergent knowledge that improves the organization's unique development.

Relevance:

40.00%

Publisher:

Abstract:

The recent trend of chip architectures towards higher numbers of heterogeneous cores and non-uniform memory/non-coherent caches brings renewed attention to the use of Software Transactional Memory (STM) as a fundamental building block for developing parallel applications. Nevertheless, although STM promises to ease concurrent and parallel software development, it relies on the possibility of aborting conflicting transactions to maintain data consistency, which impacts the responsiveness and timing guarantees required by embedded real-time systems. In these systems, contention delays must be (efficiently) limited so that the response times of tasks executing transactions are upper-bounded and task sets can be feasibly scheduled. In this paper we assess the use of STM in the development of embedded real-time software, arguing that the amount of contention can be reduced if read-only transactions access recent consistent data snapshots, progressing in a wait-free manner. We show how the required number of versions of a shared object can be calculated for a set of tasks. We also outline an algorithm to manage conflicts between update transactions that prevents starvation.
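
One way to let read-only transactions progress wait-free over recent consistent snapshots, in the spirit of the approach described above, is to keep a small ring of committed versions of each shared object: an updater publishes a new slot with a single atomic store, and readers always load the most recently committed slot without blocking or aborting. The sketch below illustrates that idea for a single updater using C11 atomics; the type names, the payload and the fixed number of versions are illustrative assumptions, not the paper's algorithm or its bound on the number of versions.

```c
/* Illustrative sketch (not the paper's algorithm): a shared object kept as
 * a ring of NVER committed versions. A single updater prepares a new slot
 * and publishes it with one atomic store; read-only transactions load the
 * latest published index and read that snapshot wait-free, never aborting.
 * NVER must be large enough that a slot is not recycled while a reader may
 * still be using it -- the kind of bound the paper derives from the task set. */
#include <stdatomic.h>

#define NVER 4                       /* illustrative number of versions */

typedef struct { double a, b; } payload_t;

typedef struct {
    payload_t   ver[NVER];           /* the version ring                */
    atomic_uint latest;              /* index of newest committed slot  */
} versioned_t;

/* Updater (single writer assumed): commit a new consistent snapshot. */
void write_snapshot(versioned_t *obj, payload_t value)
{
    unsigned cur  = atomic_load_explicit(&obj->latest, memory_order_relaxed);
    unsigned next = (cur + 1) % NVER;
    obj->ver[next] = value;                          /* prepare new slot   */
    atomic_store_explicit(&obj->latest, next,
                          memory_order_release);     /* publish atomically */
}

/* Read-only transaction: wait-free, never conflicts with the updater. */
payload_t read_snapshot(const versioned_t *obj)
{
    unsigned cur = atomic_load_explicit(&obj->latest, memory_order_acquire);
    return obj->ver[cur];                            /* read committed data */
}
```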