951 results for static random access memory


Relevance: 30.00%

Abstract:

Three populations of neurons expressing the vesicular glutamate transporter 2 (Vglut2) were recently described in the A10 area of the mouse midbrain, two of which were shown to express the gene encoding the rate-limiting enzyme for catecholamine synthesis, tyrosine hydroxylase (TH). One of these populations ("TH-Vglut2 Class1") also expressed the dopamine transporter (DAT) gene, while one did not ("TH-Vglut2 Class2"), and the remaining population did not express TH at all ("TH-Vglut2-only"). TH is known to be expressed from a promoter that shows two phases of activation: a transient one early during embryonic development, and a later one that gives rise to stable endogenous expression of the TH gene. The transient phase is, however, not specific to catecholaminergic neurons, a feature taken advantage of here, as it enabled Vglut2 gene targeting within all three A10 populations expressing this gene, thus creating a new conditional knockout. These knockout mice showed impaired spatial memory function, and electrophysiological analyses revealed a profound alteration of oscillatory activity in the CA3 region of the hippocampus. In addition to identifying a novel role for Vglut2 in hippocampal function, this study points to the need for improved genetic tools for targeting the diverse subpopulations of the A10 area.

Relevance: 30.00%

Abstract:

The discovery of the mural on the walls of the church of Santa Maria de Arbas in the Leonese town of Gordaliza del Pino has been a revelation. The funerary monument known as the Knight of Gordaliza, in addition to its artistic value, has a historical value that this study attempts to reflect, and it clears up some questions about the history of the kingdom of Leon, and specifically about the lineage of Ansúrez.

Relevance: 30.00%

Abstract:

The secretaries of the secret were an essential element of the Holy Office's district courts. They were in charge of recording and writing all of the official documents of these tribunals, and also of keeping the archive in order. And not only of these, so they were not the simple bureaucrats that traditional historians wrote about. In fact, their long working hours turned a single, well-defined office into a complex taxonomy of professionals who shared the secretaries of the secret's concerns but not their privileges. This paper aims to examine these professionals' everyday life in depth. They were not officials, but they took care of some important duties even though they were not paid for them.

Relevance: 30.00%

Abstract:

As the complexity of parallel applications increases, the performance limitations resulting from computational load imbalance become dominant. Mapping the problem space to the processors of a parallel machine in a manner that balances the workload of each processor will typically reduce the run-time. In many cases the computation time required for a given calculation cannot be predetermined, even at run-time, and so a static partition of the problem yields poor performance. For problems in which the computational load across the discretisation is dynamic and inhomogeneous, for example multi-physics problems involving fluid and solid mechanics with phase changes, the workload for a static subdomain will change over the course of a computation and cannot be estimated beforehand. For such applications the mapping of loads to processors is required to change dynamically at run-time in order to maintain reasonable efficiency. The issue of dynamic load balancing is examined in the context of PHYSICA, a three-dimensional unstructured-mesh multi-physics continuum mechanics computational modelling code.
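
To make the idea concrete, the following minimal Python sketch (my own illustration, not PHYSICA's actual load-balancing algorithm) re-maps mesh cells to processes when per-cell costs measured during the previous time step show too much imbalance; it deliberately ignores communication locality and migration cost, which a real mesh partitioner must also minimise.

```python
# Illustrative sketch (not PHYSICA's algorithm): re-map mesh cells to processes
# using per-cell costs measured during the previous time step.

def imbalance(loads):
    """Ratio of the most loaded process to the average load (1.0 = perfect)."""
    avg = sum(loads) / len(loads)
    return max(loads) / avg if avg > 0 else 1.0

def rebalance_if_needed(cell_costs, partition, n_procs, threshold=1.1):
    """cell_costs[i] is the measured cost of cell i; partition[i] is its process.

    If the measured imbalance exceeds `threshold`, greedily re-assign cells
    (heaviest first) to whichever process is currently least loaded.
    """
    loads = [0.0] * n_procs
    for cell, proc in enumerate(partition):
        loads[proc] += cell_costs[cell]
    if imbalance(loads) <= threshold:
        return partition                       # cheap path: keep the old mapping

    loads = [0.0] * n_procs
    new_partition = list(partition)
    for cell in sorted(range(len(cell_costs)), key=lambda c: -cell_costs[c]):
        target = loads.index(min(loads))       # least loaded process so far
        new_partition[cell] = target
        loads[target] += cell_costs[cell]
    return new_partition

# Example: four processes whose per-cell costs have drifted during the run.
costs = [5.0, 1.0, 1.0, 1.0, 4.0, 1.0, 1.0, 2.0]
print(rebalance_if_needed(costs, [0, 0, 1, 1, 2, 2, 3, 3], n_procs=4))
```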

Relevance: 30.00%

Abstract:

A method is outlined for optimising graph partitions that arise in mapping unstructured mesh calculations to parallel computers. The method employs a relative gain iterative technique to both evenly balance the workload and minimise the number and volume of interprocessor communications. A parallel graph reduction technique is also briefly described and can be used to give a global perspective to the optimisation. The algorithms work efficiently in parallel as well as sequentially, and when combined with a fast direct partitioning technique (such as the Greedy algorithm) to give an initial partition, the resulting two-stage process proves itself to be both a powerful and flexible solution to the static graph-partitioning problem. Experiments indicate that the resulting parallel code can provide high-quality partitions, independent of the initial partition, within a few seconds. The algorithms can also be used for dynamic load-balancing, reusing existing partitions, and in this case the procedures are much faster than static techniques, provide partitions of similar or higher quality and, in comparison, involve the migration of only a fraction of the data.
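
The following Python sketch illustrates the two-stage idea in miniature, under the assumption of a simple adjacency-list graph: a Greedy-style initial partition grown from seed vertices, followed by a gain-based boundary refinement pass in the spirit of (but much simpler than) the relative-gain technique described above. Function names and parameters are illustrative only.

```python
# Illustrative two-stage sketch: a Greedy-style initial partition followed by a
# simplified gain-based boundary refinement pass (not the authors' parallel code).

from collections import defaultdict

def greedy_initial_partition(adj, n_parts):
    """Grow roughly equal-sized parts from seed vertices."""
    n = len(adj)
    target = (n + n_parts - 1) // n_parts
    part = [-1] * n
    current, size = 0, 0
    for seed in range(n):
        if part[seed] != -1:
            continue
        frontier = [seed]
        while frontier:
            v = frontier.pop()
            if part[v] != -1:
                continue
            part[v] = current
            size += 1
            if size == target:                  # current part is full
                current, size = min(current + 1, n_parts - 1), 0
            frontier.extend(u for u in adj[v] if part[u] == -1)
    return part

def refine(adj, part, n_parts, sweeps=5, imbalance=1.05):
    """Move boundary vertices to the neighbouring part with the largest gain
    (reduction in cut edges), subject to a simple balance constraint."""
    max_size = int(imbalance * len(adj) / n_parts) + 1
    sizes = defaultdict(int)
    for p in part:
        sizes[p] += 1
    for _ in range(sweeps):
        moved = False
        for v in range(len(adj)):
            counts = defaultdict(int)
            for u in adj[v]:
                counts[part[u]] += 1
            if not counts:
                continue
            best = max(counts, key=counts.get)
            if (best != part[v] and counts[best] > counts[part[v]]
                    and sizes[best] < max_size):
                sizes[part[v]] -= 1
                sizes[best] += 1
                part[v] = best                  # positive gain: fewer cut edges
                moved = True
        if not moved:
            break
    return part
```

In the full method described in the abstract the refinement itself runs in parallel and a graph-reduction (coarsening) phase gives the optimisation a global view; the sketch keeps only the local gain-driven moves.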

Relevance: 30.00%

Abstract:

In cognitive tests, animals are often given a choice between two options and obtain a reward if they choose correctly. We investigated whether task format affects subjects' performance in a physical cognition test. In experiment 1, a two-choice memory test, 15 marmosets, Callithrix jacchus, had to remember the location of a food reward over time delays of increasing duration. We predicted that their performance would decline with increasing delay, but this was not found. One possible explanation was that the subjects were not sufficiently motivated to choose correctly when presented with only two options because in each trial they had a 50% chance of being rewarded. In experiment 2, we explored this possibility by testing eight naïve marmosets and seven squirrel monkeys, Saimiri sciureus, with both the traditional two-choice and a new nine-choice version of the memory test that increased the cost of a wrong choice. We found that task format affected the monkeys' performance. When choosing between nine options, both species performed better and their performance declined as delays became longer. Our results suggest that the two-choice format compromises the assessment of physical cognition, at least in memory tests with these New World monkeys, whereas providing more options, which decreases the probability of obtaining a reward when making a random guess, improves both performance and measurement validity of memory. Our findings suggest that two-choice tasks should be used with caution in comparisons within and across species because they are prone to motivational biases.

Relevance: 30.00%

Abstract:

Problems in subject access to information organization systems have been under investigation for a long time. Focusing on item-level information discovery and access, researchers have identified a range of subject access problems, including the quality and application of metadata, as well as the complexity of user knowledge required for successful subject exploration. While aggregations of digital collections built in the United States and abroad generate collection-level metadata of varying granularity and richness, no research has yet focused on the role of collection-level metadata in user interaction with these aggregations. This dissertation research sought to bridge this gap by answering the question "How does collection-level metadata mediate scholarly subject access to aggregated digital collections?" This goal was achieved using three research methods:

• in-depth comparative content analysis of collection-level metadata in three large-scale aggregations of cultural heritage digital collections: Opening History, American Memory, and The European Library;
• transaction log analysis of user interactions with Opening History; and
• interview and observation data on academic historians interacting with two aggregations: Opening History and American Memory.

It was found that subject-based resource discovery is significantly influenced by collection-level metadata richness. This richness includes components such as: 1) describing a collection's subject matter with mutually complementary values in different metadata fields, and 2) a variety of collection properties/characteristics encoded in the free-text Description field. Types and genres of objects in a digital collection, as well as topical, geographic, and temporal coverage, are the most consistently represented collection characteristics in free-text Description fields. Analysis of user interactions with aggregations of digital collections yielded a number of interesting findings. Item-level user interactions were found to occur more often than collection-level interactions. Collection browse is initiated more often than search, while subject browse (topical and geographic) is used most often. The majority of collection search queries fall within FRBR Group 3 categories: object, concept, and place. Significantly more object, concept, and corporate body searches, and fewer individual person, event, and class of persons searches, were observed in collection searches than in item searches. While collection search is most often satisfied by the Description and/or Subjects collection metadata fields, it would not retrieve a significant proportion of collection records without controlled-vocabulary subject metadata (Temporal Coverage, Geographic Coverage, Subjects, and Objects) and free-text metadata (the Description field). Observation data show that collection metadata records in the Opening History and American Memory aggregations are often viewed. Transaction log data show a high level of engagement with collection metadata records in Opening History, with total page views for collections more than four times greater than item page views. Scholars observed viewing collection records valued descriptive information on provenance, collection size, types of objects, subjects, geographic coverage, and temporal coverage.
They also considered the structured display of collection metadata in Opening History more useful than the alternative approach taken by other aggregations, such as American Memory, which displays only the free-text Description field to the end user. The results extend the understanding of the value of collection-level subject metadata, particularly free-text metadata, for scholarly users of aggregations of digital collections. The analysis of the collection metadata created by three large-scale aggregations provides a better understanding of collection-level metadata application patterns and suggests best practices. This dissertation is also the first empirical research contribution to test the FRBR model as a conceptual and analytic framework for studying collection-level subject access.

Relevance: 30.00%

Abstract:

OSAN, R.; TORT, A. B. L.; AMARAL, O. B. A mismatch-based model for memory reconsolidation and extinction in attractor networks. PLoS ONE, v. 6, p. e23113, 2011.

Relevance: 30.00%

Abstract:

Across three experiments, the effects of different styles of note taking, summary, and access to notes on memory for the details contained in a witness interview were examined. In Experiment 1, participants (N = 40) were asked to either take notes or listen as they watched a witness interview. In Experiment 2, participants (N = 84) were asked either to take notes in one of three ways (conventional, linear, or spidergraph) or to listen as they watched a witness interview. In Experiment 3, participants (N = 112) were asked to take notes using the conventional or spidergraph method of note taking while they watched a witness interview and were subsequently given an opportunity to review their notes or sit quietly. Participants were then either granted access to their notes during testing or not. Results of the first two experiments revealed that note takers outperformed listeners. Experiment 2 showed that conventional note takers outperformed those who used organizational styles of note taking, and post-hoc analyses revealed that recall performance was associated with note quality. Experiment 3 showed that participants who had access to their notes performed best. The implications of these findings for police training programs in investigative interviewing are discussed.

Relevance: 30.00%

Abstract:

The power-law size distributions obtained experimentally for neuronal avalanches are an important piece of evidence of criticality in the brain. This evidence is supported by the fact that a critical branching process exhibits the same exponent, τ ≈ 3/2. Models at criticality have been employed to mimic avalanche propagation and explain the statistics observed experimentally. However, a crucial aspect of neuronal recordings has been almost completely neglected in the models: undersampling. While in a typical multielectrode array hundreds of neurons are recorded, tens of thousands of neurons can be found in the same area of neuronal tissue. Here we investigate the consequences of undersampling in models with three different topologies (two-dimensional, small-world, and random network) and three different dynamical regimes (subcritical, critical, and supercritical). We found that undersampling modifies avalanche size distributions, extinguishing the power laws observed in critical systems. Distributions from subcritical systems are also modified, but the shape of the undersampled distributions is more similar to that of a fully sampled system. Undersampled supercritical systems can recover the general characteristics of the fully sampled version, provided that enough neurons are measured. Undersampling in two-dimensional and small-world networks leads to similar effects, while the random network is insensitive to sampling density due to the lack of a well-defined neighborhood. We conjecture that neuronal avalanches recorded from local field potentials avoid undersampling effects due to the nature of this signal, but the same does not hold for spike avalanches. We conclude that undersampled branching-process-like models in these topologies fail to reproduce the statistics of spike avalanches.
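
As a rough illustration of the sampling effect described above (an assumed toy model with annealed random connectivity, not the authors' two-dimensional, small-world, or fixed random networks), the Python sketch below simulates avalanches of a branching-process-like system at criticality and counts either all spikes or only those of a small "recorded" subset of neurons; the network size, connectivity, and 10% electrode coverage are arbitrary choices.

```python
# Illustrative sketch: avalanche sizes in a simple branching-process-like model,
# fully sampled versus "undersampled" by counting only a recorded subset of neurons.

import random
from collections import Counter

def avalanche_size(n_neurons, sigma, rng, k=10, recorded=None, max_steps=500):
    """One avalanche started by a single random spike.

    Each active neuron contacts k random targets, activating each with
    probability sigma / k, so sigma is the branching ratio (sigma = 1 is
    critical). If `recorded` is given, only spikes of those neurons are
    counted, mimicking a multielectrode array that sees a fraction of the
    tissue. max_steps merely bounds the runtime of rare, very long avalanches.
    """
    p = sigma / k
    active = {rng.randrange(n_neurons)}
    size = 0
    for _ in range(max_steps):
        if not active:
            break
        size += sum(1 for v in active if recorded is None or v in recorded)
        nxt = set()
        for _ in active:
            for _ in range(k):
                if rng.random() < p:
                    nxt.add(rng.randrange(n_neurons))
        active = nxt
    return size

rng = random.Random(0)
n, trials = 500, 1000
electrodes = set(rng.sample(range(n), n // 10))        # record 10% of the neurons
full = Counter(avalanche_size(n, 1.0, rng) for _ in range(trials))
sub = Counter(avalanche_size(n, 1.0, rng, recorded=electrodes) for _ in range(trials))
for s in (1, 2, 4, 8, 16):
    print(f"size {s:2d}:  full {full[s] / trials:.3f}   undersampled {sub[s] / trials:.3f}")
```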


Relevance: 30.00%

Abstract:

The literature suggests that REM sleep plays a role in the associative integration of emotional memory. Moreover, dreams in REM sleep, and in particular their bizarre and emotional nature, seem to reflect this associative and emotional function of REM sleep. The consequence of frequent nightmares for this process is unknown, although the awakening caused by a nightmare appears to interfere with the functions of REM sleep. The first objective of this thesis was to conceptually replicate previous research demonstrating that REM sleep provides hyper-associative access to memory. Using a daytime nap allowed us to assess the effects of REM sleep, compared with slow-wave sleep and wakefulness, on participants' performance in a semantic task measuring associational breadth (AB). The results showed that only subjects awakened from REM sleep responded with atypical associations, which suggests that REM sleep is specific in its capacity to integrate emotional memory traces (article 1). In addition, dream reports from REM sleep were more bizarre and more emotionally intense than those from slow-wave sleep; these attributes seem to reflect the associative and emotional nature of REM sleep (article 2). The second objective of the thesis was to clarify whether and how emotional memory processing during REM sleep is altered in frequent Nightmare Disorder (NM). Using the same protocol, our results showed that NM participants had higher scores before a nap, consistent with previous observations that nightmare sufferers are more creative. After REM sleep, both the NM and CTL groups showed similar changes in their associative access, with lower AB-negative and higher AB-positive scores. One week later, only the NM participants maintained this change in their semantic network (article 3). These results suggest that, over time, nightmares may interfere with the integration of emotional memory during REM sleep. Regarding imagery, NM participants showed more bizarreness and more positive, but not negative, emotion in their daydreams (article 4). These intensified attributes again suggest that NM participants are more imaginative and creative while awake. Overall, the results confirm the role of REM sleep in the associative integration of emotional memory. However, our findings concerning Nightmare Disorder are not entirely consistent with theories suggesting that nightmares are dysfunctional. The NM group showed greater emotional associativity, as well as more positive and bizarre imagery while awake. We therefore propose a new theory of environmental sensitivity associated with Nightmare Disorder, suggesting that a heightened sensitivity to a range of environmental contexts underlies the unique symptoms and the imaginative richness observed in people suffering from frequent nightmares.
Although more research needs to be done, it is possible that these individuals could benefit from supportive environments, and that they may have an adaptive advantage with respect to creative expression, which is particularly relevant when considering their prognosis and the different types of treatment.

Relevance: 30.00%

Abstract:

Hardware vendors make an important effort to create low-power CPUs that keep battery duration and durability above acceptable levels. In order to achieve this goal and provide a good performance-energy trade-off for a wide variety of applications, ARM designed the big.LITTLE architecture. This heterogeneous multi-core architecture features two different types of cores: big cores oriented towards performance, and LITTLE cores, which are slower and aimed at saving energy. As all the cores have access to the same memory, multi-threaded applications must resort to some mutual exclusion mechanism to coordinate access to shared data by concurrent threads. Transactional Memory (TM) represents an optimistic approach to shared-memory synchronization. To take full advantage of the features offered by software TM, but also benefit from the characteristics of heterogeneous big.LITTLE architectures, our focus is to propose TM solutions that take into account the power/performance requirements of the application and what is offered by the architecture. In order to understand the current state of the art and obtain useful information for future power-aware software TM solutions, we have performed an analysis of a popular TM library running on top of an ARM big.LITTLE processor. Experiments show, in general, better scalability on the LITTLE cores for all of the applications except one, which requires the computing performance that the big cores offer.
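
For readers unfamiliar with the optimistic approach, the Python sketch below shows the read-set validation and retry pattern behind many software TM designs; it is a deliberately simplified, single-commit-lock design with hypothetical names (TCell, atomically), not the API or internals of the library benchmarked in the paper.

```python
# Minimal optimistic-synchronization sketch: transactions read shared cells
# speculatively, then validate their read set and publish their writes
# atomically, retrying when a concurrent commit conflicts.

import threading

class TCell:
    """A transactional cell: its state is an immutable (value, version) pair."""
    def __init__(self, value):
        self.state = (value, 0)

_commit_lock = threading.Lock()

def atomically(transaction):
    """Run transaction(read, write) optimistically until it commits cleanly."""
    while True:
        reads, writes = {}, {}

        def read(cell):
            if cell in writes:
                return writes[cell]
            if cell not in reads:
                reads[cell] = cell.state          # snapshot of (value, version)
            return reads[cell][0]

        def write(cell, value):
            writes[cell] = value

        result = transaction(read, write)
        with _commit_lock:                        # validate read set, then publish
            if all(cell.state[1] == version for cell, (_, version) in reads.items()):
                for cell, value in writes.items():
                    cell.state = (value, cell.state[1] + 1)
                return result
        # a conflicting update committed in the meantime: retry the transaction

# Toy usage: four threads repeatedly move one unit from cell a to cell b.
a, b = TCell(100), TCell(0)

def transfer(read, write):
    write(a, read(a) - 1)
    write(b, read(b) + 1)

threads = [threading.Thread(target=lambda: [atomically(transfer) for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(a.state[0] + b.state[0])    # invariant preserved: always prints 100
```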

Relevance: 30.00%

Abstract:

Driven by the ever-growing expectation of ubiquitous connectivity and the widespread adoption of IEEE 802.11 networks, it is not only highly demanded but also entirely possible for in-motion vehicles to establish convenient Internet access to roadside WiFi access points (APs) than ever before, which is referred to as Drive-Thru Internet. The performance of Drive-Thru Internet, however, would suffer from the high vehicle mobility, severe channel contentions, and instinct issues of the IEEE 802.11 MAC as it was originally designed for static scenarios. As an effort to address these problems, in this paper, we develop a unified analytical framework to evaluate the performance of Drive-Thru Internet, which can accommodate various vehicular traffic flow states, and to be compatible with IEEE 802.11a/b/g networks with a distributed coordination function (DCF). We first develop the mathematical analysis to evaluate the mean saturated throughput of vehicles and the transmitted data volume of a vehicle per drive-thru. We show that the throughput performance of Drive-Thru Internet can be enhanced by selecting an optimal transmission region within an AP's coverage for the coordinated medium sharing of all vehicles. We then develop a spatial access control management approach accordingly, which ensures the airtime fairness for medium sharing and boosts the throughput performance of Drive-Thru Internet in a practical, efficient, and distributed manner. Simulation results show that our optimal access control management approach can efficiently work in IEEE 802.11b and 802.11g networks. The maximal transmitted data volume per drive-thru can be enhanced by 113.1% and 59.5% for IEEE 802.11b and IEEE 802.11g networks with a DCF, respectively, compared with the normal IEEE 802.11 medium access with a DCF.
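
A back-of-the-envelope Python sketch of the optimal-transmission-region idea follows; it is my own simplification rather than the paper's analytical framework, and the distance-to-rate steps, vehicle speed, and vehicle density are assumed values. Restricting access to a region where the PHY rate is high maximises the data volume each vehicle uploads per drive-thru when airtime is shared equally among the contending vehicles.

```python
# Simplified model: allow channel access only within +/- w metres of the AP,
# share airtime equally among the vehicles inside, and pick the w that
# maximizes the data volume each vehicle uploads per drive-thru.

def phy_rate(distance_m):
    """Assumed 802.11g-like distance-to-rate steps (Mbit/s); purely illustrative."""
    for limit, rate in ((30, 54.0), (60, 24.0), (90, 12.0), (120, 6.0)):
        if distance_m <= limit:
            return rate
    return 0.0

def volume_per_drive_thru(w, speed_mps=20.0, density_veh_per_m=0.05, step=1.0):
    """Mbit uploaded by one vehicle while crossing the region [-w, +w] around the AP,
    assuming the vehicles currently inside the region share airtime equally."""
    n = max(1.0, 2.0 * w * density_veh_per_m)         # contenders inside the region
    total = 0.0
    x = -w
    while x < w:
        total += phy_rate(abs(x)) / speed_mps * step  # Mbit earned over `step` metres
        x += step
    return total / n

best_w = max(range(10, 121, 5), key=volume_per_drive_thru)
print("best half-width:", best_w, "m ->",
      round(volume_per_drive_thru(best_w), 1), "Mbit per drive-thru")
```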

Relevance: 30.00%

Abstract:

IMPORTANCE: Working memory training may help children with attention and learning difficulties, but robust evidence from population-level randomized controlled clinical trials is lacking.

OBJECTIVE: To test whether a computerized adaptive working memory intervention program improves long-term academic outcomes of children 6 to 7 years of age with low working memory compared with usual classroom teaching.

DESIGN, SETTING, AND PARTICIPANTS: Population-based randomized controlled clinical trial of first graders from 44 schools in Melbourne, Australia, who underwent a verbal and visuospatial working memory screening. Children were classified as having low working memory if their scores were below the 15th percentile on either the Backward Digit Recall or Mister X subtest from the Automated Working Memory Assessment, or if their scores were below the 25th percentile on both. These children were randomly assigned by an independent statistician to either an intervention or a control arm using a concealed computerized random number sequence. Researchers were blinded to group assignment at time of screening. We conducted our trial from March 1, 2012, to February 1, 2015; our final analysis was on October 30, 2015. We used intention-to-treat analyses.

INTERVENTION: Cogmed working memory training, comprising 20 to 25 training sessions of 45 minutes' duration at school.

MAIN OUTCOMES AND MEASURES: Directly assessed (at 12 and 24 months) academic outcomes (reading, math, and spelling scores as primary outcomes) and working memory (also assessed at 6 months); parent-, teacher-, and child-reported behavioral and social-emotional functioning and quality of life; and intervention costs.

RESULTS: Of 1723 children screened (mean [SD] age, 6.9 [0.4] years), 226 were randomized to each arm (452 total), with 90% retention at 1 year and 88% retention at 2 years; 90.3% of children in the intervention arm completed at least 20 sessions. Of the 4 short-term and working memory outcomes, 1 outcome (visuospatial short-term memory) benefited the children at 6 months (effect size, 0.43 [95% CI, 0.25-0.62]) and 12 months (effect size, 0.49 [95% CI, 0.28-0.70]), but not at 24 months. There were no benefits to any other outcomes; in fact, the math scores of the children in the intervention arm were worse at 2 years (mean difference, -3.0 [95% CI, -5.4 to -0.7]; P = .01). Intervention costs were A$1035 per child.

CONCLUSIONS AND RELEVANCE: Working memory screening of children 6 to 7 years of age is feasible, and an adaptive working memory training program may temporarily improve visuospatial short-term memory. Given the loss of classroom time, cost, and lack of lasting benefit, we cannot recommend population-based delivery of Cogmed within a screening paradigm.