961 results for Memory models


Relevance:

30.00%

Publisher:

Abstract:

Signal integration determines cell fate on the cellular level, affects cognitive processes and affective responses on the behavioural level, and is likely to be involved in psychoneurobiological processes underlying mood disorders. Interactions between stimuli may be subject to time effects. Time-dependencies of interactions between stimuli typically lead to complex cell responses and complex responses on the behavioural level. We show that both three-factor models and time series models can be used to uncover such time-dependencies. However, we argue that for short longitudinal data the three-factor modelling approach is more suitable. In order to illustrate both approaches, we re-analysed previously published short longitudinal data sets. We found that in human embryonic kidney 293 (HEK293) cells the interaction effect in the regulation of extracellular signal-regulated kinase (ERK) 1 signalling activation by insulin and epidermal growth factor is subject to a time effect and dramatically decays at peak values of ERK activation. In contrast, we found that the interaction effect induced by hypoxia and tumour necrosis factor-alpha on the transcriptional activity of the human cyclo-oxygenase-2 promoter in HEK293 cells is time-invariant, at least in the first 12-h time window after stimulation. Furthermore, we applied the three-factor model to previously reported animal studies. In these studies, memory storage was found to be subject to an interaction effect of the beta-adrenoceptor agonist clenbuterol and certain antagonists acting on the alpha-1-adrenoceptor / glucocorticoid-receptor system. Our model-based analysis suggests that the interaction effect is relevant only if the antagonist drug is administered in a critical time window.
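The contrast at the heart of the three-factor approach can be illustrated with a toy calculation (a hypothetical sketch, not the authors' actual model): for a 2x2 stimulus design followed over time, the interaction at each time point is the double difference of the four response curves, and a time-dependent interaction shows up as a non-constant contrast. All curves below are invented for illustration.

```python
# Hypothetical three-factor (stimulus A x stimulus B x time) illustration:
# at each time point the interaction contrast is
#   I(t) = y_AB(t) - y_A(t) - y_B(t) + y_0(t),
# so a time-dependent interaction appears as a non-constant I(t).

def interaction_contrast(y0, yA, yB, yAB):
    """Interaction contrast at each time point of four response curves."""
    return [ab - a - b + c for c, a, b, ab in zip(y0, yA, yB, yAB)]

# Toy curves (arbitrary units, invented): the contrast shrinks near the
# response peak, mimicking an interaction that decays at peak activation.
t   = [0, 5, 10, 15, 20]      # minutes after stimulation
y0  = [0.0, 0.1, 0.1, 0.1, 0.0]   # unstimulated control
yA  = [0.0, 1.0, 2.0, 1.0, 0.5]   # stimulus A alone
yB  = [0.0, 0.8, 1.5, 0.8, 0.4]   # stimulus B alone
yAB = [0.0, 2.5, 3.6, 2.2, 1.1]   # both stimuli

print(interaction_contrast(y0, yA, yB, yAB))
```

In this toy data the contrast is largest before the peak (t = 5) and smallest at the peak (t = 10), the qualitative pattern the abstract reports for ERK1 activation.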

Relevance:

30.00%

Publisher:

Abstract:

The generating functional method is employed to investigate the synchronous dynamics of Boolean networks, providing an exact result for the system dynamics via a set of macroscopic order parameters. The topology of the networks studied and its constituent Boolean functions represent the system's quenched disorder and are sampled from a given distribution. The framework accommodates a variety of topologies and Boolean function distributions and can be used to study both the noisy and noiseless regimes; it enables one to calculate correlation functions at different times that are inaccessible via commonly used approximations. It is also used to determine conditions for the annealed approximation to be valid, explore phases of the system under different levels of noise and obtain results for models with strong memory effects, where existing approximations break down. Links between Boolean networks and general Boolean formulas are identified and results common to both system types are highlighted. © 2012 Copyright Taylor and Francis Group, LLC.
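For orientation, a minimal simulation of the object under study (the synchronous Boolean network itself, not the generating-functional analysis) looks as follows: topology and Boolean functions are sampled once as quenched disorder, then all nodes update in parallel. All names and sizes here are illustrative.

```python
import random

# Sketch of a synchronously updated random Boolean network: each node's
# input wiring and truth table are the quenched disorder, sampled once
# from a given distribution and then held fixed during the dynamics.

def random_boolean_network(n, k, rng):
    """Give each of n nodes k random inputs and a random Boolean truth table."""
    inputs = [[rng.randrange(n) for _ in range(k)] for _ in range(n)]
    tables = [[rng.randrange(2) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """One synchronous (parallel) update of every node."""
    new = []
    for ins, table in zip(inputs, tables):
        idx = 0
        for i in ins:                      # pack the k input bits into an index
            idx = (idx << 1) | state[i]
        new.append(table[idx])
    return new

rng = random.Random(0)
inputs, tables = random_boolean_network(n=8, k=2, rng=rng)
state = [rng.randrange(2) for _ in range(8)]
trajectory = [state]
for _ in range(5):
    state = step(state, inputs, tables)
    trajectory.append(state)
```

In the noiseless regime sketched here the dynamics are deterministic given the disorder; the noisy regime the framework also covers would flip each updated bit with some probability.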

Relevance:

30.00%

Publisher:

Abstract:

Over recent years, evidence has been accumulating in favour of the importance of long-term information as a variable which can affect the success of short-term recall. Lexicality, word frequency, imagery and meaning have all been shown to augment short-term recall performance. Two competing theories as to the causes of this long-term memory influence are outlined and tested in this thesis. The first approach is the order-encoding account, which ascribes the effect to the usage of resources at encoding, hypothesising that word lists which require less effort to process will benefit from increased levels of order encoding, in turn enhancing recall success. The alternative view, trace redintegration theory, suggests that order is automatically encoded phonologically, and that long-term information can only influence the interpretation of the resultant memory trace. The free recall experiments reported here attempted to determine the importance of order encoding as a facilitatory framework and to determine the locus of the effects of long-term information in free recall. Experiments 1 and 2 examined the effects of word frequency and semantic categorisation over a filled delay, and Experiments 3 and 4 did the same for immediate recall. Free recall was improved by both long-term factors tested. Order information was not used over a short filled delay, but was evident in immediate recall. Furthermore, it was found that both long-term factors increased the amount of order information retained. Experiment 5 induced an order-encoding effect over a filled delay, leaving a picture of short-term processes which are closely associated with long-term processes, and which fit conceptions of short-term memory as part of language processing rather better than either the encoding-based or the retrieval-based models. Experiments 6 and 7 aimed to determine to what extent phonological processes were responsible for the pattern of results observed.
Articulatory suppression affected the encoding of order information whereas speech rate had no direct influence, suggesting that ease of lexical access is the most important factor in the influence of long-term memory on immediate recall tasks. The evidence presented in this thesis does not offer complete support for either the retrieval-based account or the order-encoding account of long-term influence. Instead, the evidence sits best with models based upon language processing. The path urged for future research is to find ways in which this diffuse model can be better specified, and which can take account of the versatility of the human brain.

Relevance:

30.00%

Publisher:

Abstract:

A critical review of the auditory selective attention literature is presented, with particular reference to methodological issues arising from the asymmetrical hemispheric representation of language in the context of the dominant research technique, dichotic shadowing. Subsequently the concept of cerebral localization is introduced, and the experimental literature on models of laterality effects in speech and audition is discussed. The review indicated the importance of hemispheric asymmetries insofar as they might influence the results of dichotic shadowing tasks. It is suggested that there is a potential overlap between models of selective attention and hemispheric differences. In Experiment I, a key experiment in auditory selective attention was replicated, and by exercising control over possible laterality effects some of the conflicting results of earlier studies were reconciled. The three subsequent experiments, II, III and IV, are concerned with the recall of verbally shadowed inputs. A highly significant and consistent effect of ear of arrival upon the serial position of items recalled is reported. Experiment V is directed towards an analysis of the effect that the processing of unattended inputs has upon the serial position of attended items that are recalled. A significant effect of the type of unattended material upon the recall of attended items was found to be influenced by the ear of arrival of inputs. In Experiment VI, differences between the two ears as attended and unattended input channels were clarified. Two main conclusions were drawn from this work. First, the dichotic shadowing technique cannot control attention; instead, the task of processing both channels of dichotic inputs is unevenly shared between the hemispheres as a function of the ear shadowed.
Consequently, evidence for the processing of unattended information is considered in terms of constraints imposed by asymmetries in the functional organization of language, not in terms of a limited-processing-capacity model. The second conclusion is that laterality differences can be effectively examined using the dichotic shadowing technique; a new model of laterality differences is proposed and discussed.

Relevance:

30.00%

Publisher:

Abstract:

This study is concerned with several proposals concerning multiprocessor systems and with the various possible methods of evaluating such proposals. After a discussion of the advantages and disadvantages of several performance evaluation tools, the author decides that simulation is the only tool powerful enough to develop a model which would be of practical use in the design, comparison and extension of systems. The main aims of the simulation package developed as part of this study are cost effectiveness, ease of use and generality. The methodology on which the simulation package is based is described in detail. The fundamental principles are that model design should reflect actual systems design, that measuring procedures should be carried out alongside design, that models should be well documented and easily adaptable, and that models should be dynamic. The simulation package itself is modular, and in this way reflects current design trends. This approach also aids documentation and ensures that the model is easily adaptable. It contains a skeleton structure and a library of segments which can be added to, or directly swapped with, segments of the skeleton structure to form a model which fits a user's requirements. The study also contains the results of some experimental work carried out using the model, the first part of which tests the model's capabilities by simulating a large operating system, the ICL George 3 system; the second part deals with general questions and some of the many proposals concerning multiprocessor systems.

Relevance:

30.00%

Publisher:

Abstract:

Swarm intelligence is a popular paradigm for algorithm design. Frequently drawing inspiration from natural systems, it assigns simple rules to a set of agents with the aim that, through local interactions, they collectively solve some global problem. Current variants of a popular swarm-based optimization algorithm, particle swarm optimization (PSO), are investigated with a focus on premature convergence. A novel variant, dispersive PSO, is proposed to address this problem and is shown to lead to increased robustness and performance compared to current PSO algorithms. A nature-inspired decentralised multi-agent algorithm is proposed to solve a constrained problem of distributed task allocation. Agents must collect and process mail batches, without global knowledge of their environment or communication between agents. New rules for specialisation are proposed and are shown to exhibit improved efficiency and flexibility compared to existing ones. These new rules are compared with a market-based approach to agent control. The efficiency (average number of tasks performed), the flexibility (ability to react to changes in the environment), and the sensitivity to load (ability to cope with differing demands) are investigated in both static and dynamic environments. A hybrid algorithm combining both approaches is shown to exhibit improved efficiency and robustness. Evolutionary algorithms are employed, both to optimize parameters and to allow the various rules to evolve and compete. We also observe extinction and speciation. In order to interpret algorithm performance we analyse the causes of efficiency loss, derive theoretical upper bounds for the efficiency, as well as a complete theoretical description of a non-trivial case, and compare these with the experimental results.
Motivated by this work we introduce agent "memory" (the possibility for agents to develop preferences for certain cities) and show that not only does it lead to emergent cooperation between agents, but also to a significant increase in efficiency.
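As context for the PSO work described above, a minimal global-best PSO is sketched below on a toy objective. The dispersive variant's dispersal mechanism is not specified in the abstract, so only the standard algorithm appears here; all parameter values are conventional defaults, not the thesis's.

```python
import random

# Baseline global-best PSO minimizing an objective over a box; the premature
# convergence the thesis targets occurs when the swarm collapses onto gbest.
def pso(objective, dim, n_particles=20, iters=200, seed=0,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal bests
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere function: minimum 0 at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
```

A dispersive modification would add a trigger (for example, swarm diameter below a threshold) that re-randomizes some particle positions instead of letting the update above run to collapse.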

Relevance:

30.00%

Publisher:

Abstract:

On the basis of a convolutional (Hamming) version of the recent Neural Network Assembly Memory Model (NNAMM) for an intact two-layer autoassociative Hopfield network, optimal receiver operating characteristics (ROCs) have been derived analytically. A method of explicitly taking into account a priori probabilities of alternative hypotheses on the structure of the information initiating memory-trace retrieval is introduced, along with modified ROCs (mROCs, a posteriori probabilities of correct recall vs. false-alarm probability). The comparison of empirical and calculated ROCs (or mROCs) demonstrates that they coincide quantitatively, and in this way the intensities of cues used in the corresponding experiments may be estimated. It has been found that basic ROC properties, which are among the experimental findings underpinning dual-process models of recognition memory, can be explained within our one-factor NNAMM.
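The kind of ROC under discussion can be sketched with a toy Hamming-distance retrieval model (an illustrative stand-in, not the NNAMM derivation): a stored binary trace is probed either by a degraded cue (signal trials) or by an unrelated pattern (noise trials), recall is declared when the Hamming distance falls below a threshold, and sweeping the threshold traces the ROC.

```python
import random

def hamming(a, b):
    """Number of positions where two binary patterns differ."""
    return sum(x != y for x, y in zip(a, b))

def roc_points(n_bits=64, flip_p=0.2, trials=500, seed=0):
    """(false-alarm rate, hit rate) pairs for every distance threshold."""
    rng = random.Random(seed)
    trace = [rng.randrange(2) for _ in range(n_bits)]
    signal, noise = [], []
    for _ in range(trials):
        cue = [b ^ (rng.random() < flip_p) for b in trace]  # degraded cue
        foil = [rng.randrange(2) for _ in range(n_bits)]    # unrelated pattern
        signal.append(hamming(cue, trace))
        noise.append(hamming(foil, trace))
    points = []
    for thr in range(n_bits + 1):
        hit = sum(d <= thr for d in signal) / trials
        fa = sum(d <= thr for d in noise) / trials
        points.append((fa, hit))
    return points

points = roc_points()
```

The cue intensity corresponds to `flip_p` here: a weaker cue pushes the signal distribution toward the noise distribution and flattens the ROC toward the diagonal, which is the sense in which fitted curves let cue intensities be estimated.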

Relevance:

30.00%

Publisher:

Abstract:

Mathematics Subject Classification: 26A33, 45K05, 60J60, 60G50, 65N06, 80-99.

Relevance:

30.00%

Publisher:

Abstract:

Chinese-English bilingual students were randomly assigned to three reading conditions: in the English-English (E-E) condition (n = 44), a text in English was read twice; in the English-Chinese (E-C) condition (n = 30), the English text was read first and its Chinese translation second; in the Chinese-English (C-E) condition (n = 30), the Chinese text was read first and the English text second. An expected explicit memory test on propositions, in the format of sentence verification, was given, followed by an unexpected implicit memory test on unfamiliar word-forms. Analyses of covariance were conducted with explicit and implicit memory scores as the dependent variables, reading condition (bilingual versus monolingual) as the independent variable, and TOEFL reading score as the covariate. The results showed that the bilingual reading groups outperformed the monolingual reading group on explicit memory tested by sentence verification but not on implicit memory tested by forced-choice word identification, implying that bilingual representation facilitates explicit memory of propositional information but not implicit memory of lexical forms. The findings were interpreted as consistent with separate bilingual memory-storage models, and the implications of such models for the study of cognitive structures were discussed in relation to issues of dual coding theory, multiple memory systems, and the linguistic relativity philosophy.

Relevance:

30.00%

Publisher:

Abstract:

This dissertation explored memory conformity effects on people who interacted with a confederate and of bystanders to that interaction. Two studies were carried out. Study 1 was conducted in the field. A male confederate approached a group of people at the beach and had a brief interaction. About a minute later a research assistant approached the group and administered a target-absent lineup to each person in the group. Analyses revealed that memory conformity occurred during the lineup task. Bystanders were twice as likely to conform as those who interacted with the confederate. Study 2 was carried out in a laboratory under controlled conditions. Participants were exposed to two events during their time in the laboratory. In one event, participants were shown a brief video with no determinate roles assigned. In the other event participants were randomly assigned to interact with a confederate (actor condition) or to witness that interaction (bystander condition). Participants were given memory tests on both events to understand the effects of participant role (actor vs. bystander) on memory conformity. Participants answered second to all questions, following a confederate acting as a participant, who disseminated misinformation on critical questions. Analyses revealed no significant differences in memory conformity between actors and bystanders during the movie memory task. However, differences were found for the interaction memory task such that bystanders conformed more than actors on two of four critical questions. Bystanders also conformed more than actors during a lineup identification task. The results of these studies suggest that the role a person plays in an interaction affects how susceptible they are to information from a co-witness. Theoretical and applied implications are discussed. First, the results are explained through the use of two models of memory. Second, recommendations are made for forensic investigators.

Relevance:

30.00%

Publisher:

Abstract:

The rapid growth of virtualized data centers and cloud hosting services is making the management of physical resources such as CPU, memory, and I/O bandwidth in data center servers increasingly important. Server management now involves dealing with multiple dissimilar applications with varying Service-Level Agreements (SLAs) and multiple resource dimensions. The multiplicity and diversity of resources and applications are rendering administrative tasks more complex and challenging. This thesis aimed to develop a framework and techniques that would help substantially reduce data center management complexity. We specifically addressed two crucial data center operations. First, we precisely estimated the capacity requirements of client virtual machines (VMs) while renting server space in a cloud environment. Second, we proposed a systematic process to efficiently allocate physical resources to hosted VMs in a data center. To realize these dual objectives, accurately capturing the effects of resource allocations on application performance is vital. The benefits of accurate application performance modeling are multifold. Cloud users can size their VMs appropriately and pay only for the resources that they need; service providers can also offer a new charging model based on the VMs' performance instead of their configured sizes. As a result, clients will pay exactly for the performance they are actually experiencing; on the other hand, administrators will be able to maximize their total revenue by utilizing application performance models and SLAs. This thesis made the following contributions. First, we identified resource control parameters crucial for distributing physical resources and characterizing contention for virtualized applications in a shared hosting environment.
Second, we explored several modeling techniques and confirmed the suitability of two machine learning tools, Artificial Neural Network and Support Vector Machine, to accurately model the performance of virtualized applications. Moreover, we suggested and evaluated modeling optimizations necessary to improve prediction accuracy when using these modeling tools. Third, we presented an approach to optimal VM sizing by employing the performance models we created. Finally, we proposed a revenue-driven resource allocation algorithm which maximizes the SLA-generated revenue for a data center.
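As a stand-in for the ANN/SVM models the thesis evaluates (a minimal sketch only; the actual feature set, tools, and training data are not reproduced here), a tiny least-squares model fitted by gradient descent illustrates the modeling idea: map resource allocations to a performance score, then use the fitted model for VM sizing.

```python
import random

# Fit performance = f(cpu_share, mem_share) by batch gradient descent on a
# synthetic, noise-free training set. A real deployment would replace this
# linear model with an ANN or SVM and measured benchmark data.
def fit_linear(xs, ys, lr=0.1, epochs=2000):
    w = [0.0] * len(xs[0])
    b = 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = [0.0] * len(w)
        gb = 0.0
        for x, y in zip(xs, ys):
            err = sum(wi * xi for wi, xi in zip(w, x)) + b - y
            for j, xj in enumerate(x):
                gw[j] += err * xj
            gb += err
        w = [wi - lr * gj / n for wi, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

# Synthetic "measurements": performance = 2*cpu + 1*mem + 0.5 (invented law).
rng = random.Random(1)
xs = [[rng.random(), rng.random()] for _ in range(50)]
ys = [2.0 * c + 1.0 * m + 0.5 for c, m in xs]
w, b = fit_linear(xs, ys)
```

Given such a model, VM sizing becomes an inversion problem: find the cheapest (cpu, mem) allocation whose predicted performance meets the SLA target.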

Relevance:

30.00%

Publisher:

Abstract:

The paper develops a novel realized matrix-exponential stochastic volatility model of multivariate returns and realized covariances that incorporates asymmetry and long memory (hereafter the RMESV-ALM model). The matrix exponential transformation guarantees the positive-definiteness of the dynamic covariance matrix. The contribution of the paper ties in with Robert Basmann’s seminal work in terms of the estimation of highly non-linear model specifications (“Causality tests and observationally equivalent representations of econometric models”, Journal of Econometrics, 1988, 39(1-2), 69–104), especially for developing tests for leverage and spillover effects in the covariance dynamics. Efficient importance sampling is used to maximize the likelihood function of RMESV-ALM, and the finite sample properties of the quasi-maximum likelihood estimator of the parameters are analysed. Using high frequency data for three US financial assets, the new model is estimated and evaluated. The forecasting performance of the new model is compared with a novel dynamic realized matrix-exponential conditional covariance model. The volatility and co-volatility spillovers are examined via the news impact curves and the impulse response functions from returns to volatility and co-volatility.
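The positive-definiteness guarantee the model relies on can be checked directly: for any real symmetric A, exp(A) is symmetric with eigenvalues exp(lambda_i) > 0, so parameterising a covariance matrix as exp(A) imposes no constraints on A itself. A small self-contained sketch (toy 2x2 case, truncated Taylor series; not the paper's estimation code):

```python
# Matrix exponential via truncated Taylor series (fine for toy sizes):
#   expm(A) = I + A + A^2/2! + A^3/3! + ...
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(A, terms=30):
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # I
    term = [row[:] for row in result]                               # A^0/0!
    for k in range(1, terms):
        term = [[v / k for v in row] for row in mat_mul(term, A)]   # A^k/k!
        result = [[r + t for r, t in zip(rr, tr)]
                  for rr, tr in zip(result, term)]
    return result

# A symmetric "log-covariance" with all-negative eigenvalues is still mapped
# to a valid (symmetric positive-definite) covariance matrix.
A = [[-1.0, 0.8], [0.8, -2.0]]
S = mat_exp(A)
```

A useful identity for checking: det(exp(A)) = exp(trace(A)), so here det(S) should equal exp(-3).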

Relevance:

30.00%

Publisher:

Abstract:

Attention, the cognitive means by which we prioritize the processing of a subset of information, is necessary for operating efficiently and effectively in the world. Thus, a critical theoretical question is how information is selected. In the visual domain, working memory (WM)—which refers to the short-term maintenance and manipulation of information that is no longer accessible by the senses—has been highlighted as an important determinant of what is selected by visual attention. Furthermore, although WM and attention have traditionally been conceived as separate cognitive constructs, an abundance of behavioral and neural evidence indicates that these two domains are in fact intertwined and overlapping. The aim of this dissertation is to better understand the nature of WM and attention, primarily through the phenomenon of memory-based attentional guidance, whereby the active maintenance of items in visual WM reliably biases the deployment of attention to memory-matching items in the visual environment. The research presented here employs a combination of behavioral, functional imaging, and computational modeling techniques that address: (1) WM guidance effects with respect to the traditional dichotomy of top-down versus bottom-up attentional control; (2) under what circumstances the contents of WM impact visual attention; and (3) the broader hypothesis of a predictive and competitive interaction between WM and attention. Collectively, these empirical findings reveal the importance of WM as a distinct factor in attentional control and support current models of multiple-state WM, which may have broader implications for how we select and maintain information.

Relevance:

30.00%

Publisher:

Abstract:

The most robust neurocognitive effect of marijuana use is memory impairment. Memory deficits are also high among persons living with HIV/AIDS, and marijuana use among this population is disproportionately common. Yet research examining neurocognitive outcomes resulting from co-occurring marijuana and HIV is virtually non-existent. The primary aim of this case-controlled study was to identify patterns of neurocognitive impairment among HIV patients who used marijuana compared to HIV patients who did not use drugs by comparing the groups on domain T-scores. Participants included 32 current marijuana users and 37 non-drug users. A comprehensive battery assessed substance use and neurocognitive functioning. Among the full sample, marijuana users performed significantly worse on verbal memory tasks compared to non-drug users and significantly better on attention/working memory tasks. A secondary aim of this study was to test whether the effect of marijuana use on memory was moderated by HIV disease progression, but these models were not significant. This study also examined whether the effect of marijuana use was differentially affected by marijuana use characteristics, finding that earlier age of initiation was associated with worse memory performance. These findings have important clinical implications, particularly given increased legalization of this drug to manage HIV infection.

Relevance:

30.00%

Publisher:

Abstract:

Field-programmable gate arrays are ideal hosts to custom accelerators for signal, image, and data processing but demand manual register transfer level design if high performance and low cost are desired. High-level synthesis reduces this design burden but requires manual design of complex on-chip and off-chip memory architectures, a major limitation in applications such as video processing. This paper presents an approach to resolve this shortcoming. A constructive process is described that can derive such accelerators, including on- and off-chip memory storage, from a C description such that a user-defined throughput constraint is met. By employing a novel statement-oriented approach, dataflow intermediate models are derived and used to support simple approaches for on-/off-chip buffer partitioning, derivation of custom on-chip memory hierarchies, and architecture transformation to ensure user-defined throughput constraints are met with minimum cost. When applied to accelerators for full search motion estimation, matrix multiplication, Sobel edge detection, and fast Fourier transform, it is shown how real-time performance up to an order of magnitude in advance of existing commercial HLS tools is enabled whilst including all requisite memory infrastructure. Further, optimizations are presented that reduce the on-chip buffer capacity and physical resource cost by up to 96% and 75%, respectively, whilst maintaining real-time performance.
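A back-of-envelope illustration of why derived on-chip memory hierarchies pay off (hypothetical numbers; the paper's derivation process is more general than this): a 3x3 stencil such as Sobel over a row-major pixel stream needs only two image rows plus a few pixels held on chip, rather than the whole frame.

```python
# Line-buffer capacity for streaming a win x win stencil over a row-major
# image: (win - 1) full rows plus win pixels of the current row suffice to
# present every window, versus buffering the entire frame.
def window_buffer_capacity(width, height, win):
    full_frame = width * height
    line_buffer = (win - 1) * width + win
    return full_frame, line_buffer

full, buffered = window_buffer_capacity(width=1920, height=1080, win=3)
savings = 1 - buffered / full
```

For a 1080p frame this is a reduction from about 2.07M to under 4K on-chip pixels, the same order of on-chip buffer saving the paper reports from its automated transformations.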