67 results for MEMORY SYSTEMS INTERACTION
in CentAUR: Central Archive University of Reading - UK
Abstract:
As people get older, they tend to remember more positive than negative information. This age-by-valence interaction has been called “positivity effect.” The current study addressed the hypotheses that baseline functional connectivity at rest is predictive of older adults' brain activity when learning emotional information and their positivity effect in memory. Using fMRI, we examined the relationship among resting-state functional connectivity, subsequent brain activity when learning emotional faces, and individual differences in the positivity effect (the relative tendency to remember faces expressing positive vs. negative emotions). Consistent with our hypothesis, older adults with a stronger positivity effect had increased functional coupling between amygdala and medial PFC (MPFC) during rest. In contrast, younger adults did not show the association between resting connectivity and memory positivity. A similar age-by-memory positivity interaction was also found when learning emotional faces. That is, memory positivity in older adults was associated with (a) enhanced MPFC activity when learning emotional faces and (b) increased negative functional coupling between amygdala and MPFC when learning negative faces. In contrast, memory positivity in younger adults was related to neither enhanced MPFC activity to emotional faces, nor MPFC–amygdala connectivity to negative faces. Furthermore, stronger MPFC–amygdala connectivity during rest was predictive of subsequent greater MPFC activity when learning emotional faces. Thus, emotion–memory interaction in older adults depends not only on the task-related brain activity but also on the baseline functional connectivity.
Abstract:
One of the most influential and popular data mining methods is the k-Means algorithm for cluster analysis. Techniques for improving the efficiency of k-Means have largely been explored in two main directions. The amount of computation can be significantly reduced by adopting geometrical constraints and an efficient data structure, notably a multidimensional binary search tree (KD-Tree). These techniques make it possible to reduce the number of distance computations the algorithm performs at each iteration. A second direction is parallel processing, where data and computation loads are distributed over many processing nodes. However, little work has been done to provide a parallel formulation of the efficient sequential techniques based on KD-Trees. Such approaches are expected to have an irregular distribution of computation load and can suffer from load imbalance. This issue has so far limited the adoption of these efficient k-Means variants in parallel computing environments. In this work, we provide a parallel formulation of the KD-Tree based k-Means algorithm for distributed memory systems and address its load balancing issue. Three solutions have been developed and tested: two approaches are based on a static partitioning of the data set, and a third incorporates a dynamic load balancing policy.
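For reference, the baseline that KD-Tree filtering accelerates is the standard Lloyd iteration of k-Means. A minimal sketch in Python (function name, data and parameters are illustrative; it contains none of the KD-Tree pruning or parallel load balancing the paper adds):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's k-Means on tuples of floats.

    Each iteration assigns every point to its nearest centroid (the step
    whose distance computations KD-Tree filtering prunes) and then moves
    each centroid to the mean of its assigned points.
    """
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialise from the data itself
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # nearest-centroid assignment: k squared-distance computations per point
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        for j, members in enumerate(clusters):
            if members:  # keep the old centroid if a cluster emptied out
                dim = len(members[0])
                centroids[j] = tuple(sum(m[d] for m in members) / len(members)
                                     for d in range(dim))
    return centroids
```

On two well-separated groups of points the centroids converge to the group means after a handful of iterations, regardless of which points seed the initialisation.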
Abstract:
User interfaces have the primary role of enabling access to information meeting individual users' needs. However, user-system interaction is still rigid, especially in complex environments where various types of users are involved. Among the approaches for improving user interface agility, we present a normative approach to the design of interfaces for web applications, which allows delivering personalized services to users according to parameters extracted from the simulation of norms in the social context. A case study in an e-Government context is used to illustrate the implications of the approach.
Abstract:
The K-Means algorithm for cluster analysis is one of the most influential and popular data mining methods. Its straightforward parallel formulation is well suited for distributed memory systems with reliable interconnection networks. However, in large-scale geographically distributed systems the straightforward parallel algorithm can be rendered useless by a single communication failure or high latency in communication paths. This work proposes a fully decentralised algorithm (Epidemic K-Means) which does not require global communication and is intrinsically fault tolerant. The proposed distributed K-Means algorithm provides a clustering solution which can approximate the solution of an ideal centralised algorithm over the aggregated data as closely as desired. A comparative performance analysis is carried out against state-of-the-art distributed K-Means algorithms based on sampling methods. The experimental analysis confirms that the proposed algorithm is a practical and accurate distributed K-Means implementation for networked systems of very large and extreme scale.
Abstract:
Recently, major processor manufacturers have announced a dramatic shift in their paradigm to increase computing power over the coming years. Instead of focusing on faster clock speeds and more powerful single-core CPUs, the trend clearly goes towards multi-core systems. This will also result in a paradigm shift for the development of algorithms for computationally expensive tasks, such as data mining applications. Obviously, work on parallel algorithms is not new per se, but concentrated efforts in the many application domains are still missing. Multi-core systems, but also clusters of workstations and even large-scale distributed computing infrastructures, provide new opportunities and pose new challenges for the design of parallel and distributed algorithms. Since data mining and machine learning systems rely on high-performance computing systems, research on the corresponding algorithms must be at the forefront of parallel algorithm research in order to keep pushing data mining and machine learning applications to be more powerful and, especially for the former, interactive. To bring together researchers and practitioners working in this exciting field, a workshop on parallel data mining was organized as part of PKDD/ECML 2006 (Berlin, Germany). The six contributions selected for the program describe various aspects of data mining and machine learning approaches featuring low to high degrees of parallelism: The first contribution addresses the classic problem of distributed association rule mining, focusing on communication efficiency to improve the state of the art. After this, a parallelization technique for speeding up decision tree construction by means of thread-level parallelism for shared memory systems is presented. The next paper discusses the design of a parallel approach for distributed memory systems to the frequent subgraph mining problem.
This approach is based on a hierarchical communication topology to solve issues related to multi-domain computational environments. The fourth paper describes the combined use and customization of software packages to facilitate top-down parallelism in the tuning of Support Vector Machines (SVM), and the next contribution presents an interesting idea concerning parallel training of Conditional Random Fields (CRFs) and motivates their use in labeling sequential data. The last contribution focuses on very efficient feature selection: it describes a parallel algorithm for feature selection from random subsets. Selecting the papers included in this volume would not have been possible without the help of an international Program Committee that provided detailed reviews for each paper. We would also like to thank Matthew Otey, who helped with publicity for the workshop.
Abstract:
The K-Means algorithm for cluster analysis is one of the most influential and popular data mining methods. Its straightforward parallel formulation is well suited for distributed memory systems with reliable interconnection networks, such as massively parallel processors and clusters of workstations. However, in large-scale geographically distributed systems the straightforward parallel algorithm can be rendered useless by a single communication failure or high latency in communication paths. The lack of scalable and fault-tolerant global communication and synchronisation methods in large-scale systems has hindered the adoption of the K-Means algorithm for applications in large networked systems such as wireless sensor networks, peer-to-peer systems and mobile ad hoc networks. This work proposes a fully distributed K-Means algorithm (Epidemic K-Means) which does not require global communication and is intrinsically fault tolerant. The proposed distributed K-Means algorithm provides a clustering solution which can approximate the solution of an ideal centralised algorithm over the aggregated data as closely as desired. A comparative performance analysis is carried out against state-of-the-art sampling methods and shows that the proposed method overcomes the limitations of sampling-based approaches for skewed cluster distributions. The experimental analysis confirms that the proposed algorithm is very accurate and fault tolerant under unreliable network conditions (message loss and node failures) and is suitable for asynchronous networks of very large and extreme scale.
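The core primitive behind such epidemic protocols is gossip-based averaging: nodes reach the global mean through repeated pairwise exchanges, with no coordinator. A minimal sketch in Python (the pairing scheme, function name and values are illustrative, not the paper's actual protocol; a real Epidemic K-Means would gossip per-cluster sums and counts for the centroid update):

```python
import random

def gossip_average(values, rounds=200, seed=0):
    """Pairwise-averaging gossip over one scalar per node.

    Each round, a random pair of nodes replaces both local estimates with
    their average. The global sum is invariant, so every estimate converges
    to the global mean without any global communication step.
    """
    rng = random.Random(seed)
    est = list(values)
    for _ in range(rounds):
        i, j = rng.sample(range(len(est)), 2)  # a random communicating pair
        avg = (est[i] + est[j]) / 2.0
        est[i] = est[j] = avg
    return est

# Each node holds one local value; after gossiping, every node's estimate
# approximates the network-wide mean (here 5.0).
estimates = gossip_average([2.0, 4.0, 6.0, 8.0])
```

Because each exchange only involves two neighbours, the scheme tolerates message loss and node failures gracefully: a missed exchange merely delays convergence.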
Abstract:
Interest in third language (L3) acquisition has increased exponentially in recent years, due to its potential to inform long-lasting debates in theoretical linguistics, language acquisition and psycholinguistics. Researchers investigating child and adult L3 acquisition have, from the very beginning, considered the many different cognitive factors that constrain and condition the initial state and development of newly acquired languages, and their models have duly evolved to incorporate insights from the most recent findings in psycholinguistics, neurolinguistics and cognitive psychology. The articles in this Special Issue of Bilingualism: Language and Cognition, in dealing with issues such as age of acquisition, attrition, relearning, cognitive economy or the reliance on different memory systems, to name a few, provide an accurate portrayal of current inquiry in the field, and are a particularly fine example of how instrumental research in language acquisition and other cognitive domains can be to one another.
Abstract:
We consider two weakly coupled systems and adopt a perturbative approach based on the Ruelle response theory to study their interaction. We propose a systematic way of parameterizing the effect of the coupling as a function of only the variables of a system of interest. Our focus is on describing the impacts of the coupling on the long term statistics rather than on the finite-time behavior. By direct calculation, we find that, at first order, the coupling can be surrogated by adding a deterministic perturbation to the autonomous dynamics of the system of interest. At second order, there are additionally two separate and very different contributions. One is a term taking into account the second-order contributions of the fluctuations in the coupling, which can be parameterized as a stochastic forcing with given spectral properties. The other one is a memory term, coupling the system of interest to its previous history, through the correlations of the second system. If these correlations are known, this effect can be implemented as a perturbation with memory on the single system. In order to treat this case, we present an extension to Ruelle's response theory able to deal with integral operators. We discuss our results in the context of other methods previously proposed for disentangling the dynamics of two coupled systems. We emphasize that our results do not rely on assuming a time scale separation, and, if such a separation exists, can be used equally well to study the statistics of the slow variables and that of the fast variables. By recursively applying the technique proposed here, we can treat the general case of multi-level systems.
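Schematically, and in illustrative notation not taken from the abstract itself, the three contributions described above combine into a surrogate dynamics for the system of interest $x$, coupled with strength $\varepsilon$ to the second system:

```latex
\dot{x} \;=\; F(x)
\;+\; \varepsilon\, M(x)
\;+\; \varepsilon^{2}\, \sigma(x)\, \xi(t)
\;+\; \varepsilon^{2} \int_{0}^{\infty} h\!\left(\tau,\, x(t-\tau)\right) \mathrm{d}\tau
```

Here $F$ is the autonomous dynamics, $M$ the first-order deterministic correction, $\xi(t)$ a stochastic forcing whose spectral properties match the second-order fluctuations of the coupling, and $h$ a memory kernel built from the correlations of the second system, coupling $x$ to its own history.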
Abstract:
Irreversible binding of key flavour disulphides to ovalbumin has been shown previously to occur in model systems. The extent of binding is determined by the availability of the sulphydryl groups to participate in disulphide exchange, influenced either by pH, or by the state of the protein (native or heat-denatured). In this study, two further proteins, one with sulphydryl groups available in the native state (beta-lactoglobulin) and one with no sulphydryl groups in the native state (lysozyme), were used to confirm this hypothesis. When the investigation was extended to real food systems, a similar effect was shown when a commercial meat flavouring containing disulphides was added to heat-denatured ovalbumin. Furthermore, comparison of the volatiles generated from onions, cooked either alone or in the presence of meat, showed a significant reduction of key onion-derived disulphides when cooked in the presence of meat, and an even greater reduction of trisulphides. These findings may have implications for consumer acceptance of food products where these compounds are used as flavourings or where they occur naturally.
Abstract:
Studies on learning management systems (LMS) have largely been technical in nature, with an emphasis on evaluating the human-computer interaction (HCI) processes involved in using the LMS. This paper reports a study that evaluates the information interaction processes on an eLearning course used in teaching applied Statistics; the eLearning course is treated as a synonym for an information system. The study explores issues of missing context in information stored in information systems. Using the semiotic framework as a guide, the researchers evaluated an existing eLearning course with the view to proposing a model for designing improved eLearning courses for future eLearning programmes. In this exploratory study, a survey questionnaire is used to collect data from 160 participants on an eLearning course in Statistics in Applied Climatology. The views of the participants are analysed with a focus only on human information interaction issues. Using the semiotic framework as a guide, syntactic, semantic, pragmatic and social context gaps or problems were identified. The information interaction problems identified include ambiguous instructions, inadequate information, lack of sound and interface design problems, among others. These problems affected the quality of new knowledge created by the participants. The researchers thus highlight the challenges of missing information context when data is stored in an information system. The study concludes by proposing a human information interaction model for improving information interaction quality in the design of eLearning courses on learning management platforms and other information systems.
Abstract:
At its most fundamental, cognition as displayed by biological agents (such as humans) may be said to consist of the manipulation and utilisation of memory. Recent discussions in the field of cognitive robotics have emphasised the role of embodiment and the necessity of a value or motivation for autonomous behaviour. This work proposes a computational architecture – the Memory-Based Cognitive (MBC) architecture – based upon these considerations for the autonomous development of control of a simple mobile robot. This novel architecture will permit the exploration of theoretical issues in cognitive robotics and animal cognition. Furthermore, the biological inspiration of the architecture is anticipated to result in a mobile robot controller which displays adaptive behaviour in unknown environments.
Abstract:
Interactions using a standard computer mouse can be particularly difficult for novice and older adult users. Tasks that involve positioning the mouse over a target and double-clicking to initiate some action can be a real challenge for many users. Hence, this paper describes a study that investigates the double-click interactions of older and younger adults and presents data that can help inform the development of methods of assistance. Twelve older adults (mean age = 63.9 years) and 12 younger adults (mean age = 20.8 years) performed click and double-click target selections with a computer mouse. Initial results show that older users make approximately twice as many errors as younger users when attempting double-clicks. For both age groups, the largest proportion of errors was due to difficulties with keeping the cursor steady between button presses. Compared with younger adults, older adults experienced more difficulties with performing two button presses within a required time interval. Understanding these interactions better is a step towards improving accessibility, and may provide some suggestions for future directions of research in this area.
Abstract:
Context-aware multimodal interactive systems aim to adapt to the needs and behavioural patterns of users and offer a way forward for enhancing the efficacy and quality of experience (QoE) in human-computer interaction. The various modalities that contribute to such systems each provide a specific uni-modal response that is integrated into a multi-modal interface capable of interpreting multi-modal user input and responding to it appropriately through dynamically adapted multi-modal interactive flow management. This paper presents an initial background study in the context of the first phase of a PhD research programme in the area of optimisation of data fusion techniques to serve multimodal interactive systems, their applications and requirements.
Abstract:
The technique of linear responsibility analysis is used for a retrospective case study of a private industrial development consisting of an extension to existing buildings to provide a warehouse, services block and packing line. The organizational structure adopted on the project is analysed using concepts from systems theory which are included in Walker's theoretical model of the structure of building project organizations (Walker, 1981). This model proposes that the process of building provision can be viewed as systems and subsystems which are differentiated from each other at decision points. Further to this, the subsystems can be viewed as the interaction of managing system and operating system. Using Walker's model, a systematic analysis of the relationships between the contributors gives a quantitative assessment of the efficacy of the organizational structure used. The causes of the client's dissatisfaction with the outcome of the project were lack of integration and complexity of the managing system. However, there was a high level of satisfaction with the completed project and this is reflected by the way in which the organization structure corresponded to the model's propositions.