793 results for hierarchical memory


Relevance: 70.00%

Publisher:

Abstract:

The miniaturization race in the hardware industry, aimed at continuously increasing transistor density on a die, no longer brings corresponding application performance improvements. One of the most promising alternatives is to exploit the heterogeneous nature of common applications in hardware. Supported by reconfigurable computation, which has already proved its efficiency in accelerating data-intensive applications, this concept promises a breakthrough in contemporary technology development. Memory organization in such heterogeneous reconfigurable architectures becomes critical. Two primary aspects introduce a sophisticated trade-off. On the one hand, the memory subsystem should provide a well-organized distributed data structure and guarantee the required data bandwidth. On the other hand, it should hide the heterogeneous hardware structure from the end user in order to support feasible high-level programmability of the system. This thesis explores heterogeneous reconfigurable hardware architectures and presents possible solutions to the problem of memory organization and data structure. Using the MORPHEUS heterogeneous platform as an example, the discussion follows the complete design cycle, from decision making and justification to hardware realization. Particular emphasis is placed on methods to support high system performance, meet application requirements, and provide a user-friendly programmer interface. As a result, the research introduces a complete heterogeneous platform enhanced with a hierarchical memory organization, which accomplishes its task by separating computation from communication, supplying the reconfigurable engines with computation and configuration data, and unifying the heterogeneous computational devices through local storage buffers. It is distinguished from related solutions by its distributed data-flow organization, mechanisms specifically engineered to operate on data within local domains, a communication infrastructure based on a Network-on-Chip, and thorough methods to prevent computation and communication stalls. In addition, a novel technique to accelerate memory access was developed and implemented.
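
The separation of computation from communication via local storage buffers can be pictured with a simple ping-pong (double) buffering pattern. The sketch below is purely illustrative and assumes nothing about the MORPHEUS hardware or its APIs; all names (noc_fetch, engine_compute) are hypothetical stand-ins.

```python
# Illustrative only: ping-pong buffering so a compute engine need not stall
# waiting for the interconnect. The NoC/engine stubs are hypothetical,
# not the MORPHEUS interfaces.

def noc_fetch(tile_index):
    """Stand-in for a Network-on-Chip transfer into a local buffer."""
    return [tile_index] * 16          # pretend payload

def engine_compute(buffer):
    """Stand-in for a reconfigurable engine consuming a local buffer."""
    return sum(buffer)

def process_stream(num_tiles):
    results = []
    buffers = [noc_fetch(0), None]    # prefetch tile 0 into buffer A
    for i in range(num_tiles):
        current = buffers[i % 2]
        # In hardware the next transfer would run concurrently with compute;
        # here it is simply issued first to show the overlap pattern.
        if i + 1 < num_tiles:
            buffers[(i + 1) % 2] = noc_fetch(i + 1)
        results.append(engine_compute(current))
    return results

print(process_stream(4))
```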

Relevance: 40.00%

Publisher:

Abstract:

The performance of memory-guided saccades with two different delays (3 and 30 s of memorization) was studied in seven healthy subjects. Double-pulse transcranial magnetic stimulation (dTMS) with an interstimulus interval of 100 ms was applied over the right dorsolateral prefrontal cortex (DLPFC) early (1 s after target presentation) and late (28 s after target presentation). For both delays, early stimulation significantly increased the percentage of error in amplitude (PEA) of contralateral memory-guided saccades compared to the control experiment without stimulation. dTMS applied late in the delay had no significant effect on PEA. Furthermore, we found a significantly smaller effect of early stimulation in the long-delay paradigm. These results suggest a time-dependent hierarchical organization of spatial working memory, with a functional dominance of the DLPFC during early memorization regardless of the memorization delay. For a long memorization delay, however, working memory seems to have an additional, DLPFC-independent component.

Relevance: 40.00%

Publisher:

Abstract:

Results of numerical experiments are presented. The experiments were carried out by computer simulation of the olfactory bulb in order to verify the conceptual model of thinking mechanisms introduced in [2]. The key role of quasisymbol neurons in pattern identification, the existence of a mental view, the function of cyclic connections between symbol and quasisymbol neurons as short-term memory, and the important role of synaptic plasticity in learning processes are confirmed numerically. The correctness of the fundamental ideas underlying the conceptual model is thus confirmed quantitatively for the olfactory bulb.

Relevance: 30.00%

Publisher:

Abstract:

Wireless Sensor Networks (WSNs) are highly distributed systems in which resource allocation (bandwidth, memory) must be performed efficiently to provide a minimum acceptable Quality of Service (QoS) to the regions where critical events occur. In fact, if resources are statically assigned independently of the location and time of the events, these resources will inevitably be misused. In other words, it is more efficient to dynamically grant more resources to sensor nodes affected by critical events, thus providing better network resource management and reducing end-to-end delays of event notification and tracking. In this paper, we discuss the use of a WSN management architecture based on the active network management paradigm to provide real-time tracking and reporting of dynamic events while ensuring efficient resource utilization. The active network management paradigm allows packets to transport not only data, but also program scripts that are executed in the nodes to dynamically modify the operation of the network. This presumes the use of a runtime execution environment (middleware) in each node to interpret the script. We consider hierarchical WSN topologies (e.g., cluster-tree, two-tiered architectures), since they have been used to improve the timing performance of WSNs by supporting deterministic medium access control protocols.
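
As a rough illustration of the active-network idea (not the middleware described in the paper), a packet can carry both a reading and a small management "script" that the node's runtime environment interprets to retune its own resource allocation. Every name below is hypothetical.

```python
# Hypothetical sketch: a packet carries a script that the node's runtime
# environment executes to adjust local resource allocation. Not the
# middleware or protocol described in the paper.

class SensorNode:
    def __init__(self, node_id, bandwidth_share=0.1):
        self.node_id = node_id
        self.bandwidth_share = bandwidth_share   # fraction of cluster bandwidth

    def handle_packet(self, packet):
        # Ordinary sensor data is processed as usual.
        reading = packet.get("data")
        # If a management script is attached, interpret it. Here the "script"
        # is a declarative command rather than executable code.
        script = packet.get("script")
        if script and script.get("op") == "grant_bandwidth":
            self.bandwidth_share = script["share"]
        return reading

node = SensorNode("n17")
node.handle_packet({"data": 23.5,
                    "script": {"op": "grant_bandwidth", "share": 0.4}})
print(node.bandwidth_share)   # 0.4: the node was reconfigured by the packet
```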

Relevance: 30.00%

Publisher:

Abstract:

Dissertation presented to obtain the Ph.D. degree in Biology, Neuroscience.

Relevance: 30.00%

Publisher:

Abstract:

A parts-based model is a parametrization of an object class using a collection of landmarks that follow the object structure. The matching of parts-based models is one of the problems to which pairwise Conditional Random Fields have been successfully applied. The main reason for their effectiveness is tractable inference and learning, owing to the simplicity of the graphs involved, usually trees. However, these models do not consider possible patterns of statistics among sets of landmarks, and thus they suffer from using overly myopic information. To overcome this limitation, we propose a novel structure based on hierarchical Conditional Random Fields, which we explain in the first part of this thesis. We build a hierarchy of combinations of landmarks, where matching is performed taking into account the whole hierarchy. To preserve tractable inference, we effectively sample the label set. We test our method on facial feature selection and human pose estimation on two challenging datasets: Buffy and MultiPIE. In the second part of this thesis, we present a novel approach to multiple kernel combination that relies on stacked classification. This method can be used to evaluate the landmarks of the parts-based model approach. Our method is based on combining the responses of a set of independent classifiers, one for each individual kernel. Unlike earlier approaches that linearly combine kernel responses, our approach uses them as inputs to another set of classifiers. We show that we outperform state-of-the-art methods on most of the standard benchmark datasets.
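
A minimal sketch of the stacking idea, assuming scikit-learn and two precomputed kernels: one SVM is trained per kernel, and its responses are fed as features to a second-level classifier instead of being combined linearly. The synthetic data and kernel choices are placeholders, not those used in the thesis.

```python
# Illustrative stacked multiple-kernel combination (not the thesis code):
# one SVM per kernel, whose responses become inputs to a second classifier.
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] ** 2 > 0.5).astype(int)
X_train, X_test, y_train, y_test = X[:150], X[150:], y[:150], y[150:]

kernels = [linear_kernel, rbf_kernel]
scores_train, scores_test = [], []
for kern in kernels:
    K_train = kern(X_train, X_train)           # precomputed Gram matrices
    K_test = kern(X_test, X_train)
    clf = SVC(kernel="precomputed").fit(K_train, y_train)
    scores_train.append(clf.decision_function(K_train))
    scores_test.append(clf.decision_function(K_test))

# Second level: a classifier over the stacked per-kernel responses.
Z_train, Z_test = np.column_stack(scores_train), np.column_stack(scores_test)
stacker = LogisticRegression().fit(Z_train, y_train)
print("stacked accuracy:", stacker.score(Z_test, y_test))
```

A real pipeline would generate the first-level responses with cross-validation to avoid overfitting the stacker to the base classifiers' training outputs; the sketch omits this for brevity.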

Relevance: 30.00%

Publisher:

Abstract:

As the number of processors in distributed-memory multiprocessors grows, efficiently supporting a shared-memory programming model becomes difficult. We have designed the Protocol for Hierarchical Directories (PHD) to allow shared-memory support for systems containing massive numbers of processors. PHD eliminates bandwidth problems by using a scalable network, decreases hot spots by not relying on a single point to distribute blocks, and uses a scalable amount of space for its directories. PHD provides a shared-memory model by synthesizing a global shared memory from the local memories of the processors. PHD supports sequentially consistent read, write, and test-and-set operations. This thesis also introduces a method of describing locality for hierarchical protocols and employs this method in the derivation of an abstract model of the protocol's behavior. An embedded model, based on the work of Johnson [ISCA19], describes the protocol's behavior when mapped to a k-ary n-cube. The thesis uses these two models to study the average height in the hierarchy that operations reach, the longest path messages travel, the number of messages that operations generate, the inter-transaction issue time, and the protocol overhead for different locality parameters, degrees of multithreading, and machine sizes. We determine that multithreading is only useful for approximately two to four threads; any additional interleaving does not decrease the overall latency. For small machines and high-locality applications, this limitation is due mainly to the length of the running threads. For large machines with medium to low locality, this limitation is due mainly to the protocol overhead being too large. Our study using the embedded model shows that in situations where the run length between references to shared memory is at least an order of magnitude longer than the time to process a single state transition in the protocol, applications exhibit good performance. If separate controllers for processing protocol requests are included, the protocol scales to 32k-processor machines as long as the application exhibits hierarchical locality: at least 22% of the global references must be satisfiable locally, and at most 35% of the global references are allowed to reach the top level of the hierarchy.
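
A toy sketch (not the PHD protocol itself) of the hierarchical-directory idea: a read miss climbs the directory tree only until it reaches a node whose subtree already holds the block, so most requests are satisfied near the leaves when locality is high. All structures and names are illustrative.

```python
# Toy illustration of hierarchical directories (not the actual PHD protocol):
# each directory node records which of its children hold a copy of a block;
# a read climbs the tree only until some ancestor's subtree has the data.

class DirNode:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children_with_copy = {}   # block -> set of child names

    def holds(self, block):
        return bool(self.children_with_copy.get(block))

def read(leaf, block):
    """Return the height in the hierarchy at which the request was satisfied."""
    height, node = 0, leaf
    while node is not None:
        if node.holds(block):
            return height              # satisfied within this subtree
        node, height = node.parent, height + 1
    raise KeyError("block not cached anywhere")

root = DirNode("root")
cluster = DirNode("cluster0", parent=root)
leaf_a = DirNode("leafA", parent=cluster)
leaf_b = DirNode("leafB", parent=cluster)
leaf_a.children_with_copy["x"] = {"cpu0"}        # cpu0 under leafA caches x
cluster.children_with_copy["x"] = {"leafA"}
root.children_with_copy["x"] = {"cluster0"}

print(read(leaf_a, "x"))   # 0: satisfied locally
print(read(leaf_b, "x"))   # 1: satisfied one level up, inside the same cluster
```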

Relevance: 30.00%

Publisher:

Abstract:

An emerging consensus in cognitive science views the biological brain as a hierarchically organized predictive processing system. This is a system in which higher-order regions are continuously attempting to predict the activity of lower-order regions at a variety of (increasingly abstract) spatial and temporal scales. The brain is thus revealed as a hierarchical prediction machine that is constantly engaged in the effort to predict the flow of information originating from the sensory surfaces. Such a view seems to afford a great deal of explanatory leverage when it comes to a broad swathe of seemingly disparate psychological phenomena (e.g., learning, memory, perception, action, emotion, planning, reason, imagination, and conscious experience). In the most positive case, the predictive processing story seems to provide our first glimpse of what a unified (computationally tractable and neurobiologically plausible) account of human psychology might look like. This obviously marks out one reason why such models should be the focus of current empirical and theoretical attention. Another reason, however, is rooted in the potential of such models to advance the current state of the art in machine intelligence and machine learning. Interestingly, the vision of the brain as a hierarchical prediction machine establishes contact with work that goes under the heading of 'deep learning'. Deep learning systems thus often attempt to make use of predictive processing schemes and (increasingly abstract) generative models as a means of supporting the analysis of large data sets. But are such computational systems sufficient (by themselves) to provide a route to general human-level analytic capabilities? I will argue that they are not, and that closer attention to a broader range of forces and factors (many of which are not confined to the neural realm) may be required to understand what it is that gives human cognition its distinctive (and largely unique) flavour. The vision that emerges is one of 'homomimetic deep learning systems': systems that situate a hierarchically organized predictive processing core within a larger nexus of developmental, behavioural, symbolic, technological and social influences. Relative to that vision, I suggest that we should see the Web as a form of 'cognitive ecology', one that is as much involved with the transformation of machine intelligence as it is with the progressive reshaping of our own cognitive capabilities.
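
A deliberately simplified sketch of the prediction-error loop this view describes (an assumed toy model, not one from the text): a higher level maintains an estimate that is used to predict the incoming signal, and the prediction error drives revision of that estimate.

```python
# Minimal, illustrative predictive-processing loop (not a model from the text):
# the higher level predicts the sensory signal through a generative weight,
# and the prediction error is used to revise the higher-level estimate.
import numpy as np

w = 2.0                  # generative weight: prediction = w * latent estimate
latent = 0.0             # higher-level estimate
learning_rate = 0.1

sensory_stream = np.full(50, 3.0)        # constant input the hierarchy must explain
for x in sensory_stream:
    prediction = w * latent
    error = x - prediction               # prediction error sent "up"
    latent += learning_rate * w * error  # gradient step on the squared error

print(round(latent, 3), round(w * latent, 3))   # latent -> 1.5, prediction -> 3.0
```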

Relevance: 30.00%

Publisher:

Abstract:

One common problem in all basic techniques of knowledge representation is handling the trade-off between the precision of inferences and resource constraints, such as time and memory. Michalski and Winston (1986) suggested the Censored Production Rule (CPR) as an underlying representation and computational mechanism to enable logic-based systems to exhibit variable precision, in which certainty varies while specificity stays constant. As an extension of CPR, the Hierarchical Censored Production Rules (HCPRs) system of knowledge representation, proposed by Bharadwaj & Jain (1992), exhibits both variable certainty and variable specificity and offers mechanisms for handling the trade-off between the two. An HCPR has the form: Decision If (preconditions) Unless (censor) Generality (general_information) Specificity (specific_information). As an attempt towards evolving a generalized knowledge representation, an Extended Hierarchical Censored Production Rules (EHCPRs) system is suggested in this paper. With the inclusion of new operators, an Extended Hierarchical Censored Production Rule (EHCPR) takes the general form: Concept If (Preconditions) Unless (Exceptions) Generality (General-Concept) Specificity (Specific Concepts) Has_part (default: structural-parts) Has_property (default: characteristic-properties) Has_instance (instances). It is shown how semantic networks and frames are represented in terms of EHCPRs. Multiple inheritance, inheritance with and without cancellation, recognition with partial match, and a few default logic problems are shown to be handled efficiently in the proposed system.
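
A hedged sketch of how the slots of an EHCPR might be held as a data structure, following only the operator list given above. The firing rule used here (conclude when the preconditions hold and no exception is known) is a plausible reading of the "Unless" censor, not Bharadwaj & Jain's exact inference procedure.

```python
# Illustrative data structure for an Extended Hierarchical Censored Production
# Rule, following the operator list in the abstract. The firing rule is a
# plausible reading, not the authors' algorithm.
from dataclasses import dataclass, field

@dataclass
class EHCPR:
    concept: str
    preconditions: list = field(default_factory=list)
    exceptions: list = field(default_factory=list)       # "Unless" censors
    generality: list = field(default_factory=list)       # more general concepts
    specificity: list = field(default_factory=list)      # more specific concepts
    has_part: list = field(default_factory=list)
    has_property: list = field(default_factory=list)
    has_instance: list = field(default_factory=list)

    def fires(self, facts):
        return (all(p in facts for p in self.preconditions)
                and not any(e in facts for e in self.exceptions))

can_fly = EHCPR("can_fly",
                preconditions=["is_bird"],
                exceptions=["is_penguin"],
                generality=["animal"],
                specificity=["sparrow", "eagle"],
                has_property=["has_wings"])

print(can_fly.fires({"is_bird"}))                  # True
print(can_fly.fires({"is_bird", "is_penguin"}))    # False: censor blocks the rule
```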

Relevance: 30.00%

Publisher:

Abstract:

In recent years, Deep Learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success both in science and in business, deep learning has its own limitations. It is often questioned whether such techniques are merely brute-force statistical approaches and whether they can only work in the context of High Performance Computing with vast amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". The dissertation focuses on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison between two very different deep learning techniques on the aforementioned task: the Convolutional Neural Network (CNN) and Hierarchical Temporal Memory (HTM). They represent two different approaches and points of view within the broad field of deep learning and are well suited to understanding and pointing out the strengths and weaknesses of each. CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed in large corporations such as Google and Facebook for solving face recognition and image auto-tagging problems. HTM, on the other hand, is an emerging paradigm and a new, mainly unsupervised method that is more biologically inspired. It tries to gain insights from the computational neuroscience community in order to incorporate concepts such as time, context and attention during the learning process, which are typical of the human brain. In the end, the thesis sets out to show that in certain cases, with a smaller quantity of data, HTM can outperform CNN.

Relevance: 30.00%

Publisher:

Abstract:

Functional neuroimaging techniques enable investigations into the neural basis of human cognition, emotions, and behaviors. In practice, applications of functional magnetic resonance imaging (fMRI) have provided novel insights into the neuropathophysiology of major psychiatric, neurological, and substance abuse disorders, as well as into the neural responses to their treatments. Modern activation studies often compare localized task-induced changes in brain activity between experimental groups. One may also extend voxel-level analyses by simultaneously considering the ensemble of voxels constituting an anatomically defined region of interest (ROI) or by considering means or quantiles of the ROI. In this work we present a Bayesian extension of voxel-level analyses that offers several notable benefits. First, it combines whole-brain voxel-by-voxel modeling and ROI analyses within a unified framework. Second, an unstructured variance/covariance matrix for the regional mean parameters allows for the study of inter-regional functional connectivity, provided enough subjects are available to allow accurate estimation. Finally, an exchangeable correlation structure within regions allows for the consideration of intra-regional functional connectivity. We perform estimation for our model using Markov Chain Monte Carlo (MCMC) techniques implemented via Gibbs sampling which, despite the high-throughput nature of the data, can be executed quickly (in less than 30 minutes). We apply our Bayesian hierarchical model to two novel fMRI data sets: one considering inhibitory control in cocaine-dependent men and the other considering verbal memory in subjects at high risk for Alzheimer's disease. The unifying hierarchical model presented in this manuscript is shown to enhance the interpretation of these data sets.
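
For orientation only, here is a toy Gibbs sampler for a two-level normal model (voxel effects nested within a regional mean), with conjugate updates, a flat prior on the regional mean, and variances treated as known. This is a generic hierarchical sketch, not the authors' fMRI model.

```python
# Toy Gibbs sampler for a simple hierarchical normal model (generic sketch,
# not the fMRI model in the paper): voxel effects y_v ~ N(theta_v, s2),
# theta_v ~ N(mu, tau2), flat prior on the regional mean mu, known variances.
import numpy as np

rng = np.random.default_rng(42)
y = rng.normal(loc=2.0, scale=1.0, size=30)    # observed voxel effects in one ROI
s2, tau2 = 1.0, 0.5                            # known variances for the sketch

mu, mu_draws = 0.0, []
for _ in range(2000):
    # theta_v | y_v, mu ~ N(precision-weighted mean, combined variance)
    post_var = 1.0 / (1.0 / s2 + 1.0 / tau2)
    post_mean = post_var * (y / s2 + mu / tau2)
    theta = rng.normal(post_mean, np.sqrt(post_var))
    # mu | theta ~ N(mean(theta), tau2 / n) under a flat prior
    mu = rng.normal(theta.mean(), np.sqrt(tau2 / len(theta)))
    mu_draws.append(mu)

print("posterior mean of regional effect:", np.mean(mu_draws[500:]).round(2))
```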

Relevance: 30.00%

Publisher:

Abstract:

The present study investigated how ease of imagery influences source monitoring accuracy. Two experiments were conducted to examine how ease of imagery influences the probability of source confusions between perceived and imagined completions of natural symmetric shapes. The stimuli consisted of binary pictures of natural objects, namely symmetric pictures of birds, butterflies, insects, and leaves. The ease of imagery (indicating the similarity of the sources) and the discriminability (indicating the similarity of the items) of each stimulus were estimated in a pretest and included as predictors of memory performance for these stimuli. It was found that confusion of the sources became more likely when the imagery process was relatively easy. However, when the different processes of source monitoring (item memory, source memory, and guessing biases) are disentangled, both experiments support the assumption that the decreased source memory for easily imagined stimuli is due to decision processes and misinformation at retrieval rather than to encoding processes and memory retention. The data were modeled with a Bayesian hierarchical implementation of the one-high-threshold source monitoring model.

Relevance: 30.00%

Publisher:

Abstract:

Usually, generalization is considered a function of learning from a set of examples. In the present work, on the basis of the recent neural network assembly memory model (NNAMM), a biologically plausible 'grandmother' model for vision is proposed in which each separate memory unit can itself generalize. For such generalization by computation through memory, analytical formulae and a numerical procedure are derived to calculate exactly the generalization ability of a perfectly learned memory unit. The model's memory has a complex hierarchical structure, can be learned from one example in a one-step process, and may be considered semi-representational. A simple binary neural network for bell-shaped tuning is described.