993 results for Memory management


Relevance:

30.00%

Publisher:

Abstract:

Background. This study investigated the performance of patients with idiopathic Parkinson's disease (PD) without dementia on incidental recognition memory and the effect of encoding strategies on contextual memory. Methods. The authors studied 21 patients with PD (ages 60-85, 12 women; Hoehn and Yahr I-III, Activities of Daily Living 70%-100%) and 22 healthy controls (ages 60-84, 18 women). Participants completed the vocabulary subtest of the Wechsler Adult Intelligence Scale and the Wisconsin Card Sorting Test (WCST). To assess incidental recognition memory for item (object) and context (location of the object), participants in each group were assigned to 1 of 2 encoding conditions: (a) an incidental associative instruction to bind the object to its location or (b) a nonassociative, nonspecific instruction. Results. PD patients showed performance comparable to the control group's on the vocabulary subtest and WCST. In contrast to controls, PD patients were unable to take advantage of the associative encoding instruction, which also had a deleterious effect on item recognition. Conclusion. This sample of participants with PD showed diminished item and context recognition memory and an impaired ability to use an incidental memory encoding strategy, suggesting a compromised cognitive reserve. The fact that these alterations occurred in early stages of PD, and prior to more general cognitive alterations such as executive dysfunction, should be considered in the management of patients through specific cognitive rehabilitation interventions.

Relevance:

30.00%

Publisher:

Abstract:

We work at the confluence of knowledge management, organizational memory and emergent knowledge, viewed through the lens of complex adaptive systems. To be fundamentally sustainable, organizations must manage the ambidexterity of day-to-day work and innovation. An organization is a systemic entity composed of groups of people who interact to achieve common objectives, which makes it necessary to capture, store and share the knowledge produced in those interactions; this knowledge can be generated at the intra-organizational or inter-organizational level. Organizations maintain an organizational memory of knowledge supported by information technology and systems. Each organization, especially in times of uncertainty and radical change, needs knowledge that is timely and appropriately sized, built on both tacit and explicit knowledge, in order to meet the demands of its environment. This sizing is a learning process that results from the interaction between tacit and explicit knowledge, which we frame within a Complex Adaptive Systems approach. Using complex adaptive systems to build this emerging interdependent relationship produces emergent knowledge that improves the organization's unique development.

Relevance:

30.00%

Publisher:

Abstract:

The foreseen evolution of chip architectures towards a higher number of heterogeneous cores, with non-uniform memory and non-coherent caches, brings renewed attention to the use of Software Transactional Memory (STM) as an alternative to lock-based synchronisation. However, STM relies on the possibility of aborting conflicting transactions to maintain data consistency, which impacts the responsiveness and timing guarantees required by real-time systems. In these systems, contention delays must be (efficiently) limited so that the response times of tasks executing transactions are upper-bounded and task sets can be feasibly scheduled. In this paper we defend the role of the transaction contention manager in reducing the number of transaction retries and in helping the real-time scheduler to assure schedulability. For this purpose, the contention management policy should be aware of on-line scheduling information.
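
As a sketch of what an on-line scheduling-aware contention policy might look like, the following Python fragment resolves a conflict in favour of the transaction whose task has the earlier absolute deadline, breaking ties towards the older transaction to limit starvation. The Transaction fields and the earliest-deadline rule are illustrative assumptions, not the policy defended in the paper.

from dataclasses import dataclass

@dataclass
class Transaction:
    task_id: int
    deadline: float    # absolute deadline of the task running this transaction
    start_time: float  # when the transaction started executing

def resolve_conflict(attacker: Transaction, victim: Transaction) -> Transaction:
    """Return the transaction allowed to proceed; the other is aborted and retried."""
    # Deadline-aware rule: the transaction of the more urgent task wins.
    if attacker.deadline != victim.deadline:
        return attacker if attacker.deadline < victim.deadline else victim
    # Tie-break on age so long-running transactions are not starved.
    return attacker if attacker.start_time <= victim.start_time else victim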

Relevance:

30.00%

Publisher:

Abstract:

Recent embedded processor architectures containing multiple heterogeneous cores and non-coherent caches have renewed attention to the use of Software Transactional Memory (STM) as a building block for developing parallel applications. STM promises to ease concurrent and parallel software development, but relies on the possibility of aborting conflicting transactions to maintain data consistency, which in turn affects the execution time of tasks carrying transactions. As a result, the timing behaviour of the task set may not be predictable, so it is crucial to limit the execution time overheads resulting from aborts. In this paper we formalise a FIFO-based algorithm to order the sequence of commits of concurrent transactions. We then propose and evaluate two non-preemptive and one SRP-based fully-preemptive scheduling strategies in order to avoid transaction starvation.
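
The FIFO commit-ordering idea can be illustrated with a small sketch: each transaction draws a ticket when it requests to commit and may only perform its commit when its ticket comes up. This is an illustration under assumed semantics, not the algorithm formalised in the paper; the class and method names are hypothetical.

import itertools
import threading

class FifoCommitOrderer:
    """Lets concurrent transactions commit strictly in the order they asked to."""

    def __init__(self) -> None:
        self._tickets = itertools.count()   # next ticket to hand out
        self._now_serving = 0               # ticket currently allowed to commit
        self._cond = threading.Condition()

    def request_commit(self) -> int:
        # Draw a ticket; the FIFO position is fixed at this point.
        with self._cond:
            return next(self._tickets)

    def wait_for_turn(self, ticket: int) -> None:
        # Block until this ticket is the next one allowed to commit.
        with self._cond:
            while self._now_serving != ticket:
                self._cond.wait()

    def commit_done(self) -> None:
        # Release the next transaction in FIFO order.
        with self._cond:
            self._now_serving += 1
            self._cond.notify_all()

A transaction would call request_commit(), then wait_for_turn(ticket), perform its validation and write-back, and finally call commit_done() to release the next transaction in line.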

Relevance:

30.00%

Publisher:

Abstract:

The lives of humans and most living beings depend on sensation and perception for the best assessment of the surrounding world. Sensory organs acquire a variety of stimuli that are interpreted and integrated in our brain for immediate use or stored in memory for later recall. Among other reasoning aspects, a person has to decide what to do with the available information. Emotions are classifiers of collected information, assigning a personal meaning to objects, events and individuals, and are part of our own identity. Emotions play a decisive role in cognitive processes such as reasoning, decision making and memory by assigning relevance to collected information. Access to pervasive computing devices, empowered by the ability to sense and perceive the world, provides new forms of acquiring and integrating information. But before data can be assessed for its usefulness, systems must capture it and ensure that it is properly managed for diverse possible goals. Portable and wearable devices are now able to gather and store information from the environment and from our body, using cloud-based services and Internet connections. The limitations of such systems in handling sensorial data, compared with our own sensorial capabilities, are an identified problem. Another problem is the lack of interoperability between humans and devices, as devices do not properly understand humans' emotional states and needs. Addressing these problems is the motivation for the present research work. The mission assumed here is to bring sensorial and physiological data into a Framework able to manage the collected data in line with human cognitive functions, supported by a new data model. By learning from selected human functional and behavioural models and reasoning over collected data, the Framework aims at providing an evaluation of a person's emotional state to empower human-centric applications, along with the capability of storing episodic information on a person's life, together with physiological indicators of emotional state, to be used by new-generation applications.

Relevance:

30.00%

Publisher:

Abstract:

It is a difficult task to avoid the "smart systems" topic when discussing smart prevention and, similarly, it is difficult to address smart systems without focusing on their ability to learn. Following the same line of thought, in the current reality it seems a Herculean task (or an irreparable omission) to approach the topic of certified occupational health and safety management systems (OHSMS) without discussing integrated management systems (IMSs). The available data suggest that an OHSMS seldom operates as the single management system (MS) in a company, so any statement concerning OHSMSs should mainly be interpreted from an integrated perspective. A major distinction can be drawn between generic systems that learn, i.e., systems that have "memory", and those that do not. The former are often depicted as adaptive, since they take past events into account to deal with novel, similar and future events, modifying their structure to succeed in their environment. These systems often present nonlinear behaviour and considerable uncertainty in the forecasting of some events. This paper seeks to portray, for the first time as far as we were able to determine, IMSs as complex adaptive systems (CASs) by listing their properties and dissecting the features that enable them to evolve and self-organize in order to, holistically, fulfil the requirements of different stakeholders and thus thrive by assuring the successful sustainability of a company. Based on the literature review carried out, this is the first time that IMSs are identified as CASs, which may develop fruitful synergies for both the MS and the CAS communities. By performing a thorough literature review, and based on some concepts embedded in the "DNA" of the subsystems' implementation standards, we specifically intend to identify, determine and discuss the properties of a generic IMS that should be considered in order to classify it as a CAS.

Relevance:

30.00%

Publisher:

Abstract:

The modern computer systems in use nowadays are mostly processor-dominant, which means that their memory is treated as a slave element with one major task: to serve the execution units' data requirements. This organization is based on the classical von Neumann computer model, proposed some seven decades ago. This model suffers from a substantial processor-memory bottleneck because of the huge disparity between processor and memory working speeds. In order to address this problem, in this paper we propose a novel architecture and organization of processors and computers that attempts to provide a stronger match between the processing and memory elements in the system. The proposed model utilizes a memory-centric architecture, wherein execution hardware is added to the memory code blocks, allowing them to perform instruction scheduling and execution, manage data requests and responses, and communicate directly with the data memory blocks without using registers. This organization allows concurrent execution of all threads, processes or program segments that fit in the memory at a given time. We therefore describe several possibilities for organizing the proposed memory-centric system with multiple data and merged logic-memory blocks, utilizing a high-speed interconnection switching network.

Relevance:

30.00%

Publisher:

Abstract:

Although it is commonly accepted that most macroeconomic variables are non-stationary, it is often difficult to identify the source of the non-stationarity. In particular, it is well known that integrated models and short-memory models containing trending components that may display sudden changes in their parameters share some statistical properties that make their identification a hard task. The goal of this paper is to extend the classical testing framework for I(1) versus I(0)+breaks by considering a more general class of models under the null hypothesis: non-stationary fractionally integrated (FI) processes. A similar identification problem holds in this broader setting, which is shown to be a relevant issue from both a statistical and an economic perspective. The proposed test is developed in the time domain and is very simple to compute. The asymptotic properties of the new technique are derived, and it is shown by simulation that it is very well behaved in finite samples. To illustrate the usefulness of the proposed technique, an application using inflation data is also provided.

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a new time-domain test of a process being I(d), 0 < d ≤ 1, under the null, against the alternative of being I(0) with deterministic components subject to structural breaks at known or unknown dates, with the goal of disentangling the existing identification issue between long memory and structural breaks. Denoting by AB(t) the different types of structural breaks in the deterministic components of a time series considered by Perron (1989), the test statistic proposed here is based on the t-ratio (or the infimum of a sequence of t-ratios) of the estimated coefficient on y_{t-1} in an OLS regression of Δ^d y_t on a simple transformation of the above-mentioned deterministic components and y_{t-1}, possibly augmented by a suitable number of lags of Δ^d y_t to account for serial correlation in the error terms. The case where d = 1 coincides with the Perron (1989) or the Zivot and Andrews (1992) approaches if the break date is known or unknown, respectively. The statistic is labelled as the SB-FDF (Structural Break-Fractional Dickey-Fuller) test, since it is based on the same principles as the well-known Dickey-Fuller unit root test. Both its asymptotic behavior and finite sample properties are analyzed, and two empirical applications are provided.
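
In the notation above, the SB-FDF regression can be written schematically as follows; here z_t(\theta) is a placeholder for the "simple transformation" of the deterministic break components AB(t) mentioned in the abstract, and k denotes the number of augmentation lags:

\Delta^d y_t = \alpha' z_t(\theta) + \phi\, y_{t-1} + \sum_{j=1}^{k} \gamma_j \, \Delta^d y_{t-j} + \varepsilon_t

The SB-FDF statistic is then the t-ratio of \hat{\phi} (or the infimum of such t-ratios over candidate break dates when the break date is unknown).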

Relevance:

30.00%

Publisher:

Abstract:

In the evaluation of soil quality for agricultural use, soil structure is one of the most important properties; it is influenced not only by climate, biological activity, and management practices but also by mechanical and physico-chemical forces acting in the soil. The purpose of this study was to evaluate the influence of conventional agricultural management on the structure and microstructure of a Latossolo Vermelho distroférrico típico (Rhodic Hapludox) in an experimental area planted to maize. Soil morphology was described using the crop profile method by identifying the distinct structural volumes called Morphologically Homogeneous Units (MHUs). For comparison, we also described a profile in an adjacent area without agricultural use and under natural regrowth, referred to as Memory. We took undisturbed samples from the main MHUs so as to form thin sections and blocks of soil for micromorphological and micromorphometrical analyses. Results from the application of the crop profile method showed the occurrence of the following structural types: loose (L), fragmented (F) and continuous (C) in both profiles analyzed. In the Memory soil profile, the fragmented structures were classified as Fptμ∆+tf and Fmt∆μ, whose micromorphology shows an enaulic-porphyric (porous) relative distribution with a great deal of biological activity, as indicated by the presence of vughs and channels. Lower down, from 0.20 to 0.35 m, there was a continuous soil volume (sub-type C∆μ), with a subangular block microstructure and an enaulic-porphyric relative distribution, though in this case more compact and with aggregate coalescence and less biological activity. The micromorphometrical study of the soil of the Memory Plot showed the predominance of complex pores in NAM (15.03 %), Fmt∆μ (11.72 %), and Fptμ∆+tf (7.73 %), and of rounded pores in C∆μ (8.21 %). In the soil under conventional agricultural management, we observed fragmented structures similar to those of the Memory Plot from 0.02 to 0.20 m, followed by a volume with a compact continuous structure (C∆μ), without visible porosity and with few roots. In the MHUs under conventional management, a reduction in packing pores (40 %) was observed, mainly in the continuous units (C). The microstructure had well-defined blocks, with the occurrence of planar pores and less evidence of biological activity. In conclusion, the morphological and micromorphological analyses of the soil profiles studied offered complementary information regarding soil structural quality, especially concerning the changes in pore types as a result of mechanical stress undergone by the soil.

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents a novel design paradigm, called Virtual Runtime Application Partitions (VRAP), to judiciously utilize on-chip resources. As the dark silicon era approaches, where power considerations will allow only a fraction of the chip to be powered on, judicious resource management will become a key consideration in future designs. Most works on resource management treat only the physical components (i.e. computation, communication, and memory blocks) as resources and manipulate the component-to-application mapping to optimize various parameters (e.g. energy efficiency). To further enhance the optimization potential, in addition to the physical resources we propose to manipulate abstract resources (i.e. the voltage/frequency operating point, the fault-tolerance strength, the degree of parallelism, and the configuration architecture). The proposed framework (i.e. VRAP) encapsulates methods, algorithms, and hardware blocks to provide each application with the abstract resources tailored to its needs. To test the efficacy of this concept, we have developed three distinct self-adaptive environments: (i) Private Operating Environment (POE), (ii) Private Reliability Environment (PRE), and (iii) Private Configuration Environment (PCE), which collectively ensure that each application meets its deadlines using minimal platform resources. In this work several novel architectural enhancements, algorithms and policies are presented to realize the virtual runtime application partitions efficiently. Considering future design trends, we have chosen Coarse Grained Reconfigurable Architectures (CGRAs) and Networks on Chip (NoCs) to test the feasibility of our approach. Specifically, we have chosen the Dynamically Reconfigurable Resource Array (DRRA) and McNoC as the representative CGRA and NoC platforms. The proposed techniques are compared and evaluated using a variety of quantitative experiments. Synthesis and simulation results demonstrate that VRAP significantly enhances energy and power efficiency compared to the state of the art.

Relevance:

30.00%

Publisher:

Abstract:

The management of resources, such as equipment and work crews, should be taken into account when building any workable plan for the service network design problem. However, research addressing resource management together with service network design remains limited. This thesis aims to fill that gap by examining service network design problems that take resource management into account. To do so, the thesis comprises three studies on network design. The first study considers the capacitated multi-commodity fixed cost network design problem with design-balance constraints (DBCMND). The multi-commodity structure with capacitated arcs of the DBCMND, together with its design-balance constraints, makes it appear as a subproblem in many problems related to service network design, hence the interest in studying the DBCMND in the context of this thesis. We propose a new approach to solve this problem, combining tabu search, path relinking, and a procedure that intensifies the search in a particular region of the solution space. First, tabu search identifies good feasible solutions. Path relinking is then used to increase the number of feasible solutions. The solutions found by these two metaheuristics make it possible to identify a subset of arcs that are likely to be open or closed in an optimal solution. The status of these arcs is then fixed according to the value that predominates in the previously found solutions. Finally, we use the power of a mixed-integer programming solver to intensify the search on the problem restricted by the fixed open/closed status of certain arcs. Experiments show that this approach is able to find good solutions to large instances within reasonable time. This research is published in the Journal of Heuristics. The second study introduces resource management into service network design by explicitly taking into account the finite number of vehicles used at each terminal to transport commodities. A solution approach based on slope scaling, column generation and heuristics built on a cycle-based formulation is proposed. Column generation solves a linear relaxation of the network design problem, generating columns that are then used by the slope-scaling procedure. Slope scaling solves a linear approximation of the network design problem, so a heuristic is used to convert the solutions obtained by slope scaling into feasible solutions for the original problem. The algorithm ends with a perturbation procedure that improves the feasible solutions. Experiments show that the proposed algorithm is able to find good solutions to the service network design problem with a fixed number of resources at each terminal. The results of this research will be published in Transportation Science. The third study broadens our treatment of resource management by considering the purchase or leasing of new resources as well as the repositioning of existing resources.
We make the following assumptions: one resource unit is required to operate a service, each resource must return to its home terminal, there is a fixed number of resources at each terminal, and the length of a resource's circuit is limited. We consider the following resource management alternatives: 1) repositioning resources between terminals to account for changes in demand, 2) purchasing and/or leasing new resources and distributing them to different terminals, 3) outsourcing certain services. We present an integrated formulation combining resource management decisions with service network design decisions. We also present a matheuristic solution method combining slope scaling and column generation. We discuss the performance of this solution method and analyze the impact of different resource management decisions in the context of service network design. This study will be presented at the XII International Symposium On Locational Decision, held jointly with the XXI Meeting of the EURO Working Group on Locational Analysis, Naples/Capri (Italy), 2014. In summary, three different studies are considered in this thesis. The first deals with a new solution method for the capacitated multi-commodity fixed cost network design problem with design-balance constraints. For it we propose a matheuristic comprising tabu search, path relinking, and exact optimization. In the second study, we present a new service network design model that takes into account a finite number of resources at each terminal. For it we propose an advanced matheuristic based on the cycle formulation, comprising slope scaling, column generation, heuristics and exact optimization. Finally, we study resource allocation in service network design by introducing formulations that model the repositioning, acquisition and leasing of resources, and the outsourcing of certain services. To this end, a slope-scaling solution framework developed from a cycle-based formulation is proposed; it includes column generation and a heuristic. The methods proposed in these three studies have shown their ability to find good solutions.

Relevance:

30.00%

Publisher:

Abstract:

The proliferation of wireless sensor networks across a large spectrum of applications has been spurred by rapid advances in MEMS (micro-electro-mechanical systems) based sensor technology, coupled with low-power, low-cost digital signal processors and radio frequency circuits. A sensor network is composed of thousands of low-cost, portable devices with large sensing, computing and wireless communication capabilities. This large collection of tiny sensors can form a robust distributed data computing and communication system for automated information gathering and distributed sensing. Its main attractive feature is that such a sensor network can be deployed in remote areas. Since each sensor node is battery powered, all the sensor nodes should collaborate to form a fault-tolerant network so as to provide efficient utilization of precious network resources such as the wireless channel, memory and battery capacity. The most crucial constraint is energy consumption, which has become the prime challenge in the design of long-lived sensor nodes.

Relevance:

30.00%

Publisher:

Abstract:

Augmented Lagrangian methods for large-scale optimization usually require efficient algorithms for minimization with box constraints. On the other hand, active-set box-constraint methods employ unconstrained optimization algorithms for minimization inside the faces of the box. Several approaches may be employed for computing internal search directions in the large-scale case. In this paper a minimal-memory quasi-Newton approach with secant preconditioners is proposed, taking into account the structure of Augmented Lagrangians that come from the popular Powell-Hestenes-Rockafellar scheme. A combined algorithm that uses the quasi-Newton formula or a truncated-Newton procedure, depending on the presence of active constraints in the penalty-Lagrangian function, is also suggested. Numerical experiments using the CUTE collection are presented.
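
For reference, the Powell-Hestenes-Rockafellar (PHR) Augmented Lagrangian mentioned above, for a problem of the form min f(x) subject to h(x) = 0, g(x) ≤ 0 and box constraints l ≤ x ≤ u, has the standard textbook form (a general statement, not quoted from the paper):

\mathcal{L}_{\rho}(x,\lambda,\mu) = f(x) + \frac{\rho}{2}\left( \left\| h(x) + \frac{\lambda}{\rho} \right\|^2 + \left\| \max\!\left(0,\; g(x) + \frac{\mu}{\rho}\right) \right\|^2 \right)

At each outer iteration this function is minimized over the box l ≤ x ≤ u, which is the box-constrained subproblem addressed by the quasi-Newton approach described in the abstract, and the multipliers are then updated as \lambda \leftarrow \lambda + \rho\, h(x) and \mu \leftarrow \max(0, \mu + \rho\, g(x)).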