998 results for Memory overhead
Abstract:
Pesticide exposure during brain development could represent an important risk factor for the onset of neurodegenerative diseases. Previous studies investigated the effect of permethrin (PERM) administered to rats at 34 mg/kg, a dose close to the no-observable-adverse-effect level (NOAEL), from postnatal day (PND) 6 to PND 21. Although this PERM dose did not elicit overt signs of toxicity (i.e., a normal body-weight gain curve), it induced striatal neurodegeneration (dopamine and Nurr1 reduction, and increased lipid peroxidation). The present study was designed to characterize the cognitive deficits in this animal model. When PERM-treated rats were tested during late adulthood for spatial working memory in a T-maze rewarded-alternation task, they took longer to choose the correct arm than age-matched controls. No differences between groups were found in anxiety-like state, locomotor activity, feeding behavior, or a spatial orientation task. Our findings, showing a selective effect of PERM treatment on the T-maze task, point to an involvement of frontal cortico-striatal circuitry rather than a role for the hippocampus. The predominant disturbances are dopamine (DA) depletion in the striatum and a serotonin (5-HT) and noradrenaline (NE) imbalance, together with a hypometabolic state, in the medial prefrontal cortex. In the hippocampus, an increase in NE and a decrease in DA were observed in PERM-treated rats compared to controls. The concentration of the most representative marker of pyrethroid exposure (3-phenoxybenzoic acid), measured in the urine of the rodents 12 h after the last treatment, was 41.50 µg/L, and it was completely eliminated after 96 h.
Abstract:
Recent trends in chip architecture, with higher numbers of heterogeneous cores and non-uniform memory/non-coherent caches, bring renewed attention to the use of Software Transactional Memory (STM) as a fundamental building block for developing parallel applications. However, although STM promises to ease concurrent and parallel software development, it relies on the ability to abort conflicting transactions to maintain data consistency, which impacts the responsiveness and timing guarantees required by embedded real-time systems. In these systems, contention delays must be (efficiently) limited so that the response times of tasks executing transactions are upper-bounded and task sets can be feasibly scheduled. In this paper we assess the use of STM in the development of embedded real-time software, arguing that the amount of contention can be reduced if read-only transactions access recent consistent data snapshots, progressing in a wait-free manner. We show how the required number of versions of a shared object can be calculated for a set of tasks. We also outline an algorithm to manage conflicts between update transactions that prevents starvation.
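As a rough sketch of the multi-version mechanism this abstract describes (illustrative only, not the paper's actual algorithm or API), the following C11 fragment keeps a fixed ring of versions of a shared object: an update transaction publishes a new version with a single atomic store, and a read-only transaction grabs the newest committed snapshot without blocking or aborting. The ring size NVERSIONS stands in for the number of versions the paper shows how to calculate.

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define NVERSIONS 4   /* must be derived from the task set (see abstract) */

/* One shared object kept as a ring of versions; a single updater
 * publishes new slots while readers snapshot the latest one wait-free. */
typedef struct {
    int payload;
    uint64_t timestamp;            /* commit time of this version */
} version_t;

static version_t ring[NVERSIONS];
static _Atomic unsigned latest = 0; /* index of the newest committed version */

/* Update transaction: write the next slot, then publish it atomically. */
void publish(int value, uint64_t now) {
    unsigned cur = atomic_load(&latest);
    unsigned nxt = (cur + 1) % NVERSIONS;
    ring[nxt].payload = value;
    ring[nxt].timestamp = now;
    atomic_store(&latest, nxt);    /* single atomic publish step */
}

/* Read-only transaction: never blocks and never aborts; it simply
 * reads the most recent consistent snapshot. */
int read_snapshot(void) {
    unsigned cur = atomic_load(&latest);
    return ring[cur].payload;
}

int main(void) {
    publish(42, 1);
    printf("snapshot = %d\n", read_snapshot());
    return 0;
}

The sketch assumes a single updater and a payload readable in one step; with too few versions a slow reader could be overtaken by the writer wrapping around the ring, which is precisely why the required number of versions must be computed from the task set.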
Abstract:
The usage of COTS-based multicores is becoming widespread in the field of embedded systems. Providing real-time guarantees at design time is a prerequisite for deploying real-time systems on these multicores. This necessitates considering the impact of contention for shared low-level hardware resources on the Worst-Case Execution Time (WCET) of the tasks. As a step towards this aim, this paper first identifies the different factors that make WCET analysis a challenging problem in a typical COTS-based multicore system. Then, we propose, and prove correct, a method to determine tight upper bounds on the WCET of the tasks when they are co-scheduled on different cores.
Abstract:
The current industry trend is towards using Commercial Off-The-Shelf (COTS) multicores for developing real-time embedded systems, as opposed to custom-made hardware. In typical implementations of such COTS-based multicores, multiple cores access the main memory via a shared bus. This often leads to contention on this shared channel, which increases the response times of the tasks. Analyzing this increased response time, considering the contention on the shared bus, is challenging on COTS-based systems mainly because bus arbitration protocols are often undocumented and the exact instants at which the shared bus is accessed by tasks are not explicitly controlled by the operating system scheduler; they are instead a result of cache misses. This paper makes three contributions towards analyzing tasks scheduled on COTS-based multicores. First, we describe a method to model the memory access patterns of a task. Second, we apply this model to analyze the worst-case response time for a set of tasks. Although the parameters required to obtain the request profile can be derived by static analysis, we provide an alternative method to obtain them experimentally using performance monitoring counters (PMCs). We also compare our work against an existing approach and show that our approach outperforms it by providing a tighter upper bound on the number of bus requests generated by a task.
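To make the PMC step concrete: on Linux, per-task hardware counters can be read via perf_event_open(2). The C sketch below counts a task's hardware cache misses, which can serve as a proxy for the bus requests it generates; the choice of Linux and of the cache-miss event is an assumption for illustration, since the abstract names neither a platform nor a specific counter.

/* Count a task's cache misses with Linux performance monitoring
 * counters via perf_event_open(2). LLC misses as a proxy for bus
 * requests is an assumption for illustration. */
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <string.h>
#include <stdint.h>
#include <stdio.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags) {
    return syscall(SYS_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void) {
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_HARDWARE;
    attr.size = sizeof(attr);
    attr.config = PERF_COUNT_HW_CACHE_MISSES; /* proxy for bus requests */
    attr.disabled = 1;
    attr.exclude_kernel = 1;

    int fd = perf_event_open(&attr, 0 /* this task */, -1, -1, 0);
    if (fd < 0) { perror("perf_event_open"); return 1; }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

    /* ... run the code section whose request profile is measured ... */

    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
    uint64_t misses = 0;
    read(fd, &misses, sizeof(misses));
    printf("cache misses: %llu\n", (unsigned long long)misses);
    close(fd);
    return 0;
}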
Abstract:
Contention on the memory bus in COTS-based multicore systems is becoming a major determining factor of the execution time of a task. Analyzing this extra execution time is non-trivial because (i) bus arbitration protocols in such systems are often undocumented and (ii) the instants at which the memory bus is requested are not explicitly controlled by the operating system scheduler; they are instead a result of cache misses. We present a method for finding an upper bound on the extra execution time of a task due to contention on the memory bus in COTS-based multicore systems. This method makes no assumptions about the bus arbitration protocol (other than that it is work-conserving).
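As a back-of-the-envelope illustration of this kind of bound (a simplified model, not the paper's actual method), the C fragment below computes a fixed point: the task's execution window grows by the time a work-conserving bus spends serving other cores' requests, whose number is bounded here by a hypothetical linear per-core request envelope. All parameters are invented.

#include <stdio.h>

/* Upper bound on requests each other core can issue in a window of
 * length w: a simple linear envelope with rate r and burst b
 * (hypothetical inputs, standing in for measured request profiles). */
static double interfering_requests(double w, double r, double b) {
    return b + r * w;
}

int main(void) {
    const double C = 1000.0;        /* WCET in isolation */
    const double L = 0.1;           /* worst-case bus service time per request */
    const int m = 4;                /* cores */
    const double r = 0.5, b = 10.0; /* per-core request envelope */

    /* Fixed point: window = C + delay, delay = interfering bus work. */
    double w = C;
    for (;;) {
        double delay = 0.0;
        for (int k = 0; k < m - 1; k++)        /* the other cores */
            delay += interfering_requests(w, r, b) * L;
        double next = C + delay;
        if (next <= w + 1e-9) break;           /* converged */
        w = next;
    }
    printf("bounded execution time with contention: %.2f\n", w);
    return 0;
}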
Abstract:
Fault injection is frequently used for the verification and validation of dependable systems. When targeting real-time microprocessor-based systems, the process becomes significantly more complex. This paper proposes two complementary solutions to improve the execution of real-time fault injection campaigns, both in terms of performance and capabilities. The methodology is based on the use of the on-chip debug mechanisms present in modern electronic devices. The main objective is the injection of faults into microprocessor memory elements with minimum delay and intrusiveness. Different configurations were implemented and compared in terms of performance gain and logic overhead.
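For a concrete picture of the fault model involved, the C sketch below applies a single bit-flip to a memory word, the software analogue of a fault injected into a memory element. The paper's actual mechanism uses on-chip debug hardware to do this with minimal delay and intrusiveness, which a portable sketch cannot reproduce.

#include <stdint.h>
#include <stdio.h>

/* Flip bit `bit` of the word at `target`, emulating a transient
 * single-event upset in a memory element. */
static void inject_bitflip(uint32_t *target, unsigned bit) {
    *target ^= (uint32_t)1u << (bit & 31);
}

int main(void) {
    uint32_t reg = 0x0000FF00u;    /* stand-in for a memory element */
    printf("before: 0x%08X\n", reg);
    inject_bitflip(&reg, 3);
    printf("after : 0x%08X\n", reg);
    return 0;
}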
Abstract:
The rapid increase in the use of microprocessor-based systems in critical areas, where failures imply risks to human lives, to the environment, or to expensive equipment, has significantly increased the need for dependable systems, able to detect, tolerate and eventually correct faults. The verification and validation of such systems is frequently performed via fault injection, using various forms and techniques. However, as electronic devices get smaller and more complex, controllability and observability issues, and sometimes real-time constraints, make it harder to apply most conventional fault injection techniques. This paper proposes a fault injection environment and a scalable methodology to assist the execution of real-time fault injection campaigns, providing enhanced performance and capabilities. Our proposed solutions are based on the use of common and customized on-chip debug (OCD) mechanisms, present in many modern electronic devices, with the main objective of enabling the insertion of faults into microprocessor memory elements with minimum delay and intrusiveness. Different configurations were implemented, starting from basic Commercial Off-The-Shelf (COTS) microprocessors equipped with real-time OCD infrastructures, to improved solutions based on modified interfaces and dedicated OCD circuitry that enhance fault injection capabilities and performance. All methodologies and configurations were evaluated and compared in terms of performance gain and silicon overhead.
Abstract:
Hard real-time multiprocessor scheduling has seen, in recent years, the flourishing of semi-partitioned scheduling algorithms. This category of scheduling schemes combines elements of partitioned and global scheduling in order to achieve efficient utilization of the system’s processing resources with strong schedulability guarantees and low dispatching overheads. The sub-class of slot-based “task-splitting” scheduling algorithms, in particular, offers very good trade-offs between schedulability guarantees (in the form of high utilization bounds) and the number of preemptions/migrations involved. However, until now no unified schedulability theory existed for such algorithms; each was formulated with its own accompanying analysis. This article changes this fragmented landscape by formulating a more unified schedulability theory covering the two state-of-the-art slot-based semi-partitioned algorithms, S-EKG and NPS-F (both fixed job-priority based). This new theory is based on exact schedulability tests, thus also overcoming many sources of pessimism in existing analyses. In turn, since schedulability testing guides task assignment under the schemes in consideration, we also formulate an improved task assignment procedure. As the other main contribution of this article, and in response to the fact that many unrealistic assumptions present in the original theory tend to undermine the theoretical potential of such scheduling schemes, we identified and modelled into the new analysis all overheads incurred by the algorithms under consideration. The outcome is a new overhead-aware schedulability analysis that permits increased efficiency and reliability. The merits of this new theory are evaluated by an extensive set of experiments.
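To illustrate the task-splitting idea in the plainest possible terms (a toy utilization-based packing; the article's actual assignment is driven by exact schedulability tests), the C fragment below packs tasks onto processors and splits a task that does not fit across two processors, where its two parts would then execute in dedicated time slots.

#include <stdio.h>

#define NTASKS 5
#define NPROCS 3

int main(void) {
    /* Hypothetical task utilizations; they total 3.0 on 3 processors. */
    double u[NTASKS] = {0.6, 0.7, 0.5, 0.8, 0.4};
    double cap = 1.0;   /* remaining capacity of the current processor */
    int p = 0;

    for (int i = 0; i < NTASKS; i++) {
        if (u[i] <= cap) {
            printf("task %d -> processor %d\n", i, p);
            cap -= u[i];
        } else {
            /* Task-splitting: the part that fits stays here; the
             * remainder migrates to the next processor, where it is
             * served in a reserved time slot. */
            double rest = u[i] - cap;
            printf("task %d split: %.2f on processor %d, %.2f on processor %d\n",
                   i, cap, p, rest, p + 1);
            p++;
            cap = 1.0 - rest;
        }
    }
    return 0;
}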
Abstract:
The last decade has witnessed a major shift towards the deployment of embedded applications on multi-core platforms. However, real-time applications have not been able to fully benefit from this transition, as the computational gains offered by multi-cores are often offset by performance degradation due to shared resources, such as main memory. To use multi-core platforms efficiently for real-time systems, it is hence essential to tightly bound the interference when accessing shared resources. Although there has been much recent work in this area, a remaining key problem is to address the diversity of memory arbiters in the analysis to make it applicable to a wide range of systems. This work handles diverse arbiters by proposing a general framework to compute the maximum interference caused by the shared memory bus and its impact on the execution time of the tasks running on the cores, considering different bus arbiters. Our novel approach clearly demarcates the arbiter-dependent and arbiter-independent stages in the analysis of these upper bounds. The arbiter-dependent phase takes the arbiter and the task memory-traffic pattern as inputs and produces a model of the availability of the bus to a given task. Then, based on the availability of the bus, the arbiter-independent phase determines the worst-case request-release scenario that maximizes the interference experienced by the tasks due to contention for the bus. We show that the framework addresses the diversity problem by applying it to a memory bus shared by a fixed-priority arbiter, a time-division multiplexing (TDM) arbiter, and an unspecified work-conserving arbiter, using applications from the MediaBench test suite. We also experimentally evaluate the quality of the analysis by comparing it with a state-of-the-art TDM analysis approach, consistently showing a considerable reduction in maximum interference.
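A minimal sketch of the arbiter-dependent step for a TDM bus, under a deliberately coarse model that is much more pessimistic than the paper's availability analysis: with m same-size slots of length S per TDM frame (one per core), a request that just missed its core's slot waits at most one full frame before being served, so n requests lose at most n * m * S. The parameters below are hypothetical.

#include <stdio.h>

/* Coarse, safe bound on the bus interference a task suffers under a
 * TDM arbiter: every request may wait up to one full TDM round. */
static double tdm_interference(unsigned n_requests, unsigned m_cores,
                               double slot_len) {
    double frame = (double)m_cores * slot_len; /* one full TDM round */
    return (double)n_requests * frame;         /* worst case: miss every slot */
}

int main(void) {
    unsigned n = 5000;   /* memory requests of the task under analysis */
    unsigned m = 4;      /* cores sharing the bus */
    double S = 40e-9;    /* slot length: 40 ns (hypothetical) */
    printf("TDM interference bound: %.6f s\n", tdm_interference(n, m, S));
    return 0;
}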
Abstract:
This paper analyzes several natural and man-made complex phenomena from the perspective of dynamical systems. Such phenomena are often characterized by the absence of a characteristic length scale, long-range correlations, and persistent memory, features also associated with fractional-order systems. For each system, the output, interpreted as a manifestation of the system dynamics, is analyzed by means of the Fourier transform. The amplitude spectrum is approximated by a power-law function whose parameters are interpreted as an underlying signature of the system dynamics. The complex systems under analysis are then compared from a global perspective in order to unveil and visualize hidden relationships among them.
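The power-law approximation step can be made concrete with an ordinary least-squares fit in log-log coordinates: if |F(w)| ~ C * w^(-alpha), then log|F(w)| is linear in log w. The C sketch below recovers C and alpha from a handful of synthetic sample points (invented for illustration; the paper fits real-world spectra).

#include <math.h>
#include <stdio.h>

/* Fit log|F| = log C - alpha * log w by ordinary least squares. */
static void powerlaw_fit(const double *w, const double *amp, int n,
                         double *C, double *alpha) {
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int i = 0; i < n; i++) {
        double x = log(w[i]), y = log(amp[i]);
        sx += x; sy += y; sxx += x * x; sxy += x * y;
    }
    double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    double intercept = (sy - slope * sx) / n;
    *alpha = -slope;        /* power-law exponent */
    *C = exp(intercept);    /* amplitude at w = 1 */
}

int main(void) {
    /* Synthetic spectrum ~ 2 * w^(-0.8) with small deviations. */
    double w[]   = {1.0, 2.0, 4.0, 8.0, 16.0};
    double amp[] = {2.05, 1.12, 0.66, 0.37, 0.21};
    double C, alpha;
    powerlaw_fit(w, amp, 5, &C, &alpha);
    printf("|F(w)| ~= %.2f * w^(-%.2f)\n", C, alpha);
    return 0;
}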
Abstract:
The aim of this paper is to address some theoretical issues concerning narrative practice in cyberspace. From a narratological perspective, it intends to clarify the functioning of time and space in storytelling. For that purpose, it traces the concept(s) of memory inherited from rhetoric; the use of memory as a narrative device in traditional accounts; and the adaptations imposed by hyperfiction. Using practical examples (including two Portuguese case studies, InStory 2006 and Noon 2007), it shows how narrative memory strategies can be helpful in game literacy. The main purpose is to contribute to serious game research and (trans)literary studies.
Abstract:
This paper studies the drivers of heuristic application in different decision types. The study compares the frequencies with which heuristic classes such as recognition, one-reason choice, and trade-off are applied in, respectively, memory-based and stimulus-based choices, as well as in high- and low-involvement decisions. The study was conducted online among 205 participants from 28 countries.
Abstract:
Background. In India, the prevalence rates of dementia and prodromal amnestic Mild Cognitive Impairment (MCI) are 3.1% and 4.3%, respectively. Most Indians refer to the full spectrum of cognitive disorders simply as ‘memory loss.’ Barring prevention or cure, these conditions will rise rapidly with population aging. Evidence-based policies and practices can improve the lives of affected individuals and their caregivers, but will require timely and sustained uptake. Objectives. Framed by social cognitive theories of health behavior, this study explores the knowledge, attitudes and practices concerning cognitive impairment, and related service use, among older adults who screen positive for MCI, their primary caregivers, and health providers. Methods. I used the Montreal Cognitive Assessment to screen for cognitive impairment in memory camps in Mumbai. To achieve sampling diversity, I used maximum variation sampling. Ten adults aged 60+ who had no significant functional impairment but screened positive for MCI, and their caregivers, participated in separate focus groups. Four other such dyads and six doctors/traditional healers completed in-depth interviews. Data were translated from Hindi or Marathi into English and analyzed in Atlas.ti using Framework Analysis. Findings. Knowledge and awareness of cognitive impairment and available resources were very low. Physicians attributed the condition to disease-induced pathology, while lay persons blamed brain malfunction due to normal aging. The main attitudes were that this condition is not a disease, is not serious and/or is not treatable, and that it evokes stigma toward and among impaired persons, their families and providers. Low knowledge and poor attitudes impeded help-seeking. Conclusions. Cognitive disorders of aging will take a heavy toll on private lives and public resources in developing countries. Early detection, accurate diagnosis, systematic monitoring and quality care are needed to compress the period of morbidity and promote quality of life. Key stakeholders provide essential insights into how scientific and indigenous knowledge and sociocultural attitudes affect the use and provision of resources.
Abstract:
In this study in the field of Consumer Behavior, consumers' brand name memory was tested with regard to congruent and incongruent verbal and visual information relative to the memory structure of brands. Four experimental groups with different constellations of verbal and visual congruity and incongruity were created to compare their brand name memory performance. The experiment was conducted in several classes with 128 students, 32 participants per group. It was found that brands presented in a congruent or moderately incongruent relation to their brand schema result in better brand recall than their incongruent counterparts. A difference between visual congruity and moderate incongruity could not be confirmed. In contrast to visually incongruent information, verbally incongruent information does not result in worse brand recall performance.
Abstract:
T cell factor-1 (TCF-1) and lymphoid enhancer-binding factor 1 (LEF-1), the effector transcription factors of the canonical Wnt pathway, are known to be critical for normal thymocyte development. However, it is largely unknown whether they have a role in regulating mature T cell activation and T cell-mediated immune responses. In this study, we demonstrate that, like IL-7Ralpha and CD62L, TCF-1 and LEF-1 exhibit dynamic expression changes during T cell responses, being highly expressed in naive T cells, downregulated in effector T cells, and upregulated again in memory T cells. Enforced expression of a p45 TCF-1 isoform limited the expansion of Ag-specific CD8 T cells in response to Listeria monocytogenes infection. However, when the p45 transgene was coupled with ectopic expression of stabilized beta-catenin, more Ag-specific memory CD8 T cells were generated, with enhanced ability to produce IL-2. Moreover, these memory CD8 T cells expanded to a larger number of secondary effectors and cleared bacteria faster when the immunized mice were rechallenged with virulent L. monocytogenes. Furthermore, in response to vaccinia virus or lymphocytic choriomeningitis virus infection, more Ag-specific memory CD8 T cells were generated in the presence of the p45 and stabilized beta-catenin transgenes. Although activated Wnt signaling also resulted in larger numbers of Ag-specific memory CD4 T cells, their functional attributes and expansion after secondary infection were not improved. Thus, constitutive activation of the canonical Wnt pathway favors memory CD8 T cell formation during initial immunization, resulting in enhanced immunity upon a second encounter with the same pathogen.