971 results for hemispheric specialization


Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND AND PURPOSE Acute stroke patients with severely impaired oral intake are at risk of malnutrition and dehydration. Rapid identification of these patients is necessary to establish early enteral tube feeding. We analysed whether specific lesion location predicts early tube dependency and assessed the neural correlates of impaired oral intake after hemispheric ischaemic stroke. METHODS Tube dependency and functional oral intake were evaluated with a standardized comprehensive swallowing assessment within the first 48 h after magnetic resonance imaging-proven first-time acute supratentorial ischaemic stroke. Voxel-based lesion symptom mapping (VLSM) was performed to compare lesion location between tube-dependent patients and patients without tube feeding, and between impaired and unimpaired oral intake. RESULTS Of the 119 included patients, 43 (36%) had impaired oral intake and 12 (10%) were tube dependent. Both tube dependency and impaired oral intake were significantly associated with a higher National Institutes of Health Stroke Scale score and larger infarct volume, and these patients had worse clinical outcomes at discharge. Clinical characteristics did not differ between left and right hemispheric strokes. In the VLSM analysis, mildly impaired oral intake correlated with lesions of the Rolandic operculum, the insular cortex and the superior corona radiata, and to a lesser extent of the putamen, the external capsule and the superior longitudinal fascicle. Tube dependency was significantly associated with involvement of the anterior insular cortex. CONCLUSIONS Mild impairment of oral intake correlates with damage to a widespread operculo-insular swallowing network. However, specific lesions of the anterior insula lead to severe impairment and tube dependency, and clinicians might consider early enteral tube feeding in these patients.

Relevance:

20.00%

Publisher:

Abstract:

An impairment of the spatial deployment of visual attention during the exploration of static (i.e., motionless) stimuli is a common finding after an acute right-hemispheric stroke. However, less is known about how these deficits (a) are modulated by naturalistic motion (i.e., motion without directional, specific spatial features) and (b) evolve in the subacute/chronic post-stroke phase. In the present study, we investigated free visual exploration in three patient groups with subacute/chronic right-hemispheric stroke and in healthy subjects. The first group included patients with left visual neglect and a left visual field defect (VFD), the second patients with a left VFD but no neglect, and the third patients with neither neglect nor VFD. Eye movements were measured in all participants while they freely explored a traffic scene without (static condition) and with (dynamic condition) naturalistic motion, i.e., cars moving in from the right or left. In the static condition, all patient groups showed a deployment of visual exploration (as measured by cumulative fixation duration) similar to that of healthy subjects, suggesting that recovery processes had taken place, with normal spatial allocation of attention. However, the more demanding dynamic condition with moving cars elicited different re-distribution patterns of visual attention, quite similar to those typically observed in acute stroke. Neglect patients with VFD showed a significant decrease of visual exploration in the contralesional space, whereas patients with VFD but no neglect showed a significant increase of visual exploration in the contralesional space. No differences from healthy subjects were found in patients with neither neglect nor VFD. These results suggest that naturalistic motion, without directional, specific spatial features, may critically influence the spatial distribution of visual attention in subacute/chronic stroke patients.

Relevance:

20.00%

Publisher:

Abstract:

When the Shakers established communal farms in the Ohio Valley, they encountered a new agricultural environment substantially different from the familiar soils, climates, and markets of New England and the Hudson Valley. The ways in which their responses to these new conditions differed by region have not been well documented. We examine patterns of specialization among the Shakers using the manuscript schedules of the federal Agricultural Censuses from 1850 through 1880. For each Shaker unit, we also recorded a random sample of five farms in the same township (or all available farms if there were fewer than five). The sample of neighboring farms included 75 in 1850, 70 in the next two census years, and 66 in 1880. A Herfindahl-type index suggested that, although the level of specialization was lower among the Shakers than among their neighbors, trends in specialization by the Shakers and their neighbors were remarkably similar when considered by region. Both Eastern and Western Shakers were more heavily committed to dairy and produce than were their neighbors, while Western Shakers produced more grains than did Eastern Shakers, a pattern mirrored in nearby family farms. Livestock and related production was far more important to the Eastern Shakers than to the Western Shakers, again similar to patterns in the census returns from other farms. We conclude that, despite the obvious differences in scale and organization, Shaker production decisions were based on the same comparative advantages that determined the production decisions of family farms.
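
A Herfindahl-type index is simply the sum of squared output shares. A minimal sketch, assuming hypothetical census-style categories and invented values rather than the paper's data:

```python
def herfindahl(outputs):
    """Herfindahl-type specialization index: sum of squared output shares.

    1.0 = fully specialized in one product; 1/n = evenly spread over n.
    """
    total = sum(outputs.values())
    return sum((v / total) ** 2 for v in outputs.values())

# Hypothetical farm: value of production by category
farm = {"dairy": 500, "grain": 300, "livestock": 150, "produce": 50}
print(round(herfindahl(farm), 3))  # 0.365 -> moderately diversified
```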

Relevance:

20.00%

Publisher:

Abstract:

One of the most abrupt and yet unexplained past rises in atmospheric CO2 (10 p.p.m.v. in two centuries) occurred in quasi-synchrony with the abrupt Northern Hemisphere warming into the Bølling/Allerød, 14,600 years ago. Here we use a U/Th-dated record of atmospheric Δ14C from Tahiti corals to provide independent and precise age control for this CO2 rise. We also use model simulations to show that the release of old (nearly 14C-free) carbon can explain these changes in CO2 and Δ14C. The Δ14C record provides an independent constraint on the amount of carbon released (125 Pg C). We suggest, in line with observations of atmospheric CH4 and terrigenous biomarkers, that thawing permafrost in high northern latitudes could have been the source of the carbon, possibly with a contribution from flooding of the Siberian continental shelf during meltwater pulse 1A. Our findings highlight the potential of the permafrost carbon reservoir to modulate abrupt climate changes via greenhouse-gas feedbacks.
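
As a rough illustration of the reasoning (a toy two-number mass balance, not the authors' model simulations; the atmospheric carbon mass and initial Δ14C below are assumed round numbers, and the ocean/biosphere exchange that buffers the real response is ignored):

```python
# Toy mass balance: adding 14C-free ("dead") carbon dilutes atmospheric 14C.
M_atm = 590.0       # assumed pre-event atmospheric carbon, Pg C
d14c_atm = 0.0      # assumed initial atmospheric Delta14C, permil
M_add = 125.0       # carbon release inferred in the study, Pg C
d14c_add = -1000.0  # 14C-free carbon, by definition of Delta14C

d14c_new = (M_atm * d14c_atm + M_add * d14c_add) / (M_atm + M_add)
print(f"{d14c_new:.0f} permil")  # about -175 permil before re-equilibration
```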

Relevance:

20.00%

Publisher:

Abstract:

Measures have been developed to understand tendencies in the distribution of economic activity. The merit of these measures lies in the convenience of data collection and processing. In this interim report, we investigate the properties of such measures for determining the geographical spread of economic activities, summarize their merits and limitations, and make clear that caution is required in their use. As a first trial in handling areal data, this project focuses on administrative areas, not on point data or input-output data. Firm-level data are not within the scope of this article. The rest of this article is organized as follows. In Section 2, we touch on the limitations and problems associated with the measures and areal data. Specific measures are introduced in Section 3 and applied in Section 4. The conclusion summarizes the findings and discusses future work.
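
As one example of the kind of measure such reports survey, here is a sketch of the Krugman specialization index, a standard measure of how far a region's industrial structure deviates from the national one; whether it appears in the report's Section 3 is an assumption, and the area data are invented:

```python
def krugman_index(region_emp, national_emp):
    """Sum of absolute deviations between a region's industry employment
    shares and the national shares. 0 = identical structure; 2 = disjoint."""
    r_total = sum(region_emp.values())
    n_total = sum(national_emp.values())
    return sum(abs(region_emp.get(s, 0) / r_total - v / n_total)
               for s, v in national_emp.items())

# Hypothetical administrative-area data (employment by industry)
region = {"manufacturing": 40, "services": 50, "agriculture": 10}
nation = {"manufacturing": 25, "services": 65, "agriculture": 10}
print(round(krugman_index(region, nation), 2))  # 0.3
```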

Relevance:

20.00%

Publisher:

Abstract:

Program specialization optimizes programs for known values of the input. It is often the case that the set of possible input values is unknown, or this set is infinite. However, a form of specialization can still be performed in such cases by means of abstract interpretation, specialization then being with respect to abstract values (substitutions) rather than concrete ones. We study the multiple specialization of logic programs based on abstract interpretation. This involves, in principle, and based on information from global analysis, generating several versions of a program predicate for different uses of that predicate, optimizing these versions, and finally producing a new, "multiply specialized" program. While multiple specialization has received theoretical attention, little previous evidence exists on its practicality. In this paper we report on the incorporation of multiple specialization in a parallelizing compiler and quantify its effects. A novel approach to the design and implementation of the specialization system is proposed. The resulting implementation techniques produce specializations identical to those of the best previously proposed techniques but require little or no modification of some existing abstract interpreters. Our results show that, using the proposed techniques, the resulting "abstract multiple specialization" is indeed a relevant technique in practice. In particular, in the parallelizing compiler application, a good number of run-time tests are eliminated and invariants are extracted automatically from loops, generally resulting in lower overheads and, in several cases, in increased speedups.
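
To make the idea concrete outside the paper's logic-programming setting, a minimal Python sketch of multiple specialization: one version of a procedure per abstract call pattern, with the run-time test removed where analysis proved it redundant (the functions and the "proved nonzero" property are illustrative, not from the paper):

```python
# Generic version: must test its argument at run time.
def inverse(x):
    if x == 0:                 # run-time check
        raise ZeroDivisionError("x must be nonzero")
    return 1.0 / x

# Specialized version for call sites where abstract interpretation
# proved x != 0 (e.g., x is a loop counter starting at 1).
def inverse_nonzero(x):
    return 1.0 / x             # check eliminated

# The "multiply specialized" program: each call site is rewired to the
# version matching its abstract call pattern.
def harmonic(n):
    total = 0.0
    for k in range(1, n + 1):  # k >= 1, so k != 0 is provable
        total += inverse_nonzero(k)
    return total

print(harmonic(10))
```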

Relevance:

20.00%

Publisher:

Abstract:

Polyvariant specialization allows generating multiple versions of a procedure, which can then be separately optimized for different uses. Since allowing a high degree of polyvariance often results in more optimized code, polyvariant specializers, such as most partial evaluators, can generate a large number of versions. This can produce unnecessarily large residual programs. Also, large programs can be slower due to cache-miss effects. A possible solution to this problem is to introduce a minimization step which identifies sets of equivalent versions and replaces all occurrences of such versions by a single one. In this work we present a unifying view of the problem of superfluous polyvariance, covering both partial deduction and abstract multiple specialization. As regards partial deduction, we extend existing approaches in several ways. First, previous work has dealt with pure logic programs and a very limited class of builtins. Herein we propose an extension to traditional characteristic trees which can be used in the presence of calls to external predicates. This includes all builtins, libraries, other user modules, etc. Second, we propose the possibility of collapsing versions which are not strictly equivalent. This allows trading time for space and can be useful in the context of embedded and pervasive systems. This is done by residualizing certain computations for external predicates which would otherwise be performed at specialization time. Third, we provide an experimental evaluation of the potential gains achievable using minimization, which leads to interesting conclusions.
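
A minimal sketch of the minimization step, under the simplifying assumption that each version's residual code is a string and equivalence is plain code equality (real systems compare richer structures such as the characteristic trees the abstract mentions):

```python
# Hypothetical setting: after specialization we hold each version's
# residual code. Versions with identical residual code are equivalent
# and can be collapsed into a single one.
versions = {
    "p_1": "return x + 1",     # specialized for integer x
    "p_2": "return x + 1",     # different call pattern, same residual code
    "p_3": "return x + 1.0",   # genuinely different version
}

canonical = {}   # residual code -> representative version name
renaming = {}    # old version name -> surviving version name
for name, code in versions.items():
    renaming[name] = canonical.setdefault(code, name)

print(renaming)  # {'p_1': 'p_1', 'p_2': 'p_1', 'p_3': 'p_3'}
```

Collapsing versions that are only *approximately* equivalent, as the abstract proposes for embedded systems, amounts to loosening the equality test above at the cost of residualizing some computations.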

Relevance:

20.00%

Publisher:

Abstract:

The relationship between abstract interpretation [2] and partial evaluation [5] has received considerable attention and (partial) integrations have been proposed starting from both the partial deduction (see e.g. [6] and its references) and abstract interpretation perspectives. Abstract interpretation-based analyzers (such as the CiaoPP analyzer [9,4]) generally compute a program analysis graph [1] in order to propagate (abstract) call and success information by performing fixpoint computations when needed. On the other hand, partial deduction methods [7] incorporate powerful techniques for on-line specialization including (concrete) call propagation and unfolding.
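
A toy version of the fixpoint propagation such analyzers perform, over a made-up three-predicate call graph and a two-point domain; this is not CiaoPP's actual graph structure:

```python
# Minimal fixpoint over an analysis graph: abstract success values are
# propagated from callees to callers until nothing changes.
ANY, INT = "any", "int"                   # two-point abstract domain
calls = {"main": ["f"], "f": ["g"], "g": []}
base = {"main": INT, "f": INT, "g": ANY}  # local (callee-free) results

def join(a, b):
    return INT if a == b == INT else ANY

success = dict(base)
changed = True
while changed:                            # fixpoint iteration
    changed = False
    for pred, callees in calls.items():
        new = base[pred]
        for c in callees:                 # callers absorb callee info
            new = join(new, success[c])
        if new != success[pred]:
            success[pred] = new
            changed = True

print(success)  # {'main': 'any', 'f': 'any', 'g': 'any'}
```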

Relevance:

20.00%

Publisher:

Abstract:

Program specialization optimizes programs for known values of the input. It is often the case that the set of possible input values is unknown, or this set is infinite. However, a form of specialization can still be performed in such cases by means of abstract interpretation, specialization then being with respect to abstract values (substitutions) rather than concrete ones. This paper reports on the application of abstract multiple specialization to automatic program parallelization in the &-Prolog compiler. Abstract executability, the main concept underlying abstract specialization, is formalized, the design of the specialization system presented, and a non-trivial example of specialization in automatic parallelization is given.
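
Abstract executability means deciding a run-time test at specialization time purely from abstract information. A minimal sketch, with a hypothetical groundness table standing in for the analyzer's output:

```python
# Abstract executability: reduce a test to True/False from abstract
# information, and residualize it only when undecidable.
abstract_info = {"X": "ground", "Y": "free", "Z": "unknown"}

def reduce_test(test, var):
    """Reduce ground(var) using abstract info; None = keep at run time."""
    if test == "ground":
        if abstract_info.get(var) == "ground":
            return True          # provably succeeds: remove the test
        if abstract_info.get(var) == "free":
            return False         # provably fails: prune this branch
    return None                  # residualize the run-time test

for v in ("X", "Y", "Z"):
    print(v, reduce_test("ground", v))  # X True / Y False / Z None
```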

Relevance:

20.00%

Publisher:

Abstract:

We study the multiple specialization of logic programs based on abstract interpretation. This involves, in general, generating several versions of a program predicate for different uses of that predicate, making use of information obtained from global analysis performed by an abstract interpreter, and finally producing a new, "multiply specialized" program. While the topic of multiple specialization of logic programs has received considerable theoretical attention, it has never been actually incorporated in a compiler and its effects quantified. We perform such a study in the context of a parallelizing compiler and show that it is indeed a relevant technique in practice. Also, we propose an implementation technique which has the same power as the strongest of the previously proposed techniques but requires little or no modification of an existing abstract interpreter.
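
One way to picture the implementation idea (an assumption-level sketch, not the actual compiler's data structures): the analysis already records one entry per predicate and abstract call pattern, so versions fall out by numbering those entries and renaming call sites accordingly:

```python
from collections import Counter

# Hypothetical analysis table: one row per (predicate, abstract call pattern).
analysis_table = [
    ("p", "x:int"),   # p called with x known to be an integer
    ("p", "x:any"),   # p called with x unknown
    ("q", "y:int"),
]

counts = Counter()
version_of = {}       # (predicate, pattern) -> specialized version name
for pred, pattern in analysis_table:
    counts[pred] += 1
    version_of[(pred, pattern)] = f"{pred}_{counts[pred]}"

print(version_of)
# {('p', 'x:int'): 'p_1', ('p', 'x:any'): 'p_2', ('q', 'y:int'): 'q_1'}
```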

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a technique for achieving a class of optimizations related to the reduction of checks within cycles. The technique uses both program transformation and abstract interpretation. After a first pass of an abstract interpreter which detects simple invariants, program transformation is used to build a hypothetical situation that simplifies some predicates that would be executed within the cycle. This transformation implements the heuristic hypothesis that once conditional tests hold they may continue doing so recursively. Specialized versions of predicates are generated to detect and exploit those cases in which the invariance may hold. Abstract interpretation is then used again to verify the truth of such hypotheses and confirm the proposed simplification. This allows optimizations that go beyond those possible with only one pass of the abstract interpreter over the original program, as is normally the case. It also allows selective program specialization using a standard abstract interpreter not specifically designed for this purpose, thus simplifying the design of this already complex module of the compiler. In the paper, a class of programs amenable to such optimization is presented, along with some examples and an evaluation of the proposed techniques in some application areas such as floundering detection and reducing run-time tests in automatic logic program parallelization. The analysis of the examples presented has been performed automatically by an implementation of the technique using existing abstract interpretation and program transformation tools.
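
In imperative terms the transformation looks roughly like this (a Python sketch; the paper operates on recursive logic programs, not loops): the invariant test is executed once on entry, and each specialized cycle version runs check-free.

```python
# Before: an invariant test is re-executed on every iteration.
def scale_naive(xs, factor):
    out = []
    for x in xs:
        if isinstance(factor, int):   # same answer on every iteration
            out.append(x * factor)
        else:
            out.append(x * float(factor))
    return out

# After: the hypothesis "the test keeps holding" is verified once on
# entry; each specialized version of the cycle contains no check.
def scale_specialized(xs, factor):
    if isinstance(factor, int):
        return [x * factor for x in xs]
    f = float(factor)
    return [x * f for x in xs]

print(scale_specialized([1, 2, 3], 2))    # [2, 4, 6]
print(scale_specialized([1, 2, 3], 2.5))  # [2.5, 5.0, 7.5]
```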

Relevance:

20.00%

Publisher:

Abstract:

The aim of program specialization is to optimize programs by exploiting certain knowledge about the context in which the program will execute. There exist many program manipulation techniques which allow specializing the program in different ways. Among them, one of the best known techniques is partial evaluation, often referred to simply as program specialization, which optimizes programs by specializing them for (partially) known input data. In this work we describe abstract specialization, a technique whose main features are: (1) specialization is performed with respect to "abstract" values rather than "concrete" ones, and (2) abstract interpretation rather than standard interpretation of the program is used in order to propagate information about execution states. The concept of abstract specialization is at the heart of the specialization system in CiaoPP, the Ciao system preprocessor. In this paper we present a unifying view of the different specialization techniques used in CiaoPP and discuss their potential applications by means of examples. The applications discussed include program parallelization, optimization of dynamic scheduling (concurrency), and integration of partial evaluation techniques.
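
The contrast between the two kinds of specialization can be shown side by side (a Python sketch, not the Prolog that CiaoPP processes; the power example is illustrative): partial evaluation fixes a concrete input value, while abstract specialization assumes only a property of it.

```python
def power(x, n):
    # General version: must validate n at run time.
    if n < 0:
        raise ValueError("n must be nonnegative")
    r = 1
    for _ in range(n):
        r *= x
    return r

def power_3(x):
    # Partial evaluation w.r.t. the concrete value n = 3:
    # loop unrolled, check gone because 3 >= 0 is known.
    return x * x * x

def power_nonneg(x, n):
    # Abstract specialization w.r.t. the abstract value "n >= 0":
    # the loop remains, but the validity check is removed.
    r = 1
    for _ in range(n):
        r *= x
    return r

print(power(2, 3), power_3(2), power_nonneg(2, 3))  # 8 8 8
```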

Relevance:

20.00%

Publisher:

Abstract:

The relationship between abstract interpretation and partial deduction has received considerable attention, and (partial) integrations have been proposed starting from both the partial deduction and abstract interpretation perspectives. In this work we present what we argue is the first fully described generic algorithm for efficient and precise integration of abstract interpretation and partial deduction. Taking as a starting point state-of-the-art algorithms for context-sensitive, polyvariant abstract interpretation and (abstract) partial deduction, we present an algorithm which combines the best of both worlds. Key ingredients include the accurate success propagation inherent to abstract interpretation and the powerful program transformations achievable by partial deduction. In our algorithm, the calls which appear in the analysis graph are not analyzed w.r.t. the original definition of the procedure but w.r.t. specialized definitions of these procedures. Such specialized definitions are obtained by applying both unfolding and abstract executability. Our framework is parametric w.r.t. different control strategies and abstract domains. Different combinations of such parameters correspond to existing algorithms for program analysis and specialization. Simultaneously, our approach opens the door to the efficient computation of strictly more precise results than those achievable by each of the individual techniques. The algorithm is now one of the key components of the CiaoPP analysis and specialization system.
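
The central move, analyzing calls against specialized definitions produced by unfolding plus abstract executability, can be miniaturized as follows (hypothetical clause encoding and rewrite rules, not the actual generic algorithm):

```python
# Sketch: specialize a definition by one unfolding step plus abstract
# executability, then analyze the specialized definition instead of
# the original. Hypothetical clause representation: head -> body goals.
program = {
    "p(X)": ["test_int(X)", "q(X)"],   # p runs a test, then calls q
    "q(X)": ["emit(X)"],
}
abstract_call = {"X": "int"}           # analysis info at the call to p

def specialize(head):
    body = []
    for goal in program[head]:
        if goal == "test_int(X)" and abstract_call["X"] == "int":
            continue                    # abstractly executable: removed
        if goal in program:             # unfold internal calls one step
            body.extend(program[goal])
        else:
            body.append(goal)           # external/builtin: residualize
    return (head, body)

print(specialize("p(X)"))  # ('p(X)', ['emit(X)'])
```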

Relevance:

20.00%

Publisher:

Abstract:

Separating programs into modules is a well-known technique which has proven very useful in program development and maintenance. Starting by introducing a number of possible scenarios, in this paper we study different issues which appear when developing analysis and specialization techniques for modular logic programming. We discuss a number of design alternatives and their consequences for the different scenarios considered, and describe, where applicable, the decisions made in the Ciao system analyzer and specializer. In our discussion we use the module system of Ciao Prolog, both for concreteness and because Ciao Prolog is a second-generation Prolog system which has been designed with global analysis and specialization in mind and which has a strict module system. The aim of this work is not to provide a theoretical basis for modular analysis and specialization, but rather to discuss some interesting practical issues.
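
One of the design alternatives in this space, analyzing a module against summaries of its imports rather than their code, can be sketched as follows (a hypothetical two-module program and domain; not necessarily the decision taken in Ciao):

```python
# Summary-based modular analysis: each module exports an abstract
# success pattern for its public predicates; importing modules read
# the summary instead of re-analyzing the exporter's code.
summaries = {}                    # module -> {export: abstract value}

def analyze_lists_module():
    # Hypothetical module 'lists': its length/2 always yields an integer.
    summaries["lists"] = {"length": "int"}

def analyze_main_module():
    # 'main' imports length from 'lists' and trusts its summary.
    imported = summaries["lists"]["length"]
    result = "int" if imported == "int" else "any"
    summaries["main"] = {"run": result}

analyze_lists_module()
analyze_main_module()
print(summaries)  # {'lists': {'length': 'int'}, 'main': {'run': 'int'}}
```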