908 results for Task-to-core mapping


Relevance: 100.00%

Abstract:

In this work, novel imaging designs with a single optical surface (either refractive or reflective) are presented. In some of these designs, both the object and image shapes are given, and the mapping from object to image is obtained as a result of the design. In other designs, not only the mapping but also the shape of the object is found in the design process. In the examples considered, the image is virtual and located at infinity, and is seen from a known pupil, which can emulate a human eye. In the first, introductory part, 2D designs are carried out using three different design methods: an SMS design, a compound Cartesian oval surface, and a differential equation method for the limit case of a small pupil. At the point-size pupil limit, it is proven that these three methods coincide. In the second part, the previous 2D designs are extended to 3D by rotation, and the astigmatism of the image is studied. As an advanced variation, the differential equation method is used to provide the freedom to control the tangential and sagittal rays simultaneously. As a result, designs without astigmatism (at the small-pupil limit) on a curved object surface have been obtained. Finally, this anastigmatic differential equation method has been extended to 3D for the general case, in which freeform surfaces are designed.

Relevance: 100.00%

Abstract:

Anastigmatic imaging of an object surface to an image surface, without a prescribed point-to-point mapping and using a single optical surface, is analyzed in 2D and 3D geometries (freeform and rotationally symmetric). Several design techniques are shown.

Relevance: 100.00%

Abstract:

The current study tested two competing models of Attention-Deficit/Hyperactivity Disorder (AD/HD), the inhibition and state regulation theories, by conducting fine-grained analyses of the Stop-Signal Task and another putative measure of behavioral inhibition, the Gordon Continuous Performance Test (G-CPT), in a large sample of children and adolescents. The inhibition theory posits that performance on these tasks reflects increased difficulties for AD/HD participants in inhibiting prepotent responses. The model predicts that putative stop-signal reaction time (SSRT) group differences on the Stop-Signal Task will be primarily related to AD/HD participants requiring more warning than control participants to inhibit responses to the stop-signal, and it emphasizes the relative importance of commission errors, particularly "impulsive" type commissions, over other error types on the G-CPT. The state regulation theory, on the other hand, proposes response variability due to difficulties maintaining an optimal state of arousal as the primary deficit in AD/HD. This model predicts that SSRT differences will be more attributable to slower and/or more variable reaction time (RT) in the AD/HD group, as opposed to reflecting inhibitory deficits. State regulation assumptions also emphasize the relative importance of omission errors and "slow processing" type commissions over other error types on the G-CPT. Overall, results of Stop-Signal Task analyses were more supportive of state regulation predictions and showed that greater response variability (i.e., SDRT) in the AD/HD group was not reducible to slow mean reaction time (MRT), and that response variability made a larger contribution to increased SSRT in the AD/HD group than inhibitory processes.
Ex-Gaussian analyses of Stop-Signal Task go-trial RT distributions further revealed that increased variability in the AD/HD group was not due solely to a few excessively long RTs in the tail of the AD/HD distribution (i.e., tau), but rather indicated the importance of response variability throughout AD/HD group performance on the Stop-Signal Task, as well as the notable sensitivity of ex-Gaussian analyses to variability in data screening procedures. Results of G-CPT analyses indicated some support for the inhibition model, although error type analyses failed to further differentiate the theories. Finally, inclusion of the primary variables of interest in an exploratory factor analysis with other neurocognitive predictors of AD/HD indicated response variability as a separable construct and further supported its role in Stop-Signal Task performance. Response variability did not, however, make a unique contribution to the prediction of AD/HD symptoms beyond measures of motor processing speed in multiple-deficit regression analyses. Results have implications for the interpretation of the processes reflected in widely used variables in the AD/HD literature, as well as for the theoretical understanding of AD/HD.
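The SSRT variable central to this abstract is not observed directly. A common way to estimate it (a sketch of the standard "integration" method; the study's exact procedure and these function names are assumptions, not taken from the abstract) is to read the go-RT distribution at the percentile given by the probability of responding on stop trials and subtract the mean stop-signal delay:

```python
import numpy as np

def ssrt_integration(go_rts, mean_ssd, p_respond):
    """Estimate stop-signal reaction time (SSRT) via the integration method:
    SSRT = (go-RT at the percentile equal to the probability of responding
    on stop trials) - (mean stop-signal delay)."""
    go_rts = np.sort(np.asarray(go_rts, dtype=float))
    # Index of the RT at the p_respond percentile of the go-RT distribution
    idx = int(np.ceil(p_respond * len(go_rts))) - 1
    nth_rt = go_rts[max(idx, 0)]
    return nth_rt - mean_ssd
```

Note how greater go-RT variability fattens the upper part of the distribution and inflates the percentile RT, which is exactly why the state regulation account can produce longer SSRTs without any inhibitory deficit.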

Relevance: 100.00%

Abstract:

This paper addresses the problem of novelty detection in the case that the observed data are a mixture of a known 'background' process contaminated with an unknown other process, which generates the outliers, or novel observations. The framework described here is quite general, employing univariate classification with incomplete information, based on knowledge of the distribution (the probability density function, 'pdf') of the data generated by the 'background' process. The relative proportion of this 'background' component (the 'prior' 'background' probability), as well as the 'pdf's and 'prior' probabilities of all other components, are assumed unknown. The main contribution is a new classification scheme that identifies the maximum proportion of observed data following the known 'background' distribution. The method exploits the Kolmogorov-Smirnov test to estimate the proportions, after which the data are Bayes-optimally separated. Results, demonstrated with synthetic data, show that this approach can produce more reliable results than a standard novelty detection scheme. The classification algorithm is then applied to the problem of identifying outliers in the SIC2004 data set, in order to detect the radioactive release simulated in the 'joker' data set. We propose this method as a reliable means of novelty detection in an emergency situation, which can also be used to identify outliers prior to the application of a more general automatic mapping algorithm. © Springer-Verlag 2007.
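A rough illustration of the core idea, not the authors' exact estimator: the function name and the Kolmogorov-Smirnov band construction below are assumptions. A mixture CDF satisfies F = lam * F_bg + (1 - lam) * F_other with F_other in [0, 1], so lam is bounded pointwise by how far the scaled background CDF can be pushed before it leaves a KS band around the empirical CDF:

```python
import numpy as np
from scipy import stats

def max_background_fraction(data, bg_cdf, alpha=0.05):
    """Estimate the largest mixing proportion lam such that lam * F_bg stays
    consistent with the empirical CDF within a one-sided KS band.
    Illustrative sketch; the paper's estimator may differ in detail."""
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    emp = np.arange(1, n + 1) / n            # empirical CDF at the sample points
    band = stats.ksone.ppf(1 - alpha, n)     # one-sided KS critical distance
    f0 = bg_cdf(x)
    # From F = lam*F_bg + (1-lam)*F_other with 0 <= F_other <= 1:
    #   lam * F_bg(x) <= F_emp(x) + band
    #   lam * (1 - F_bg(x)) <= 1 - F_emp(x) + band
    with np.errstate(divide="ignore", invalid="ignore"):
        upper = np.minimum((emp + band) / f0, (1 - emp + band) / (1 - f0))
    return float(np.clip(np.nanmin(upper), 0.0, 1.0))
```

With the proportion in hand, the remaining mass above the scaled background CDF can then be attributed to the outlier-generating component for the Bayes-optimal separation step described in the abstract.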

Relevance: 100.00%

Abstract:

* The work is partly supported by RFFI grant 08-07-00062-a

Relevance: 100.00%

Abstract:

OBJECTIVE: To examine the relationship between reward sensitivity and self-reported apathy in stroke patients and to investigate the neuroanatomical correlates of both reward sensitivity and apathy. METHODS: In this prospective study, 55 chronic stroke patients were administered a questionnaire to assess apathy and a laboratory task to examine reward sensitivity by measuring motivationally driven behavior ("reinforcement-related speeding"). Fifteen participants without brain damage served as controls for the laboratory task. Negative mood, working memory, and global cognitive functioning were also measured to determine whether reward insensitivity and apathy were secondary to cognitive impairments or negative mood. Voxel-based lesion-symptom mapping was used to explore the neuroanatomical substrates of reward sensitivity and apathy. RESULTS: Participants showed reinforcement-related speeding in the highly reinforced condition of the laboratory task. However, this effect was significant for the controls only. For patients, poorer reward sensitivity was associated with greater self-reported apathy (p < 0.05) beyond negative mood and after lesion size was controlled for. Neither apathy nor reward sensitivity was related to working memory or global cognitive functioning. Voxel-based lesion-symptom mapping showed that damage to the ventral putamen and globus pallidus, dorsal thalamus, and left insula and prefrontal cortex was associated with poorer reward sensitivity. The putamen and thalamus were also involved in self-reported apathy. CONCLUSIONS: Poor reward sensitivity in stroke patients with damage to the ventral basal ganglia, dorsal thalamus, insula, or prefrontal cortex constitutes a core feature of apathy. These results provide valuable insight into the neural mechanisms and brain substrate underlying apathy.

Relevance: 100.00%

Abstract:

Eighty-five of 99 Iowa counties were declared Presidential Disaster Areas for Public Assistance and/or Individual Assistance as a result of the tornadoes, storms, and floods over the incident period May 25 through August 13, 2008. Response dominated the state's attention for weeks, with a transition to recovery as the local situations warranted. The widespread damage and severity of the impact on Iowans and their communities required a statewide effort to continue moving forward despite being surrounded by adversity. By all accounts, it will require years for the state to recover from these disasters. With an eye toward the future, recovery is underway across Iowa. As part of the Rebuild Iowa efforts, the Long Term Recovery Planning Task Force was charged with responsibilities somewhat different from other topical Task Force assignments. Rather than assess damage and report on how the state might address immediate needs, the Long Term Recovery Planning Task Force was directed to discuss and discern the best approach to the lengthy recovery process. Certainly, the Governor and Lieutenant Governor expect the task to be difficult; when planning around so many critical issues and overwhelming needs, it is challenging to think of the future rather than simply rising to the current day's needs.

Relevance: 100.00%

Abstract:

Spatial data analysis, mapping and visualization is of great importance in various fields: environment, pollution, natural hazards and risks, epidemiology, spatial econometrics, etc. A basic task of spatial mapping is to make predictions based on some empirical data (measurements). A number of state-of-the-art methods can be used for this task: deterministic interpolations; methods of geostatistics, i.e. the family of kriging estimators (Deutsch and Journel, 1997); machine learning algorithms such as artificial neural networks (ANN) of different architectures; hybrid ANN-geostatistics models (Kanevski and Maignan, 2004; Kanevski et al., 1996); etc. All the methods mentioned above can be used for solving the problem of spatial data mapping. Environmental empirical data are always contaminated/corrupted by noise, often of unknown nature. That is one of the reasons why deterministic models can be inconsistent, since they treat the measurements as values of some unknown function that should be interpolated. Kriging estimators treat the measurements as a realization of a spatial random process. To obtain an estimate with kriging, one has to model the spatial structure of the data: the spatial correlation function or (semi-)variogram. This task can be complicated if the number of measurements is insufficient, and the variogram is sensitive to outliers and extremes. ANNs are a powerful tool, but they also suffer from a number of limitations. ANNs of a special type, multilayer perceptrons, are often used as a detrending tool in hybrid (ANN + geostatistics) models (Kanevski and Maignan, 2004). Therefore, the development and adaptation of a method that is nonlinear and robust to noise in the measurements, can deal with small empirical datasets, and has a solid mathematical background is of great importance. The present paper deals with such a model, based on Statistical Learning Theory (SLT): Support Vector Regression.
SLT is a general mathematical framework devoted to the problem of estimating dependencies from empirical data (Hastie et al., 2004; Vapnik, 1998). SLT models for classification, Support Vector Machines (SVM), have shown good results on different machine learning tasks. The results of SVM classification of spatial data are also promising (Kanevski et al., 2002). The properties of SVM for regression, Support Vector Regression (SVR), are less studied. First results of the application of SVR to spatial mapping of physical quantities were obtained by the authors for mapping of medium porosity (Kanevski et al., 1999) and for mapping of radioactively contaminated territories (Kanevski and Canu, 2000). The present paper is devoted to further understanding of the properties of the SVR model for spatial data analysis and mapping. A detailed description of SVR theory can be found in (Cristianini and Shawe-Taylor, 2000; Smola, 1996), and the basic equations for nonlinear modeling are given in Section 2. Section 3 discusses the application of SVR to spatial data mapping in a real case study: soil pollution by the Cs-137 radionuclide. Section 4 discusses the properties of the model applied to noisy data or data with outliers.
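A minimal sketch of epsilon-insensitive SVR used as a spatial mapper, here with scikit-learn on a synthetic noisy field rather than the paper's Cs-137 data (the field, sample size, and hyperparameter values are illustrative assumptions, not the authors'):

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic "measurements": a smooth spatial field plus noise, sampled at
# random locations (a stand-in for, e.g., soil-contamination samples).
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 1.0, size=(200, 2))          # sample coordinates
z = np.sin(3 * xy[:, 0]) * np.cos(3 * xy[:, 1])    # underlying field
z_noisy = z + rng.normal(0.0, 0.1, size=len(z))    # noisy measurements

# Epsilon-insensitive SVR with an RBF kernel: epsilon plays the role of the
# tolerated noise level, C trades smoothness against data fidelity.
model = SVR(kernel="rbf", C=10.0, epsilon=0.1, gamma=5.0)
model.fit(xy, z_noisy)

# Predict on a regular grid to produce the map.
gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
z_map = model.predict(grid).reshape(gx.shape)
```

The epsilon-insensitive loss is what gives SVR the robustness to measurement noise emphasized above: residuals smaller than epsilon do not influence the fit at all, so the mapper is not forced to interpolate every noisy sample exactly, unlike a deterministic interpolator.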

Relevance: 100.00%

Abstract:

Anhedonia, the loss of pleasure or interest in previously rewarding stimuli, is a core feature of major depression. While theorists have argued that anhedonia reflects a reduced capacity to experience pleasure, evidence is mixed as to whether it is in fact caused by a reduction in hedonic capacity. An alternative explanation is that anhedonia is due to an inability to sustain positive affect across time. We used an emotion regulation task with positive images to test whether individuals with depression are unable to sustain activation in neural circuits underlying positive affect and reward. While up-regulating positive affect, depressed individuals failed to sustain nucleus accumbens activity over time compared with controls. This decreased capacity was related to individual differences in self-reported positive affect. Connectivity analyses further implicated the fronto-striatal network in anhedonia. These findings support the hypothesis that anhedonia in depressed patients reflects the inability to sustain engagement of structures involved in positive affect and reward.

Relevance: 100.00%

Abstract:

The work reported in this paper is motivated by the need to apply autonomic computing concepts to parallel computing systems. Advancing on prior work based on intelligent cores [36], a swarm-array computing approach, this paper focuses on 'Intelligent Agents', another swarm-array computing approach, in which the task to be executed on a parallel computing core is considered a swarm of autonomous agents. A task is carried to a computing core by carrier agents and is seamlessly transferred between cores in the event of a predicted failure, thereby achieving the self-ware objectives of autonomic computing. The feasibility of the proposed swarm-array computing approach is validated on a multi-agent simulator.

Relevance: 100.00%

Abstract:

Recent research in multi-agent systems incorporates fault-tolerance concepts but does not explore the extension and implementation of such ideas for large-scale parallel computing systems. The work reported in this paper investigates a swarm-array computing approach, namely 'Intelligent Agents'. A task to be executed on a parallel computing system is decomposed into sub-tasks and mapped onto agents that traverse an abstracted hardware layer. The agents intercommunicate across processors to share information in the event of a predicted core/processor failure and to successfully complete the task. The feasibility of the approach is validated by simulations on an FPGA using a multi-agent simulator, and by the implementation of a parallel reduction algorithm on a computer cluster using the Message Passing Interface.
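A toy sketch of the migration idea described above, in which sub-task agents leave a core when its failure is predicted. All class and method names here are hypothetical; the paper itself validates the approach on an FPGA simulator and an MPI cluster, not in Python:

```python
# Each sub-task is wrapped in an agent that sits on a core of an abstracted
# hardware layer and relocates itself when its core predicts failure.

class Agent:
    def __init__(self, subtask_id):
        self.subtask_id = subtask_id
        self.core = None          # core currently hosting this sub-task

class HardwareLayer:
    def __init__(self, n_cores):
        self.healthy = set(range(n_cores))
        self.placement = {}       # core -> list of resident agents

    def map_task(self, agents):
        # Initial task-to-core mapping: round-robin over healthy cores.
        cores = sorted(self.healthy)
        for i, agent in enumerate(agents):
            self._place(agent, cores[i % len(cores)])

    def predict_failure(self, core):
        # Failure predicted: agents on that core migrate to the
        # least-loaded healthy core before the core is lost.
        self.healthy.discard(core)
        for agent in self.placement.pop(core, []):
            target = min(self.healthy,
                         key=lambda c: len(self.placement.get(c, [])))
            self._place(agent, target)

    def _place(self, agent, core):
        agent.core = core
        self.placement.setdefault(core, []).append(agent)
```

The load-balancing choice of target core stands in for the information agents would exchange across processors; the key property is that every sub-task survives the predicted failure without restarting the whole task.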