50 results for General-purpose computing on graphics processing units (GPGPU)


Relevance: 100.00%

Abstract:

We investigated the on-line processing of unaccusative and unergative sentences in a group of eight Greek-speaking individuals diagnosed with Broca's aphasia and a group of language-unimpaired subjects serving as a baseline. The processing of unaccusativity refers to the reactivation of the postverbal trace by retrieving the mnemonic representation of the verb's syntactically defined antecedent provided in the early part of the sentence. Our results demonstrate that the Broca group showed selective reactivation of the antecedent for the unaccusatives. We consider several interpretations of our data, including explanations focusing on the transitivization properties of nonactive and active voice-alternating unaccusatives, the costly procedure claimed to underlie the parsing of active non-voice-alternating unaccusatives, and the animacy of the antecedent modulating the syntactic choices of the patients.

Relevance: 100.00%

Abstract:

Cloud computing is usually regarded as being energy efficient and thus emitting less greenhouse gas (GHG) than traditional forms of computing. However, when the energy consumption of Microsoft's cloud-based Office 365 (O365) and traditional Office 2010 (O2010) software suites was tested and modeled, some cloud services were found to consume more energy than the traditional form. The model developed in this research took into consideration the energy consumption at the three main stages of data transmission: data center, network, and end-user device. Comparable products from each suite were selected, and activities were defined for each product to represent a different computing type. Microsoft provided highly confidential data for the data center stage, while the networking and user device stages were measured directly. A new measurement and software apportionment approach was defined and utilized, allowing the power consumption of cloud services to be measured directly at the user device stage. Results indicated that cloud computing is more energy efficient for Excel and Outlook, which consumed less energy and emitted less GHG than their standalone counterparts: the power consumption of cloud-based Outlook and Excel was 8% and 17% lower, respectively, than that of the traditional versions. However, the power consumption of the cloud version of Word was 17% higher than its traditional equivalent, and a third, mixed access method measured for Word emitted 5% more GHG than the traditional version. It is evident that cloud computing may not provide a unified way forward for reducing energy consumption and GHG emissions. Using the methods described in this research, conversion from a standalone package to a cloud provision platform can now take energy and GHG emissions into account at the software development and cloud service design stage.
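As a rough sketch of the three-stage accounting described above, the following Python fragment sums per-stage energy and converts it to GHG emissions. All names and figures here are illustrative placeholders, not the study's confidential inputs, and the grid emission factor is an assumption.

    # Hypothetical per-activity energy model over the three transmission stages.
    STAGES = ("data_center", "network", "user_device")

    def total_energy_wh(per_stage_wh):
        """Sum energy (Wh) over the data center, network and user device stages."""
        return sum(per_stage_wh[s] for s in STAGES)

    def ghg_g(energy_wh, grid_factor_g_per_kwh=500.0):
        """Convert energy to grams of CO2e using an assumed emission factor."""
        return energy_wh / 1000.0 * grid_factor_g_per_kwh

    # Illustrative comparison for one activity (e.g. editing a document).
    cloud = {"data_center": 1.2, "network": 0.8, "user_device": 6.0}
    standalone = {"data_center": 0.0, "network": 0.0, "user_device": 7.5}

    for name, profile in (("cloud", cloud), ("standalone", standalone)):
        e = total_energy_wh(profile)
        print(f"{name}: {e:.1f} Wh, {ghg_g(e):.0f} g CO2e")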

Relevance: 100.00%

Abstract:

This cross-sectional study examines the role of L1-L2 differences and structural distance in the processing of gender and number agreement by English-speaking learners of Spanish at three different levels of proficiency. Preliminary results show that differences between the L1 and L2 impact L2 development, as sensitivity to gender agreement violations, as opposed to number agreement violations, emerges only in learners at advanced levels of proficiency. Results also show that the establishment of agreement dependencies is affected by the structural distance between the agreeing elements for native speakers and for learners at intermediate and advanced levels of proficiency, but not for low-proficiency learners. The overall pattern of results suggests that the linguistic factors examined here impact development but do not constrain ultimate attainment; for advanced learners, the results suggest that second-language processing is qualitatively similar to native processing.

Relevance: 100.00%

Abstract:

Rationale: Pramipexole, a D2/D3 dopamine receptor agonist, has been implicated in the development of impulse control disorders in patients with Parkinson's disease. Investigation of single doses of pramipexole in healthy participants performing reward-based learning tasks has shown inhibition of the neural processing of reward, presumably through stimulation of dopamine autoreceptors. Objectives: This study aims to examine the effects of pramipexole on the neural response to the passive receipt of rewarding and aversive sight and taste stimuli. Methods: We used functional magnetic resonance imaging to examine the neural responses to the sight and taste of pleasant (chocolate) and aversive (mouldy strawberry) stimuli in 16 healthy volunteers who received a single dose of pramipexole (0.25 mg) and placebo in a double-blind, within-subject design. Results: Relative to placebo, pramipexole treatment reduced blood oxygen level-dependent activation to the chocolate stimuli in areas known to play a key role in reward, including the ventromedial prefrontal cortex, the orbitofrontal cortex, the striatum, the thalamus and the dorsal anterior cingulate cortex. Pramipexole also reduced activation to the aversive condition in the dorsal anterior cingulate cortex. There were no effects of pramipexole on the subjective ratings of the stimuli. Conclusions: Our results are consistent with an ability of acute, low-dose pramipexole to diminish dopamine-mediated responses to both rewarding and aversive taste stimuli, perhaps through an inhibitory action at D2/D3 autoreceptors on the phasic burst activity of midbrain dopamine neurones. The ability of pramipexole to inhibit aversive processing might potentiate its adverse behavioural effects and could also play a role in its proposed efficacy in treatment-resistant depression.

Relevance: 100.00%

Abstract:

Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them usually involves complicated workflows implemented as shell scripts. For example, NEMO (Smith et al., 2008) is a state-of-the-art ocean model that is currently used for operational ocean forecasting in France, and will soon be used in the UK for both ocean forecasting and climate modelling. On a typical modern cluster, a one-year global ocean simulation at 1-degree resolution takes about three hours on 40 processors and produces roughly 20 GB of output as 50,000 separate files. 50-year simulations are common, with the model resubmitted as a new job after each year. Running NEMO relies on a set of complicated shell scripts and command-line utilities for data pre-processing and post-processing prior to job resubmission.

Grid Remote Execution (G-Rex) is a pure Java grid middleware system that allows scientific applications to be deployed as Web services on remote computer systems, and then launched and controlled as if they were running on the user's own computer. Although G-Rex is general-purpose middleware, it has two key features that make it particularly suitable for remote execution of climate models: (1) output from the model is transferred back to the user while the run is in progress, preventing it from accumulating on the remote system and allowing the user to monitor the model; (2) the client component is a command-line program that can easily be incorporated into existing model workflow scripts. G-Rex has a REST (Fielding, 2000) architectural style, which allows client programs to be very simple and lightweight, and allows users to interact with model runs using only a basic HTTP client (such as a Web browser or the curl utility) if they wish. This design also allows new client interfaces to be developed in other programming languages with relatively little effort. The G-Rex server is a standard Web application that runs inside a servlet container such as Apache Tomcat, and is therefore easy for system administrators to install and maintain.

G-Rex is employed as the middleware for the NERC Cluster Grid, a small grid of HPC clusters belonging to collaborating NERC research institutes. Currently the NEMO (Smith et al., 2008) and POLCOMS (Holt et al., 2008) ocean models are installed, and there are plans to install the Hadley Centre's HadCM3 model for use in the decadal climate prediction project GCEP (Haines et al., 2008). The science projects involving NEMO on the Grid have a particular focus on data assimilation (Smith et al., 2008), a technique that involves constraining model simulations with observations. The POLCOMS model will play an important part in the GCOMS project (Holt et al., 2008), which aims to simulate the world's coastal oceans.

A typical use of G-Rex by a scientist to run a climate model on the NERC Cluster Grid proceeds as follows: (1) the scientist prepares input files on his or her local machine; (2) using information provided by the Grid's Ganglia monitoring system, the scientist selects an appropriate compute resource; (3) the scientist runs the relevant workflow script on his or her local machine, unmodified except that calls to run the model (e.g. with "mpirun") are simply replaced with calls to "GRexRun"; (4) the G-Rex middleware automatically handles the uploading of input files to the remote resource and the downloading of output files back to the user, including their deletion from the remote system, during the run; (5) the scientist monitors the output files using familiar analysis and visualization tools on his or her own local machine.

G-Rex is well suited to climate modelling because it addresses many of the middleware usability issues that have led to limited uptake of grid computing by climate scientists. It is a lightweight, low-impact and easy-to-install solution that is currently designed for use in relatively small grids such as the NERC Cluster Grid. A current topic of research is the use of G-Rex as an easy-to-use front-end to larger-scale Grid resources such as the UK National Grid Service.
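To make step (3) concrete, here is a minimal Python sketch of a wrapper that swaps a local "mpirun" launch for a G-Rex submission. The GRexRun client is named in the text above, but its command-line arguments are not, so the flags and service name below are assumptions for illustration only.

    # Hypothetical wrapper: redirect a workflow script's model launch
    # through G-Rex instead of running mpirun locally. The GRexRun
    # arguments shown are assumed, not documented behaviour.
    import subprocess
    import sys

    def run_model(use_grex, nprocs=40):
        if use_grex:
            # G-Rex launches the model remotely, streams output files back
            # during the run, and deletes them from the remote system.
            cmd = ["GRexRun", "nemo-service", "--nprocs", str(nprocs)]
        else:
            cmd = ["mpirun", "-np", str(nprocs), "./nemo.exe"]
        return subprocess.call(cmd)

    if __name__ == "__main__":
        sys.exit(run_model(use_grex=True))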

Relevance: 100.00%

Abstract:

The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow-water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and, in some cases, floating-point units (as in the AMD Bulldozer), which means that access time depends on the mapping of application tasks and on each core's location within the system. Heterogeneity increases further with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend towards shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and using non-standard task-to-core mappings can dramatically alter performance. Discovering this, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, interpolating between results as necessary.
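The following sketch illustrates the two-component structure of such a model: benchmark timings for the array-update and halo-exchange parts are tabulated per execution scenario, and a prediction for an unmeasured problem size is obtained by interpolation. The benchmark numbers are invented placeholders.

    # Illustrative two-part performance model: T = T_compute + T_halo,
    # each interpolated from benchmark runs at sampled problem sizes.
    import bisect

    def interpolate(samples, n):
        """Linearly interpolate over (problem_size, seconds) benchmark pairs."""
        sizes = [s for s, _ in samples]
        times = [t for _, t in samples]
        if n <= sizes[0]:
            return times[0]
        if n >= sizes[-1]:
            return times[-1]
        i = bisect.bisect_left(sizes, n)
        frac = (n - sizes[i - 1]) / (sizes[i] - sizes[i - 1])
        return times[i - 1] + frac * (times[i] - times[i - 1])

    # Placeholder benchmarks for one deployment scenario (e.g. fully
    # populated nodes with the default task-to-core mapping).
    compute_bench = [(128, 0.9), (256, 3.8), (512, 15.1)]
    halo_bench = [(128, 0.2), (256, 0.5), (512, 1.1)]

    def predicted_time(n):
        return interpolate(compute_bench, n) + interpolate(halo_bench, n)

    print(f"Predicted runtime at n=384: {predicted_time(384):.2f} s")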

Relevance: 100.00%

Abstract:

We present a general Multi-Agent System framework for distributed data mining based on a Peer-to-Peer model. Agent protocols are implemented through message-based asynchronous communication. The framework adopts a dynamic load-balancing policy that is particularly suitable for irregular search algorithms. A modular design separates the general-purpose system protocols and software components from the specific data mining algorithm. An experimental evaluation carried out on a parallel frequent subgraph mining algorithm has shown good scalability.
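A minimal sketch of the pattern described above, with asynchronous message-style work exchange and a pluggable mining task; the peer names, queue-based "protocol" and work-stealing rule are simplifying assumptions, not the framework's actual design.

    # Sketch: idle peers pull tasks from the busiest peer, approximating
    # dynamic load balancing for an irregular search workload.
    import asyncio
    import random

    async def peer(name, queue, peers, mine):
        while True:
            try:
                task = queue.get_nowait()
            except asyncio.QueueEmpty:
                donor = max(peers.values(), key=lambda q: q.qsize())
                if donor.qsize() <= 1:
                    return  # no surplus work anywhere: terminate
                task = donor.get_nowait()  # simplified "work request" message
            await mine(name, task)

    async def main():
        peers = {f"p{i}": asyncio.Queue() for i in range(3)}
        for t in range(12):  # deliberately skewed initial distribution
            peers["p0"].put_nowait(f"subgraph-{t}")

        async def mine(name, task):  # stand-in for the mining algorithm
            await asyncio.sleep(random.uniform(0.01, 0.05))
            print(f"{name} expanded {task}")

        await asyncio.gather(*(peer(n, q, peers, mine) for n, q in peers.items()))

    asyncio.run(main())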

Relevance: 100.00%

Abstract:

The paper describes the implementation of an offline, low-cost Brain-Computer Interface (BCI) as an alternative to more expensive commercial models. Using inexpensive general-purpose clinical EEG acquisition hardware (Truscan32, Deymed Diagnostic) as the base unit, a synchronisation module was constructed that allows the EEG hardware to be operated precisely in time, so that automatically time-stamped EEG signals can be recorded. The synchronisation module allows the EEG recordings to be aligned in a stimulus-time-locked fashion for further processing by the classifier, which establishes the class of each stimulus, sample by sample. This allows signals to be acquired from the subject's brain for a goal-oriented BCI application based on the oddball paradigm. An appropriate graphical user interface (GUI) was constructed and implemented as the means of eliciting the required responses (in this case Event-Related Potentials, or ERPs) from the subject.
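A sketch of the stimulus-time-locked alignment that the synchronisation module makes possible: continuous EEG is cut into epochs around each time-stamped stimulus, ready for ERP classification. The sampling rate, window lengths and array shapes are assumed values for illustration.

    # Sketch: epoch continuous EEG around stimulus timestamps for an
    # oddball-paradigm ERP classifier. All parameters are assumptions.
    import numpy as np

    FS = 256              # assumed sampling rate (Hz)
    PRE, POST = 0.1, 0.6  # window: 100 ms before to 600 ms after stimulus

    def epoch(eeg, stim_samples):
        """eeg: (channels, samples); stim_samples: stimulus onsets (samples).
        Returns (n_stimuli, channels, window) baseline-corrected epochs."""
        pre, post = int(PRE * FS), int(POST * FS)
        epochs = []
        for s in stim_samples:
            if s - pre < 0 or s + post > eeg.shape[1]:
                continue  # skip stimuli too close to the record's edges
            e = eeg[:, s - pre:s + post].astype(float)
            e -= e[:, :pre].mean(axis=1, keepdims=True)  # baseline correction
            epochs.append(e)
        return np.stack(epochs)

    # Toy usage: 8-channel, 10-second recording with three stimuli.
    eeg = np.random.randn(8, 10 * FS)
    print(epoch(eeg, [512, 1024, 2048]).shape)  # (3, channels, window)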

Relevance: 100.00%

Abstract:

The advantages of standard bus systems have been appreciated for many years. The ability to connect only those modules required to perform a given task has both technical and commercial advantages over a system with a fixed architecture, which cannot easily be expanded or updated. Although such bus standards have proliferated in the microprocessor field, a general-purpose, low-cost standard for digital video processing has yet to gain acceptance. The paper describes the likely requirements of such a system and discusses three currently available commercial systems. A new bus specification known as Vidibus, developed to fulfil these requirements, is presented. Results from applications already implemented using this real-time bus system are also given.

Relevance: 100.00%

Abstract:

Research on incidental second language (L2) vocabulary acquisition through reading has claimed that repeated encounters with unfamiliar words and the relative elaboration of processing these words facilitate word learning. However, so far both variables have been investigated in isolation. To help close this research gap, the current study investigates the differential effects of the variables ‘word exposure frequency’ and ‘elaboration of word processing’ on the initial word learning and subsequent word retention of advanced learners of L2 English. Whereas results showed equal effects for both variables on initial word learning, subsequent word retention was more contingent on elaborate processing of form–meaning relationships than on word frequency. These results, together with those of the studies reviewed, suggest that processing words again after reading (input–output cycles) is superior to reading-only tasks. The findings have significant implications for the adaptation and development of teaching materials that enhance L2 vocabulary learning.

Relevance: 100.00%

Abstract:

Uncertainties associated with the representation of various physical processes in global climate models (GCMs) mean that, when projections from GCMs are used in climate change impact studies, the uncertainty propagates through to the impact estimates. A complete treatment of this ‘climate model structural uncertainty’ is necessary so that decision-makers are presented with an uncertainty range around the impact estimates. This uncertainty is often underexplored owing to the human and computer processing time required to perform the numerous simulations. Here, we present a 189-member ensemble of global river runoff and water resource stress simulations that adequately addresses this uncertainty. Following several adaptations and modifications, the ensemble creation time has been reduced from 750 h on a typical single-processor personal computer to 9 h of high-throughput computing on the University of Reading Campus Grid. We outline the changes that had to be made to the hydrological impacts model and to the Campus Grid, and present the main results. We show that, although there is considerable uncertainty in both the magnitude and the sign of regional runoff changes across different GCMs with climate change, there is much less uncertainty in runoff changes for regions that experience large runoff increases (e.g. the high northern latitudes and Central Asia) and large runoff decreases (e.g. the Mediterranean). Furthermore, there is consensus that the percentage of the global population at risk of water resource stress will increase with climate change.
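Because the 189 ensemble members are independent of one another, the problem is well suited to high-throughput computing. In the sketch below, a local process pool stands in for the Campus Grid scheduler, and run_member is a placeholder for one hydrological-model simulation; both names are assumptions for illustration.

    # Sketch: embarrassingly parallel ensemble, one runoff simulation per
    # member. A process pool stands in for the campus grid's scheduler.
    from concurrent.futures import ProcessPoolExecutor

    def run_member(member_id):
        # Placeholder for one global river runoff simulation driven by
        # one of the 189 GCM/forcing combinations.
        return f"member {member_id:03d} complete"

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            for result in pool.map(run_member, range(189)):
                print(result)

The reported reduction from 750 h to 9 h corresponds to a throughput gain of roughly 80x.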

Relevance: 100.00%

Abstract:

The International System of Units (SI) is founded on seven base units, the metre, kilogram, second, ampere, kelvin, mole and candela, corresponding to the seven base quantities of length, mass, time, electric current, thermodynamic temperature, amount of substance and luminous intensity. At its 94th meeting in October 2005, the International Committee for Weights and Measures (CIPM) adopted a recommendation on preparative steps towards redefining the kilogram, ampere, kelvin and mole so that these units are linked to exactly known values of fundamental constants. We propose here that these four base units should be given new definitions linking them to exactly defined values of the Planck constant h, the elementary charge e, the Boltzmann constant k and the Avogadro constant NA, respectively. This would mean that six of the seven base units of the SI would be defined in terms of true invariants of nature. In addition, not only would these four fundamental constants have exactly defined values but the uncertainties of many of the other fundamental constants of physics would also be either eliminated or appreciably reduced. In this paper we present the background and discuss the merits of these proposed changes, and we present possible wordings for the four new definitions. We also suggest a novel way to define the entire SI explicitly using such definitions, without making any distinction between base units and derived units. We list a number of key points that should be addressed when the new definitions are adopted by the General Conference on Weights and Measures (CGPM), possibly at the 24th CGPM in 2011, and we discuss the implications of these changes for other aspects of metrology.
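As an illustration of the explicit-constant style of definition being proposed, each of the four constants would be assigned an exact numerical value. The values shown below (in LaTeX) are those ultimately fixed when definitions of this kind were adopted in the 2018-2019 revision of the SI, and are included here purely to make the form concrete.

    % Each base unit follows from fixing the numerical value of a constant:
    \begin{align*}
    h   &= 6.626\,070\,15 \times 10^{-34}\ \mathrm{J\,s}   && \text{(kilogram)}\\
    e   &= 1.602\,176\,634 \times 10^{-19}\ \mathrm{C}     && \text{(ampere)}\\
    k   &= 1.380\,649 \times 10^{-23}\ \mathrm{J\,K^{-1}}  && \text{(kelvin)}\\
    N_A &= 6.022\,140\,76 \times 10^{23}\ \mathrm{mol^{-1}} && \text{(mole)}
    \end{align*}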

Relevance: 100.00%

Abstract:

The theory of harmonic force constant refinement calculations is reviewed, and a general-purpose program for force constant and normal coordinate calculations is described. The program, called ASYM20, is available through the Quantum Chemistry Program Exchange. It will work on molecules of any symmetry containing up to 20 atoms and will produce results for a series of isotopomers as desired. The vibrational secular equations are solved in either nonredundant valence internal coordinates or symmetry coordinates. As well as calculating the (harmonic) vibrational wavenumbers and normal coordinates, the program will calculate centrifugal distortion constants, Coriolis zeta constants, harmonic contributions to the α constants, root-mean-square amplitudes of vibration, and other quantities related to gas electron-diffraction studies and thermodynamic properties. The program will work in either a predict mode, in which it calculates results from an input force field, or a refine mode, in which it refines an input force field by least squares to fit observed data on the quantities mentioned above. Predicate values of the force constants may be included in the data set for a least-squares refinement. The program is written in FORTRAN for use on a PC or a mainframe computer. Operation is controlled mainly by steering indices in the input data file, but some interactive control is also implemented.
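For reference, the harmonic vibrational problem such a program solves is the standard Wilson GF secular equation, where G is the kinetic-energy (inverse-mass) matrix in the chosen internal or symmetry coordinates and F is the harmonic force-constant matrix:

    % Wilson GF method: eigenvalues give the harmonic wavenumbers.
    \left| \mathbf{G}\mathbf{F} - \lambda\,\mathbf{I} \right| = 0,
    \qquad
    \lambda_i = 4\pi^2 c^2 \tilde{\nu}_i^{\,2}

Each eigenvalue gives a harmonic wavenumber, and the associated eigenvector gives the corresponding normal coordinate.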

Relevance: 100.00%

Abstract:

The rationale for this review is to provide a coherent formulation of the cognitive neurochemistry of nicotine, with the aim of suggesting research and clinical applications. The first part is a comprehensive review of the empirical studies of the enhancing effects of nicotine on information processing, especially attentional and mnemonic processing. These studies are then put in the context of recent studies on the neurochemistry of nicotine and of cholinergic drugs in general. They suggest a positive effect of nicotine on processes acting on encoded material during the post-acquisition phase, i.e. the process of consolidation. Thus, nicotinic receptors appear to be involved in mnemonic processing by modulating the excitability of neurons in the hippocampal formation to enable associative processing.

Relevance: 100.00%

Abstract:

This paper presents results that indicate the potential applications of a direct connection between the human nervous system and a computer network. Actual experimental results obtained from a human subject study are given, with emphasis placed on the direct interaction between the human nervous system and possible extra-sensory input. A brief overview of the general state of neural implants is given, and a range of application areas is considered. An overall view is also taken of what may be possible with implant technology as a general-purpose human-computer interface for the future.