13 results for General-purpose computing on graphics processing units (GPGPU)

in Aston University Research Archive


Relevance:

100.00%

Publisher:

Abstract:

The success of mainstream computing is largely due to the widespread availability of general-purpose architectures and of generic approaches that can be used to solve real-world problems cost-effectively and across a broad range of application domains. In this chapter, we propose that a similar generic framework be used to make the development of autonomic solutions cost-effective, and to establish autonomic computing as a major approach to managing the complexity of today's large-scale systems and systems of systems. To demonstrate the feasibility of general-purpose autonomic computing, we introduce a generic autonomic computing framework comprising a policy-based autonomic architecture and a novel four-step method for the effective development of self-managing systems. A prototype implementation of the reconfigurable policy engine at the core of our architecture is then used to develop autonomic solutions for case studies from several application domains. Looking into the future, we describe a methodology for the engineering of self-managing systems that extends and generalises our autonomic computing framework further.
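To illustrate the kind of policy-based self-management described above, the sketch below shows a minimal condition-action policy engine in Python. It is purely illustrative and is not the authors' reconfigurable policy engine; all class names, the policy format and the scaling example are assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Policy:
    """A condition-action policy evaluated against the monitored state."""
    name: str
    condition: Callable[[Dict[str, float]], bool]
    action: Callable[[Dict[str, float]], None]

class PolicyEngine:
    """Holds the current policy set; policies can be added or replaced at
    run time, which is what 'reconfigurable' means in this toy example."""
    def __init__(self) -> None:
        self.policies: List[Policy] = []

    def load(self, policy: Policy) -> None:
        self.policies = [p for p in self.policies if p.name != policy.name]
        self.policies.append(policy)

    def step(self, state: Dict[str, float]) -> None:
        """One monitor-analyse-plan-execute pass over the current state."""
        for policy in self.policies:
            if policy.condition(state):
                policy.action(state)

# Example: scale out a hypothetical server pool when utilisation is high.
engine = PolicyEngine()
engine.load(Policy(
    name="scale-out",
    condition=lambda s: s["cpu_utilisation"] > 0.8,
    action=lambda s: s.update(servers=s["servers"] + 1),
))
state = {"cpu_utilisation": 0.9, "servers": 2.0}
engine.step(state)
print(state)   # {'cpu_utilisation': 0.9, 'servers': 3.0}
```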

Relevance:

100.00%

Publisher:

Abstract:

In Fourier domain optical coherence tomography (FD-OCT), a large amount of interference data needs to be resampled from the wavelength domain to the wavenumber domain prior to Fourier transformation. We present an approach to optimize this data processing, using a graphics processing unit (GPU) and parallel processing algorithms. We demonstrate an increased processing and rendering rate over that previously reported by using GPU paged memory to render data in the GPU rather than copying back to the CPU. This avoids unnecessary and slow data transfer, enabling a processing and display rate of well over 524,000 A-scan/s for a single frame. To the best of our knowledge this is the fastest processing demonstrated to date and the first time that FD-OCT processing and rendering has been demonstrated entirely on a GPU.
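As a rough illustration of the processing chain described above (wavelength-to-wavenumber resampling followed by a Fourier transform), the sketch below uses plain NumPy; the authors' implementation runs on the GPU, and a GPU variant could, for example, swap numpy for CuPy. The array shapes, uniform-k grid size and background-removal step are assumptions, not details from the paper.

```python
import numpy as np

def process_spectra(spectra, wavelengths, n_k=2048):
    """Resample raw FD-OCT fringes from wavelength to a uniform
    wavenumber grid (k = 2*pi/lambda), then FFT to obtain A-scans."""
    k = 2.0 * np.pi / wavelengths                  # non-uniform k samples
    order = np.argsort(k)                          # np.interp needs ascending xp
    k_sorted = k[order]
    k_uniform = np.linspace(k_sorted[0], k_sorted[-1], n_k)

    ascans = np.empty((spectra.shape[0], n_k // 2))
    for i, fringe in enumerate(spectra):
        resampled = np.interp(k_uniform, k_sorted, fringe[order])
        resampled -= resampled.mean()              # crude background removal
        depth = np.fft.fft(resampled)              # complex depth profile
        ascans[i] = 20 * np.log10(np.abs(depth[: n_k // 2]) + 1e-12)
    return ascans

# Example with synthetic data: 512 spectra, 1024 spectrometer pixels (800-900 nm)
wavelengths = np.linspace(800e-9, 900e-9, 1024)
raw = np.random.rand(512, 1024)
image = process_spectra(raw, wavelengths)
print(image.shape)   # (512, 1024)
```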

Relevance:

100.00%

Publisher:

Abstract:

Although advertising is pervasive in our daily lives, it is not always effective, owing to poor conditions or contexts of reception. The communication process may be jeopardized at its very last stage because of the quality of advertising exposure. However critical it may be, ad exposure quality has received little attention from researchers or practitioners. In this paper, we investigate how tiredness combined with ad complexity influences the way consumers extract and process ad elements. Investigating tiredness is useful because it is a common daily state experienced by everyone at various moments of the day, and although it can drastically alter ad reception, it has not yet been studied in advertising. To this end, we observe the eye movement patterns of consumers viewing simple or complex advertisements while tired or not. Surprisingly, we find that tired subjects viewing complex ads do not adopt an effort-reducing visual strategy; instead they use a resource-demanding one. We suggest that this Sustained Attention strategy is a kind of adaptive strategy that allows them to deal with an anticipated lack of resources.

Relevance:

100.00%

Publisher:

Abstract:

A General Sales Agent (GSA) is an airline's outsourced counterpart that markets and manages cargo services. An empirical investigation is undertaken to ascertain whether GSAs contribute economically to the air cargo industry, using three 'litmus test' indicators: 1) contribution to the airline's sales and profitability by expanding operating networks; 2) viability as a marketing option for emerging or struggling airlines to help cut operating costs and reduce prices; 3) cost-effectiveness. GSAs were found to establish an airline's market presence through wide network coverage and good local knowledge, leading to an expansion of the airline's operating networks and generating greater sales revenue. Copyright © 2012 Inderscience Enterprises Ltd.

Relevance:

100.00%

Publisher:

Abstract:

We compared reading acquisition in English and Italian children up to late primary school, analyzing RTs and errors as a function of various psycholinguistic variables and of changes due to experience. Our results show that reading becomes progressively more reliant on larger processing units with age, but that this is modulated by the consistency of the language. In English, an inconsistent orthography, reliance on larger units occurs earlier and is demonstrated by faster RTs, a stronger effect of lexical variables and a lack of length effects (by fifth grade). However, not all English children are able to master this mode of processing, yielding larger inter-individual variability. In Italian, a consistent orthography, reliance on larger units occurs later and is less pronounced. This is demonstrated by larger length effects, which remain significant even in older children, and by larger effects of a global factor (related to speed of orthographic decoding) explaining changes in performance across ages. Our results show the importance of considering not only overall performance, but also inter-individual variability and variability between conditions when interpreting cross-linguistic differences.

Relevance:

100.00%

Publisher:

Abstract:

The purpose of this research was to investigate the effects of Processing Instruction (VanPatten, 1996, 2007), as an input-based model for teaching second language grammar, on Syrian learners' processing abilities. The research investigated the effects of Processing Instruction on the acquisition of English relative clauses by Syrian learners using a quasi-experimental design. Three separate groups were involved (Processing Instruction, Traditional Instruction and a Control Group). For assessment, a pre-test, a direct post-test and a delayed post-test were used as the main tools for eliciting data. A questionnaire was also distributed to participants in the Processing Instruction group to give them the opportunity to provide feedback on the treatment they received in comparison with the Traditional Instruction they were used to. Four hypotheses were formulated on the possible effectiveness of Processing Instruction on Syrian learners' linguistic system. It was hypothesised that Processing Instruction would improve learners' processing abilities, leading to an improvement in their linguistic system and, in turn, to better performance in the comprehension and production of English relative clauses. The main source of data was analysed statistically using ANOVA, supported by Cohen's d calculations, which showed the magnitude of the effects of the three treatments. Results of the analysis showed that both the Processing Instruction and Traditional Instruction groups improved after treatment; however, the Processing Instruction group significantly outperformed the other two groups in the comprehension of relative clauses. The analysis concluded that Processing Instruction is a useful tool for teaching relative clauses to Syrian learners. This was reinforced by participants' responses to the questionnaire, which favoured Processing Instruction over Traditional Instruction. The research has theoretical and pedagogical implications. Theoretically, the study showed support for the Input Hypothesis: Processing Instruction had a positive effect on input processing, as it affected learners' linguistic system, reflected in learners being able to produce a structure they had not been asked to produce. Pedagogically, the research showed that Processing Instruction is a useful tool for teaching English grammar in the context where the experiment was carried out, as it had a large effect on learners' performance.
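For readers unfamiliar with the statistics mentioned above, the following sketch shows how a one-way ANOVA and Cohen's d might be computed on gain scores for the three groups. The data are randomly generated and purely hypothetical; this is not the study's analysis script.

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical gain scores (post-test minus pre-test) for the three groups.
rng = np.random.default_rng(0)
pi_group = rng.normal(6.0, 2.0, 30)    # Processing Instruction
ti_group = rng.normal(4.0, 2.0, 30)    # Traditional Instruction
control  = rng.normal(0.5, 2.0, 30)    # Control Group

f_stat, p_value = f_oneway(pi_group, ti_group, control)   # one-way ANOVA

def cohens_d(a, b):
    """Standardised mean difference using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
print(f"Cohen's d, PI vs TI:      {cohens_d(pi_group, ti_group):.2f}")
print(f"Cohen's d, PI vs Control: {cohens_d(pi_group, control):.2f}")
```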

Relevance:

100.00%

Publisher:

Abstract:

Based on recent advances in autonomic computing, we propose a methodology for the cost-effective development of self-managing systems starting from a model of the resources to be managed and using a general-purpose autonomic architecture.

Relevance:

100.00%

Publisher:

Abstract:

Huge advertising budgets are invested by firms to reach and convince potential consumers to buy their products. To optimize these investments, it is fundamental not only to ensure that the appropriate consumers will be reached, but also that they will be in appropriate reception conditions. Marketing research has focused on the way consumers react to advertising, as well as on individual and contextual factors that could mediate or moderate an ad's impact on consumers (e.g. motivation and ability to process information, or attitudes toward advertising). Nevertheless, one factor that potentially influences consumers' reactions to advertising has not yet been studied in marketing research: fatigue. Yet fatigue can affect key variables of advertising processing, such as the availability of cognitive resources (Lieury 2004).

Fatigue is felt when the body signals that an activity (or inactivity) should be stopped in favour of rest, allowing the individual to compensate for its effects. Dittner et al. (2004) define it as "the state of weariness following a period of exertion, mental or physical, characterized by a decreased capacity for work and reduced efficiency to respond to stimuli." It signals that resources will run short if the ongoing activity continues. According to Schmidtke (1969), fatigue impairs information reception, perception, coordination, attention, concentration and thinking. For Markle (1984), fatigue also reduces memory and communication ability while increasing reaction time and the number of errors. Fatigue may therefore have large effects on advertising processing; we suggest that it determines the level of available resources. Research on consumer responses to advertising also indicates that complexity is a fundamental element to take into consideration, since complexity determines the cognitive effort the consumer must provide to understand the message (Putrevu et al. 2004). We therefore suggest that complexity determines the level of required resources.

To study this question of the supply and demand of cognitive resources, we draw upon Resource Matching Theory. Anand and Sternthal (1989, 1990) were the first to state the Resource Matching principle: an ad is most persuasive when the resources required to process it match the resources the viewer is willing and able to provide. They show that when the required resources exceed those available, the message is not fully processed by the consumer, and when available resources exceed those required, the viewer elaborates critical or unrelated thoughts. According to Resource Matching Theory, the level of resources demanded by an ad can be high or low and is mostly determined by the ad's layout (Peracchio and Meyers-Levy, 1997). We manipulate the level of required resources using three levels of ad complexity (low, high, extremely high). Resource availability, in turn, is determined by many contextual and individual variables; we manipulate it using two levels of fatigue (low, high). Tired viewers want to limit processing effort to minimal resource requirements by relying on heuristics and forming an overall impression at first glance; it will be easier for them to decode very simple ads. By contrast, the most effective ads for viewers who are not tired are complex enough to draw their attention and fully use their resources. These viewers will use more analytical strategies, looking at the details of the ad. However, if ads are too complex they become too difficult to understand, and the viewer is discouraged from processing the information and overlooks the ad.

The objective of our research is to study fatigue as a moderating variable of advertising information processing. We run two experimental studies to assess the effect of fatigue on visual strategies, comprehension, persuasion and memorization. In study 1, thirty-five undergraduate students enrolled in a marketing research course participated in the experiment. The experimental design is 2 (tiredness level: between-subjects) x 3 (ad complexity level: within-subjects). Participants were randomly assigned a time slot (morning: 8-10 am or evening: 10-12 pm) for the experiment; we chose to test subjects at different moments of the day to obtain maximum variance in their fatigue level, and we use participants' morningness/eveningness tendency (Horne & Ostberg, 1976) as a control variable. We assess fatigue level using subjective measures (a questionnaire with fatigue scales) and objective measures (reaction time and number of errors). Regarding complexity levels, we designed our own ads in order to keep aspects other than complexity equal, and pretested them using the Resource Demands scale (Keller and Bloch 1997) and complexity ratings in the manner of Morrison and Dainoff (1972); three significantly different levels were found. After completing the fatigue scales, participants view the ads on a screen while their eye movements are recorded by an eye-tracker. Eye-tracking allows us to identify patterns of visual attention (Pieters and Warlop 1999), from which we infer respondents' visual strategies according to their level of fatigue. Comprehension is assessed with a comprehension test; we collect measures of attitude change for persuasion, and measures of recall and recognition at various points in time for memorization. Once the effect of fatigue has been determined across the student population, it is of interest to account for individual differences in fatigue severity and perception. We therefore run study 2, which is similar to the first except for the design: time of day is now within-subjects and complexity becomes between-subjects.

Relevance:

100.00%

Publisher:

Abstract:

The focus of this study is the development of parallelised versions of severely sequential and iterative numerical algorithms on a multi-threaded parallel platform such as a graphics processing unit. This requires the design and development of a platform-specific numerical solution that can benefit from the parallel capabilities of the chosen platform. A graphics processing unit was chosen as the parallel platform for the design and development of a numerical solution for a specific physical model in non-linear optics. This problem arises in describing ultra-short pulse propagation in bulk transparent media, which has recently been the subject of several theoretical and numerical studies. The mathematical model describing this phenomenon is a challenging and complex problem, and its numerical modeling is limited on current workstations. Numerical modeling of this problem requires the parallelisation of essentially serial algorithms and the elimination of numerical bottlenecks; the main challenge to overcome is parallelisation of the globally non-local mathematical model. This thesis presents a numerical solution that eliminates the numerical bottleneck associated with the non-local nature of the mathematical model. The accuracy and performance of the parallel code are established by back-to-back testing against a similar serial version.
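The back-to-back testing mentioned above can be illustrated with a small sketch: the same element-wise nonlinear step is run once on the CPU (NumPy) and once on the GPU (assuming CuPy is installed), and the outputs are compared. The step shown is a generic Kerr-type phase update, not the thesis's actual model, and all names and parameters are illustrative.

```python
import numpy as np

def nonlinear_step_serial(field, dz, gamma):
    """Reference CPU version of one element-wise Kerr-type phase update."""
    return field * np.exp(1j * gamma * np.abs(field) ** 2 * dz)

def nonlinear_step_gpu(field, dz, gamma):
    """Same step executed on the GPU (assumes CuPy and a CUDA device)."""
    import cupy as cp
    d_field = cp.asarray(field)                      # host -> device copy
    d_out = d_field * cp.exp(1j * gamma * cp.abs(d_field) ** 2 * dz)
    return cp.asnumpy(d_out)                         # device -> host copy

# Back-to-back test: the two versions must agree to floating-point accuracy.
rng = np.random.default_rng(1)
field = rng.standard_normal(1 << 20) + 1j * rng.standard_normal(1 << 20)
serial = nonlinear_step_serial(field, dz=1e-3, gamma=0.01)
try:
    parallel = nonlinear_step_gpu(field, dz=1e-3, gamma=0.01)
    print("max |serial - parallel| =", np.max(np.abs(serial - parallel)))
except ImportError:
    print("CuPy not installed; GPU comparison skipped.")
```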

Relevance:

100.00%

Publisher:

Abstract:

We argue that, for certain constrained domains, elaborate model transformation technologies, implemented from scratch in general-purpose programming languages, are unnecessary for model-driven engineering; instead, lightweight configuration of commercial off-the-shelf productivity tools suffices. In particular, in the CancerGrid project, we have been developing model-driven techniques for the generation of software tools to support clinical trials. A domain metamodel captures the community's best practice in trial design. A scientist authors a trial protocol, modelling their trial by instantiating the metamodel; customized software artifacts to support trial execution are generated automatically from the scientist's model. The metamodel is expressed as an XML Schema, in such a way that it can be instantiated by completing a form to generate a conformant XML document. The same process works at a second level for trial execution: among the artifacts generated from the protocol are models of the data to be collected, and the clinician conducting the trial instantiates such models in reporting observations, again by completing a form to create a conformant XML document, representing the data gathered during that observation. Simple standard form management tools are all that is needed. Our approach is applicable to a wide variety of information-modelling domains: not just clinical trials, but also electronic public sector computing, customer relationship management, document workflow, and so on. © 2012 Springer-Verlag.
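A minimal sketch of the form-to-document idea is given below: a completed "form" (here just a Python dict) is serialised into an XML instance document that would then be validated against the trial-design schema. The element names and fields are hypothetical and are not taken from the CancerGrid metamodel.

```python
import xml.etree.ElementTree as ET

# Hypothetical answers a trial designer might enter into a form; the element
# names below are illustrative and not those of the actual metamodel.
protocol_form = {
    "title": "Example phase II trial",
    "arms": ["Control", "Intervention"],
    "observations": [
        {"name": "systolic_bp", "type": "integer", "unit": "mmHg"},
        {"name": "adverse_event", "type": "string"},
    ],
}

def form_to_xml(form):
    """Serialise the completed form as an XML instance document; in the
    approach described above this document would conform to the XML Schema
    derived from the trial-design metamodel."""
    trial = ET.Element("trialProtocol")
    ET.SubElement(trial, "title").text = form["title"]
    arms = ET.SubElement(trial, "arms")
    for arm in form["arms"]:
        ET.SubElement(arms, "arm").text = arm
    observations = ET.SubElement(trial, "observations")
    for item in form["observations"]:
        ET.SubElement(observations, "observation", dict(item))
    return ET.tostring(trial, encoding="unicode")

print(form_to_xml(protocol_form))
```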

Relevance:

100.00%

Publisher:

Abstract:

This thesis describes advances in the characterisation, calibration and data processing of optical coherence tomography (OCT) systems. Femtosecond (fs) laser inscription was used to produce OCT-phantoms. Transparent materials are generally inert to infra-red radiation, but with fs lasers material modification occurs via non-linear processes when the highly focused light source interacts with the material. This modification is confined to the focal volume and is highly reproducible. In order to select the best inscription parameters, combinations of different inscription parameters were tested, using three fs laser systems with different operating properties, on a variety of materials. This facilitated the understanding of the key characteristics of the produced structures, with the aim of producing viable OCT-phantoms. Finally, OCT-phantoms were successfully designed and fabricated in fused silica, and the use of these phantoms to characterise many properties (resolution, distortion, sensitivity decay, scan linearity) of an OCT system was demonstrated. Quantitative methods were developed to support the characterisation of an OCT system collecting images from phantoms and also to improve the quality of the OCT images. Characterisation methods include the measurement of the spatially variant resolution (point spread function (PSF) and modulation transfer function (MTF)), sensitivity and distortion.

Processing of OCT data is a computationally intensive process. Standard central processing unit (CPU) based processing might take several minutes to a few hours to process acquired data, so data processing is a significant bottleneck. An alternative is to use expensive hardware-based processing such as field programmable gate arrays (FPGAs). Recently, however, graphics processing unit (GPU) based data processing methods have been developed to minimize this data processing and rendering time. These techniques include standard processing methods, comprising a set of algorithms that process the raw interference data obtained by the detector and generate A-scans. The work presented here describes accelerated data processing and post-processing techniques for OCT systems. The GPU-based processing developed during the PhD was later implemented in a custom-built Fourier domain optical coherence tomography (FD-OCT) system, which currently processes and renders data in real time; its processing throughput is currently limited by the camera capture rate. OCT-phantoms have been heavily used for the qualitative characterization and adjustment/fine-tuning of the operating conditions of the OCT system, and investigations are under way to characterize OCT systems using our phantoms.

The work presented in this thesis demonstrates several novel techniques for fabricating OCT-phantoms and for accelerating OCT data processing using GPUs. In the process of developing the phantoms and quantitative methods, a thorough understanding and practical knowledge of OCT and fs laser processing systems was developed. This understanding has led to several novel pieces of research that are not only relevant to OCT but have broader importance; for example, extensive understanding of the properties of fs-inscribed structures will be useful in other photonic applications such as the fabrication of phase masks, waveguides and microfluidic channels, and the acceleration of data processing with GPUs is also useful in other fields.
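As a small illustration of one of the characterisation methods mentioned above, the sketch below estimates a 1-D modulation transfer function as the normalised Fourier magnitude of a measured point spread function. The Gaussian PSF and pixel counts are made up for the example; this is not code from the thesis.

```python
import numpy as np

def mtf_from_psf(psf):
    """Estimate a 1-D modulation transfer function as the normalised
    magnitude of the Fourier transform of a measured point spread function."""
    psf = np.asarray(psf, dtype=float)
    psf = psf / psf.sum()                 # normalise the PSF energy
    mtf = np.abs(np.fft.rfft(psf))
    return mtf / mtf[0]                   # MTF(0) = 1 by convention

# Example: a synthetic Gaussian PSF sampled over 128 pixels
x = np.arange(128)
psf = np.exp(-0.5 * ((x - 64) / 3.0) ** 2)
print(mtf_from_psf(psf)[:5])
```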

Relevance:

100.00%

Publisher:

Abstract:

Femtosecond laser microfabrication has emerged over the last decade as a flexible 3D technology in photonics. Numerical simulations provide important insight into spatial and temporal beam and pulse shaping during the course of extremely intricate nonlinear propagation (see e.g. [1,2]). The electromagnetics of such propagation is typically described by the generalized Non-Linear Schrödinger Equation (NLSE) coupled with the Drude model for plasma [3]. In this paper we consider a multi-threaded parallel numerical solution for a specific model which describes femtosecond laser pulse propagation in transparent media [4,5]; our approach, however, can be extended to similar models. The numerical code is implemented on an NVIDIA Graphics Processing Unit (GPU), which provides an efficient hardware platform for multi-threaded computing. We compare the performance of the parallel code described below, implemented for the GPU using the CUDA programming interface [3], with a serial CPU version used in our previous papers [4,5]. © 2011 IEEE.
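For illustration only, the sketch below shows a symmetric split-step Fourier propagation of a simplified 1-D NLSE (dispersion plus Kerr nonlinearity, without the Drude plasma coupling used in the paper), written with CuPy on the assumption that a CUDA-capable GPU is available. Parameter names, values and sign conventions are illustrative rather than taken from the cited model.

```python
import cupy as cp   # assumes a CUDA-capable GPU with CuPy installed

def split_step_nlse(field, n_steps, dz, beta2, gamma):
    """Symmetric split-step Fourier propagation of a simplified 1-D envelope:
    half a dispersion step in the spectral domain, a full Kerr nonlinear step
    in the time domain, then the second half of the dispersion step."""
    n = field.size
    omega = 2.0 * cp.pi * cp.fft.fftfreq(n, d=1.0)           # angular frequency grid
    half_dispersion = cp.exp(0.25j * beta2 * omega**2 * dz)  # half linear step
    for _ in range(n_steps):
        field = cp.fft.ifft(half_dispersion * cp.fft.fft(field))
        field = field * cp.exp(1j * gamma * cp.abs(field) ** 2 * dz)
        field = cp.fft.ifft(half_dispersion * cp.fft.fft(field))
    return field

# Example: propagate a Gaussian pulse envelope on the GPU
t = cp.linspace(-10.0, 10.0, 1 << 14)
pulse = cp.exp(-t ** 2).astype(cp.complex128)
out = split_step_nlse(pulse, n_steps=200, dz=1e-3, beta2=-1.0, gamma=1.0)
print(float(cp.abs(out).max()))
```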