961 results for General-purpose computing on graphics processing units (GPGPU)


Abstract:

Femtosecond laser microfabrication has emerged over the last decade as a flexible 3D technology in photonics. Numerical simulations provide important insight into spatial and temporal beam and pulse shaping during the course of extremely intricate nonlinear propagation (see e.g. [1,2]). The electromagnetics of such propagation is typically described by the generalized Non-Linear Schrödinger Equation (NLSE) coupled with a Drude model for the plasma [3]. In this paper we consider a multi-threaded parallel numerical solution for a specific model which describes femtosecond laser pulse propagation in transparent media [4, 5]; however, our approach can be extended to similar models. The numerical code is implemented on an NVIDIA Graphics Processing Unit (GPU), which provides an efficient hardware platform for multi-threaded computing. We compare the performance of the parallel code described below, implemented for the GPU using the CUDA programming interface [3], with the serial CPU version used in our previous papers [4,5]. © 2011 IEEE.
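
To make the kind of computation being parallelized concrete, the sketch below shows a single symmetric split-step Fourier step for a simplified (1+1)D NLSE with Kerr nonlinearity. It is a minimal serial NumPy analogue, not the authors' CUDA implementation: the field variable `E`, the coefficients `beta2` and `gamma`, the grid sizes, and the sign conventions are illustrative assumptions, and the Drude plasma terms are omitted.

```python
import numpy as np

def split_step_nlse(E, dz, beta2, gamma, omega):
    """One symmetric split-step Fourier step for a simplified NLSE:
    a half step of dispersion (quadratic spectral phase), a full Kerr
    nonlinear phase step in the time domain, then a second half step
    of dispersion."""
    half_disp = np.exp(0.5j * beta2 * omega**2 * (dz / 2))
    E = np.fft.ifft(half_disp * np.fft.fft(E))      # dispersion, dz/2
    E = E * np.exp(1j * gamma * np.abs(E)**2 * dz)  # Kerr nonlinearity, dz
    E = np.fft.ifft(half_disp * np.fft.fft(E))      # dispersion, dz/2
    return E

# Illustrative usage: propagate a Gaussian pulse on a 4096-point time grid.
N, T = 4096, 40.0
t = np.linspace(-T / 2, T / 2, N, endpoint=False)
omega = 2 * np.pi * np.fft.fftfreq(N, d=T / N)
E = np.exp(-t**2).astype(complex)
for _ in range(100):
    E = split_step_nlse(E, dz=1e-3, beta2=-1.0, gamma=1.0, omega=omega)
```

On a GPU, the FFTs and the pointwise phase multiplications above are the natural places to exploit thousands of threads; the serial loop over propagation steps remains sequential.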

Abstract:

The purpose of this investigation was to develop and implement a general-purpose VLSI (Very Large Scale Integration) Test Module based on an FPGA (Field Programmable Gate Array) system to verify the mechanical behavior and performance of MEM sensors, with associated corrective capabilities, and to make use of the evolving System-C, a new open-source HDL (Hardware Description Language), for the design of the FPGA functional units. System-C is becoming widely accepted as a platform for modeling, simulating and implementing systems consisting of both hardware and software components. In this investigation, a Dual-Axis Accelerometer (ADXL202E) and a Temperature Sensor (TMP03) were used for the test module verification. Results of the test module measurements were analyzed for repeatability and reliability, and then compared to the sensor datasheets. Ideas for further study were identified based on the results analysis. ASIC (Application Specific Integrated Circuit) design concepts were also pursued.

Abstract:

This study investigated the effects of self-monitoring on the homework completion and accuracy rates of four fourth-grade students with disabilities in an inclusive general education classroom. A multiple-baseline-across-subjects design was used to examine four dependent variables: completion of spelling homework, accuracy of spelling homework, completion of math homework, and accuracy of math homework. Data were collected and analyzed during baseline, three phases of intervention, and maintenance. Throughout baseline and all phases, participants followed typical classroom procedures: they brought their homework to school each day and gave it to the general education teacher. During Phase I of the intervention, participants self-monitored with a daily sheet at home and on the computer at school in the morning using KidTools (Fitzgerald & Koury, 2003), a student-friendly self-monitoring program. They also took part in brief daily conferences with the investigator, their special education teacher, to review their self-monitoring sheets. Phase II followed the same steps except that conferencing was reduced to two days a week, selected at random by the researcher, and in Phase III conferencing took place on one random day a week. Maintenance data were collected over a two-to-three-week period after the end of the intervention. Results demonstrated that self-monitoring substantially improved the spelling and math homework completion and accuracy rates of students with disabilities in an inclusive general education classroom. On average, completion and accuracy rates exceeded baseline by the largest margin in Phase III. Self-monitoring led to higher percentages of completion and accuracy during each phase of the intervention compared to baseline, and group percentages also rose slightly during maintenance. These results therefore suggest that self-monitoring leads to short-term maintenance of spelling and math homework completion and accuracy. This study adds to the existing literature by investigating the effects of self-monitoring of homework for students with disabilities included in general education classrooms. Future research should consider selecting participants with other demographic characteristics, using peers instead of the teacher for conferencing, and applying self-monitoring to other academic subjects (e.g., science, history). Future research could also investigate the effects of each of the two self-monitoring components used alone, with or without conferencing.

Abstract:

Model Driven Engineering uses the principle that code can be generated automatically from software models, potentially saving development time and cost. With this methodology, a system's structure and behaviour can be expressed in more abstract, high-level terms, without some of the accidental complexity that the use of a general-purpose language can bring. The models are the actual implementation of the system, unlike in traditional software development, where models are often used only for documentation. However, once the code has been generated from the model, testing and debugging tend to happen at the code level, and the model is not updated. We believe that monitoring at the model level could facilitate quality assurance activities, since errors are detected in an early phase of development. In this thesis, we create a Monitoring Configuration for PapyrusRT, an open-source model driven engineering tool in Eclipse. We support the run-time monitoring of UML-RT elements with a tracing tool called LTTng. We annotate the model with monitoring information, which the code generator uses to add tracepoint statements for the corresponding elements. We also provide the option of a timing specification, used to discover latency errors at the model level. We validate the results by creating and tracing real-time models in PapyrusRT.
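
As a rough illustration of the latency-checking idea described above (not PapyrusRT's code generator and not LTTng's actual trace format), the Python sketch below assumes a list of already-collected trace events with nanosecond timestamps and checks a hypothetical timing specification between a send event and a receive event; all names and numbers are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class TraceEvent:
    name: str          # e.g. the traced model element / tracepoint name
    timestamp_ns: int  # event timestamp in nanoseconds

def check_latency(events, start_name, end_name, max_latency_ns):
    """Return every start->end pair whose latency exceeds the specification."""
    violations, pending_start = [], None
    for ev in events:
        if ev.name == start_name:
            pending_start = ev
        elif ev.name == end_name and pending_start is not None:
            latency = ev.timestamp_ns - pending_start.timestamp_ns
            if latency > max_latency_ns:
                violations.append((pending_start, ev, latency))
            pending_start = None
    return violations

# Illustrative usage with made-up events and a 1 ms latency budget.
trace = [TraceEvent("pinger_send", 1_000_000),
         TraceEvent("ponger_recv", 2_500_000)]
for start, end, lat in check_latency(trace, "pinger_send", "ponger_recv", 1_000_000):
    print(f"latency violation: {lat} ns between {start.name} and {end.name}")
```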

Abstract:

This paper is based on the novel use of a very high-fidelity decimation filter chain for Electrocardiogram (ECG) signal acquisition and data conversion. The multiplier-free, multi-stage structure of the proposed filters lowers power dissipation while minimizing circuit area, both crucial design constraints for wireless, noninvasive, wearable health-monitoring products, given the scarce operational resources available to their electronic implementation. The presented filter has a decimation ratio of 128 and works in tandem with a 1-bit 3rd-order Sigma-Delta (ΣΔ) modulator; it achieves 0.04 dB passband ripple and -74 dB stopband attenuation. The work reported here investigates the non-linear phase effects of the proposed decimation filters on the ECG signal by carrying out a comparative study after phase correction. It concludes that enhanced phase linearity is not crucial for ECG acquisition and data conversion applications, since the distortion of the acquired signal due to phase non-linearity is insignificant for both the original and the phase-compensated filters. To the best of the authors' knowledge, and as stated in the state of the art, freedom from signal distortion is essential, since such distortion might lead to misdiagnosis. This article demonstrates that, with their minimal power consumption and minimal signal distortion, the proposed decimation filters can effectively be employed in biosignal data processing units.
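
The paper's actual filter chain is not reproduced here; as a generic illustration of what a multiplier-free, multi-stage decimator looks like, the sketch below implements a plain N-stage CIC decimator in Python, with the decimation ratio R = 128 taken from the abstract and the stage count N = 3 assumed for the example.

```python
import numpy as np

def cic_decimate(bitstream, R=128, N=3):
    """Minimal N-stage CIC decimator: N integrators at the input rate,
    decimation by R, then N comb (first-difference) stages at the output
    rate. Only integer additions, subtractions and delays are used
    (no multipliers). R and N are illustrative, not the paper's design."""
    x = np.asarray(bitstream, dtype=np.int64)
    for _ in range(N):           # integrator stages (running sums)
        x = np.cumsum(x)
    y = x[::R]                   # decimate by R
    for _ in range(N):           # comb stages (first differences)
        y = np.diff(y, prepend=0)
    return y

# Illustrative usage on a random +/-1 "1-bit" stream.
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=128 * 100) * 2 - 1
out = cic_decimate(bits)
```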

Abstract:

Advertising investment and audience figures indicate that television continues to lead as a mass advertising medium. However, its effectiveness is questioned because of problems such as zapping, saturation and audience fragmentation, which has favoured the development of non-conventional advertising formats. This study provides empirical evidence for that theoretical development. The investigation analyzes the recall generated by four non-conventional advertising formats in a real environment: short programme (branded content), television sponsorship, and internal and external telepromotion, versus the more conventional spot. The methodology integrated secondary data with primary data from computer-assisted telephone interviews (CATI) performed ad hoc on a sample of 2,000 individuals aged 16 to 65, representative of the total television audience. Our findings show that the non-conventional advertising formats are more effective at a cognitive level, generating higher levels of both unaided and aided recall than the spot in all of the formats analyzed.

Abstract:

The substantive legislation on which Agricultural Processing Companies are based has some notable gaps with regard to the pertinent accounting system. There are grey areas concerning compulsory accounting records and their legalization, together with the process for drawing up, checking, approving and depositing the annual accounts. Consequently, in this paper we will look first at the corporate and accounting records of Agricultural Processing Companies, putting forward proposals in the wake of recent legislation on the legalization of generally applied corporate and accounting documents. A critical analysis will also be made of the entire process of drafting, auditing, approving and depositing the annual accounts and other documents that Agricultural Processing Companies must send each year to their respective regional registries. Legal and mercantile registries will be differentiated from administrative ones and, in this last sense, changes will be suggested with regard to the place and purpose of the deposit of such documents. Thirty-four years on, the substantive legislation governing the economic and accounting matters of these companies (SATs) is out of step with current law, so a review is necessary; recent regional regulations have not been a real breakthrough in this regard. We assert that there is a gap between the substantive rules of the SAT and the general accounting rules on financial statements, which is unsustainable and requires prompt legislative action to close.

Abstract:

There has been increasing interest in the development of new methods using Pareto optimality to deal with multi-objective criteria (for example, accuracy and time complexity). Once one has developed an approach to a problem of interest, the question is then how to compare it with the state of the art. In machine learning, algorithms are typically evaluated by comparing their performance on different data sets by means of statistical tests. The standard tests used for this purpose can consider neither multiple performance measures jointly nor multiple competitors at once. The aim of this paper is to resolve these issues by developing statistical procedures that can account for multiple competing measures at the same time and compare multiple algorithms altogether. In particular, we develop two tests: a frequentist procedure based on the generalized likelihood-ratio test and a Bayesian procedure based on a multinomial-Dirichlet conjugate model. We further extend them by discovering conditional independences among measures in order to reduce the number of parameters of such models, as the number of studied cases is usually small in such comparisons. Data from a comparison among general-purpose classifiers are used to show a practical application of our tests.
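
As a hedged sketch of the Bayesian side only (not the authors' exact procedure, and with made-up counts), the snippet below applies a multinomial-Dirichlet conjugate model to the joint outcomes of two algorithms over two measures and estimates a posterior probability by sampling.

```python
import numpy as np

# Each dataset yields one of four joint outcomes when algorithm A is compared
# with algorithm B on two measures (say, accuracy and run time).
# Hypothetical counts over 20 datasets:
counts = np.array([9,   # A better on both measures
                   5,   # A better on accuracy only
                   2,   # A better on time only
                   4])  # B better on both measures

prior = np.ones(4)             # symmetric Dirichlet prior
posterior = prior + counts     # conjugacy: the posterior is Dirichlet too

rng = np.random.default_rng(0)
theta = rng.dirichlet(posterior, size=100_000)  # posterior samples

# Posterior probability that "A better on both" is the most likely outcome.
p_A_dominates = np.mean(theta.argmax(axis=1) == 0)
print(f"P(A dominates on both measures | data) ~= {p_A_dominates:.3f}")
```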

Abstract:

General-purpose parallel processing for solving day-to-day industrial problems has been slow to develop, partly because of the lack of suitable hardware from well-established, mainstream computer manufacturers and of suitably parallelized application software. The parallelized CFD (computational fluid dynamics) flow solution code described here is known as ESAUNA. This code is part of SAUNA, a large CFD suite aimed at computing the flow around very complex aircraft configurations, including complete aircraft. A novel feature of the SAUNA suite is that it is designed to use either block-structured hexahedral grids, unstructured tetrahedral grids, or a hybrid combination of both grid types. ESAUNA is designed to solve the Euler equations or the Navier-Stokes equations, the latter in conjunction with various turbulence models. Two fundamental parallelization concepts are used, namely grid partitioning and encapsulation of communications. Grid partitioning is applied to both block-structured grid modules and unstructured grid modules. ESAUNA can also be coupled with other simulation codes for multidisciplinary computations, such as flow simulations around an aircraft coupled with flutter prediction for transient flight simulations.
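
The two parallelization concepts named above can be illustrated with a deliberately tiny, serial Python stand-in: the "grid" is split into partitions with ghost cells, and all inter-partition data movement is confined to a single halo-swap routine. In the real code the exchange would be message passing and the solver would be the Euler or Navier-Stokes equations; here a 1D diffusion update stands in for the flow solver, and every name and size is invented for illustration.

```python
import numpy as np

def swap_halos(parts):
    """Encapsulated 'communication': copy neighbouring boundary values into
    each partition's ghost cells (a serial stand-in for message passing)."""
    for i, p in enumerate(parts):
        p[0] = parts[i - 1][-2] if i > 0 else p[1]                 # left ghost
        p[-1] = parts[i + 1][1] if i < len(parts) - 1 else p[-2]   # right ghost

# Grid partitioning: split a global 1D grid among 4 "processes",
# each partition carrying one ghost cell on either side.
nglobal, nparts = 64, 4
u = np.zeros(nglobal)
u[nglobal // 2] = 1.0
parts = [np.concatenate(([0.0], c, [0.0])) for c in np.array_split(u, nparts)]

for _ in range(50):
    swap_halos(parts)                                   # communicate
    for p in parts:                                     # local compute
        p[1:-1] += 0.25 * (p[:-2] - 2 * p[1:-1] + p[2:])
```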

Abstract:

An extensive literature review failed to uncover an adequate operational definition of dyslexia applicable to education. The predominant fields of research that have produced most of the studies on dyslexia are neurology, neurolinguistics and genetics; their perspectives were shown to be more pertinent to medical experts than to teachers. The categorization of surface and deep dyslexia was shown to be the best description of dyslexia in an educational context. The purpose of the present thesis was to develop a theoretical conceptual framework describing a link between dyslexia, a text-processing model and problem solving, in which the reading difficulties of dyslexic children are treated as a difficulty in problem solving during information processing. This conceptual framework was validated by three experts, each specializing in one of the relevant fields (cognitive psychology, dyslexia or teaching). The concept of problem solving was based on information-processing theories in cognitive psychology. The framework applies specifically to the reading difficulties manifested by dyslexic children.

Abstract:

Processors with large numbers of cores are becoming commonplace. In order to utilise the available resources in such systems, the programming paradigm has to move towards increased parallelism. However, increased parallelism does not necessarily lead to better performance. Parallel programming models have to provide not only flexible ways of defining parallel tasks, but also efficient methods of managing the created tasks. Moreover, in a general-purpose system, the applications residing in the system compete for the shared resources. Thread and task scheduling in such a multiprogrammed, multithreaded environment is a significant challenge. In this thesis, we introduce a new task-based parallel reduction model, called the Glasgow Parallel Reduction Machine (GPRM). Our main objective is to provide high performance while maintaining ease of programming. GPRM supports native parallelism; it provides a modular way of expressing parallel tasks and the communication patterns between them. Compiling a GPRM program results in an Intermediate Representation (IR) containing useful information about the tasks, their dependencies, and the initial mapping information. This compile-time information helps reduce the overhead of runtime task scheduling and is key to high performance. Generally speaking, the granularity and the number of tasks are major factors in achieving high performance. These factors are even more important in the case of GPRM, as it is highly dependent on tasks rather than threads. We use three basic benchmarks to provide a detailed comparison of GPRM with Intel OpenMP, Cilk Plus, and Threading Building Blocks (TBB) on the Intel Xeon Phi, and with GNU OpenMP on the Tilera TILEPro64. GPRM shows superior performance in almost all cases, simply by controlling the number of tasks. GPRM also provides a low-overhead mechanism, called “Global Sharing”, which improves performance in multiprogramming situations. We use OpenMP, the most popular model for shared-memory parallel programming, as the main GPRM competitor for solving three well-known problems on both platforms: LU factorisation of sparse matrices, image convolution, and linked-list processing. We focus on proposing solutions that best fit GPRM’s model of execution. GPRM outperforms OpenMP in all cases on the TILEPro64. On the Xeon Phi, our solution for the LU factorisation results in a notable performance improvement for sparse matrices with large numbers of small blocks. We investigate the overhead of GPRM’s task creation and distribution for very short computations using the image convolution benchmark, and show that this overhead can be mitigated by combining smaller tasks into larger ones. As a result, GPRM can outperform OpenMP for convolving large 2D matrices on the Xeon Phi. Finally, we demonstrate that our parallel worksharing construct provides an efficient solution for linked-list processing and performs better than the OpenMP implementations on the Xeon Phi. The results are very promising, as they verify that our parallel programming framework for manycore processors is flexible and scalable, and can provide high performance without sacrificing productivity.
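
GPRM itself is not shown here; as a language-agnostic sketch of the granularity point made above (combining many small tasks into fewer, larger ones to amortise scheduling overhead), the following Python example chunks a toy row-wise convolution into coarse tasks. All sizes, names and the thread-pool framework are invented for illustration and are not part of GPRM or of the benchmarks in the thesis.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def convolve_rows(image, kernel, rows):
    """Convolve a 1D kernel along each row in a block of image rows."""
    out = np.empty((len(rows), image.shape[1] - kernel.size + 1))
    for i, r in enumerate(rows):
        out[i] = np.convolve(image[r], kernel, mode="valid")
    return out

image = np.random.rand(1024, 1024)
kernel = np.array([0.25, 0.5, 0.25])

with ThreadPoolExecutor(max_workers=4) as pool:
    # Coarse-grained: 16 tasks of 64 rows each instead of 1024 one-row tasks,
    # so task-creation and scheduling overhead is paid far fewer times.
    chunks = [range(r, r + 64) for r in range(0, 1024, 64)]
    blocks = list(pool.map(lambda rows: convolve_rows(image, kernel, rows), chunks))

result = np.vstack(blocks)
```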

Abstract:

In Mediterranean countries such as Portugal, traditional dry-fermented sausages are highly appreciated. They are often still manufactured in small processing units, according to traditional procedures. The aims of the present study were to evaluate the effect of different starter cultures, and their optimal concentrations, in reducing the microbial load of the end products, with the purpose of improving the sausages’ safety without impairing sensory acceptability. pH, aw, colour, texture and microbiological profile were assessed, and a sensory panel evaluated the products. Based on the first results, S. xylosus and L. sakei were chosen to be inoculated together with a yeast strain. In the mixed starter culture experiment, a food safety issue arose, probably related to the higher aw value (0.91): the presence of Salmonella spp. detected in a few end-product sausages did not allow a full sensory evaluation in that experiment. However, in the two preliminary experiments, the use of starter cultures did not lower the panellists’ overall appreciation of the products or their acceptability.

Abstract:

This thesis presents an interoperable architecture designed for the acquisition, manipulation, processing and analysis of geographic information. The 3D application, implemented as part of the architecture, besides allowing the visualization and manipulation of spatial data within a 3D environment, offers methods for discovering, accessing and using geo-processes available through Web Services. Furthermore, user interaction follows an approach that breaks the typical complexity of most Geographic Information Systems. This simplicity is largely achieved through a visual programming approach that allows operators to take advantage of location and to use processes through abstract representations: processing units are represented on the terrain through 3D components, which can be directly manipulated and linked to create complex process chains. New processes can also be created visually and deployed online.

Abstract:

We explore the recently developed snapshot-based dynamic mode decomposition (DMD) technique, a matrix-free Arnoldi-type method, to predict 3D linear global flow instabilities. We apply the DMD technique to flows confined in an L-shaped cavity and compare the resulting modes to their counterparts obtained from classic, matrix-forming linear instability analysis (i.e. the BiGlobal approach) and from direct numerical simulations. Results show that the DMD technique, which uses snapshots generated by a 3D non-linear incompressible discontinuous Galerkin Navier-Stokes solver, provides results very similar to those of classical linear instability analysis techniques. In addition, we compare DMD results obtained from non-linear and linearised Navier-Stokes solvers, showing that linearisation is not necessary (i.e. no base flow is required) to obtain the linear modes, as long as the analysis is restricted to the exponential growth regime, that is, the flow regime governed by the linearised Navier-Stokes equations. This demonstrates the potential of this type of snapshot-based analysis for general-purpose CFD codes, without the need for modifications. Finally, this work shows that the DMD technique can provide three-dimensional direct and adjoint modes from snapshots produced by the linearised and adjoint linearised Navier-Stokes equations advanced in time. These modes are then used to provide structural sensitivity maps and information on the sensitivity to base flow modification for 3D flows and complex geometries, at an affordable computational cost. The information provided by the sensitivity study is used to modify the L-shaped geometry and control the most unstable 3D mode.
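
For readers unfamiliar with the method, a minimal snapshot-DMD computation (the textbook algorithm, not the solver-coupled implementation used in the work above) fits in a few lines of NumPy; the synthetic data, the truncation rank `r`, and the variable names below are assumptions made only for the example.

```python
import numpy as np

def dmd_modes(X, Y, r=10):
    """Basic snapshot DMD: given snapshot pairs X = [x_0 .. x_{m-1}] and
    Y = [x_1 .. x_m], build the rank-r reduced operator approximating
    Y ~= A X and return its eigenvalues and the corresponding DMD modes."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vh.conj().T[:, :r]
    A_tilde = U.conj().T @ Y @ V @ np.diag(1.0 / s)   # reduced operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ V @ np.diag(1.0 / s) @ W              # exact DMD modes
    return eigvals, modes

# Illustrative usage on synthetic snapshots (two superposed oscillations).
t = np.linspace(0, 10, 201)
x = np.linspace(-5, 5, 400)[:, None]
data = (np.exp(-x**2) * np.cos(2 * t)
        + 0.5 * np.exp(-(x - 1)**2) * np.sin(5 * t) * np.exp(-0.1 * t))
eigvals, modes = dmd_modes(data[:, :-1], data[:, 1:], r=6)
growth_rates = np.log(np.abs(eigvals)) / (t[1] - t[0])  # continuous-time growth rates
```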

Abstract:

It is well known that self-generated stimuli are processed differently from externally generated stimuli. For example, many people have noticed since childhood that it is very difficult to tickle oneself. In the auditory domain, self-generated sounds elicit smaller brain responses than externally generated sounds, a phenomenon known as the sensory attenuation (SA) effect. SA is manifested in reduced amplitudes of evoked responses as measured with M/EEG, decreased firing rates of neurons, and a lower level of perceived loudness for self-generated sounds. The predominant explanation for SA is based on the idea that self-generated stimuli are predicted (e.g., the forward model account); it is the nature of their predictability that is crucial for SA. By contrast, the sensory gating account emphasizes a general suppressive effect of actions on sensory processing, regardless of the predictability of the stimuli. Both accounts have received empirical support, which suggests that both mechanisms may exist. In chapter 2, three behavioural studies concerning the influence of motor activation on auditory perception were presented. Study 1 compared the effects of SA and attention in an auditory detection task and showed that SA was present even when substantial attention was paid to unpredictable stimuli. Study 2 compared the loudness perception of tones generated by others between Chinese and British participants: compared to externally generated tones, a decrease in perceived loudness for other-generated tones was found among the Chinese participants but not among the British participants. In study 3, partial evidence was found that auditory detection performance was impaired even when participants merely read action-related words. In chapter 3, the classic SA effect of M100 suppression was replicated with MEG in study 4. Time-frequency analysis revealed a potential neural information-processing sequence in the auditory cortex: prior to the onset of self-generated tones, there was an increase of oscillatory power in the alpha band, and after stimulus onset, reduced gamma power and alpha/beta phase locking were found. The three temporally segregated oscillatory events correlated with each other and with the SA effect, and may constitute the underlying neural implementation of SA. In chapter 4, a TMS-MEG study was presented investigating the role of the cerebellum in adapting to the delayed presentation of self-generated tones (study 5). It demonstrated that in the sham stimulation condition the brain can adapt to the delay (about 100 ms) within 300 trials of learning, as shown by a significant increase of the SA effect in the suppression of the M100, but not the M200, component. After the cerebellum was stimulated with a suppressive TMS protocol, however, the adaptation in M100 suppression disappeared and the pattern of M200 suppression reversed to M200 enhancement. These data support the idea that the suppressive effect of actions on auditory processing is a consequence of both motor-driven sensory predictions and general sensory gating. The results also demonstrate the importance of neural oscillations in implementing the SA effect and the critical role of the cerebellum in learning sensory predictions under sensory perturbation.