963 results for GPU acceleration
Abstract:
Near-infrared spectroscopy (NIRS) enables the non-invasive measurement of changes in hemodynamics and oxygenation in tissue. Changes in light coupling due to movement of the subject can cause movement artifacts (MAs) in the recorded signals. Several methods have been developed that facilitate the detection and reduction of MAs in the data. However, due to fixed parameter values (e.g., a global threshold), none of these methods is well suited to long-term (i.e., hours-long) recordings, or they are not time-effective when applied to large datasets. We aimed to overcome these limitations by automation, i.e., data-adaptive thresholding specifically designed for long-term measurements, and by introducing a stable long-term signal reconstruction. Our new technique (“acceleration-based movement artifact reduction algorithm”, AMARA) combines two methods: the “movement artifact reduction algorithm” (MARA; Scholkmann et al., Physiol. Meas. 2010, 31, 649–662) and the “accelerometer-based motion artifact removal” (ABAMAR; Virtanen et al., J. Biomed. Opt. 2011, 16, 087005). We describe AMARA in detail and report on the successful validation of the algorithm using empirical NIRS data measured over the prefrontal cortex in adolescents during sleep. In addition, we compared the performance of AMARA to that of MARA and ABAMAR based on the validation data.
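The data-adaptive thresholding can be illustrated compactly. Below is a minimal NumPy sketch, assuming (as in MARA) a moving standard deviation as the artifact indicator, with the fixed global threshold replaced by a percentile of the indicator's own distribution; the function name, window length, and percentile are illustrative choices for the sketch, not the published AMARA parameters.

    import numpy as np

    def adaptive_artifact_mask(signal, fs, win_s=1.0, pct=95):
        # Moving standard deviation (a MARA-style artifact indicator),
        # computed with cumulative sums so long recordings stay cheap.
        w = max(int(win_s * fs), 2)
        pad = w // 2
        x = np.pad(signal.astype(float), pad, mode="edge")
        c1 = np.cumsum(np.insert(x, 0, 0.0))
        c2 = np.cumsum(np.insert(x * x, 0, 0.0))
        mean = (c1[w:] - c1[:-w]) / w
        var = np.maximum((c2[w:] - c2[:-w]) / w - mean**2, 0.0)
        msd = np.sqrt(var)[:len(signal)]
        # Data-adaptive threshold: a percentile of the indicator itself,
        # instead of a fixed global value.
        return msd > np.percentile(msd, pct)

    # e.g., mask = adaptive_artifact_mask(nirs_channel, fs=10.0)  # hypothetical data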
Abstract:
The Financial Accounting Standards Board (FASB) mandated the expensing of stock options with FAS 123(R). As of March 2006, 749 companies had accelerated the vesting of their employee stock options and thereby avoided a reduction in their reported profits that would otherwise have occurred under the new standard. There are many possible motives for the acceleration strategy, and the focus of this study is to determine whether shareholders viewed these motives as positive or negative. A favorable return following an acceleration announcement would signify that shareholders viewed management's motives as positive; an unfavorable return would signify that shareholders viewed management's motives as negative. The evidence from this study suggests that shareholders reacted favorably, on average, to acceleration announcements. However, these results lack statistical significance and are based on a small sample, and thus should be interpreted with caution.
Abstract:
Obesity is postulated to be one of the major risk factors for pancreatic cancer, and it was recently shown that an elevated body mass index (BMI) correlates strongly with a decrease in patient survival. Despite this evident relationship, the molecular mechanisms involved are unclear. Oncogenic mutation of K-Ras occurs early and is universal in pancreatic cancer. Extensive evidence indicates that oncogenic K-Ras is not fully active on its own and requires a triggering event to raise Ras activity beyond the threshold necessary for a Ras-inflammation feed-forward loop. We hypothesize that high fat intake induces a persistent low-level inflammatory response that triggers increased K-Ras activity, and that Cox-2 is essential for this inflammatory reaction. To test this, LSL-K-Ras mice were crossed with Ela-CreER (acinar-specific) or Pdx-1-Cre (pancreas-specific) mice to “knock in” oncogenic K-Ras. Additionally, these animals were crossed with Cox-2 conditional knockout mice to assess the importance of Cox-2 in the inflammatory loop. The mice were fed isocaloric diets containing either 60% or 10% of energy from fat. We found that a high-fat diet significantly increased K-Ras activity, PanIN formation, and fibrotic stroma compared to the control diet. Genetic deletion of Cox-2 prevented high-fat-diet-induced fibrosis and PanIN formation in oncogenic K-Ras-expressing mice. Additionally, long-term consumption of a high-fat diet increased the progression of PanIN lesions to invasive cancer and decreased the overall survival rate. These findings indicate that a high-fat diet can stimulate the activation of oncogenic K-Ras and initiate a Cox-2-dependent inflammatory feed-forward loop leading to inflammation, fibrosis, and PanINs. This mechanism could explain the relationship between a high-fat diet and elevated risk for pancreatic cancer.
Abstract:
Within the last decade, the Greenland ice sheet (GrIS) and its surroundings have experienced record-high surface temperatures (Mote, 2007, doi:10.1029/2007GL031976; Box et al., 2010), record ice-sheet melt extent (Fettweis et al., 2011, doi:10.5194/tc-5-359-2011), and record-low summer sea-ice extent (Nghiem et al., 2007, doi:10.1029/2007GL031138). Using three independent data sets, we derive, for the first time, consistent ice-mass trends and temporal variations within seven major drainage basins from gravity fields from the Gravity Recovery and Climate Experiment (GRACE; Tapley et al., 2004, doi:10.1029/2004GL019920), surface-ice velocities from Interferometric Synthetic Aperture Radar (InSAR; Rignot and Kanagaratnam, 2006, doi:10.1126/science.1121381) together with output of the regional atmospheric climate model (RACMO2/GR; Ettema et al., 2009, doi:10.1029/2009GL038110), and surface-elevation changes from the Ice, Cloud and land Elevation Satellite (ICESat; Sorensen et al., 2011, doi:10.5194/tc-5-173-2011). We show that changing ice discharge (D), surface melting and subsequent run-off (M/R), and precipitation (P) all contribute, in a complex and regionally variable interplay, to the increasingly negative mass balance of the GrIS observed within the last decade. Interannual variability in P along the northwest and west coasts of the GrIS largely explains the apparent regional mass-loss increase during 2002-2010 and obscures increasing M/R and D since the 1990s. In the winters of 2002/2003 and 2008/2009, accumulation anomalies in the east and southeast temporarily outweighed the losses by M/R and D that prevailed during 2003-2008 and after summer 2010. Overall, for all basins of the GrIS, the decadal variability of anomalies in P, M/R, and D between 1958 and 2010 (w.r.t. 1961-1990) was significantly exceeded by the regional trends observed during the GRACE period (2002-2011).
Abstract:
In this paper, we describe a complete development platform that features different innovative acceleration strategies, not included in any other current platform, that simplify and speed up the definition of the different elements required to design a spoken dialog service. The proposed accelerations are mainly based on using the information from the backend database schema and contents, as well as cumulative information produced throughout the different steps of the design. Thanks to these accelerations, the interaction between the designer and the platform is improved, and in most cases the design is reduced to simple confirmations of the “proposals” that the platform dynamically provides at each step. In addition, the platform provides several other accelerations, such as configurable templates for defining the different tasks in the service or the dialogs to obtain information from or show information to the user, automatic proposals for the best way to request slot contents from the user (i.e., using mixed-initiative or directed forms), an assistant that offers the set of most probable actions required to complete the definition of the different tasks in the application, and another assistant for solving modality-specific details such as confirmations of user answers or how to present the lists of results retrieved from the backend database to the user. Additionally, the platform allows the creation of speech grammars and prompts, database access functions, and the use of mixed-initiative and over-answering dialogs. In the paper we also describe each assistant in the platform in detail, emphasizing the different kinds of methodologies followed to facilitate the design process in each one. Finally, we describe the results obtained in both a subjective and an objective evaluation with different designers, which confirm the viability, usefulness, and functionality of the proposed accelerations. Thanks to the accelerations, design time is reduced by more than 56% and the number of keystrokes by 84%.
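As an illustration of a content-driven proposal, the hypothetical heuristic below chooses between a mixed-initiative and a directed form for a group of slots based on backend statistics. Everything here (the function, the db_stats input, the thresholds) is an assumption made for the sketch, not the platform's actual decision rule.

    def propose_form_type(slots, db_stats, max_open_slots=3, max_values=50):
        # db_stats[slot]: number of distinct values the backend holds for the slot.
        # Hypothetical rule: slots with few possible values are easy to recognize
        # in one open question, so group them into a mixed-initiative form;
        # otherwise request each slot with a directed form.
        open_slots = [s for s in slots if db_stats.get(s, float("inf")) <= max_values]
        if 1 < len(open_slots) <= max_open_slots:
            return {"type": "mixed-initiative", "slots": open_slots}
        return {"type": "directed", "slots": list(slots)}

    # e.g., propose_form_type(["city", "airline"], {"city": 40, "airline": 12})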
Abstract:
This paper outlines the problems found in the parallelization of SPH (Smoothed Particle Hydrodynamics) algorithms using Graphics Processing Units. Results of several parallel GPU implementations are shown in terms of speed-up and scalability compared to sequential CPU codes. The most problematic stage in GPU-SPH algorithms is the one responsible for locating neighboring particles and building the vectors where this information is stored, since these algorithms raise many difficulties for data-level parallelization. Because neighbor location using linked lists does not expose enough data-level parallelism, two new approaches have been proposed to minimize bank conflicts in the writing and subsequent reading of the neighbor lists. The first strategy proposes an efficient CPU-GPU coordination, using GPU algorithms for those stages that allow a straightforward parallelization and sequential CPU algorithms for those instructions that involve some kind of vector reduction. This coordination provides a relatively orderly reading of the neighbor lists in the interactions stage, achieving a speed-up factor of x47 in this stage. However, since the construction of the neighbor lists is quite expensive, an overall speed-up of x41 is achieved. The second strategy seeks to maximize the use of the GPU in the neighbor location process by executing a specific vector sorting algorithm that allows some data-level parallelism. Although this strategy has succeeded in improving the speed-up of the neighbor location stage, the global speed-up of the interactions stage falls due to inefficient reading of the neighbor vectors. Some changes to these strategies are proposed, aimed at maximizing the computational load of the GPU and using the GPU texture units, in order to reach the maximum speed-up for such codes. Different practical applications have been added to the GPU codes described. First, the classical dam-break problem is studied. Second, the wave impact of the sloshing fluid contained in LNG vessel tanks is simulated as a practical example of particle methods.
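The stage the paper identifies as most problematic, neighbor location, is typically built on a cell-linked list: with cells of side equal to the kernel support h, all neighbors of a particle lie in its own or the 26 adjacent cells. The sketch below is a serial NumPy/CPU version under that assumption, showing the data structure only; the paper's contribution is precisely how to lay out and traverse these lists on the GPU without bank conflicts.

    import numpy as np
    from collections import defaultdict
    from itertools import product

    def neighbor_lists(pos, h):
        # Bin particles into cubic cells of side h (the kernel support radius).
        cells = defaultdict(list)
        for i, c in enumerate(map(tuple, np.floor(pos / h).astype(int))):
            cells[c].append(i)
        neigh = [[] for _ in range(len(pos))]
        for c, members in cells.items():
            # Candidate neighbors: particles in the same or the 26 adjacent cells.
            cand = np.array([j for d in product((-1, 0, 1), repeat=3)
                             for j in cells.get((c[0]+d[0], c[1]+d[1], c[2]+d[2]), [])])
            for i in members:
                r = np.linalg.norm(pos[cand] - pos[i], axis=1)
                neigh[i] = cand[(r < h) & (cand != i)].tolist()
        return neigh

    # e.g., neighbor_lists(np.random.rand(1000, 3), 0.1)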
Abstract:
The simulation of interest rate derivatives is a powerful tool for facing current market fluctuations. However, the complexity of the financial models and the way they are processed require exorbitant computation times, which is in clear conflict with the need for processing times as short as possible in order to operate in the financial market. To shorten the computation time of financial derivatives, the use of hardware accelerators becomes a must.
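The kind of computation being accelerated can be made concrete with a small example. Below is a minimal Monte Carlo sketch pricing a zero-coupon bond under the Vasicek short-rate model; the model choice and every parameter value are assumptions for illustration, since the abstract does not name a specific model. The loop over time steps is sequential, but the paths are independent, which is exactly the data parallelism hardware accelerators exploit.

    import numpy as np

    def mc_zcb_price(r0=0.03, kappa=0.5, theta=0.04, sigma=0.01,
                     T=5.0, n_steps=500, n_paths=100_000, seed=0):
        # Euler scheme for dr = kappa*(theta - r) dt + sigma dW,
        # discounting each path by exp(-integral of r dt).
        rng = np.random.default_rng(seed)
        dt = T / n_steps
        r = np.full(n_paths, r0)
        integral = np.zeros(n_paths)
        for _ in range(n_steps):
            integral += r * dt
            r += kappa * (theta - r) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
        return np.exp(-integral).mean()

    # e.g., print(mc_zcb_price())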
Abstract:
An analytical solution of the two body problem perturbed by a constant tangential acceleration is derived with the aid of perturbation theory. The solution, which is valid for circular and elliptic orbits with generic eccentricity, describes the instantaneous time variation of all orbital elements. A comparison with high-accuracy numerical results shows that the analytical method can be effectively applied to multiple-revolution low-thrust orbit transfer around planets and in interplanetary space with negligible error.
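In the circular-orbit limit, the Gauss variational equation for the semi-major axis under a purely tangential acceleration f_t reduces to da/dt = 2 f_t (a^3/mu)^(1/2), which integrates in closed form to a(t) = (a0^(-1/2) - f_t t / mu^(1/2))^(-2). The sketch below checks this closed form against direct numerical integration; the numbers (an Earth orbit, a small constant thrust) are illustrative, and this is a far simpler special case than the generic-eccentricity solution the abstract derives.

    import numpy as np

    mu = 3.986e14   # Earth's gravitational parameter, m^3/s^2 (illustrative body)
    a0 = 7.0e6      # initial semi-major axis, m
    ft = 1e-4       # constant tangential acceleration, m/s^2

    t = np.linspace(0.0, 5 * 86400.0, 200_000)   # five days of thrusting
    dt = t[1] - t[0]

    a_num = np.empty_like(t)
    a_num[0] = a0
    for k in range(len(t) - 1):                  # explicit Euler is enough here
        a_num[k + 1] = a_num[k] + 2 * ft * np.sqrt(a_num[k]**3 / mu) * dt

    a_ana = (a0**-0.5 - ft * t / np.sqrt(mu))**-2
    print(f"relative error after 5 days: {abs(a_num[-1] / a_ana[-1] - 1):.2e}")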
Abstract:
In laser-plasma experiments, we observed that ion acceleration from the Coulomb explosion of the plasma channel bored by the laser is prevented when multiple plasma instabilities, such as filamentation and hosing, and nonlinear coherent structures (vortices/post-solitons) appear in the wake of an ultrashort laser pulse. Tailoring the longitudinal plasma density ramp allows us to control the onset of these instabilities. We deduce that, under our conditions, the laser pulse is depleted into these structures when a plasma at about 10% of the critical density exhibits a gradient on the order of 250 μm (Gaussian fit), thus hindering the acceleration. A promising experimental setup with a long pulse is demonstrated, enabling the excitation of an isolated coherent structure for polarimetric measurements and, in further perspectives, parametric studies of ion plasma acceleration efficiency.
Abstract:
This Final Degree Project (PFC) aims at the analysis, design, and implementation of a multiplayer mobile video game with an educational focus, intended to raise awareness of the Human Development Index (HDI). The resulting system was developed for the Android platform using the AndEngine framework, which uses GPU hardware acceleration to ensure good performance on low-end devices, so that the game can run on a wide range of mobile handsets available on the market. The application is presented as a card game featuring different countries and their human-development data; players must know how the development indices (life expectancy, income, education) of their countries compare with those of the other players' countries. The game system rewards the players with the greatest knowledge of the human-development data of the world's countries, so the best players are those who know these data best. The game supports single-player matches against CPU-controlled players as well as multiplayer matches over WiFi or 3G. Game information and match data are updated through communication with a web server implemented as a complement to this project. The system has been integrated and successfully validated with different mobile devices and with users of different age and usage profiles. The video game can be downloaded from the website created in a project complementary to this one (web publication pending) and is also available on Google Play: https://play.google.com/store/apps/details?id=xnetcom.pro.cartas&hl=es_419
Abstract:
This document describes the development of a project to improve a signal-analysis program. To that end, it makes use of software-optimization techniques and acceleration technologies that exploit the parallelism in the program. It also analyzes the use of two technologies based on different parallel-programming paradigms: one using multiple threads with shared memory, and the other using GPUs as co-processing devices.
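The two paradigms compared here can be sketched side by side. The snippet below is a hedged Python illustration with made-up data: it runs a per-segment spectral kernel over a batch of signals with shared-memory threads (NumPy releases the GIL inside the FFT, so the threads genuinely overlap); the GPU co-processing variant, which would offload the same kernel to the device, is noted in a comment rather than executed.

    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    def power_spectrum(segment):
        # Per-segment kernel: windowed FFT magnitude squared.
        w = np.hanning(len(segment))
        return np.abs(np.fft.rfft(segment * w)) ** 2

    signals = np.random.randn(256, 16384)   # illustrative batch of signal segments

    # Paradigm 1: multiple threads over shared memory.
    with ThreadPoolExecutor(max_workers=8) as pool:
        spectra = list(pool.map(power_spectrum, signals))

    # Paradigm 2 (not run here): the GPU as a co-processor would apply the same
    # kernel to the whole batch at once, e.g. with a GPU array library that
    # mirrors the NumPy API.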