774 results for GPGPU Parallel Computing
Abstract:
Remote sensing spatial, spectral, and temporal resolutions of images, acquired over a reasonably sized image extent, result in imagery that can be processed to represent land cover over large areas with an amount of spatial detail that is very attractive for monitoring, management, and scientific activities. With Moore's Law alive and well, more and more parallelism is introduced into all computing platforms, at all levels of integration and programming, to achieve higher performance and energy efficiency. Since geometric calibration is one of the most time-consuming processes when using remote sensing images, the aim of this work is to accelerate it by taking advantage of new computing architectures and technologies, focusing especially on exploiting computation over shared-memory multi-threading hardware. A parallel implementation of the most time-consuming process in remote sensing geometric correction has been implemented using OpenMP directives. This work compares the performance of the original serial binary against the parallelized implementation on several modern multi-threaded CPU architectures, and discusses how to find the optimum hardware for a cost-effective execution.
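The abstract names OpenMP directives over a C binary but does not show the kernel. The following is a minimal Python sketch of the same shared-memory parallel-for pattern, assuming the hot loop is a per-row polynomial coordinate transform with nearest-neighbour resampling; all names, the transform coefficients, and the chunking scheme are illustrative, not the paper's code.

```python
# Illustrative stand-in for the time-consuming geometric correction loop.
# In the paper this loop runs in C under "#pragma omp parallel for"; here
# row chunks are dispatched to a thread pool to show the same structure.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def correct_rows(rows, src, c, out):
    """Resample a block of output rows through a first-order polynomial map."""
    h, w = src.shape
    cols = np.arange(w)
    for r in rows:
        xs = c[0] + c[1] * cols + c[2] * r   # source column for each output pixel
        ys = c[3] + c[4] * cols + c[5] * r   # source row for each output pixel
        xi = np.clip(np.rint(xs).astype(int), 0, w - 1)
        yi = np.clip(np.rint(ys).astype(int), 0, h - 1)
        out[r] = src[yi, xi]                 # nearest-neighbour resampling

src = np.random.rand(2048, 2048).astype(np.float32)
out = np.empty_like(src)
coeffs = [3.0, 0.999, 0.01, -2.0, 0.008, 1.001]  # hypothetical transform

# Static row-chunking mirrors OpenMP's default "schedule(static)".
n_threads = 4
with ThreadPoolExecutor(max_workers=n_threads) as pool:
    chunks = np.array_split(np.arange(src.shape[0]), n_threads)
    list(pool.map(lambda ch: correct_rows(ch, src, coeffs, out), chunks))
```

In the C/OpenMP original the chunked loop gets true multi-core speedup; the Python threads only mirror the loop structure, since the per-row work here is largely GIL-bound.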
Abstract:
The subdivisions of the human inferior colliculus are currently based on Golgi and Nissl-stained preparations. We have investigated the distribution of calcium-binding protein immunoreactivity in the human inferior colliculus and found complementary or mutually exclusive localisations of parvalbumin versus calbindin D-28k and calretinin staining. The central nucleus of the inferior colliculus but not the surrounding regions contained parvalbumin-positive neuronal somata and fibres. Calbindin-positive neurons and fibres were concentrated in the dorsal aspect of the central nucleus and in structures surrounding it: the dorsal cortex, the lateral lemniscus, the ventrolateral nucleus, and the intercollicular region. In the dorsal cortex, labelling of calbindin and calretinin revealed four distinct layers. Thus, calcium-binding protein reactivity reveals anatomically segregated neuronal populations in the human inferior colliculus. The different calcium-binding protein-defined subdivisions may belong to parallel auditory pathways that were previously demonstrated in non-human primates, and they may constitute a first indication of parallel processing in human subcortical auditory structures.
Abstract:
Colour pattern diversity can be due to random processes or to natural or sexual selection. Consequently, similarities in colour patterns are not always correlated with common ancestry, but may result from convergent evolution under shared selection pressures or drift. Neolamprologus brichardi and Neolamprologus pulcher have been described as two distinct species based on differences in the arrangement of two dark bars on the operculum. Our study uses DNA sequences of the mitochondrial control region to show that relatedness of haplotypes disagrees with species assignment based on head colour pattern. This suggests repeated parallel evolution of particular stripe patterns. The complete lack of shared haplotypes between populations of the same or different phenotypes reflects strong philopatric behaviour, possibly induced by the cooperative breeding mode in which offspring remain in their natal territory and serve as helpers until they disperse to nearby territories or take over a breeding position. Concordant phylogeographic patterns between N. brichardi/N. pulcher populations and other rock-dwelling cichlids suggest that the same colonization routes have been taken by sympatric species and that these routes were affected by lake level fluctuations in the past.
Abstract:
Emerging evidence indicates that angiogenesis and immunosuppression frequently occur simultaneously in response to diverse stimuli. Here, we describe a fundamental biological programme that involves the activation of both angiogenesis and immunosuppressive responses, often through the same cell types or soluble factors. We suggest that the initiation of these responses is part of a physiological and homeostatic tissue repair programme, which can be co-opted in pathological states, notably by tumours. This view can help to devise new cancer therapies and may have implications for aseptic tissue injury, pathogen-mediated tissue destruction, chronic inflammation and even reproduction.
Abstract:
Traffic noise monitoring using FHWA's Demonstration Projects Division Mobile Noise Laboratory at free-field, single-wall, and parallel-barrier sites on I-380 in Evansdale, Iowa is described. Access to I-380 prior to its being open to traffic afforded a controlled pass-by monitoring phase involving different vehicle types. A subsequent second phase used identical measurement methodology to monitor "real world" I-380 traffic noise. Phase I data indicated that increases in noise were significant under the parallel-barrier conditions for light-duty vehicles operating in the far lane. Phase II results showed that the actual I-380 traffic mix largely offset the earlier observed effect, although minor increases in traffic noise under the parallel system were noted. These differences in noise barrier system effectiveness are judged to be insignificant at this particular study location.
Abstract:
The differentiation of CD4(+) or CD8(+) T cells following priming of naive cells is central to the establishment of the immune response against pathogens or tumors. However, our understanding of this complex process and the significance of the multiple subsets of differentiation remain controversial. Gene expression profiling has opened new directions of investigation in immunobiology. Nonetheless, the need for substantial amounts of biological material often limits its range of application. In this study, we have developed procedures to perform microarray analysis on amplified cDNA from low numbers of cells, including primary T lymphocytes, and applied this technology to the study of CD4 and CD8 lineage differentiation. Gene expression profiling was performed on samples of 1000 cells from 10 different subpopulations, defining the major stages of post-thymic CD4(+) or CD8(+) T cell differentiation. Surprisingly, our data revealed that while CD4(+) and CD8(+) T cell gene expression programs diverge at early stages of differentiation, they become increasingly similar as cells reach a late differentiation stage. This suggests that functional heterogeneity between Ag-experienced CD4(+) and CD8(+) T cells is more likely to be located early during post-thymic differentiation, and that late stages of differentiation may represent a common end point in the development of T lymphocytes.
Abstract:
The motivation for this research stems from the abrupt rise and fall of minicomputers, which were initially used both for industrial automation and business applications due to their significantly lower cost than their predecessors, the mainframes. Industrial automation later developed its own vertically integrated hardware and software to address the application needs of uninterrupted operations, real-time control, and resilience to harsh environmental conditions. This led to the creation of an independent industry, namely industrial automation as used in PLC, DCS, SCADA and robot control systems. This industry today employs over 200,000 people in a profitable, slow-clockspeed context, in contrast to the two mainstream computing industries: information technology (IT), focused on business applications, and telecommunications, focused on communications networks and hand-held devices. Already in the 1990s it was foreseen that IT and communications would merge into one information and communications technology (ICT) industry. The fundamental question of the thesis is: could industrial automation leverage a common technology platform with the newly formed ICT industry? Computer systems dominated by complex instruction set computers (CISC) were challenged during the 1990s by higher-performance reduced instruction set computers (RISC). RISC evolved in parallel with the constant advancement of Moore's law. These developments created the high-performance, low-energy-consumption system-on-chip (SoC) architecture. Unlike with CISC processors, the RISC processor architecture business is an industry separate from RISC chip manufacturing. It also has several hardware-independent software platforms, each consisting of an integrated operating system, development environment, user interface and application market, which gives customers more choice through hardware-independent, real-time-capable software applications. An architecture disruption emerged, and the smartphone and tablet markets were formed with new rules and new key players in the ICT industry. Today there are more RISC computer systems running Linux (or other Unix variants) than any other computer system. The astonishing rise of SoC-based technologies and related software platforms in smartphones created, in unit terms, the largest installed base ever seen in the history of computers, and it is now being further extended by tablets. An additional underlying element of this transition is the increasing role of open source technologies in both software and hardware. This has driven the microprocessor-based personal computer industry, with its few dominant closed operating system platforms, into a steep decline. A significant factor in this process has been the separation of processor architecture from processor chip production, together with the merger of operating systems and application development platforms into integrated software platforms with proprietary application markets. Furthermore, pay-by-click marketing has changed the way application development is compensated: freeware, ad-based or licensed - all at a lower price and used by a wider customer base than ever before. Moreover, the concept of a software maintenance contract is very remote in the app world. However, as a slow-clockspeed industry, industrial automation has remained intact during the disruptions based on SoC and related software platforms in the ICT industries.
Industrial automation incumbents continue to supply systems based on vertically integrated stacks of proprietary software and proprietary, mainly microprocessor-based, hardware. They enjoy admirable profitability levels on a very narrow customer base due to strong technology-enabled customer lock-in and customers' high risk leverage, as their production depends on fault-free operation of the industrial automation systems. When will this balance of power be disrupted? The thesis suggests how industrial automation could join the mainstream ICT industry and create an information, communication and automation (ICAT) industry. Lately the Internet of Things (IoT) and weightless networks, a new standard leveraging frequency channels earlier occupied by TV broadcasting, have gradually started to change the rigid world of machine-to-machine (M2M) interaction. It is foreseeable that enough momentum will be created that the industrial automation market will in due course face an architecture disruption empowered by these new trends. This thesis examines the current state of industrial automation subject to the competition among the incumbents, first through research on cost-competitiveness efforts in captive outsourcing of engineering, research and development, and second through research on process re-engineering in the case of complex-system global software support. Third, we investigate the views of the industry actors, namely customers, incumbents and newcomers, on the future direction of industrial automation, and conclude with our assessment of the possible routes industrial automation could take, given the looming rise of the Internet of Things (IoT) and weightless networks. Industrial automation is an industry dominated by a handful of global players, each of them focused on maintaining their own proprietary solutions. The rise of de facto standards such as the IBM PC, Unix, Linux and SoC, leveraged by IBM, Compaq, Dell, HP, ARM, Apple, Google, Samsung and others, has created the new markets of personal computers, smartphones and tablets, and will eventually also impact industrial automation through game-changing commoditization and related control point and business model changes. This trend will inevitably continue, but the transition to a commoditized industrial automation will not happen in the near future.
Abstract:
Activity monitors based on accelerometry are used to predict the speed and energy cost of walking at 0% slope, but not at other inclinations. Parallel measurements of body acceleration and altitude variation were studied to determine whether walking speed prediction could be improved. Fourteen subjects walked twice along a 1.3 km circuit with substantial slope variations (-17% to +17%). The parameters recorded were body acceleration using a uni-axial accelerometer, altitude variation using differential barometry, and walking speed using satellite positioning (DGPS). Linear regressions were calculated between acceleration and walking speed, and between acceleration/altitude and walking speed. These predictive models, calculated using the data from the first circuit run, were used to predict speed during the second circuit. Finally, the predicted speed was compared with the measured one. The result was that acceleration alone failed to predict speed (mean r = 0.4); adding altitude variation improved the prediction (mean r = 0.7). With regard to the altitude/acceleration-speed relationship, substantial inter-individual variation was found. It is concluded that accelerometry, combined with altitude measurement, can assess position variations of humans provided inter-individual variation is taken into account. It is also confirmed that DGPS can be used for outdoor walking speed measurements, opening up new perspectives in the field of biomechanics.
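A minimal sketch of the two regression models being compared, with synthetic stand-ins for the accelerometer, barometric and DGPS signals (the coefficients and noise level below are invented for illustration; the study fitted per-subject models on circuit 1 and validated on circuit 2):

```python
# Speed regressed on acceleration alone, then on acceleration plus
# altitude variation, with Pearson r as the prediction-quality metric.
import numpy as np

rng = np.random.default_rng(0)
n = 500
accel = rng.uniform(0.5, 2.5, n)             # uni-axial body acceleration (a.u.)
dalt = rng.uniform(-0.17, 0.17, n)           # altitude variation (slope proxy)
speed = 1.1 * accel - 2.0 * dalt + rng.normal(0.0, 0.2, n)  # "DGPS" speed

def fit(X, y):
    """Ordinary least squares with an intercept column."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def r_value(y, yhat):
    """Pearson correlation between measured and predicted speed."""
    return np.corrcoef(y, yhat)[0, 1]

# Model 1: acceleration only (the study reports mean r ~ 0.4 on real data).
b1 = fit(accel[:, None], speed)
pred1 = b1[0] + b1[1] * accel

# Model 2: acceleration + altitude variation (mean r improved to ~ 0.7).
b2 = fit(np.column_stack([accel, dalt]), speed)
pred2 = b2[0] + b2[1] * accel + b2[2] * dalt

print(r_value(speed, pred1), r_value(speed, pred2))
```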
Abstract:
The M-Coffee server is a web server that makes it possible to compute multiple sequence alignments (MSAs) by running several MSA methods and combining their output into one single model. This allows users to simultaneously run all their methods of choice without having to arbitrarily choose one of them. The MSA is delivered along with a local estimation of its consistency with the individual MSAs it was derived from. The computation of the consensus multiple alignment is carried out using a special mode of the T-Coffee package [Notredame, Higgins and Heringa (T-Coffee: a novel method for fast and accurate multiple sequence alignment. J. Mol. Biol. 2000; 302: 205-217); Wallace, O'Sullivan, Higgins and Notredame (M-Coffee: combining multiple sequence alignment methods with T-Coffee. Nucleic Acids Res. 2006; 34: 1692-1699)]. Given a set of sequences (DNA or proteins) in FASTA format, M-Coffee delivers a multiple alignment in the most common formats. M-Coffee is a freeware open source package distributed under a GPL license, and it is available either as a standalone package or as a web service from www.tcoffee.org.
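For the standalone package, a scripted run might look like the sketch below; the `-mode mcoffee` invocation and output flags follow the T-Coffee documentation as best recalled, so verify them against `t_coffee -help` on your installation.

```python
# Hypothetical wrapper around the standalone t_coffee binary.
import subprocess

def mcoffee_align(fasta_path, out_path):
    """Run the M-Coffee consensus mode of T-Coffee on a FASTA file."""
    subprocess.run(
        ["t_coffee", fasta_path,
         "-mode", "mcoffee",        # combine several MSA methods into one
         "-output", "fasta_aln",    # one of the common alignment formats
         "-outfile", out_path],
        check=True,
    )

# Usage (assumes t_coffee and the wrapped MSA methods are installed):
# mcoffee_align("sequences.fasta", "sequences.aln.fasta")
```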
Abstract:
We consider the numerical treatment of the optical flow problem by evaluating the performance of the trust region method versus the line search method. To the best of our knowledge, the trust region method is studied here for the first time for variational optical flow computation. Four different optical flow models are used to test the performance of the proposed algorithm, combining linear and nonlinear data terms with quadratic and TV regularization. We show that trust region often performs better than line search, especially in the presence of non-linearity and non-convexity in the model.
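The comparison can be reproduced in miniature with SciPy, which exposes both step-control strategies behind one interface. The sketch below uses the standard nonconvex Rosenbrock function rather than an optical flow energy, so it illustrates only the two strategies, not the paper's models: line search fixes a descent direction and then picks a step length along it, while trust region minimises a local quadratic model inside an adaptive radius.

```python
# Line-search (BFGS) versus trust-region (Newton-CG) step control on a
# standard nonconvex test function.
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der, rosen_hess

x0 = np.array([-1.2, 1.0])

# Line search: BFGS picks a descent direction, then a step length along it.
ls = minimize(rosen, x0, method="BFGS", jac=rosen_der)

# Trust region: the radius grows or shrinks with how well the quadratic
# model predicted the actual decrease, which helps on non-convex terms.
tr = minimize(rosen, x0, method="trust-ncg", jac=rosen_der, hess=rosen_hess)

print("line search :", ls.nit, "iterations, f =", ls.fun)
print("trust region:", tr.nit, "iterations, f =", tr.fun)
```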
Abstract:
BACKGROUND: Ischemic stroke is the leading cause of mortality worldwide and a major contributor to neurological disability and dementia. Terutroban is a specific TP receptor antagonist with antithrombotic, antivasoconstrictive, and antiatherosclerotic properties, which may be of interest for the secondary prevention of ischemic stroke. This article describes the rationale and design of the Prevention of cerebrovascular and cardiovascular Events of ischemic origin with teRutroban in patients with a history oF ischemic strOke or tRansient ischeMic Attack (PERFORM) Study, which aims to demonstrate the superiority of the efficacy of terutroban versus aspirin in secondary prevention of cerebrovascular and cardiovascular events. METHODS AND RESULTS: The PERFORM Study is a multicenter, randomized, double-blind, parallel-group study being carried out in 802 centers in 46 countries. The study population includes patients aged > or =55 years, having suffered an ischemic stroke (< or =3 months) or a transient ischemic attack (< or =8 days). Participants are randomly allocated to terutroban (30 mg/day) or aspirin (100 mg/day). The primary efficacy endpoint is a composite of ischemic stroke (fatal or nonfatal), myocardial infarction (fatal or nonfatal), or other vascular death (excluding hemorrhagic death of any origin). Safety is being evaluated by assessing hemorrhagic events. Follow-up is expected to last for 2-4 years. Assuming a relative risk reduction of 13%, the expected number of primary events is 2,340. To obtain statistical power of 90%, this requires inclusion of at least 18,000 patients in this event-driven trial. The first patient was randomized in February 2006. CONCLUSIONS: The PERFORM Study will explore the benefits and safety of terutroban in secondary cardiovascular prevention after a cerebral ischemic event.
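As a rough cross-check of the event-driven sizing logic, Schoenfeld's approximation for a two-arm log-rank test gives the same order of magnitude as the 2,340 events quoted above; the trial's own figure will reflect additional design assumptions not stated in the abstract.

```python
# Schoenfeld's approximation for the required number of events in a
# two-arm log-rank test with 1:1 allocation. Generic formula, not
# necessarily the PERFORM statisticians' exact method.
from math import log
from scipy.stats import norm

rrr = 0.13                    # assumed relative risk reduction
hr = 1.0 - rrr                # hazard ratio under the alternative
alpha, power = 0.05, 0.90
z_a = norm.ppf(1 - alpha / 2)
z_b = norm.ppf(power)

events = 4 * (z_a + z_b) ** 2 / log(hr) ** 2
print(round(events))          # ~2170, the same order as the 2,340 reported
```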
Abstract:
The goal of this study was to investigate the impact of computing parameters and the location of volumes of interest (VOI) on the calculation of the 3D noise power spectrum (NPS), in order to determine an optimal set of computing parameters and propose a robust method for evaluating the noise properties of imaging systems. Noise stationarity in noise volumes acquired with a water phantom on a 128-MDCT and a 320-MDCT scanner was analyzed in the spatial domain in order to define locally stationary VOIs. The influence of the computing parameters on the 3D NPS measurement (the sampling distances bx,y,z, the VOI lengths Lx,y,z, the number of VOIs NVOI, and the structured noise) was investigated to minimize measurement errors. The effect of the VOI locations on the NPS was also investigated. Results showed that the noise (standard deviation) varies more in the r-direction (along the phantom radius) than in the z-direction. A 25 × 25 × 40 mm(3) VOI associated with DFOV = 200 mm (Lx,y,z = 64, bx,y = 0.391 mm with a 512 × 512 matrix) and a first-order detrending method to reduce structured noise led to an accurate NPS estimation. NPS estimated from off-centered small VOIs had a directional dependency, contrary to NPS obtained from large VOIs located in the center of the volume or from small VOIs located on a concentric circle. This shows that the VOI size and location play a major role in the determination of the NPS when images are not stationary. This study emphasizes the need for consistent measurement methods to assess and compare image quality in CT.
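A minimal sketch of the ensemble NPS estimate the study tunes, assuming the standard definition NPS = (bx·by·bz)/(Lx·Ly·Lz) × ⟨|DFT3(VOI − trend)|²⟩. The synthetic white-noise volume and all parameter values below are placeholders, and only zeroth-order (mean) detrending is shown where the study recommends first-order.

```python
# 3D NPS estimate: detrend each noise VOI, take the 3D DFT, average the
# squared modulus over the VOI ensemble, apply the voxel-size/VOI-length
# normalisation.
import numpy as np

def nps_3d(vois, b):
    """vois: (n_voi, Lz, Ly, Lx) noise VOIs; b: (bz, by, bx) voxel sizes in mm."""
    n_voi, Lz, Ly, Lx = vois.shape
    scale = (b[0] * b[1] * b[2]) / (Lz * Ly * Lx)
    acc = np.zeros((Lz, Ly, Lx))
    for v in vois:
        v = v - v.mean()   # zeroth-order detrending; the study recommends
                           # first-order (fit and subtract a linear ramp)
        acc += np.abs(np.fft.fftn(v)) ** 2
    return scale * acc / n_voi

rng = np.random.default_rng(1)
b = (0.5, 0.391, 0.391)                                # (bz, by, bx), mm
vois = rng.normal(0.0, 10.0, size=(40, 64, 64, 64))    # N_VOI = 40, L = 64
nps = nps_3d(vois, b)

# Sanity check: the NPS integrated over frequency recovers the voxel
# variance (sigma^2 = 100 here), since the frequency bin is 1/(L*b).
print(nps.sum() / (nps.size * np.prod(b)))
```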