917 results for Parallel or distributed processing
Abstract:
Dynorphins are important neuropeptides with a central role in nociception and pain alleviation. Many mechanisms regulate endogenous dynorphin concentrations, including proteolysis. Proprotein convertases (PCs) are widely expressed in the central nervous system and specifically cleave at the C-terminus of either a pair of basic amino acids or a single basic residue. The proteolytic control of endogenous Big Dynorphin (BDyn) and Dynorphin A (Dyn A) levels has a profound impact on pain perception, yet the role of PCs remains unclear. The objective of this study was to decipher the role of PC1 and PC2 in the proteolytic control of BDyn and Dyn A levels using cellular fractions of spinal cords from wild type (WT), PC1-/+ and PC2-/+ animals and mass spectrometry. Our results clearly demonstrate that both PC1 and PC2 are involved in the proteolytic regulation of BDyn and Dyn A, with PC1 playing the more important role. C-terminal processing of BDyn generates the specific peptide fragments Dynorphin 1-19, Dynorphin 1-13, Dynorphin 1-11 and Dynorphin 1-7, while C-terminal processing of Dyn A generates Dynorphin 1-13, Dynorphin 1-11 and Dynorphin 1-7; all of these peptide fragments are associated with PC1 or PC2 processing. Moreover, proteolysis of BDyn leads to the formation of Dyn A and Leu-Enk, two important opioid peptides. The rate of formation of both is significantly reduced in cellular fractions of spinal cords from mutant mice. As a consequence, even partial inhibition of PC1 or PC2 may impair the endogenous opioid system.
Abstract:
A parallel computing environment to support optimization of large-scale engineering systems is designed and implemented on Windows-based personal computer networks, using the master-worker model and the Parallel Virtual Machine (PVM). The approach decomposes a large engineering system into a number of smaller subsystems that are optimized in parallel on worker nodes, and coordinates the subsystem optimization results on the master node. The environment consists of six functional modules: the master control, the optimization model generator, the optimizer, the data manager, the monitor, and the post-processor. An object-oriented design of these modules is presented. The environment supports all steps from the generation of optimization models to their solution and visualization on networks of computers. User-friendly graphical interfaces make it easy to define the problem and to monitor and steer the optimization process. The environment has been verified with an example of a large space-truss optimization. (C) 2004 Elsevier Ltd. All rights reserved.
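The master-worker decomposition described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the quadratic subsystem objective, the gradient-descent inner loop, and the thread pool standing in for PVM worker nodes are all assumed for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def optimize_subsystem(target, steps=100, lr=0.1):
    # Hypothetical subsystem objective f_i(x) = (x - target)^2,
    # minimized by plain gradient descent on a worker.
    x = 0.0
    for _ in range(steps):
        x -= lr * 2.0 * (x - target)  # gradient of (x - target)^2 is 2(x - target)
    return x

def master(targets):
    # The master decomposes the problem into one subsystem per target,
    # farms the subsystems out to worker threads (standing in for PVM
    # worker nodes), and coordinates by collecting the subsystem optima.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(optimize_subsystem, targets))
```

In a real system the coordination step would iterate, feeding coupled variables back to the subsystems; here it simply gathers independent optima.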
The mismatch negativity (MMN) response to complex tones and spoken words in individuals with aphasia
Abstract:
Background: The mismatch negativity (MMN) is a fronto-centrally distributed event-related potential (ERP) that is elicited by any discriminable auditory change. It is an ideal neurophysiological tool for measuring the auditory processing skills of individuals with aphasia because it can be elicited even in the absence of attention. Previous MMN studies have shown that acoustic processing of tone or pitch deviance is relatively preserved in aphasia, whereas the basic acoustic processing of speech stimuli can be impaired (e.g., auditory discrimination). However, no MMN study has yet investigated the higher levels of auditory processing, such as language-specific phonological and/or lexical processing, in individuals with aphasia. Aims: The aim of the current study was to investigate the MMN response of normal and language-disordered subjects to tone stimuli and speech stimuli that incorporate the basic auditory processing (acoustic, acoustic-phonetic) levels of non-speech and speech sound processing, and also the language-specific phonological and lexical levels of spoken word processing. Furthermore, this study aimed to correlate the aphasic MMN data with language performance on a variety of tasks specifically targeted at the different levels of spoken word processing. Methods & Procedures: Six adults with aphasia (71.7 years +/- 3.0) and six healthy age-, gender-, and education-matched controls (72.2 years +/- 5.4) participated in the study. All subjects were right-handed and native speakers of English. Each subject was presented with complex harmonic tone stimuli, differing in pitch or duration, and consonant-vowel (CV) speech stimuli (non-word /de:/ versus real word /deI/). The probability of the deviant for each tone or speech contrast was 10%. The subjects were also presented with the same stimuli in behavioural discrimination tasks, and were administered a language assessment battery to measure their auditory comprehension skills.
Outcomes & Results: The aphasic subjects demonstrated attenuated MMN responses to complex tone duration deviance and to speech stimuli (words and non-words), and their responses to the frequency, duration, and real word deviant stimuli were found to strongly correlate with performance on the auditory comprehension section of the Western Aphasia Battery (WAB). Furthermore, deficits in attentional lexical decision skills demonstrated by the aphasic subjects correlated with a word-related enhancement demonstrated during the automatic MMN paradigm, providing evidence to support the word advantage effect, thought to reflect the activation of language-specific memory traces in the brain for words. Conclusions: These results indicate that the MMN may be used as a technique for investigating general and more specific auditory comprehension skills of individuals with aphasia, using speech and/or non-speech stimuli, independent of the individual's attention. The combined use of the objective MMN technique and current clinical language assessments may result in improved rehabilitative management of aphasic individuals.
Abstract:
Classic identity negative priming (NP) refers to the finding that when an object is ignored, subsequent naming responses to it are slower than when it has not been previously ignored (Tipper, S.P., 1985. The negative priming effect: inhibitory priming by ignored objects. Q. J. Exp. Psychol. 37A, 571-590). It is unclear whether this phenomenon arises due to the involvement of abstract semantic representations that the ignored object accesses automatically. Contemporary connectionist models propose a key role for the anterior temporal cortex in the representation of abstract semantic knowledge (e.g., McClelland, J.L., Rogers, T.T., 2003. The parallel distributed processing approach to semantic cognition. Nat. Rev. Neurosci. 4, 310-322), suggesting that this region should be involved during performance of the classic identity NP task if it involves semantic access. Using high-field (4 T) event-related functional magnetic resonance imaging, we observed increased BOLD responses in the left anterolateral temporal cortex, including the temporal pole, that were directly related to the magnitude of each individual's NP effect, supporting a semantic locus. Additional signal increases were observed in the supplementary eye fields (SEF) and left inferior parietal lobule (IPL). (c) 2006 Elsevier Inc. All rights reserved.
Abstract:
We present a comparative study of the influence of dispersion-induced phase noise for n-level PSK systems. From the analysis, we conclude that the phase noise influence for classical homodyne/heterodyne PSK systems is entirely determined by the modulation complexity (expressed in terms of the constellation diagram) and the analogue demodulation format. On the other hand, the use of digital signal processing (DSP) in homodyne/intradyne systems introduces a fiber-length dependence originating from the generation of equalization-enhanced phase noise. For future high-capacity systems, high-order constellations must be used in order to lower the symbol rate to practically manageable speeds, and this places severe requirements on the signal and local oscillator (LO) linewidths. Our results for the bit-error-rate (BER) floor caused by the phase noise influence in the case of QPSK, 16PSK and 64PSK systems outline tolerance limitations for the LO performance: 5 MHz linewidth (at the 3-dB level) for 100 Gbit/s QPSK; 1 MHz for 400 Gbit/s QPSK; 0.1 MHz for 400 Gbit/s 16PSK and 1 Tbit/s 64PSK systems. This defines design constraints for the phase noise impact in distributed-feedback (DFB) or distributed-Bragg-reflector (DBR) semiconductor lasers that would allow moving the system capacity from 100 Gbit/s to 400 Gbit/s in 3 years (1 Tbit/s in 5 years). It is imperative at the same time to increase the analogue-to-digital conversion (ADC) speed such that the single-quadrature symbol rate goes from today's 25 GS/s to 100 GS/s (using two samples per symbol). © 2014 by Walter de Gruyter Berlin/Boston.
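The trade-off the abstract describes, higher-order constellations lowering the symbol rate at the cost of tighter linewidth tolerances, can be made concrete with the textbook relation between bit rate, bits per symbol, and per-symbol laser phase drift. This is a hedged sketch, not the paper's analysis: the single combined linewidth and the Lorentzian phase-drift variance formula sigma^2 = 2*pi*dv*T_s are assumptions for illustration.

```python
import math

def symbol_rate(bit_rate_bps, bits_per_symbol):
    # R_s = R_b / log2(M): higher-order constellations lower the symbol rate.
    return bit_rate_bps / bits_per_symbol

def phase_noise_variance(linewidth_hz, symbol_rate_baud):
    # Variance of a Lorentzian laser's phase drift over one symbol
    # period T_s = 1/R_s:  sigma^2 = 2 * pi * dv * T_s  [rad^2],
    # with dv the combined signal + LO linewidth (an assumption here).
    return 2.0 * math.pi * linewidth_hz / symbol_rate_baud

# Illustrative numbers: 100 Gbit/s single-carrier QPSK (2 bits/symbol)
# with the abstract's 5 MHz linewidth tolerance.
rs = symbol_rate(100e9, 2)              # 50 GBaud
sigma2 = phase_noise_variance(5e6, rs)  # phase drift variance per symbol
```

A 64PSK system at the same bit rate would run at one third of the QPSK symbol rate, tripling the per-symbol phase drift, which is why the abstract's tolerable linewidths shrink so sharply with constellation order.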
Abstract:
In this paper we evaluate and compare two representative and popular distributed processing engines for large-scale big data analytics: Spark and the graph-based engine GraphLab. We design a benchmark suite including representative algorithms and datasets to compare the performance of the computing engines in terms of running time, memory and CPU usage, and network and I/O overhead. The benchmark suite is tested on both a local computer cluster and virtual machines on the cloud. By varying the number of computers and memory we examine the scalability of the computing engines with increasing computing resources (such as CPU and memory). We also run cross-evaluation of generic and graph-based analytic algorithms over graph processing and generic platforms to identify the potential performance degradation if only one processing engine is available. It is observed that both computing engines show good scalability with increasing computing resources. While GraphLab largely outperforms Spark for graph algorithms, it has running-time performance close to Spark for non-graph algorithms. Additionally, the running time with Spark for graph algorithms over cloud virtual machines is observed to increase by almost 100% compared to local computer clusters.
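A minimal timing harness in the spirit of such a benchmark suite might look like the following sketch. The function name and the best-of-N policy are illustrative choices of this example, not details of the paper's suite.

```python
import time

def benchmark(fn, *args, repeats=3):
    # Best-of-N wall-clock timing: run the workload `repeats` times and
    # keep the minimum, which damps scheduling noise on a shared cluster
    # or cloud virtual machine.
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best
```

A full suite would additionally sample memory, CPU, network, and I/O counters around the timed region; only the running-time axis is shown here.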
Abstract:
3D geographic information systems (GIS) are data- and computation-intensive in nature. Internet users are usually equipped with low-end personal computers and network connections of limited bandwidth. Data reduction and performance optimization techniques are therefore of critical importance in quality-of-service (QoS) management for online 3D GIS. In this research, QoS management issues regarding distributed 3D GIS presentation were studied to develop 3D TerraFly, an interactive 3D GIS that supports high-quality online terrain visualization and navigation. To tackle the QoS management challenges, a multi-resolution rendering model, adaptive level-of-detail (LOD) control, and mesh simplification algorithms were proposed to effectively reduce the terrain model complexity. The rendering model is adaptively decomposed into sub-regions of up to three detail levels according to viewing distance and other dynamic quality measurements. The mesh simplification algorithm was designed as a hybrid algorithm that combines edge straightening and quad-tree compression to reduce the mesh complexity by removing geometrically redundant vertices. The main advantage of this mesh simplification algorithm is that grid meshes can be processed directly in parallel without triangulation overhead. Algorithms facilitating remote access and distributed processing of volumetric GIS data, such as data replication, directory service, request scheduling, predictive data retrieval and caching, were also proposed. A prototype of the proposed 3D TerraFly implemented in this research demonstrates the effectiveness of our proposed QoS management framework in handling interactive online 3D GIS. The system implementation details and future directions of this research are also addressed in this thesis.
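The edge-straightening idea, removing vertices that are geometrically redundant along one grid row, can be sketched as follows. The greedy collinearity test, the `straighten` name, and the tolerance are assumptions of this illustration, not the thesis's algorithm.

```python
def straighten(heights, tol=1e-6):
    # Given one row of terrain height samples, keep only the indices of
    # vertices that are NOT (within tol) on the straight line between
    # the last kept sample and the next sample -- a greedy stand-in for
    # an edge-straightening pass that drops geometrically redundant
    # vertices. Endpoints are always kept.
    if len(heights) <= 2:
        return list(range(len(heights)))
    kept = [0]
    for i in range(1, len(heights) - 1):
        a, b = kept[-1], i + 1
        t = (i - a) / (b - a)  # position of i between a and b
        interp = heights[a] + t * (heights[b] - heights[a])
        if abs(heights[i] - interp) > tol:
            kept.append(i)  # vertex carries real shape: keep it
    kept.append(len(heights) - 1)
    return kept
```

Because each row (and, symmetrically, each column) is processed independently, such a pass parallelizes over the grid without any triangulation step, which is the advantage the abstract highlights.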
Abstract:
Recently honeycomb meshes have been considered as alternative candidates for interconnection networks in parallel and distributed computer systems. This paper presents a solution to one of the open problems about honeycomb meshes—the so-called three disjoint path problem. The problem requires minimizing the length of the longest of any three disjoint paths between 3-degree nodes. This solution provides information on the re-routing of traffic along the network in the presence of faults.
Abstract:
Graphics Processing Units (GPUs) are becoming popular accelerators in modern High-Performance Computing (HPC) clusters. Installing GPUs on each node of the cluster is not efficient, resulting in high costs and power consumption as well as underutilisation of the accelerator. The research reported in this paper is motivated towards the use of a few physical GPUs by providing cluster nodes access to remote GPUs on demand for a financial risk application. We hypothesise that sharing GPUs between several nodes, referred to as multi-tenancy, reduces the execution time and energy consumed by an application. Two data transfer modes between the CPU and the GPUs, namely concurrent and sequential, are explored. The key result from the experiments is that multi-tenancy with a few physical GPUs using sequential data transfers lowers the execution time and the energy consumed, thereby improving the overall performance of the application.
Abstract:
Part 11: Reference and Conceptual Models
Abstract:
With the CERN LHC program underway, there has been an acceleration of data growth in the High Energy Physics (HEP) field, and the usage of Machine Learning (ML) in HEP will be critical during the HL-LHC program, when the data produced will reach the exascale. ML techniques have been used successfully in many areas of HEP; nevertheless, the development of a ML project and its implementation for production use is a highly time-consuming task and requires specific skills. Complicating this scenario is the fact that HEP data is stored in the ROOT data format, which is mostly unknown outside of the HEP community. The work presented in this thesis is focused on the development of a ML-as-a-Service (MLaaS) solution for HEP, aiming to provide a cloud service that allows HEP users to run ML pipelines via HTTP calls. These pipelines are executed using the MLaaS4HEP framework, which allows reading data, processing data, and training ML models directly using ROOT files of arbitrary size from local or distributed data sources. Such a solution provides HEP users who are not ML experts with a tool that allows them to apply ML techniques in their analyses in a streamlined manner. Over the years the MLaaS4HEP framework has been developed, validated, and tested, and new features have been added. A first MLaaS solution was developed by automating the deployment of a platform equipped with the MLaaS4HEP framework. Then, a service with APIs was developed, so that a user, after being authenticated and authorized, can submit MLaaS4HEP workflows producing trained ML models ready for the inference phase. A working prototype of this service is currently running on a virtual machine of INFN-Cloud and meets the requirements to be added to the INFN Cloud portfolio of services.
Abstract:
The scientific success of the LHC experiments at CERN highly depends on the availability of computing resources which efficiently store, process, and analyse the amount of data collected every year. This is ensured by the Worldwide LHC Computing Grid infrastructure, which connects computing centres distributed all over the world with high-performance networks. The LHC has an ambitious experimental program for the coming years, which includes large investments in and improvements to both the hardware of the detectors and the software and computing systems, in order to deal with the huge increase in the event rate expected from the High Luminosity LHC (HL-LHC) phase and consequently with the huge amount of data that will be produced. In recent years the role of Artificial Intelligence has become relevant in the High Energy Physics (HEP) world. Machine Learning (ML) and Deep Learning algorithms have been used successfully in many areas of HEP, like online and offline reconstruction programs, detector simulation, object reconstruction, identification, and Monte Carlo generation, and they will surely be crucial in the HL-LHC phase. This thesis aims at contributing to a CMS R&D project regarding an ML-as-a-Service solution for HEP needs (MLaaS4HEP). It consists of a data service able to perform an entire ML pipeline (in terms of reading data, processing data, training ML models, and serving predictions) in a completely model-agnostic fashion, directly using ROOT files of arbitrary size from local or distributed data sources. This framework has been updated with new features in the data preprocessing phase, giving the user more flexibility. Since the MLaaS4HEP framework is experiment-agnostic, the ATLAS Higgs Boson ML challenge was chosen as the physics use case, with the aim of testing MLaaS4HEP and the contributions made in this work.
Abstract:
Scheduling parallel and distributed applications efficiently onto grid environments is a difficult task and a great variety of scheduling heuristics has been developed aiming to address this issue. A successful grid resource allocation depends, among other things, on the quality of the available information about software artifacts and grid resources. In this article, we propose a semantic approach to integrate selection of equivalent resources and selection of equivalent software artifacts to improve the scheduling of resources suitable for a given set of application execution requirements. We also describe a prototype implementation of our approach based on the Integrade grid middleware and experimental results that illustrate its benefits. Copyright (C) 2009 John Wiley & Sons, Ltd.
Abstract:
Idiosyncratic markers are features of genes and genomes that are so unusual that it is unlikely that they evolved more than once in a lineage of organisms. Here we explore further the potential of idiosyncratic markers and changes to typically conserved tRNA sequences for phylogenetic inference. Hard ticks were chosen as the model group because their phylogeny has been studied extensively. Fifty-eight candidate markers from hard ticks (family Ixodidae) and 22 markers from the subfamily Rhipicephalinae sensu lato were mapped onto phylogenies of these groups. Two of the most interesting markers, features of the secondary structure of two different tRNAs, gave strong support to the hypothesis that species of the Prostriata (Ixodes spp.) are monophyletic. Previous analyses of genes and morphology did not strongly support this relationship, instead suggesting that the Prostriata is paraphyletic with respect to the Metastriata (the rest of the hard ticks). Parallel or convergent evolution was not found in the arrangements of mitochondrial genes in ticks, nor were there any reversals to the ancestral arthropod character state. Many of the markers identified were phylogenetically informative, whereas others should be informative with study of additional taxa. Idiosyncratic markers and changes to typically conserved nucleotides in tRNAs that are phylogenetically informative were common in this data set, and thus these types of markers might be found in other organisms.