974 results for Central Processing


Relevance: 70.00%

Abstract:

The change in acoustic characteristics of personal computers used as console gaming and home entertainment systems, with changes of the Graphics Processing Unit (GPU), is presented. The tests are carried out using identical software and system hardware configurations. The prime hardware components used in the project are the central processing unit, motherboard, hard disc drive, memory, power supply, optical drive, and an additional cooling system. The measurements taken for each GPU tested are analyzed and compared. Fan-speed measurements are obtained using a photo tachometer and reflective tape adhered to one particular fan blade. Loudness, a psychoacoustic metric developed by Zwicker and Fastl, is used to quantify how loud a sound is perceived to be relative to a standard sound. The acoustic experiment reveals that the inherent noise generation increases with the complexity of the cooling solution.

Relevance: 70.00%

Abstract:

Microprocessors based on a single processor (CPU) saw rapid performance growth and falling costs for roughly twenty years. These microprocessors brought computing power on the order of GFLOPS (giga floating-point operations per second) to desktop PCs and hundreds of GFLOPS to server clusters. This rise enabled new program functionality, better user interfaces, and many other benefits. However, this growth slowed abruptly in 2003 because of ever higher energy consumption and heat dissipation problems, which prevented further increases in clock frequency. The physical limits of silicon were getting ever closer. To work around the problem, CPU (Central Processing Unit) manufacturers began designing multicore microprocessors, a choice that had a considerable impact on the developer community, which was accustomed to thinking of software as a series of sequential instructions. Programs that had always enjoyed performance improvements with every new CPU generation thus stopped getting faster: running on a single core, they could not exploit the full power of the CPU. To fully exploit the new CPUs, concurrent programming, previously used only on expensive systems or supercomputers, became an increasingly common practice among developers. At the same time, the video game industry has captured a sizeable share of the market: in 2013 alone, nearly 100 billion dollars will be spent on gaming hardware and software. To make their titles more attractive, game development studios rely on ever more powerful and often poorly optimized graphics engines, making them extremely demanding in terms of performance. For this reason GPU (Graphics Processing Unit) manufacturers, especially in the last decade, have engaged in a genuine performance race that has led to products with staggering computing capabilities. But unlike CPUs, which at the beginning of the 2000s took the multicore path so as to keep supporting sequential programs, GPUs have become manycore, with many hundreds of small cores executing computations in parallel. Can this immense computing capability be used in other application domains? The answer is yes, and the goal of this thesis is precisely to determine, at the current state of the art, how and with what efficiency generic software can make use of the GPU instead of the CPU.

Relevance: 70.00%

Abstract:

BACKGROUND Low vitamin D is implicated in various chronic pain conditions with, however, inconclusive findings. Vitamin D might play an important role in mechanisms being involved in central processing of evoked pain stimuli but less so for spontaneous clinical pain. OBJECTIVE This study aims to examine the relation between low serum levels of 25-hydroxyvitamin D3 (25-OH D) and mechanical pain sensitivity. DESIGN We studied 174 patients (mean age 48 years, 53% women) with chronic pain. A standardized pain provocation test was applied, and pain intensity was rated on a numerical analogue scale (0-10). The widespread pain index and symptom severity score (including fatigue, waking unrefreshed, and cognitive symptoms) following the 2010 American College of Rheumatology preliminary diagnostic criteria for fibromyalgia were also assessed. Serum 25-OH D levels were measured with a chemiluminescent immunoassay. RESULTS Vitamin D deficiency (25-OH D < 50 nmol/L) was present in 71% of chronic pain patients; another 21% had insufficient vitamin D (25-OH D < 75 nmol/L). After adjustment for demographic and clinical variables, there was a mean ± standard error of the mean increase in pain intensity of 0.61 ± 0.25 for each 25 nmol/L decrease in 25-OH D (P = 0.011). Lower 25-OH D levels were also related to greater symptom severity (r = -0.21, P = 0.008) but not to the widespread pain index (P = 0.83) and fibromyalgia (P = 0.51). CONCLUSIONS The findings suggest a role of low vitamin D levels for heightened central sensitivity, particularly augmented pain processing upon mechanical stimulation in chronic pain patients. Vitamin D seems comparably less important for self-reports of spontaneous chronic pain.

Relevance: 70.00%

Abstract:

The rectum has a unique physiological role as a sensory organ and differs in its afferent innervation from other gut organs that do not normally mediate conscious sensation. We compared the central processing of human esophageal, duodenal, and rectal sensation using cortical evoked potentials (CEP) in 10 healthy volunteers (age range 21-34 yr). Esophageal and duodenal CEP had similar morphology in all subjects, whereas rectal CEP had two different but reproducible morphologies. The rectal CEP latency to the first component P1 (69 ms) was shorter than both duodenal (123 ms; P = 0.008) and esophageal CEP latencies (106 ms; P = 0.004). The duodenal CEP amplitude of the P1-N1 component (5.0 µV) was smaller than that of the corresponding esophageal component (5.7 µV; P = 0.04) but similar to that of the corresponding rectal component (6.5 µV; P = 0.25). This suggests that rectal sensation is either mediated by faster-conducting afferent pathways or that there is a difference in the orientation or volume of cortical neurons representing the different gut organs. In conclusion, the physiological and anatomic differences between gut organs are reflected in differences in the characteristics of their afferent pathways and cortical processing.

Relevance: 70.00%

Abstract:

This thesis describes advances in the characterisation, calibration and data processing of optical coherence tomography (OCT) systems. Femtosecond (fs) laser inscription was used for producing OCT-phantoms. Transparent materials are generally inert to infrared radiation, but with fs lasers material modification occurs via non-linear processes when the highly focused light source interacts with the material. This modification is confined to the focal volume and is highly reproducible. In order to select the best inscription parameters, combinations of different inscription parameters were tested, using three fs laser systems with different operating properties, on a variety of materials. This facilitated an understanding of the key characteristics of the produced structures, with the aim of producing viable OCT-phantoms. Finally, OCT-phantoms were successfully designed and fabricated in fused silica. The use of these phantoms to characterise many properties (resolution, distortion, sensitivity decay, scan linearity) of an OCT system was demonstrated. Quantitative methods were developed to support the characterisation of an OCT system collecting images from phantoms and also to improve the quality of the OCT images. Characterisation methods include the measurement of the spatially variant resolution (point spread function (PSF) and modulation transfer function (MTF)), sensitivity and distortion. Processing of OCT data is a computationally intensive task. Standard central processing unit (CPU) based processing might take several minutes to a few hours to process acquired data, so data processing is a significant bottleneck. An alternative is to use expensive hardware-based processing such as field programmable gate arrays (FPGAs). However, recently graphics processing unit (GPU) based data processing methods have been developed to minimize this data processing and rendering time. These processing techniques include standard-processing methods, which comprise a set of algorithms to process the raw (interference) data obtained by the detector and generate A-scans. The work presented here describes accelerated data processing and post-processing techniques for OCT systems. The GPU-based processing developed during the PhD was later implemented into a custom-built Fourier domain optical coherence tomography (FD-OCT) system. This system currently processes and renders data in real time; its processing throughput is currently limited by the camera capture rate. OCT-phantoms have been heavily used for the qualitative characterisation and adjustment/fine-tuning of the operating conditions of the OCT system. Currently, investigations are under way to characterise OCT systems using our phantoms. The work presented in this thesis demonstrates several novel techniques for fabricating OCT-phantoms and for accelerating OCT data processing using GPUs. In the process of developing phantoms and quantitative methods, a thorough understanding and practical knowledge of OCT and fs laser processing systems was developed. This understanding led to several novel pieces of research that are not only relevant to OCT but have broader importance. For example, extensive understanding of the properties of fs-inscribed structures will be useful in other photonic applications such as the fabrication of phase masks, waveguides and microfluidic channels. Acceleration of data processing with GPUs is also useful in other fields.
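
As a rough illustration of the standard FD-OCT processing chain mentioned above (not the thesis' actual implementation), the sketch below assumes raw detector spectra that have already been resampled to be linear in wavenumber; all names and parameters are illustrative. On a GPU the same steps would typically run through a CUDA FFT library rather than NumPy.

```python
import numpy as np

def generate_ascans(raw_spectra, background=None):
    """Minimal FD-OCT standard-processing sketch: background subtraction,
    spectral windowing, and a Fourier transform to obtain A-scan magnitudes.

    raw_spectra: 2D array (num_ascans x num_samples) of detector spectra,
                 assumed already linear in wavenumber (k-space).
    """
    spectra = raw_spectra.astype(np.float64)

    # Remove the DC/background term (e.g. an averaged reference spectrum).
    if background is None:
        background = spectra.mean(axis=0)
    spectra = spectra - background

    # Apodise the spectrum to suppress FFT side lobes.
    spectra *= np.hanning(spectra.shape[1])

    # Fourier transform along the spectral axis gives depth profiles;
    # keep the positive-depth half and return the magnitude in dB.
    depth_profiles = np.fft.ifft(spectra, axis=1)[:, : spectra.shape[1] // 2]
    return 20 * np.log10(np.abs(depth_profiles) + 1e-12)

if __name__ == "__main__":
    # Synthetic example: 512 spectra of 2048 samples, one reflector fringe.
    rng = np.random.default_rng(0)
    raw = 1000 + rng.normal(0, 5, size=(512, 2048))
    raw += 50 * np.cos(2 * np.pi * 0.05 * np.arange(2048))
    print(generate_ascans(raw).shape)   # (512, 1024)
```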

Relevance: 70.00%

Abstract:

* The following text was originally published in the Proceedings of the Language Resources and Evaluation Conference held in Lisbon, Portugal, 2004, under the title "Towards Intelligent Written Cultural Heritage Processing - Lexical processing". I present here a revised version of that contribution and add the latest efforts in the field under discussion carried out at the Center for Computational Linguistics in Prague.

Relevance: 60.00%

Abstract:

The emergence of pseudo-marginal algorithms has led to improved computational efficiency for dealing with complex Bayesian models with latent variables. Here an unbiased estimator of the likelihood replaces the true likelihood in order to produce a Bayesian algorithm that remains on the marginal space of the model parameter (with latent variables integrated out), with a target distribution that is still the correct posterior distribution. Very efficient proposal distributions can be developed on the marginal space relative to the joint space of model parameter and latent variables, so pseudo-marginal algorithms tend to have substantially better mixing properties. However, for pseudo-marginal approaches to perform well, the likelihood has to be estimated rather precisely, which can be difficult to achieve in complex applications. In this paper we propose to take advantage of the multiple central processing units (CPUs) that are readily available on most standard desktop computers. The likelihood is estimated independently on the multiple CPUs, and the ultimate estimate of the likelihood is the average of the estimates obtained from the individual CPUs. The estimate remains unbiased, but its variability is reduced. We compare and contrast two different technologies that allow the implementation of this idea, both of which require a negligible amount of extra programming effort. The superior performance of this idea over the standard approach is demonstrated on simulated data from a stochastic volatility model.
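
The averaging idea can be sketched in a few lines. The snippet below is not the authors' implementation: the likelihood "estimator" is a toy stand-in (a real application would use, for example, a particle filter for the stochastic volatility model), and all names are illustrative. It only shows how independent unbiased estimates computed on several CPUs are averaged into a lower-variance, still unbiased estimate.

```python
import numpy as np
from multiprocessing import Pool

def estimate_likelihood(args):
    """Toy stand-in for an unbiased likelihood estimator (in practice,
    e.g. a particle filter). Returns a noisy but unbiased estimate of a
    notional likelihood p(y | theta)."""
    theta, seed = args
    rng = np.random.default_rng(seed)
    true_likelihood = np.exp(-0.5 * theta ** 2)
    # Multiplicative log-normal noise with mean 1 keeps the estimate unbiased.
    return true_likelihood * rng.lognormal(mean=-0.125, sigma=0.5)

def averaged_likelihood(theta, pool, seeds):
    """Estimate the likelihood independently on each worker CPU and average:
    the average remains unbiased, but its variance is reduced."""
    estimates = pool.map(estimate_likelihood, [(theta, s) for s in seeds])
    return float(np.mean(estimates))

if __name__ == "__main__":
    num_workers = 4
    seeds = np.random.SeedSequence(42).generate_state(num_workers).tolist()
    with Pool(num_workers) as pool:
        for theta in (0.0, 0.5, 1.0):
            # This value would replace the true likelihood in the
            # Metropolis-Hastings acceptance ratio of a pseudo-marginal sampler.
            print(theta, averaged_likelihood(theta, pool, seeds))
```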

Relevance: 60.00%

Abstract:

Migraine shows strong familial aggregation; however, the genes involved in the disorder, and their number, have not been identified. Nitric oxide is involved in the central processing of pain stimuli and plays an important role in the regulation of basal or stimulated vasodilation. Nitric oxide synthase, which controls the synthesis of nitric oxide, is therefore a candidate gene in migraine etiology. In this study, we detected a polymorphism of endothelial nitric oxide synthase by polymerase chain reaction and tested it for association and linkage to migraine. The study did not show an association of the nitric oxide synthase microsatellite when tested in 91 affected and 85 unaffected individuals. Using the FASTLINK program for parametric linkage analysis, the polymorphism did not show significant linkage to migraine when tested in four migraine pedigrees comprising 116 individuals, 52 of them affected. Total LOD scores excluded linkage up to 8.5 cM between the nitric oxide synthase polymorphism and migraine. Results using the nonparametric affected-pedigree-member form of analysis also did not support a role for this gene in migraine etiology.

Relevance: 60.00%

Abstract:

The evolution of technological systems is hindered by systemic components, referred to as reverse salients, which fail to deliver the necessary level of technological performance and thereby inhibit the performance delivery of the system as a whole. This paper develops a performance-gap measure of reverse salience and applies it to the study of the PC (personal computer) technological system, focusing first on the evolution of the CPU (central processing unit) and PC game sub-systems, and second on the GPU (graphics processing unit) and PC game sub-systems. The measurement of the temporal behavior of reverse salience indicates that the PC game sub-system is the reverse salient, continuously trailing behind the technological performance of the CPU and GPU sub-systems from 1996 through 2006. As a reverse salient, the PC game sub-system trails the technological performance of the CPU sub-system by up to 2300 MHz, with a gradually decreasing performance disparity in recent years. In contrast, it trails the GPU sub-system with an ever-increasing performance gap throughout the timeframe of analysis. In addition, we discuss the research and managerial implications of our findings.
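
A performance-gap series of this kind is straightforward to compute once both sub-systems are expressed in a common unit. The toy sketch below uses invented yearly figures, not the paper's data, purely to show how a reverse-salient gap is derived.

```python
# Illustrative only: invented MHz figures for the leading CPU sub-system and
# for the performance demanded/achieved by the PC game sub-system per year.
cpu_mhz  = {1996: 200, 2000: 1500, 2006: 2933}   # leading sub-system
game_mhz = {1996: 133, 2000: 600,  2006: 1800}   # reverse salient (PC games)

# The performance gap in a year is the leader's performance minus the laggard's.
gap = {year: cpu_mhz[year] - game_mhz[year] for year in cpu_mhz}
print(gap)   # how far the reverse salient trails, year by year
```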

Relevance: 60.00%

Abstract:

Sensor networks represent an attractive tool to observe the physical world. Networks of tiny sensors can be used to detect a fire in a forest, to monitor the level of pollution in a river, or to check on the structural integrity of a bridge. Application-specific deployments of static-sensor networks have been widely investigated. Commonly, these networks involve a centralized data-collection point and no sharing of data outside the organization that owns it. Although this approach can accommodate many application scenarios, it significantly deviates from the pervasive computing vision of ubiquitous sensing where user applications seamlessly access anytime, anywhere data produced by sensors embedded in the surroundings. With the ubiquity and ever-increasing capabilities of mobile devices, urban environments can help give substance to the ubiquitous sensing vision through Urbanets, spontaneously created urban networks. Urbanets consist of mobile multi-sensor devices, such as smart phones and vehicular systems, public sensor networks deployed by municipalities, and individual sensors incorporated in buildings, roads, or daily artifacts. My thesis is that "multi-sensor mobile devices can be successfully programmed to become the underpinning elements of an open, infrastructure-less, distributed sensing platform that can bring sensor data out of their traditional closed-loop networks into everyday urban applications". Urbanets can support a variety of services ranging from emergency and surveillance to tourist guidance and entertainment. For instance, cars can be used to provide traffic information services to alert drivers to upcoming traffic jams, and phones to provide shopping recommender services to inform users of special offers at the mall. Urbanets cannot be programmed using traditional distributed computing models, which assume underlying networks with functionally homogeneous nodes, stable configurations, and known delays. Conversely, Urbanets have functionally heterogeneous nodes, volatile configurations, and unknown delays. Instead, solutions developed for sensor networks and mobile ad hoc networks can be leveraged to provide novel architectures that address Urbanet-specific requirements, while providing useful abstractions that hide the network complexity from the programmer. This dissertation presents two middleware architectures that can support mobile sensing applications in Urbanets. Contory offers a declarative programming model that views Urbanets as a distributed sensor database and exposes an SQL-like interface to developers. Context-aware Migratory Services provides a client-server paradigm, where services are capable of migrating to different nodes in the network in order to maintain a continuous and semantically correct interaction with clients. Compared to previous approaches to supporting mobile sensing urban applications, our architectures are entirely distributed and do not assume constant availability of Internet connectivity. In addition, they allow on-demand collection of sensor data with the accuracy and at the frequency required by every application. These architectures have been implemented in Java and tested on smart phones. They have proved successful in supporting several prototype applications, and experimental results obtained in ad hoc networks of phones have demonstrated their feasibility with reasonable performance in terms of latency, memory, and energy consumption.
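
To make the declarative model concrete: the toy sketch below is not Contory's actual API or query syntax; it only illustrates, with hypothetical names, what an SQL-like, on-demand query over heterogeneous urban sensors might look like when evaluated against whatever readings mobile nodes currently provide.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class SensorReading:
    node_id: str
    sensor: str       # e.g. "traffic_speed", "temperature"
    value: float
    location: str

def run_query(readings: Iterable[SensorReading],
              sensor: str,
              where: Callable[[SensorReading], bool],
              aggregate: Callable[[List[float]], float]):
    """Evaluate a declarative query of the form
    'SELECT aggregate(value) FROM sensor WHERE predicate'
    over the readings currently available from mobile nodes."""
    selected = [r.value for r in readings if r.sensor == sensor and where(r)]
    return aggregate(selected) if selected else None

# Usage: average traffic speed reported by vehicles downtown.
readings = [
    SensorReading("car-17", "traffic_speed", 12.0, "downtown"),
    SensorReading("car-42", "traffic_speed", 8.5, "downtown"),
    SensorReading("phone-3", "temperature", 21.0, "mall"),
]
print(run_query(readings, "traffic_speed",
                where=lambda r: r.location == "downtown",
                aggregate=lambda xs: sum(xs) / len(xs)))   # 10.25
```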

Relevance: 60.00%

Abstract:

The following problem is considered. Given the locations of the Central Processing Unit (CPU) and the terminals which have to communicate with it, determine the number and locations of the concentrators and assign the terminals to the concentrators in such a way that the total cost is minimized. There is also a fixed cost associated with each concentrator, and there is an upper limit to the number of terminals which can be connected to a concentrator. The terminals can also be connected directly to the CPU. In this paper it is assumed that the concentrators can be located anywhere in the area A containing the CPU and the terminals. The problem then becomes a multimodal optimization problem. In the proposed algorithm a stochastic automaton is used as a search device to locate the minimum of the multimodal cost function. The proposed algorithm involves the following. The area A containing the CPU and the terminals is divided into an arbitrary number of regions (say K). An approximate value for the number of concentrators is assumed (say m); the optimum number is determined later by iteration. The m concentrators can be assigned to the K regions in m^K ways (m > K) or K^m ways (K > m); all possible assignments are feasible, i.e. a region can contain 0, 1, ..., m concentrators. Each possible assignment is taken to represent a state of the variable-structure stochastic automaton. To start with, all the states are assigned equal probabilities. At each stage of the search the automaton visits a state according to the current probability distribution. At each visit the automaton selects a 'point' inside that state with uniform probability, the cost associated with that point is calculated, and the average cost of that state is updated. Then the probabilities of all the states are updated; the probabilities are taken to be inversely proportional to the average costs of the states. After a certain number of searches the search probabilities become stationary and the automaton visits a particular state again and again; the automaton is then said to have converged to that state. By conducting a local gradient search within that state, the exact locations of the concentrators are determined. This algorithm was applied to a set of test problems and the results were compared with those given by Cooper's (1964, 1967) EAC algorithm; on average the proposed algorithm was found to perform better.
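
A minimal sketch of this search scheme follows, assuming a toy cost function and a quadrant partition of the unit square. It is not the authors' algorithm verbatim: terminal capacity limits, the stationarity-based stopping rule, and the final local gradient search are simplified or omitted, and all names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy instance: CPU at the origin, terminals scattered in the unit square.
terminals = rng.random((30, 2))
K = 4             # regions: the four quadrants of the unit square
m = 3             # assumed number of concentrators
FIXED_COST = 0.5  # fixed cost per concentrator

def sample_point(region):
    """Uniformly random point inside one of the K quadrant regions."""
    offset = np.array([(region % 2) * 0.5, (region // 2) * 0.5])
    return offset + rng.random(2) * 0.5

def cost(concentrators):
    """Terminal-to-nearest-concentrator links + fixed costs + concentrator-to-CPU
    links (the per-concentrator terminal limit is omitted for brevity)."""
    d = np.linalg.norm(terminals[:, None, :] - concentrators[None, :, :], axis=2)
    return (d.min(axis=1).sum()
            + FIXED_COST * len(concentrators)
            + np.linalg.norm(concentrators, axis=1).sum())

# Each automaton state is one assignment of the m concentrators to regions.
states = [np.unravel_index(s, (K,) * m) for s in range(K ** m)]
avg_cost = np.full(len(states), np.nan)
visits = np.zeros(len(states))
probs = np.full(len(states), 1.0 / len(states))

for _ in range(2000):
    s = rng.choice(len(states), p=probs)                # visit a state
    candidate = np.array([sample_point(r) for r in states[s]])
    c = cost(candidate)                                 # cost of a random point in it
    visits[s] += 1
    avg_cost[s] = c if visits[s] == 1 else avg_cost[s] + (c - avg_cost[s]) / visits[s]
    # Probabilities inversely proportional to average cost; unvisited states
    # use the mean of the observed averages as a neutral placeholder.
    filled = np.where(np.isnan(avg_cost), np.nanmean(avg_cost), avg_cost)
    probs = (1.0 / filled) / (1.0 / filled).sum()

best = int(np.nanargmin(avg_cost))
print("best region assignment for the", m, "concentrators:", states[best])
# A local gradient search inside this assignment would then refine the
# exact concentrator coordinates.
```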

Relevance: 60.00%

Abstract:

Acute pain has substantial survival value because of its protective function in the everyday environment. In contrast, chronic pain lacks survival and adaptive function, causes a great amount of individual suffering, and consumes the resources of society through treatment costs and loss of production. The treatment of chronic pain has remained challenging because of inadequate understanding of the mechanisms working at different levels of the nervous system in the development, modulation, and maintenance of chronic pain. Especially in unclear chronic pain conditions the treatment may be suboptimal, because it cannot be targeted to the underlying mechanisms. Noninvasive neuroimaging techniques have greatly contributed to our understanding of brain activity associated with pain in healthy individuals. Many previous studies, focusing on brain activations to acute experimental pain in healthy individuals, have consistently demonstrated a widely distributed network of brain regions that participate in the processing of acute pain. The aim of the present thesis was to employ non-invasive brain imaging to better understand the brain mechanisms in patients suffering from chronic pain. In Study I, we used magnetoencephalography (MEG) to measure cortical responses to painful laser stimulation in healthy individuals in order to optimize the stimulus parameters for patient studies. In Studies II and III, we monitored with MEG the cortical processing of touch and acute pain in patients with complex regional pain syndrome (CRPS). We found persisting plastic changes in the hand representation area of the primary somatosensory (SI) cortex, suggesting that chronic pain causes cortical reorganization. Responses in the posterior parietal cortex to both tactile and painful laser stimulation were attenuated, which could be associated with the neglect-like symptoms of the patients. The primary motor cortex reactivity to acute pain was reduced in patients who had stronger spontaneous pain and weaker grip strength in the painful hand. The tight coupling between spontaneous pain and motor dysfunction supports the idea that motor rehabilitation is important in CRPS. In Studies IV and V we used MEG and functional magnetic resonance imaging (fMRI) to investigate the central processing of touch and acute pain in patients who suffered from recurrent herpes simplex virus infections and from chronic widespread pain in one side of the body. With MEG, we found plastic changes in the SI cortex, suggesting that many different types of chronic pain may be associated with similar cortical reorganization. With fMRI, we found functional and morphological changes in the central pain circuitry, indicating a central contribution to the pain. These results show that chronic pain is associated with morphological and functional changes in the brain, and that such changes can be measured with functional imaging.

Relevance: 60.00%

Abstract:

Frequency response analysis is critical in understanding the steady-state and transient behavior of any electrical network. A network analyzer, or frequency response analyzer, is used to determine the frequency response of an electrical network. This paper deals with the design of an inexpensive, digitally controlled network analyzer. The frequency range of the network analyzer is 10 Hz to 50 kHz (a suitable range for system studies on most power electronics apparatus). It is composed of a microcontroller (as the central processing unit) and a personal computer (as analyzer and display). The communication between the microcontroller and the personal computer is established through one of the USB ports. The testing and evaluation of the analyzer is done with RC, RLC and multi-resonant circuits. The design steps, basis of analysis, experimental results, limitations in bandwidth, and possible techniques for performance improvement are presented.
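
For reference, the kind of result such an analyzer produces can be illustrated with a first-order RC low-pass network; the component values below are arbitrary (not taken from the paper), and the sweep covers the instrument's 10 Hz to 50 kHz range.

```python
import numpy as np

# Magnitude and phase of an RC low-pass filter's transfer function across
# the analyzer's sweep range. Component values are arbitrary examples.
R, C = 1e3, 100e-9                                  # 1 kOhm, 100 nF -> fc ~ 1.59 kHz
freqs = np.logspace(np.log10(10), np.log10(50e3), 200)

h = 1.0 / (1.0 + 1j * 2 * np.pi * freqs * R * C)    # H(jw) of the RC network
magnitude_db = 20 * np.log10(np.abs(h))
phase_deg = np.degrees(np.angle(h))

for f, m, p in zip(freqs[::50], magnitude_db[::50], phase_deg[::50]):
    print(f"{f:10.1f} Hz   {m:7.2f} dB   {p:7.2f} deg")
```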

Relevance: 60.00%

Abstract:

Baltic Sea herring is a traditional raw material for the German fish processing industry and the fresh fish market. This also applies to the spring herring of the spawning population in the waters around the island of Rügen. The reduction of the fat content to about 5 % during the spawning cycle limits the processing possibilities of mature herring from this area. Failures in taste and odour (tainting), a common problem in the past, have not been detected in the last three years. Infestation by nematodes is comparable to that of other herring stocks, and contamination levels of organic and inorganic contaminants are well below allowable limits. Only about 10 % of the annual German fishing quota of about 85000 t of Baltic Sea herring is now utilised. The prerequisites for a stronger utilisation of this stock, as in the 1970s and 1980s, are scarcely in place. The project of a central processing plant on the island of Rügen for about 50000 t of herring as raw material is not realistic. The answer to the question asked at the beginning of this article, whether Baltic Sea herring represents a raw material for the German fish processing industry, is yes, despite some restrictions.