989 results for Central Processing
Abstract:
Little is known of the neural mechanisms of marsupial olfaction. However, functional magnetic resonance imaging (fMRI) has made it possible to visualize dynamic brain function in mammals noninvasively. In this study, central processing of urinary pheromones was investigated in the brown antechinus, Antechinus stuartii, using fMRI. Images were obtained from 18 subjects (11 males, 7 females) in response to conspecific urinary olfactory stimuli. Significant indiscriminate activation occurred in the accessory olfactory bulb and the entorhinal, frontal, and parietal cortices in response to both male and female urine. The paraventricular nucleus of the hypothalamus, the ventrolateral thalamic nucleus, and the medial preoptic area were activated only in response to male urine. The results of this fMRI study indicate that projections of the accessory olfactory system are activated by chemosensory cues. Furthermore, these experiments suggest that urinary pheromones may act on the hypothalamo-pituitary-adrenocortical axis via the paraventricular nucleus of the hypothalamus and may play an important role in the unique life history pattern of A. stuartii. Finally, this study has demonstrated that fMRI may be a powerful tool for investigations of olfactory processes in mammals.
Abstract:
The role of GABA in the central processing of complex auditory signals is not fully understood. We have studied the involvement of GABA(A)-mediated inhibition in the processing of birdsong, a learned vocal communication signal requiring intact hearing for its development and maintenance. We focused on the caudomedial nidopallium (NCM), an area analogous to parts of the mammalian auditory cortex with selective responses to birdsong. We present evidence that GABA(A)-mediated inhibition plays a pronounced role in NCM's auditory processing of birdsong. Using immunocytochemistry, we show that approximately half of NCM's neurons are GABAergic. Whole cell patch-clamp recordings in a slice preparation demonstrate that, at rest, spontaneously active GABAergic synapses inhibit excitatory inputs onto NCM neurons via GABA(A) receptors. Multi-electrode electrophysiological recordings in awake birds show that local blockade of GABA(A)-mediated inhibition in NCM markedly affects the temporal pattern of song-evoked responses in NCM without modifying frequency tuning. Surprisingly, this blockade increases the phasic component of the response and largely suppresses the tonic component, reflecting dynamic relationships of inhibitory networks that could include disinhibition. Thus processing of learned natural communication sounds in songbirds, and possibly other vocal learners, may depend on complex interactions of inhibitory networks.
Abstract:
Microprocessors based on a single processor (CPU) saw rapid performance growth and falling costs for roughly twenty years. These microprocessors brought computing power on the order of GFLOPS (Giga Floating Point Operations per Second) to desktop PCs and hundreds of GFLOPS to server clusters. This rise enabled new program functionality, better user interfaces, and many other benefits. However, this growth slowed abruptly in 2003 because of ever-higher power consumption and heat-dissipation problems, which prevented further increases in clock frequency: the physical limits of silicon were drawing closer. To work around the problem, CPU (Central Processing Unit) manufacturers began designing multicore microprocessors, a choice that had a considerable impact on the developer community, accustomed to thinking of software as a series of sequential commands. Programs that had always enjoyed performance improvements with each new CPU generation thus stopped getting faster: running on a single core, they could not exploit the full power of the CPU. To fully exploit the new CPUs, concurrent programming, previously used only on expensive systems or supercomputers, became an increasingly common practice among developers. At the same time, the video game industry captured a remarkable market share: in 2013 alone, nearly 100 billion dollars will be spent on gaming hardware and software. The software houses developing video games, to make their titles more attractive, rely on ever more powerful and often poorly optimized graphics engines, making them extremely demanding in terms of performance. For this reason GPU (Graphics Processing Unit) manufacturers, especially in the last decade, have engaged in a veritable performance race that has led to products with staggering computing capabilities. But unlike CPUs, which in the early 2000s took the multicore path in order to keep supporting sequential programs, GPUs became manycore, with hundreds upon hundreds of small cores performing computations in parallel. Can this immense computing capability be used in other application domains? The answer is yes, and the goal of this thesis is precisely to assess, as of today, how and with what efficiency generic software can make use of the GPU instead of the CPU.
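To make the sequential-versus-parallel contrast above concrete, here is a minimal sketch (illustrative Python, not code from the thesis) that runs the same per-element computation first on one core and then distributed across CPU cores with multiprocessing; GPUs push exactly this data-parallel decomposition further, to hundreds of cores working on array elements simultaneously.

```python
# Hypothetical example: the same independent per-element work executed
# sequentially (one core) and data-parallel (several cores).
import math
from multiprocessing import Pool

def work(x: float) -> float:
    # Stand-in for an independent, per-element computation.
    return math.sqrt(x) * math.sin(x)

def sequential(data):
    return [work(x) for x in data]      # one core, one element at a time

def parallel(data, workers=4):
    with Pool(workers) as pool:         # elements spread across CPU cores
        return pool.map(work, data)

if __name__ == "__main__":
    data = list(range(100_000))
    assert sequential(data) == parallel(data)   # same result, parallel schedule
```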
Abstract:
BACKGROUND Low vitamin D is implicated in various chronic pain conditions, with, however, inconclusive findings. Vitamin D might play an important role in mechanisms involved in the central processing of evoked pain stimuli, but less so in spontaneous clinical pain. OBJECTIVE This study aims to examine the relation between low serum levels of 25-hydroxyvitamin D3 (25-OH D) and mechanical pain sensitivity. DESIGN We studied 174 patients (mean age 48 years, 53% women) with chronic pain. A standardized pain provocation test was applied, and pain intensity was rated on a numerical analogue scale (0-10). The widespread pain index and symptom severity score (including fatigue, waking unrefreshed, and cognitive symptoms) following the 2010 American College of Rheumatology preliminary diagnostic criteria for fibromyalgia were also assessed. Serum 25-OH D levels were measured with a chemiluminescent immunoassay. RESULTS Vitamin D deficiency (25-OH D < 50 nmol/L) was present in 71% of the chronic pain patients; another 21% had insufficient vitamin D (25-OH D < 75 nmol/L). After adjustment for demographic and clinical variables, there was a mean ± standard error of the mean increase in pain intensity of 0.61 ± 0.25 for each 25 nmol/L decrease in 25-OH D (P = 0.011). Lower 25-OH D levels were also related to greater symptom severity (r = -0.21, P = 0.008) but not to the widespread pain index (P = 0.83) or fibromyalgia (P = 0.51). CONCLUSIONS The findings suggest a role of low vitamin D levels in heightened central sensitivity, particularly augmented pain processing upon mechanical stimulation, in chronic pain patients. Vitamin D seems comparatively less important for self-reports of spontaneous chronic pain.
Abstract:
The rectum has a unique physiological role as a sensory organ and differs in its afferent innervation from other gut organs, which do not normally mediate conscious sensation. We compared the central processing of human esophageal, duodenal, and rectal sensation using cortical evoked potentials (CEP) in 10 healthy volunteers (age range 21-34 yr). Esophageal and duodenal CEP had similar morphology in all subjects, whereas rectal CEP had two different but reproducible morphologies. The rectal CEP latency to the first component P1 (69 ms) was shorter than both the duodenal (123 ms; P = 0.008) and esophageal CEP latencies (106 ms; P = 0.004). The duodenal CEP amplitude of the P1-N1 component (5.0 µV) was smaller than that of the corresponding esophageal component (5.7 µV; P = 0.04) but similar to that of the corresponding rectal component (6.5 µV; P = 0.25). This suggests either that rectal sensation is mediated by faster-conducting afferent pathways or that there is a difference in the orientation or volume of the cortical neurons representing the different gut organs. In conclusion, the physiological and anatomic differences between gut organs are reflected in differences in the characteristics of their afferent pathways and cortical processing.
Abstract:
This thesis describes advances in the characterisation, calibration, and data processing of optical coherence tomography (OCT) systems. Femtosecond (fs) laser inscription was used to produce OCT-phantoms. Transparent materials are generally inert to infrared radiation, but with fs lasers material modification occurs via non-linear processes when the tightly focused light source interacts with the material. This modification is confined to the focal volume and is highly reproducible. In order to select the best inscription parameters, combinations of different inscription parameters were tested, using three fs laser systems with different operating properties, on a variety of materials. This facilitated the understanding of the key characteristics of the produced structures, with the aim of producing viable OCT-phantoms. Finally, OCT-phantoms were successfully designed and fabricated in fused silica. The use of these phantoms to characterise many properties (resolution, distortion, sensitivity decay, scan linearity) of an OCT system was demonstrated. Quantitative methods were developed to support the characterisation of an OCT system collecting images from phantoms and also to improve the quality of the OCT images. Characterisation methods include the measurement of the spatially variant resolution (point spread function (PSF) and modulation transfer function (MTF)), sensitivity, and distortion. Processing of OCT data is a computationally intensive process: standard central processing unit (CPU) based processing might take several minutes to a few hours to process acquired data, so data processing is a significant bottleneck. An alternative is to use expensive hardware-based processing such as field programmable gate arrays (FPGAs). Recently, however, graphics processing unit (GPU) based data processing methods have been developed to minimize this processing and rendering time. These techniques include standard-processing methods, a set of algorithms that process the raw interference data obtained by the detector and generate A-scans. The work presented here describes accelerated data processing and post-processing techniques for OCT systems. The GPU-based processing developed during the PhD was later implemented in a custom-built Fourier domain optical coherence tomography (FD-OCT) system. This system currently processes and renders data in real time; its processing throughput is currently limited by the camera capture rate. OCT-phantoms have been heavily used for the qualitative characterisation and adjustment/fine tuning of the operating conditions of the OCT system, and investigations are under way to characterise OCT systems using our phantoms. The work presented in this thesis demonstrates several novel techniques for fabricating OCT-phantoms and for accelerating OCT data processing using GPUs. In the process of developing the phantoms and quantitative methods, a thorough understanding and practical knowledge of OCT and fs laser processing systems was developed. This understanding led to several novel pieces of research that are not only relevant to OCT but have broader importance; for example, extensive understanding of the properties of fs-inscribed structures will be useful in other photonic applications such as the making of phase masks, waveguides, and microfluidic channels, and the acceleration of data processing with GPUs is also useful in other fields.
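As an illustration of the "standard-processing" chain mentioned above, the following minimal sketch (hypothetical names and a synthetic input, not the thesis code) turns one raw spectral interferogram into an A-scan; a real FD-OCT pipeline would add k-linearization resampling and dispersion compensation, and because each A-line is transformed independently, this step maps naturally onto GPU batch processing.

```python
import numpy as np

def ascan_from_spectrum(spectrum: np.ndarray, background: np.ndarray) -> np.ndarray:
    fringes = spectrum - background                 # remove DC/reference term
    windowed = fringes * np.hanning(fringes.size)   # suppress FFT side lobes
    depth_profile = np.abs(np.fft.rfft(windowed))   # depth-resolved reflectivity
    return 20 * np.log10(depth_profile + 1e-12)     # display on a dB scale

# Synthetic test: a single reflector produces one fringe frequency,
# which appears as a single peak in the A-scan.
k = np.arange(2048)
background = np.full(2048, 100.0)
spectrum = background + np.cos(2 * np.pi * 0.05 * k)          # fringe of one depth
print(np.argmax(ascan_from_spectrum(spectrum, background)))   # peak near bin 102
```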
Abstract:
* The following text was originally published in the Proceedings of the Language Resources and Evaluation Conference held in Lisbon, Portugal, in 2004, under the title "Towards Intelligent Written Cultural Heritage Processing - Lexical processing". I present here a revised version of the aforementioned paper and add the latest efforts made at the Center for Computational Linguistics in Prague in the field under discussion.
Abstract:
Master's degree in Informatics Engineering, specialization area in Knowledge and Decision Technologies
Abstract:
The main motivation for this work was the development of robotic technology enabling efficient diving to and ascent from great depths. The work began with an analysis and study of the robotic systems available on the market and of the methods in use, identifying advantages and disadvantages with respect to the intended type of vehicle. This was followed by a mechanical design and study phase, aimed at developing a vehicle whose ballast is varied by pumping oil to an external reservoir, changing the vehicle's total volume and hence its buoyancy. To operate AUVs at great depth it is convenient to perform the up/down trajectory efficiently, and ballast variation offers advantages in that respect; unlike gliders, however, the interest here lies in the ability to ascend and descend vertically. To control buoyancy while monitoring the vehicle's depth in real time, a central processing system was needed to acquire the pressure-sensor reading and communicate with the ballast-variation system, so as to perform the desired vertical position control. From a technological standpoint, the aim was to develop and evaluate volume-variation solutions intermediate between those of gliders (a few grams) and those of work-class ROVs (tens or hundreds of kilograms). Subsequently, a MATLAB (Simulink) simulator was developed that reflects the vehicle's descent behavior, allowing vehicle parameters to be changed and their practical effects analyzed, so that the real vehicle can be tuned. The simulated results include the terminal velocities reached by the vehicle for different drag coefficients, as well as the behavior of the vehicle's ballast variation during vertical displacement. Finally, the ability to control the vehicle to a given depth was verified, and these simulations, run with parameters very close to those of the real trial, were compared with the corresponding real trials.
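The sketch below (assumed parameter values, not the thesis simulator) captures the vertical dynamics the Simulink model reflects: pumping oil to the external reservoir changes the displaced volume, the balance of weight, buoyancy, and quadratic drag sets the terminal velocity, and a simple sign-based pumping rule steers the vehicle toward a reference depth.

```python
import numpy as np

RHO, G = 1025.0, 9.81      # seawater density [kg/m^3], gravity [m/s^2]
M, CD_A = 50.0, 0.5        # vehicle mass [kg], drag coefficient * area [m^2]
V_NEUTRAL = M / RHO        # displaced volume for neutral buoyancy [m^3]
PUMP_RATE = 1e-4           # max volume change per second [m^3/s]

def simulate(depth_ref, t_end=1200.0, dt=0.1):
    depth, vel, vol = 0.0, 0.0, V_NEUTRAL       # positive depth/vel = downward
    for _ in np.arange(0.0, t_end, dt):
        # Sign-based pumping with a velocity damping term: shrink the
        # displaced volume to sink, restore it as the reference approaches.
        error = (depth_ref - depth) - 200.0 * vel
        vol -= np.sign(error) * PUMP_RATE * dt
        # Newton: weight - buoyancy - quadratic drag.
        force = M * G - RHO * G * vol - 0.5 * RHO * CD_A * vel * abs(vel)
        vel += force / M * dt
        depth = max(0.0, depth + vel * dt)
    return depth, vel

print(simulate(depth_ref=100.0))   # settles near the 100 m reference
```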
Abstract:
Dissertation to obtain the Master's Degree in Informatics Engineering
Abstract:
In this manuscript we tackle the problem of semidistributed user selection with distributed linear precoding for sum-rate maximization in multiuser multicell systems. A set of adjacent base stations (BS) form a cluster in order to perform coordinated transmission to cell-edge users, and coordination is carried out through a central processing unit (CU). However, the message exchange between BSs and the CU is limited to scheduling control signaling, and no user data or channel state information (CSI) exchange is allowed. In the considered multicell coordinated approach, each BS has its own set of cell-edge users and transmits only to one intended user, while interference to non-intended users at other BSs is suppressed by signal steering (precoding). We use two distributed linear precoding schemes, Distributed Zero Forcing (DZF) and Distributed Virtual Signal-to-Interference-plus-Noise Ratio (DVSINR). Considering multiple users per cell and the backhaul limitations, the BSs rely on local CSI to solve the user selection problem. First we investigate how the signal-to-noise ratio (SNR) regime and the number of antennas at the BSs impact the effective channel gain (the magnitude of the channels after precoding) and its relationship with multiuser diversity. Considering that user selection must be based on the type of precoding implemented, we develop metrics of compatibility (estimates of the effective channel gains) that can be computed from local CSI at each BS and reported to the CU for scheduling decisions. Based on such metrics, we design user selection algorithms that can find a set of users that potentially maximizes the sum rate. Numerical results show the effectiveness of the proposed metrics and algorithms for different configurations of users and antennas at the base stations.
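As an illustration of the kind of compatibility metric described above, the sketch below (illustrative Python with assumed names, not the paper's code) estimates the effective channel gain available to a BS under distributed zero forcing using only local CSI: the intended user's channel is projected onto the null space of the channels toward the non-intended users, and the resulting scalar is what each BS would report to the CU for the scheduling decision.

```python
import numpy as np

def dzf_effective_gain(h: np.ndarray, H_others: np.ndarray) -> float:
    """h: (Nt,) row channel of the intended user (received symbol = h @ w).
    H_others: (K-1, Nt) row channels toward the non-intended users."""
    Q, _ = np.linalg.qr(H_others.conj().T)        # basis of interference space
    w = h.conj() - Q @ (Q.conj().T @ h.conj())    # project onto null(H_others)
    w /= np.linalg.norm(w)                        # unit-power ZF precoder
    return float(np.abs(h @ w) ** 2)              # effective channel gain

rng = np.random.default_rng(0)
Nt, K = 4, 3    # antennas per BS, users in the cluster (needs Nt > K - 1)
h = (rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)) / np.sqrt(2)
H = (rng.standard_normal((K - 1, Nt)) + 1j * rng.standard_normal((K - 1, Nt))) / np.sqrt(2)
print(dzf_effective_gain(h, H))   # scalar the BS reports to the CU
```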
Abstract:
Dissertation to obtain the Master's Degree in Informatics Engineering
Abstract:
Dissertation to obtain the Master's Degree in Biomedical Engineering
Abstract:
Dissertation to obtain the Master's Degree in Biomedical Engineering