939 results for INFORMATION PROCESSING
Abstract:
Research into the brain is a young science; its beginnings date back to Santiago Ramón y Cajal in 1888. Since then, neuroscience has advanced greatly in developing techniques for its study. Cognitive neuroscience today offers many models that bring complex cognitive capacities closer to our understanding. Even so, this is a science still in its infancy, with a long road ahead. One key to the success of studies of brain function has been its evolution into a discipline that combines knowledge from several areas: physics, mathematics, statistics, and psychology. For this reason, throughout this work, concepts from different fields are interwoven with the aim of advancing knowledge of a subject as complex as the one at hand: understanding the human mind. Specifically, this thesis has been directed at the multimodal integration of magnetoencephalography (MEG) and diffusion-weighted magnetic resonance imaging (dMRI). These techniques are sensitive, respectively, to the magnetic fields emitted by neuronal currents and to the microstructure of cerebral white matter. Throughout this work we have seen that combining these techniques makes it possible to uncover structural-functional synergies in brain information processing, both in the healthy brain and in the course of neurological pathologies. More specifically, this work has studied the relationship between functional and structural connectivity and how to fuse them. To this end, functional connectivity was quantified by studying phase synchronization or amplitude correlation between time series, yielding an index that measures the similarity between neuronal groups or brain regions.
Additionally, quantifying structural connectivity from diffusion-weighted magnetic resonance images yielded indices of white-matter integrity, or of the strength of the structural connections between regions. These measures were combined in chapters 3, 4, and 5 of this work, following three approaches ranging from the lowest to the highest level of integration. Finally, the fused MEG and dMRI information was used to characterize groups of subjects with mild cognitive impairment; detecting this condition is relevant for the early identification of Alzheimer's disease. This thesis is divided into six chapters. Chapter 1 establishes the context for introducing connectomics within the fields of neuroimaging and neuroscience. This chapter then describes the objectives of the thesis and the specific objectives of each of the scientific publications that resulted from this work. Chapter 2 describes the methods for each technique that was employed: structural connectivity, resting-state functional connectivity, complex brain networks and graph theory; finally, it describes the clinical condition of mild cognitive impairment and the current state of the search for new diagnostic biomarkers. Chapters 3, 4, and 5 contain the scientific articles produced during this thesis. They are included in the format of the journal in which they were published and are divided into introduction, materials and methods, results, and discussion. All methods employed in the articles are described in chapter 2 of the thesis. Finally, chapter 6 summarizes the overall results of the thesis and discusses the results of each article in detail.
ABSTRACT
In this thesis I apply concepts from mathematics, physics and statistics to the neurosciences.
This field benefits from the collaborative work of multidisciplinary teams in which physicians, psychologists, engineers and other specialists work toward a common goal: the understanding of the brain. Research in this field is still in its early years; its birth is attributed to the neuronal theory of Santiago Ramón y Cajal in 1888. In more than one hundred years only a small fraction of brain functioning has been discovered, and much more remains to be explored. Isolated techniques aim at unraveling the system that supports our cognition; nevertheless, in order to provide solid evidence in such a field, multimodal techniques have arisen, and with them we will be able to improve current knowledge about human cognition. Here we focus on the multimodal integration of magnetoencephalography (MEG) and diffusion-weighted magnetic resonance imaging (dMRI). These techniques are sensitive to the magnetic fields emitted by neuronal currents and to the white-matter microstructure, respectively. The combination of such techniques could provide evidence of structural-functional synergies in brain information processing and reveal which part of this synergy fails in specific neurological pathologies. In particular, we are interested in the relationship between functional and structural connectivity, and in how to integrate this information. We quantify functional connectivity by studying the phase synchronization or the amplitude correlation between time series obtained by MEG, and so obtain an index indicating the similarity between neuronal entities, i.e., brain regions. In addition, we quantify structural connectivity by performing diffusion tensor estimation from the diffusion-weighted images, thus obtaining an indicator of the integrity of the white matter or, if preferred, of the strength of the structural connections between regions.
These quantifications are then combined following three different approaches, from the lowest to the highest level of integration, in chapters 3, 4 and 5. We finally apply the fused information to the characterization or prediction of mild cognitive impairment, a clinical entity considered an early step in the continuum of the pathological process of dementia. The dissertation is divided into six chapters. In chapter 1 I introduce connectomics within the fields of neuroimaging and neuroscience. Later in this chapter we describe the objectives of this thesis and the specific objectives of each of the scientific publications that were produced as a result of this work. In chapter 2 I describe the methods for each of the techniques that were employed, namely structural connectivity, resting-state functional connectivity, complex brain networks and graph theory; finally, I describe the clinical condition of mild cognitive impairment and the current state of the art in the search for early biomarkers. In chapters 3, 4 and 5 I have included the scientific publications that were generated during this work. They have been included in their original format and contain introduction, materials and methods, results and discussion. All methods that were employed in these papers are described in chapter 2. Finally, in chapter 6 I summarize all the results from this thesis, both locally for each of the scientific publications and globally for the whole work.
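The phase-synchronization index described in this abstract can be made concrete with the phase-locking value (PLV). The sketch below is illustrative, not the thesis code, and the signals are synthetic: instantaneous phases are extracted with the Hilbert transform, and the consistency of the phase difference gives an index between 0 (no synchronization) and 1 (perfect locking).

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Phase-locking value between two equal-length time series:
    the consistency of their instantaneous phase difference,
    from 0 (no synchronization) to 1 (perfect phase locking)."""
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Two noisy copies of a common 10 Hz rhythm stay phase locked;
# an independent noise signal does not.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 500)
common = np.sin(2 * np.pi * 10 * t)
a = common + 0.1 * rng.standard_normal(t.size)
b = common + 0.1 * rng.standard_normal(t.size)
noise = rng.standard_normal(t.size)

plv_locked = phase_locking_value(a, b)      # close to 1
plv_random = phase_locking_value(a, noise)  # much smaller
```

In the thesis setting, x and y would be band-filtered MEG source time series; PLV is one of the standard phase-synchronization estimators, alongside amplitude-envelope correlation.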
Abstract:
Optical filters are crucial elements in optical communications. Cascaded filters seriously degrade the quality of the optical signal and hence of the communication link. In this paper we study and simulate the optical signal impairment caused by different kinds of filters, including Butterworth, Bessel, Fiber Bragg Grating (FBG) and Fabry-Perot (FP) filters. Optical signal impairment is analyzed from the points of view of Eye Opening Penalty (EOP) and of the optical spectrum. The simulation results show that when the center frequency of all filters is aligned with the laser frequency, the Butterworth filter has the smallest influence on the signal while the Fabry-Perot has the largest. At a -1 dB EOP, the number of cascaded Butterworth optical filters with a bandwidth of 50 GHz is 18 in 40 Gbps NRZ-DQPSK systems and 12 in 100 Gbps PM-NRZ-DQPSK systems. These values are reduced to 9 and 6, respectively, for Fabry-Perot optical filters. Under frequency misalignment, the impairment caused by the filters is more serious: our research shows that with a frequency deviation of 5 GHz, only 12 and 9 Butterworth optical filters can be cascaded in 40 Gbps NRZ-DQPSK and 100 Gbps PM-NRZ-DQPSK systems, respectively. We also study the signal impairment caused by different orders of the Butterworth filter model. Although higher-order filters have a smaller clipping effect on the transmission spectrum, they introduce a more serious phase ripple that seriously affects the signal. The simulation results show that the 2nd-order Butterworth filter has the best performance.
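The bandwidth narrowing that limits these cascades can be sketched with the ideal Butterworth magnitude response. The cutoff frequency, order, and filter counts below are illustrative assumptions, not the parameters of the paper's simulation: cascading N identical filters multiplies their attenuation in dB, so the 3 dB point of the cascade moves well inside the single-filter passband.

```python
import numpy as np

def butterworth_mag(f, fc, order):
    """Magnitude response of an ideal low-pass Butterworth filter."""
    return 1.0 / np.sqrt(1.0 + (f / fc) ** (2 * order))

def cascade_3db_bandwidth(fc, order, n_filters):
    """Frequency at which n identical cascaded filters are 3 dB down."""
    f = np.linspace(0.0, fc, 100_000)
    atten_db = -20.0 * n_filters * np.log10(butterworth_mag(f, fc, order))
    return f[np.searchsorted(atten_db, 3.0)]

# One 2nd-order filter with a 25 GHz cutoff keeps nearly its full band,
# but ten cascaded copies are 3 dB down at roughly half that frequency:
# each extra filter clips the signal spectrum a little more.
bw_single = cascade_3db_bandwidth(25.0, 2, 1)
bw_cascade = cascade_3db_bandwidth(25.0, 2, 10)
```

This spectral-clipping effect is what the EOP measures at the system level; the phase ripple of real higher-order filters, which the paper identifies as the dominant penalty, is not captured by this amplitude-only sketch.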
Abstract:
We introduce the need for a distributed guideline-based decision support (DSS) process, describe its characteristics, and explain how we implemented this process within the European Union's MobiGuide project. In particular, we have developed a mechanism of sequential, piecemeal projection, i.e., 'downloading' small portions of the guideline from the central DSS server to the local DSS in the patient's mobile device, which then applies that portion using the mobile device's local resources. The mobile device sends a callback to the central DSS when it encounters a triggering pattern predefined in the projected module, which leads to an appropriate predefined action by the central DSS, including sending a new projected module or directly controlling the rest of the workflow. We suggest that such a distributed architecture, which explicitly defines a dialog between a central DSS server and a local DSS module, better balances the computational load and exploits the relative advantages of the central server and of the local mobile device.
Abstract:
Extracting opinions and emotions from text is becoming increasingly important, especially since the advent of micro-blogging and social networking. Opinion mining is particularly popular and is now supported by many public services, datasets and lexical resources. Unfortunately, there are few available lexical and semantic resources for emotion recognition that could foster the development of new emotion-aware services and applications. The diversity of theories of emotion and the absence of a common vocabulary are two of the main barriers to the development of such resources. This situation motivated the creation of Onyx, a semantic vocabulary of emotions with a focus on lexical resources and emotion analysis services. It follows a linguistic Linked Data approach, it is aligned with the Provenance Ontology, and it has been integrated with the Lexicon Model for Ontologies (lemon), a popular RDF model for representing lexical entries. This approach also provides a new and interesting way to work with different theories of emotion. As part of this work, Onyx has been aligned with EmotionML and WordNet-Affect.
Abstract:
We live in a period of political, economic, social, and cultural transformation that constantly challenges us. In this context, over recent decades, the use of technology has expanded into many everyday activities, into the dissemination of information, and into communication, as a form of expression and social organization. The school, as a social institution, must recognize this new reality, this different way of acquiring and transforming knowledge, so that it can intervene, reframe, and redirect its action in order to meet the demands of its time. The general objective of this research, based on the presentation and analysis of experiences with Information and Knowledge Technologies, is to reflect on how to incorporate these tools into the processes of teaching and learning at school, from the perspective of teachers and students, with a view to the integral formation of the learner. Accordingly, we considered it necessary to understand the historical context, as well as the perspectives related to the school and its protagonists (teachers and students), in the so-called Information and Knowledge Society. We emphasize the importance of teachers (and their training) and their role as mediators in learning processes, as well as their reception of technology, observing its function and sphere of action. We highlight experiences with the use of digital information and communication technologies (TDIC) carried out by teachers and students, such as the production of games, scientific magazines, story writing, artistic productions, blogs, vlogs, and group discussions on social networks.
The methodology of this research is qualitative, in the action-research and narrative modalities, owing to the involvement with the group and with the activities developed, in which participants share with the researcher their personal and learning histories related to the actions or activities they carry out, providing relevant information and evidence about their formative process over time. The literature review was conducted through bibliographic and documentary analysis of books, theses, dissertations, and journals specific to the subject, as well as articles published on the Internet. Data were collected through informal conversations, semi-structured interviews, and filmed accounts. The analysis followed a hermeneutic-phenomenological approach, which seeks to describe and interpret phenomena of human experience in order to investigate their essence through the identification of themes. The results point to the need for, and the possibility of, expanding the use of TDIC as a resource in the teaching and learning process, through training, dialogue, interaction, intentionality, expectations, hope, and their ramifications.
Abstract:
Financial Management emerged at the beginning of the 19th century, together with the consolidation of large corporations and the formation of American national markets, whereas in Brazil the first studies appeared in the second half of the 20th century. Since then the country has consolidated several centers of research excellence, formed a significant group of senior researchers, and expanded its research areas in the field; even so, few studies attempt to portray the characteristics of scientific productivity in Finance. Seeking to contribute to a better understanding of the productive behavior of this area, the present research studies its scientific production, materialized in the form of digital articles published in 24 renowned national journals classified in the Qualis/CAPES strata A2, B1, and B2 of the Administration, Accounting, and Tourism area. To this end, Bradford's Law, Price's Law of Elitism, and Lotka's Law are applied. Bradford's Law identifies three productivity zones, with a core formed by three journals, one of which is classified in the Qualis/CAPES B2 stratum, which highlights the limitation of a selection whose sole criterion is the Qualis/CAPES classification.
For Price's Law of Elitism, whether by straight or complete counting, we did not identify elite behavior similar to that predicted by the theory, and we found a large number of authors with only one publication. Applying the Generalized Inverse Power model, estimated by Ordinary Least Squares (OLS), we verified that researcher productivity, under straight counting, conforms to Lotka's Law at a significance level of α = 0.01; under complete counting, however, we cannot confirm the hypothesis of homogeneity of the distributions. Moreover, in both counts the productivity parameter n is greater than 2, so the productivity of finance researchers is lower than that posited by the theory.
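The OLS estimation of the Generalized Inverse Power model used in this kind of analysis can be sketched in a few lines. The author counts below are synthetic, generated exactly on Lotka's classic n = 2 law, purely to illustrate the fitting procedure.

```python
import numpy as np

def fit_lotka(pubs, authors):
    """Fit the Generalized Inverse Power model  y = C / x**n  by OLS
    on the log-log transformed data, returning (n, C)."""
    slope, intercept = np.polyfit(np.log10(pubs), np.log10(authors), 1)
    return -slope, 10.0 ** intercept

# Synthetic counts: the number of authors with x papers is
# proportional to 1 / x**2 (Lotka's original exponent).
x = np.arange(1, 11, dtype=float)
y = 1000.0 / x ** 2
n, C = fit_lotka(x, y)
```

On real data the fitted n rarely equals 2 exactly; the abstract's finding of n > 2 means the author-productivity distribution decays faster than Lotka's original law predicts.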
Abstract:
The practice of yoga has become increasingly popular, not only for its physical benefits but mainly for the psychological well-being it brings. One component of yoga is Prãnãyama, or breath control. Attention and respiration are two physiological, involuntary mechanisms required for the execution of Prãnãyama. The main objective of this study was to verify whether continuous EEG variables (the power of the different frequency bands that compose it) are modulated by respiratory control, comparing the two phases of the respiratory cycle (inspiration and expiration) separately under spontaneous and controlled breathing. Nineteen subjects took part in the study (7 men / 12 women, mean age 36.89, SD = ±14.46), who were invited to participate at the Faculty of Health of the Universidade Metodista de São Paulo. The electroencephalogram was recorded with a five-electrode Ag/AgCl montage (FPz, Fz, Cz, Pz, and Oz) attached to a quick-placement cap (Quick-Cap, Neuromedical Supplies®) following the 10-20 system. Maximum power amplitude values (power spectrum in the frequency domain) were obtained in the theta, alpha, beta, and delta bands, and the theta/beta ratio was calculated in the different phases of the respiratory cycle (inspiration and expiration), separately, under spontaneous breathing and respiratory control. The respiratory cycle was recorded with an M01 respiratory effort belt (plethysmograph). The results show significant differences between the spontaneous and controlled breathing conditions, with lower mean theta/beta ratios under controlled breathing than under spontaneous breathing, and mean alpha power consistently higher under respiratory control.
Significant differences were also found when comparing inspiration and expiration under controlled breathing, with a decrease in mean theta/beta ratios during inspiration and an increase in mean alpha power, especially during expiration. The findings of this study provide evidence that respiratory control modulates electrophysiological variables related to attention, reflecting a state of alertness that is nevertheless more relaxed than during spontaneous breathing.
Abstract:
This study evaluates the stance of organizations on digital social media, given that these new virtual environments have drastically changed the way organizations manage relationships with their strategic publics. The main objective of the research is to identify and understand how organizations position themselves in the face of unfavorable comments on digital social media that may affect their image and reputation, and to show the importance of constantly monitoring consumers and engaging in dialogue with them on digital channels to avoid risks to the brand. The methodology is a Multiple Case Study, through which unfavorable comments about the brands Vivo, Tim, and Oi on their Facebook pages were analyzed during September 2015. A research protocol was built, and these brands were monitored by analyzing their posts and the unfavorable comments collected during the period. After these procedures, it was found that the carriers frequently have difficulty relating to their publics on digital social media, which puts their image and reputation at risk.
Abstract:
A genetic annealing model for the universal ancestor of all extant life is presented; the name of the model derives from its resemblance to physical annealing. The scenario pictured starts when “genetic temperatures” were very high, cellular entities (progenotes) were very simple, and information processing systems were inaccurate. Initially, both mutation rate and lateral gene transfer levels were elevated. The latter was pandemic and pervasive to the extent that it, not vertical inheritance, defined the evolutionary dynamic. As increasingly complex and precise biological structures and processes evolved, both the mutation rate and the scope and level of lateral gene transfer, i.e., evolutionary temperature, dropped, and the evolutionary dynamic gradually became that characteristic of modern cells. The various subsystems of the cell “crystallized,” i.e., became refractory to lateral gene transfer, at different stages of “cooling,” with the translation apparatus probably crystallizing first. Organismal lineages, and so organisms as we know them, did not exist at these early stages. The universal phylogenetic tree, therefore, is not an organismal tree at its base but gradually becomes one as its peripheral branchings emerge. The universal ancestor is not a discrete entity. It is, rather, a diverse community of cells that survives and evolves as a biological unit. This communal ancestor has a physical history but not a genealogical one. Over time, this ancestor refined into a smaller number of increasingly complex cell types with the ancestors of the three primary groupings of organisms arising as a result.
Abstract:
Recent studies of corticofugal modulation of auditory information processing indicate that cortical neurons mediate both a highly focused positive feedback to subcortical neurons “matched” in tuning to a particular acoustic parameter and a widespread lateral inhibition to “unmatched” subcortical neurons. This cortical function for the adjustment and improvement of subcortical information processing is called egocentric selection. Egocentric selection enhances the neural representation of frequently occurring signals in the central auditory system. For our present studies performed with the big brown bat (Eptesicus fuscus), we hypothesized that egocentric selection adjusts the frequency map of the inferior colliculus (IC) according to auditory experience based on associative learning. To test this hypothesis, we delivered acoustic stimuli paired with electric leg stimulation to the bat, because such paired stimuli allowed the animal to learn that the acoustic stimulus was behaviorally important and to make behavioral and neural adjustments based on the acquired importance of the acoustic stimulus. We found that acoustic stimulation alone evokes a change in the frequency map of the IC; that this change in the IC becomes greater when the acoustic stimulation is made behaviorally relevant by pairing it with electrical stimulation; that the collicular change is mediated by the corticofugal system; and that the IC itself can sustain the change evoked by the corticofugal system for some time. Our data support the hypothesis.
Abstract:
Childhood exposure to low-level lead can permanently reduce intelligence, but the neurobiologic mechanism for this effect is unknown. We examined the impact of lead exposure on the development of cortical columns, using the rodent barrel field as a model. In all areas of mammalian neocortex, cortical columns constitute a fundamental structural unit subserving information processing. Barrel field cortex contains columnar processing units with distinct clusters of layer IV neurons that receive sensory input from individual whiskers. In this study, rat pups were exposed to 0, 0.2, 1, 1.5, or 2 g/liter lead acetate in their dam's drinking water from birth through postnatal day 10. This treatment, which coincides with the development of segregated columns in the barrel field, produced blood lead concentrations from 1 to 31 μg/dl. On postnatal day 10, the area of the barrel field and of individual barrels was measured. A dose-related reduction in barrel field area was observed (Pearson correlation = −0.740; P < 0.001); mean barrel field area in the highest exposure group was decreased 12% versus controls. Individual barrels in the physiologically more active caudoventral group were affected preferentially. Total cortical area measured in the same sections was not altered significantly by lead exposure. These data support the hypothesis that lead exposure may impair the development of columnar processing units in immature neocortex. We demonstrate that low levels of blood lead, in the range seen in many impoverished inner-city children, cause structural alterations in a neocortical somatosensory map.
Abstract:
Neocortex, a new and rapidly evolving brain structure in mammals, has a similar layered architecture in species over a wide range of brain sizes. Larger brains require longer fibers to communicate between distant cortical areas; the volume of the white matter that contains long axons increases disproportionally faster than the volume of the gray matter that contains cell bodies, dendrites, and axons for local information processing, according to a power law. The theoretical analysis presented here shows how this remarkable anatomical regularity might arise naturally as a consequence of the local uniformity of the cortex and the requirement for compact arrangement of long axonal fibers. The predicted power law with an exponent of 4/3 minus a small correction for the thickness of the cortex accurately accounts for empirical data spanning several orders of magnitude in brain sizes for various mammalian species, including human and nonhuman primates.
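The claimed scaling can be made concrete with a quick numerical check: on synthetic volumes placed exactly on the predicted law, a log-log fit recovers the 4/3 exponent. The data and constant below are invented for illustration, and the small cortical-thickness correction mentioned in the abstract is ignored.

```python
import numpy as np

# Hypothetical gray-matter volumes spanning five orders of magnitude,
# with white-matter volumes placed exactly on the predicted power law
# W = k * G**(4/3); k = 0.1 is an arbitrary illustrative constant.
gray = np.logspace(0, 5, 20)
white = 0.1 * gray ** (4.0 / 3.0)

# The exponent is the slope of the log-log relationship.
slope, _ = np.polyfit(np.log10(gray), np.log10(white), 1)
```

This is how the empirical exponent is measured in practice: white- and gray-matter volumes across species are log-transformed and the slope of the regression line is compared against the theoretical 4/3.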
Abstract:
Motifs of neural circuitry seem surprisingly conserved over different areas of neocortex or of paleocortex, while performing quite different sensory processing tasks. This apparent paradox may be resolved by the fact that seemingly different problems in sensory information processing are related by transformations (changes of variables) that convert one problem into another. The same basic algorithm that is appropriate to the recognition of a known odor quality, independent of the strength of the odor, can be used to recognize a vocalization (e.g., a spoken syllable), independent of whether it is spoken quickly or slowly. To convert one problem into the other, a new representation of time sequences is needed. The time that has elapsed since a recent event must be represented in neural activity. The electrophysiological hallmarks of cells that are involved in generating such a representation of time are discussed. The anatomical relationships between olfactory and auditory pathways suggest relevant experiments. The neurophysiological mechanism for the psychophysical logarithmic encoding of time duration would be of direct use for interconverting olfactory and auditory processing problems. Such reuse of old algorithms in new settings and representations is related to the way that evolution develops new biochemistry.
Abstract:
In optimal foraging theory, search time is a key variable defining the value of a prey type. But the sensory-perceptual processes that constrain the search for food have rarely been considered. Here we evaluate the flight behavior of bumblebees (Bombus terrestris) searching for artificial flowers of various sizes and colors. When flowers were large, search times correlated well with the color contrast of the targets with their green foliage-type background, as predicted by a model of color opponent coding using inputs from the bees' UV, blue, and green receptors. Targets that made poor color contrast with their backdrop, such as white, UV-reflecting ones, or red flowers, took longest to detect, even though brightness contrast with the background was pronounced. When searching for small targets, bees changed their strategy in several ways. They flew significantly slower and closer to the ground, so increasing the minimum detectable area subtended by an object on the ground. In addition, they used a different neuronal channel for flower detection. Instead of color contrast, they used only the green receptor signal for detection. We relate these findings to temporal and spatial limitations of different neuronal channels involved in stimulus detection and recognition. Thus, foraging speed may not be limited only by factors such as prey density, flight energetics, and scramble competition. Our results show that understanding the behavioral ecology of foraging can substantially gain from knowledge about mechanisms of visual information processing.
Abstract:
Two and a half millennia ago Pythagoras initiated the scientific study of the pitch of sounds; yet our understanding of the mechanisms of pitch perception remains incomplete. Physical models of pitch perception try to explain from elementary principles why certain physical characteristics of the stimulus lead to particular pitch sensations. There are two broad categories of pitch-perception models: place or spectral models consider that pitch is mainly related to the Fourier spectrum of the stimulus, whereas for periodicity or temporal models its characteristics in the time domain are more important. Current models from either class are usually computationally intensive, implementing a series of steps more or less supported by auditory physiology. However, the brain has to analyze and react in real time to an enormous amount of information from the ear and other senses. How is all this information efficiently represented and processed in the nervous system? A proposal of nonlinear and complex systems research is that dynamical attractors may form the basis of neural information processing. Because the auditory system is a complex and highly nonlinear dynamical system, it is natural to suppose that dynamical attractors may carry perceptual and functional meaning. Here we show that this idea, scarcely developed in current pitch models, can be successfully applied to pitch perception.