908 results for Task-to-core mapping
Abstract:
One of the major challenges in evolutionary robotics is the need for the robot to make decisions on its own, in accordance with its multiple programmed tasks, optimizing its timing and power consumption. In this paper, we present a new automatic decision-making mechanism for a guide robot that allows it to make the best choice to reach its aims, performing its tasks in an optimal way. The selection of the best alternative is based on a series of criteria and restrictions on the tasks to perform. The software developed in the project has been verified on the tour-guide robot Urbano. The most important aspect of this proposal is that the design uses learning as the means to optimize the quality of the decision making. The quality index of the best choice to perform is modeled using fuzzy logic; it represents the beliefs of the robot, which continue to evolve to match the "external reality". This fuzzy system is used to select the most appropriate set of tasks to perform during the day. With this tool, the tour-guide robot prepares its daily agenda, which satisfies the objectives and restrictions, and identifies the best task to perform at each moment. This work is part of the ARABOT project of the Intelligent Control Research Group at the Universidad Politécnica de Madrid to create "awareness" in a guide robot.
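As a rough illustration of a fuzzy quality index of this kind (the task attributes, membership functions and rules below are invented for the sketch and are not the rule base used on Urbano), each candidate task can be scored and the agenda ordered by the resulting index:

```python
# Minimal sketch of a fuzzy quality index for task selection.
# Attributes, membership functions and rules are illustrative
# assumptions, not the rule base used on the Urbano robot.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x == b:
        return 1.0
    if x < a or x > c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def quality(task):
    """Fuzzy quality index in [0, 1] for a candidate task."""
    urgent = tri(task["deadline_h"], 0.0, 0.0, 4.0)       # sooner deadline = more urgent
    cheap = tri(task["energy_wh"], 0.0, 0.0, 50.0)        # less energy = better
    liked = tri(task["visitor_interest"], 0.0, 1.0, 1.0)  # learned from visitor feedback
    # Mamdani-style aggregation: AND as min, OR as max.
    return max(min(urgent, cheap), min(liked, cheap))

agenda = [
    {"name": "guided_tour", "deadline_h": 1.0, "energy_wh": 30.0, "visitor_interest": 0.8},
    {"name": "recharge", "deadline_h": 6.0, "energy_wh": 5.0, "visitor_interest": 0.1},
]
print(max(agenda, key=quality)["name"])  # task the robot would pick next
```

Learning would then amount to adjusting the membership-function parameters so that the index tracks observed task outcomes.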
Abstract:
A commonly considered scenario for the sustainable, long-term use of nuclear energy envisages a fleet of liquid metal cooled fast reactors (LMFR) dedicated to Pu recycling and the transmutation of minor actinides (MA). Another option is to combine such reactors with some accelerator-driven subcritical systems (ADS) devoted exclusively to MA elimination. The design and licensing of these innovative reactors require practical and accurate computational tools that incorporate the knowledge gained from experimental research on new reactor configurations, materials and systems. Although a number of fast reactors have been built and operated worldwide, operational experience is still limited, especially for lead-cooled reactors, and not all transients are fully understood. Safety analyses of new LMFR are therefore based mainly on deterministic methods, unlike the modern approaches for light water reactors (LWR), which also benefit from probabilistic methods. The approach most commonly adopted in LMFR safety studies is to use a variety of codes, developed on the basis of different theories, in search of comprehensive solutions for transients, including uncertainties. In this framework, new best-estimate codes, which avoid conservative approximations, are of prime importance for analyzing steady states and transients in fast reactors.

This thesis focuses on the development of a coupled code system for realistic analyses of critical fast reactors using the Monte Carlo method. Nowadays, given the greater availability of computational resources, Monte Carlo neutron transport codes can be used in practice for detailed full-core calculations, even for highly heterogeneous material configurations. Moreover, Monte Carlo codes are normally taken as the reference for deterministic multigroup diffusion codes in fast reactor applications, because they use point-wise cross sections and an exact geometric model, and they intrinsically account for the angular dependence of the flux. This thesis presents a coupling methodology between the well-known code MCNP, which computes the power generation in the reactor, and the subchannel thermal-hydraulics code COBRA-IV, which obtains the temperature and density distributions in the system. COBRA-IV is well suited to fast reactor applications, since it has been validated against experimental results in sodium-cooled rod bundles and includes the most appropriate correlations for liquid metals. In a first phase of the thesis, both codes were coupled at steady state using an iterative method with external file exchange. The main issue in steady-state neutronic and thermal-hydraulic coupling with Monte Carlo codes is the handling of cross sections to account for Doppler broadening as the fuel temperature rises. Among all available options, the pseudo-materials approach was chosen in this thesis and has been shown to provide acceptable results in fast reactor applications.

Furthermore, the geometric changes caused by large temperature gradients in the core of a fast reactor are important for the neutronics, as a consequence of the long neutron mean free path in these systems. An additional module has therefore been developed that simulates the reactor geometry in the hot state and allows the reactivity effect of core expansion during a transient to be estimated. This module automatically calculates the fuel length, the cladding radius, the fuel assembly pitch and the diagrid radius as functions of temperature. This effect is very relevant in transients without insertion of the shutdown banks. Also related to geometric changes, a tool has been implemented that automates the movement of the control rods in search of reactor criticality, or alternatively calculates the control rod worth for a given axial insertion.

A second phase of the calculation platform is dynamic simulation. Since MCNP performs only steady-state calculations for critical or supercritical systems, the most direct solution proposed, without modifying the MCNP source code, is to use the flux factorization approach, which solves the flux shape and amplitude separately. Two approximations have been studied in depth: the adiabatic and the quasistatic methods. The adiabatic method uses a coupling scheme that staggers the neutronic and thermal-hydraulic calculations in time. MCNP computes the fundamental mode of the neutron distribution and the reactivity at the end of each time step, while COBRA-IV computes the thermal properties at the midpoint of each time step. The evolution of the flux amplitude is obtained by solving the point kinetics equations. This method calculates a static reactivity at each time step that, in general, differs from the dynamic reactivity that would be obtained with the exact time-dependent flux distribution. Nevertheless, for conditions not too far from criticality, both reactivities are similar and the method yields acceptable practical results. Along this line, an improved method was subsequently developed to account for the effect of the delayed neutron source on the evolution of the flux shape during the transient. The scheme performs one quasistationary calculation per time step with MCNP. The quasistationary simulation is based on the constant delayed neutron source approximation and assigns a weight, or importance, to each computational cycle of the MCNP criticality calculation in the estimation of the final flux. Both methods have been verified against the diffusion code COBAYA3 on a common and sufficiently significant kinetics exercise. Finally, to demonstrate the practical applicability of the code, a transient in the MYRRHA/FASTEF critical reactor concept, currently in the design phase, rated at 100 MW thermal power and cooled by lead-bismuth eutectic, was simulated with the adiabatic method as an application example in a real system.
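The flux-amplitude evolution mentioned above is governed by the standard point kinetics equations, with amplitude $n(t)$, reactivity $\rho(t)$, total and group-wise delayed neutron fractions $\beta$ and $\beta_k$, mean generation time $\Lambda$, and precursor concentrations $C_k(t)$ with decay constants $\lambda_k$:

$$\frac{dn}{dt} = \frac{\rho(t) - \beta}{\Lambda}\, n(t) + \sum_k \lambda_k C_k(t), \qquad \frac{dC_k}{dt} = \frac{\beta_k}{\Lambda}\, n(t) - \lambda_k C_k(t).$$

The pseudo-materials approach for Doppler broadening is commonly implemented by mixing, in each fuel cell, two cross-section libraries tabulated at temperatures bracketing the local fuel temperature, with weights interpolated in the square root of temperature, since resonance broadening scales roughly with the square root of the absolute temperature. The sketch below illustrates only that weighting; the library temperatures and material-card notation are illustrative assumptions, not the thesis's input decks:

```python
import math

def pseudo_material_fractions(T_fuel, T_low, T_high):
    """Mixing weights for two libraries bracketing the fuel temperature.

    Interpolation is done in sqrt(T), since Doppler broadening of
    resonances scales approximately with the square root of the
    absolute fuel temperature.
    """
    w_high = (math.sqrt(T_fuel) - math.sqrt(T_low)) / (
        math.sqrt(T_high) - math.sqrt(T_low))
    return 1.0 - w_high, w_high

# Example: fuel at 1100 K, libraries tabulated at 900 K and 1200 K.
w_low, w_high = pseudo_material_fractions(1100.0, 900.0, 1200.0)
# The weights scale the atomic density N of the same nuclide taken
# from the two temperature-specific libraries, e.g. on a material card:
#   92238.<id_900K>   N * w_low
#   92238.<id_1200K>  N * w_high
# where <id_...> are placeholders for library identifiers.
print(round(w_low, 3), round(w_high, 3))
```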
Abstract:
The difficulty of Wireless Sensor Network deployments in industrial environments lies not only in the number of nodes or the communication protocols, but also in the actual placement of the sensor nodes and the parameters to be monitored. Sensor soiling, high humidity and unreachable locations, among other factors, make real deployments very difficult to plan. Even though the state of the art offers a myriad of floor planners and deployment tools, most of these problems are very difficult to model and foresee before actually deploying the network in the final scenario. This work describes two real deployments in food factories and how their problems were identified and overcome.
Abstract:
Human ability to switch from one cognitive task to another involves both endogenous preparation without an external stimulus and exogenous adjustment in response to the external stimulus. In an event-related functional MRI study, participants performed pairs of tasks that were either the same (task repetition) or different (task switch). On half of the trials, foreknowledge about task repetition or task switch was available; on the other half, it was not. Endogenous preparation seems to involve lateral prefrontal cortex (BA 46/45) and posterior parietal cortex (BA 40). During preparation, foreknowledge was associated with higher activation increases in inferior lateral prefrontal cortex and superior posterior parietal cortex than no foreknowledge. Exogenous adjustment seems generally to involve superior prefrontal cortex (BA 8) and posterior parietal cortex (BA 39/40). During a task switch with no foreknowledge, activations in these areas were relatively higher than during a task repetition with no foreknowledge. These results suggest that endogenous preparation and exogenous adjustment for a task switch may be independent processes involving different brain areas.
Abstract:
Local protein structure prediction efforts have consistently failed to exceed approximately 70% accuracy. We characterize the degeneracy of the mapping from local sequence to local structure responsible for this failure by investigating the extent to which similar sequence segments found in different proteins adopt similar three-dimensional structures. Sequence segments 3-15 residues in length from 154 different protein families are partitioned into neighborhoods containing segments with similar sequences using cluster analysis. The consistency of the sequence-to-structure mapping is assessed by comparing the local structures adopted by sequence segments in the same neighborhood in proteins of known structure. In the 154 families, 45% and 28% of the positions occur in neighborhoods in which one and two local structures predominate, respectively. The sequence patterns that characterize the neighborhoods in the first class probably include virtually all of the short sequence motifs in proteins that consistently occur in a particular local structure. These patterns, many of which occur in transitions between secondary structural elements, are an interesting combination of previously studied and novel motifs. The identification of sequence patterns that consistently occur in one or a small number of local structures in proteins should contribute to the prediction of protein structure from sequence.
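A toy sketch of this neighborhood analysis (the paper builds neighborhoods by cluster analysis over sequence similarity, not by the crude exact-prefix key below, and the data here are invented examples): bucket fixed-length segments, then count how many distinct local structures each bucket contains.

```python
from collections import defaultdict

# Toy version of the sequence-to-structure degeneracy analysis:
# group segments into "neighborhoods" and check whether a single
# local structure predominates in each.

segments = [
    # (sequence segment, local structure: H=helix, E=strand, C=coil)
    ("ALKVL", "HHHHH"),
    ("ALKVI", "HHHHH"),
    ("GPNGD", "CCCCC"),
    ("GPNGE", "CCCCC"),
    ("VTIKA", "EEEEE"),
    ("VTIKA", "HHHHH"),  # same sequence, different structure: degeneracy
]

neighborhoods = defaultdict(set)
for seq, struct in segments:
    neighborhoods[seq[:3]].add(struct)  # stand-in for a real cluster id

for key, structs in sorted(neighborhoods.items()):
    label = "consistent" if len(structs) == 1 else f"{len(structs)} structures"
    print(key, label)
```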
Abstract:
A precise and rapid method for identifying sites of interaction between proteins was demonstrated. The basis of the method, termed affinity-directed mass spectrometry, is direct mass spectrometric readout from the complex to determine the specific components of the proteins that interact. The strategy was used to define the region of interaction of a protein growth factor with a monoclonal antibody. A combination of proteolytic digestion and affinity-directed mass spectrometry was used to rapidly determine the approximate location of a continuous binding epitope within the growth factor. The precise boundaries of the binding epitope were then determined by affinity-directed mass spectrometric analysis of sets of synthetic peptide ladders that span the approximate binding region. In addition to the mapping of such linear epitopes, affinity-directed mass spectrometry can be applied to the mapping of other types of molecule-molecule contacts, including ligand-receptor and protein-oligonucleotide interactions.
Abstract:
We have developed a surface mounting technology for the rapid construction of ordered restriction maps from individual DNA molecules. Optical restriction maps constructed from yeast artificial chromosome DNA molecules mounted on specially derivatized glass surfaces are accurate and reproducible, and the technology is amenable to automation. The mounting procedures described here should also be useful for fluorescence in situ hybridization studies. We believe these improvements to optical mapping will further stimulate the development of nonelectrophoretic approaches to genome analysis.
Abstract:
[From the Introduction]. Information gives knowledge, and knowledge gives power. Though in all EC Member States the task of protecting the environment is assigned to the administration, it is obvious that the administration is not the owner of the environment. The environment is everybody's. It is for this reason that administrative decisions which affect the environment must be transparent and open, and must strike a balance between the general interest in preserving, protecting and improving the quality of the environment on the one hand, and the satisfaction of specific private or public interests on the other. In order to allow at least a certain degree of control over whether the administration strikes the right balance between the need to protect the environment and other, more or less legitimate, needs, it appears normal and self-evident that information on the environment held by public authorities should also be made available to the public and to citizens.
Abstract:
The European Commission’s Action Plan consists, in a nutshell, of a short list of technical proposals and a longer one of (rather general) potential actions. Overall, the plan indeed proposes to achieve some short-term objectives, such as a reduction of listing costs for SMEs, but it lacks long-term vision. The plan bundles actions under rather generic objectives of long-term finance or cross-border investing. Improving the informational infrastructure (e.g. accounting standards, company data) and cross-border enforcement of rules is left to vaguely defined future actions, but these constitute the core of the capital markets infrastructure. Without a well-defined set of measurable objectives, the whole plan may lose political momentum and become an opportunity for interested parties to cherry pick their pet provisions. Building a single market, i.e. removing cross-border obstacles to capital circulation, is too challenging a task to simply appear as one of many items on a long list of general objectives, which incidentally do not include institutional reform. The ultimate risk is that the Commission may just miss a unique opportunity to revamp and improve the financial integration process in Europe after almost a decade of harmful financial retrenchment.
Abstract:
The capillary-pressure characteristics of 22 samples of lithified post-Paleozoic Indian-Ocean carbonates were compared to published data from older carbonate rocks (lower Paleozoic Hunton Group of Texas and Oklahoma). The Indian-Ocean samples are considerably more porous than are the Paleozoic samples, yet all of the Indian-Ocean samples fit readily into a descriptive petrofacies scheme previously established for the Hunton Group. The Indian-Ocean samples may be assigned to four petrophysical facies (petrofacies) based on the shapes of their capillary-pressure curves, their pore-throat-size distributions, their estimated recovery efficiency values (for nonwetting fluids), and the visual characteristics of their pore systems, as observed with a scanning-electron microscope. Petrofacies assignments for the Indian-Ocean samples are as follows. Petrofacies I includes six samples collected from the coarse basal portions of event deposits (primarily turbidites). These samples have large throats, leptokurtic throat-size distributions, low-to-moderate recovery efficiency values, concave cumulative-intrusion capillary-pressure curves, and high porosity values. Petrofacies II includes two sedimentologically dissimilar samples that have medium-size throats, platykurtic throat-size distributions, moderate-to-high recovery efficiency values, gently sloping cumulative-intrusion capillary-pressure curves, and high porosity values. Petrofacies III includes two polymictic sandstones and a skeletal packstone that have small throats, polymodal throat-size distributions, moderate recovery efficiency values, gently sloping cumulative-intrusion capillary-pressure curves, and high porosity values. Petrofacies IV includes 11 samples, mostly recrystallized neritic carbonates, that have small throats, leptokurtic throat-size distributions, high recovery efficiency values, convex cumulative-intrusion capillary-pressure curves, and low porosity values. Comparison of petrofacies assignment to core-, thin-section-, and smear-slide data, and to inferred depositional setting, suggests that pore systems in most samples from Holes 765C and 766A result from primary depositional features, whereas pore systems in samples from Hole 761C and one sample from Hole 765C have been strongly influenced by diagenetic processes. For Hole 761C, prediction of petrophysical parameters should be most successful if based on diagenetic facies patterns. By contrast, the distribution of favorable reservoir facies and of permeability barriers in less highly altered rocks collected from Holes 765C and 766A is related to depositional patterns. Recovery efficiency is inversely related to both porosity and median throat size for the present data set. This relationship is similar to that observed for carbonates of the lower Paleozoic Hunton Group and the Ordovician Ellenburger dolomite, but opposite to that observed for some other ancient carbonates. The coarse deposits of the massive basal units of turbidites are petrophysically distinct and form a coherent petrophysical group (Petrofacies I) with substantial reservoir potential. Two samples assigned to Petrofacies I have extremely large throats (median throat size at least 4 µm, and at least six times that of any other sample) and therefore high permeability values. These two samples come from thin, coarse turbidites that lack or have poorly developed fine divisions and are interpreted to have been deposited on channeled suprafan lobes in a proximal mid-fan setting.
The restriction of extremely high permeability values to a single depositional facies suggests that careful facies mapping of deep-sea fans in a deliberate search for such coarse turbidites could dramatically enhance the success of exploration for aquifers or hydrocarbon reservoirs. Such reservoirs should have substantial vertical heterogeneity. They should have high lateral permeability values but low vertical permeability values, and reservoir sections should include numerous thin units having widely differing petrophysical characteristics.
Abstract:
The aim of this research is to provide insight into how middle school learners experience an inclusive multicultural learning environment. Increasing diversity is challenging European educational systems, which face the arduous task of fostering the inclusion of learners with diverse educational needs. In order to explore the participants' descriptions, a qualitative approach based on semi-structured interviews with six learners was employed. Learners' positions in the educational scenery are central and unique; they are the main experts on their own situations and therefore precious contributors to educational research. Results have been discussed from a sociocultural perspective. The analysis of my data suggests that the learners perceive their inclusive environment as beneficial. Moreover, they perceive their cultural diversity as a strength, regard social interaction and teamwork with peers as favorable conditions for learning, feel competent in multicultural communication, and believe that respect and acceptance towards others are necessary common values. Some implications of multiculturalism in special education are discussed in the light of a recent European study, which shows that in all the participating European countries, Sweden included, there is a consistent discrepancy in the proportion of learners with an immigrant background within special education. Assessment methods developed for mono-cultural learners appear to be a plausible explanation for why multicultural learners are over- or under-represented in special education. Research also shows that the inclusion of diversity in the educational environment enables the development of social skills in all learners.
Abstract:
Studies examining dual adaptation to opposing novel environments have yielded contradictory results, with previous evidence supporting both successful dual adaptation and interference leading to poorer adaptive performance. Whether or not interference is observed during dual adaptation appears to depend on the method used to allow the performer of the task to distinguish between the two novel environments. This experiment tested whether colour cues, a separation in workspace, and the presentation schedule could be used to distinguish between two opposing visuomotor rotations and enable dual adaptation. Using a purpose-designed manipulandum, each visuomotor rotation was presented either in the same region of workspace and associated with colour cues (Group 1), in different regions of workspace in addition to colour cues (Groups 2 and 3), or in different regions of workspace only (Groups 4 and 5). We also assessed the effectiveness of the workspace separation with both randomised and alternating presentation schedules (Groups 4 and 5). The results indicated that colour cues were not effective at enabling dual adaptation when each of the visuomotor rotations was associated with the same region of workspace. When associated with different regions of workspace, however, dual adaptation to the opposing rotations was successful regardless of whether colour cues were present or of the type of presentation schedule.
Abstract:
In this paper we discuss a fast Bayesian extension to kriging algorithms which has been used successfully for fast, automatic mapping in emergency conditions in the Spatial Interpolation Comparison 2004 (SIC2004) exercise. The application of kriging to automatic mapping raises several issues, such as robustness, scalability, speed and parameter estimation. Various ad hoc solutions have been proposed and used extensively, but they lack a sound theoretical basis. In this paper we show how observations can be projected onto a representative subset of the data without losing significant information. This allows the complexity of the algorithm to grow as O(nm²), where n is the total number of observations and m is the size of the subset of the observations retained for prediction. The main contribution of this paper is to further extend this projective method through the application of space-limited covariance functions, which can be used as an alternative to the commonly used covariance models. In many real-world applications the correlation between observations essentially vanishes beyond a certain separation distance, so it makes sense to use a covariance model that encompasses this belief, since this leads to sparse covariance matrices for which optimised sparse matrix techniques can be used. In the presence of extreme values we show that space-limited covariance functions offer an additional benefit: they maintain the smoothness locally but at the same time lead to a more robust and compact global model. We show the performance of this technique coupled with the sparse extension to the kriging algorithm on synthetic data and outline a number of computational benefits such an approach brings. To test the relevance to automatic mapping we apply the method to the data used in a recent comparison of interpolation techniques (SIC2004) to map the levels of background ambient gamma radiation. © Springer-Verlag 2007.
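To illustrate the sparsity argument, a compactly supported model such as the spherical covariance is exactly zero beyond its range, so distant pairs contribute structural zeros to the covariance matrix. A minimal sketch, with an illustrative range and synthetic locations (not the SIC2004 configuration):

```python
import numpy as np
from scipy import sparse
from scipy.spatial.distance import pdist, squareform

def spherical_cov(h, sill=1.0, rng=1.0):
    """Spherical covariance model: exactly zero beyond distance `rng`."""
    h = np.asarray(h, dtype=float)
    c = sill * (1.0 - 1.5 * (h / rng) + 0.5 * (h / rng) ** 3)
    return np.where(h < rng, c, 0.0)

gen = np.random.default_rng(0)
pts = gen.uniform(0.0, 10.0, size=(500, 2))  # synthetic observation locations
D = squareform(pdist(pts))                   # dense here only for clarity
K = sparse.csr_matrix(spherical_cov(D, sill=1.0, rng=1.5))
print(f"nonzero fraction: {K.nnz / (500 * 500):.3f}")
```

In practice one would avoid forming the dense distance matrix at all, e.g. by querying a k-d tree for the pairs within the range, which is where the real computational benefit of space-limited covariances comes from.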
Abstract:
The ERS-1 satellite was launched in July 1991 by the European Space Agency into a polar orbit at about 800 km, carrying a C-band scatterometer. A scatterometer measures the amount of backscatter microwave radiation reflected by small ripples on the ocean surface induced by sea-surface winds, and so provides instantaneous snap-shots of wind flow over large areas of the ocean surface, known as wind fields. Inherent in the physics of the observation process is an ambiguity in wind direction; the scatterometer cannot distinguish whether the wind is blowing toward or away from the sensor device. This ambiguity implies that there is a one-to-many mapping between scatterometer data and wind direction. Current operational methods for wind field retrieval are based on the retrieval of wind vectors from satellite scatterometer data, followed by a disambiguation and filtering process that is reliant on numerical weather prediction models. The wind vectors are retrieved by the local inversion of a forward model, mapping scatterometer observations to wind vectors, and minimising a cost function in scatterometer measurement space. This thesis applies a pragmatic Bayesian solution to the problem. The likelihood is a combination of conditional probability distributions for the local wind vectors given the scatterometer data. The prior distribution is a vector Gaussian process that provides the geophysical consistency for the wind field. The wind vectors are retrieved directly from the scatterometer data by using mixture density networks, a principled method for modelling multi-modal conditional probability density functions. The complexity of the mapping and the structure of the conditional probability density function are investigated. A hybrid mixture density network, which incorporates the knowledge that the conditional probability distribution of the observation process is predominantly bi-modal, is developed. The optimal model, which generalises across a swathe of scatterometer readings, is better on key performance measures than the current operational model. Wind field retrieval is approached from three perspectives. The first is a non-autonomous method that confirms the validity of the model by retrieving the correct wind field 99% of the time from a test set of 575 wind fields. The second technique takes as the prediction the maximum a posteriori probability wind field retrieved from the posterior distribution. For the third technique, Markov Chain Monte Carlo (MCMC) techniques were employed to estimate the mass associated with significant modes of the posterior distribution, and predictions were made based on the mode with the greatest mass associated with it. General methods for sampling from multi-modal distributions were benchmarked against a specific MCMC transition kernel designed for this problem. It was shown that the general methods were unsuitable for this application due to computational expense. On a test set of 100 wind fields the MAP estimate correctly retrieved 72 wind fields, whilst the sampling method correctly retrieved 73 wind fields.
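To make the mixture density network idea concrete, the sketch below evaluates a generic two-component Gaussian MDN head: the network's raw output vector is split into mixing logits, means and log-spreads, and the conditional density is the resulting mixture. This is a simplified stand-in, not the hybrid bi-modal architecture developed in the thesis, and it ignores the circular nature of wind direction.

```python
import numpy as np

def mdn_density(z, t, n_components=2):
    """Conditional density p(t | x) from one raw network output vector z.

    z packs, per component: a mixing logit, a mean and a log-spread,
    so len(z) == 3 * n_components. Softmax keeps mixing coefficients
    positive and summing to one; exp keeps spreads positive.
    """
    logits = z[:n_components]
    mu = z[n_components:2 * n_components]
    sigma = np.exp(z[2 * n_components:])
    pi = np.exp(logits - logits.max())
    pi /= pi.sum()
    comp = np.exp(-0.5 * ((t - mu) / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)
    return float(np.sum(pi * comp))

# Two modes roughly 180 degrees apart, mimicking the toward/away ambiguity:
z = np.array([0.0, 0.0,                    # equal mixing logits
              0.0, np.pi,                  # component means (radians)
              np.log(0.3), np.log(0.3)])   # component spreads
print(mdn_density(z, 0.0), mdn_density(z, np.pi / 2))
```

Training such a model minimises the negative log of this density over the data; the multi-modality lets the network keep both the "toward" and "away" directions alive instead of averaging them into a meaningless intermediate direction.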
Abstract:
Patients with Bipolar Disorder (BD) perform poorly on tasks of selective attention and inhibitory control. Although similar behavioural deficits have been noted in their relatives, it remains unclear whether they reflect dysfunction in the same neural circuits. We used functional magnetic resonance imaging and the Stroop Colour Word Task to compare task-related neural activity between 39 euthymic BD patients, 39 of their first-degree relatives (25 with no Axis I disorders and 14 with Major Depressive Disorder, MDD) and 48 healthy controls. Compared to controls, all individuals with familial predisposition to BD, irrespective of diagnosis, showed similar reductions in neural responsiveness in regions involved in selective attention within the posterior and inferior parietal lobules. In contrast, hypoactivation within fronto-striatal regions, implicated in inhibitory control, was observed only in BD patients and MDD relatives. Although striatal deficits were comparable between BD patients and their MDD relatives, right ventrolateral prefrontal dysfunction was uniquely associated with BD. Our findings suggest that while reduced parietal engagement relates to genetic risk, fronto-striatal dysfunction reflects processes underpinning disease expression for mood disorders. © 2011 Elsevier Inc.