895 results for Reactive power sources


Relevance: 30.00%

Abstract:

Renewable energy sources are believed to drastically reduce the greenhouse gas emissions that would otherwise be generated by the fossil fuels used to produce electricity. This implies that a unit of renewable energy replaces a unit of fossil-fuel energy, with its CO2 emissions, on an equivalent basis (with no other effects on the grid). However, fuel use and emissions in existing power systems are not proportional to the electricity production of intermittent sources, because of the cycling of the fossil-fuel plants that make up the balance of the grid (i.e., changing the power output makes thermal units operate less efficiently). This study focuses on the interactions between wind generation and thermal-plant cycling, establishing the extra fuel use caused by the decreased efficiency of the fossil back-up for wind electricity in Spain. We analyze the production of all thermal plants in 2011, studying different scenarios in which wind penetration causes major deviations in scheduling, and we define a procedure for quantifying the carbon reductions using emission factors and efficiency curves from the existing installations. The objectives are to discuss the real contribution of renewable energies to the environmental targets and to suggest alternatives that would improve the reliability of future power systems.
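The accounting described above can be illustrated with a toy calculation. The sketch below (illustrative numbers and a deliberately simplified model, not the study's measured Spanish plant data) shows how a cycling-degraded back-up efficiency erodes the naive one-for-one CO2 savings attributed to wind:

```python
# Toy accounting of CO2 avoided by wind when fossil back-up plants cycle.
# All numbers are illustrative, not the study's measured data.

def avoided_co2(wind_mwh, emission_factor, base_eff, cycling_eff):
    """Tonnes of CO2 avoided by `wind_mwh` of wind generation.

    emission_factor: tCO2 per MWh of fossil electricity at base efficiency.
    base_eff, cycling_eff: back-up plant efficiency without / with cycling.
    Simplification: cycling degrades the efficiency of an amount of fossil
    generation equal to the wind energy integrated.
    """
    naive_savings = wind_mwh * emission_factor
    # Fuel use (and CO2) per fossil MWh scales with base_eff / cycling_eff.
    extra_per_mwh = emission_factor * (base_eff / cycling_eff - 1.0)
    return naive_savings - wind_mwh * extra_per_mwh

# 1000 MWh of wind vs. a gas plant (0.37 tCO2/MWh) cycled from 55% to 50%
# efficiency: real savings fall to 333 t instead of the naive 370 t.
savings = avoided_co2(1000, 0.37, 0.55, 0.50)
```

With these numbers, a modest efficiency penalty on the back-up plant already wipes out roughly a tenth of the nominal savings; the study quantifies the real penalty from measured efficiency curves.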

Relevance: 30.00%

Abstract:

Helium Brayton cycles have been studied as power cycles for both fission and fusion reactors, achieving high thermal efficiency. This paper studies several technological schemes of helium Brayton cycles applied to the HiPER reactor proposal. Since HiPER integrates technologies available in the short term, its working conditions result in a very low maximum temperature of the energy sources, which limits the thermal performance of the cycle. The aim of this work is to analyze the potential of helium Brayton cycles as power cycles for HiPER. Several helium Brayton cycle configurations have been investigated with the purpose of raising the cycle thermal efficiency under the working conditions of HiPER. The effects of inter-cooling and reheating have been studied specifically. Sensitivity analyses of the key cycle parameters and of the component performances on the maximum thermal efficiency have also been carried out. The addition of several inter-cooling stages in a helium Brayton cycle yields a maximum thermal efficiency of over 36%, and the inclusion of a reheating process may add nearly 1 percentage point more, reaching 37%. These results confirm that helium Brayton cycles are to be considered among the power cycle candidates for HiPER.
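The benefit of inter-cooling can be sketched with the ideal-gas compression-work formula: splitting compression into N inter-cooled stages (equal pressure ratios, cooling back to the inlet temperature) lowers the total compression work and hence raises the net cycle output. The numbers below are generic (a 4:1 pressure ratio, 300 K inlet), not the HiPER design point:

```python
# Why inter-cooling helps a helium Brayton cycle: total ideal compression
# work drops as the number of inter-cooled stages grows.
# Generic numbers, not the HiPER design point.

GAMMA = 5.0 / 3.0   # helium (monatomic ideal gas)
CP = 5193.0         # helium cp, J/(kg K)

def compression_work(t_in_k, pressure_ratio, n_stages):
    """Ideal specific compression work (J/kg) with n_stages inter-cooled
    stages sharing the overall pressure ratio equally."""
    exp = (GAMMA - 1.0) / GAMMA
    r_stage = pressure_ratio ** (1.0 / n_stages)
    return n_stages * CP * t_in_k * (r_stage ** exp - 1.0)

w1 = compression_work(300.0, 4.0, 1)  # no inter-cooling
w2 = compression_work(300.0, 4.0, 2)  # one inter-cooler
w3 = compression_work(300.0, 4.0, 3)  # two inter-coolers
# w1 > w2 > w3: each added stage helps, with diminishing returns.
```

The diminishing returns visible in the sweep are consistent with the abstract's finding that several stages are needed to push the efficiency past 36%.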

Relevance: 30.00%

Abstract:

The requirement of delivering high data rates in modern wireless communication systems results in complex modulated RF signals with wide bandwidth and high peak-to-average power ratio (PAPR). To guarantee linearity, conventional linear power amplifiers typically work at 4 to 10 dB of back-off from the maximum output power, leading to low system efficiency. Envelope elimination and restoration (EER) and envelope tracking (ET) are two promising techniques to overcome this efficiency problem. In both EER and ET, it is challenging to design an efficient envelope amplifier for wide-bandwidth, high-PAPR RF signals. A usual approach for the envelope amplifier includes a high-efficiency switching power converter operating at a frequency higher than the RF signal's bandwidth. In this case, the converter's power loss caused by high-frequency switching becomes unbearable for system efficiency when the signal bandwidth is very wide. The solution to this problem is the focus of this dissertation, which presents two envelope amplifier architectures: a hybrid series converter with a slow-envelope technique, and a multilevel converter based on a multiphase buck converter with minimum time control.

In the first architecture, a hybrid topology composed of a switched buck converter and a series linear regulator adjusts the output voltage to track the envelope accurately. A slow-envelope generation algorithm yields a waveform whose slew rate is limited below the maximum slew rate of the original envelope. The buck converter's output follows this waveform instead of the original envelope using a lower switching frequency, because the waveform has not only a reduced slew rate but also a reduced bandwidth. The linear regulator that filters this waveform introduces an additional power loss, so, depending on how much the envelope's slew rate is reduced to obtain the waveform, there is a trade-off between the switching-frequency-related power loss of the buck converter and the power loss of the linear regulator. The optimal point, i.e. the lowest total power loss of the envelope amplifier, is identified with the help of a precise power loss model that combines behavioral and analytical loss models. In addition, the effect of the output filter on the response is analyzed: because the buck converter operates in open loop, an extra parallel damping filter is needed to eliminate the resonant oscillation of the output filter's L and C.

The second architecture is a multilevel voltage-tracking envelope amplifier. Unlike converters that use multiple sources, a multiphase buck converter is employed to generate the multilevel voltage. In steady state, the buck converter operates at the duty cycles with complete ripple cancellation, and the number of voltage levels equals the number of phases, according to the characteristics of the interleaved buck converter. For the transitions, a minimum time control (MTC) for multiphase converters is newly proposed and developed to change the buck converter's output voltage between levels. As opposed to conventional minimum time controls for multiphase converters, which use an equivalent inductance, the proposed MTC considers the current ripple of each phase, based on the fixed phase shift, resulting in different control schemes for the different phases. The advantage of this control is that all phase currents return to steady state after the transition, so the next transition can be triggered very soon, which is very favorable for multilevel voltage tracking. Moreover, the control is independent of the load condition and is not affected by phase-current unbalance. As in the first architecture, a linear stage with the same function is connected in series with the multiphase buck converter. Since neither the steady state nor the transition state of the converter is strongly tied to the switching frequency, the switching frequency can be reduced for wide-bandwidth envelopes, which is the main consideration of this architecture.

The optimization of the second architecture for wider-bandwidth envelopes is presented, covering the output filter design, the switching frequency and the number of phases. The filter design space is constrained by the fast transition and by the minimum pulse the hardware can generate: a fast transition needs a small filter, but the minimum-pulse limitation pushes the design in the opposite direction. The converter's switching frequency mainly affects the minimum-pulse limitation and the power loss; with a lower switching frequency, the pulse width in the transition is smaller. The number of phases, which depends on the specific application, can be optimized in terms of overall efficiency. Another aspect of the optimization is the control strategy. The transition shift allows tracking parts of the envelope that are faster than the hardware can support, at the price of complexity, and a new transition synchronization method increases the transition frequency, allowing the multilevel voltage to follow the envelope more closely. Both strategies push the converter to track envelopes with a bandwidth beyond the limitation of the power stage.

The power loss model of the envelope amplifier is detailed and validated by measurements. The power loss mechanism of the buck converter has to include the transitions in real-time operation, which differs from the classical power loss model of a synchronous buck converter. This model estimates the system efficiency and plays a very important role in the optimization process. Finally, the second envelope amplifier architecture is integrated with a Class F amplifier. Measurements of the EER system prove the power saving achieved with the proposed envelope amplifier without degrading the linearity of the system.
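A minimal sketch of the slow-envelope idea from the first architecture, assuming a sampled envelope and a slew limit expressed per sample (the thesis's actual algorithm and parameters may differ): the constructed waveform stays at or above the envelope, so the series linear regulator only ever has to drop voltage, while its slope never exceeds what the buck converter can follow at the reduced switching frequency.

```python
# Two-pass construction of a slew-rate-limited waveform that upper-bounds
# the envelope. `max_step` is the largest change allowed between consecutive
# samples (slew limit x sampling period). Parameters are illustrative.

def slow_envelope(env, max_step):
    """Smallest waveform >= env whose sample-to-sample change is <= max_step."""
    out = list(env)
    for i in range(1, len(out)):           # forward pass: limit falling edges
        out[i] = max(env[i], out[i - 1] - max_step)
    for i in range(len(out) - 2, -1, -1):  # backward pass: limit rising edges
        out[i] = max(out[i], out[i + 1] - max_step)
    return out

# A fast peak is widened into a ramp the buck converter can follow; the
# linear regulator dissipates the difference between ramp and envelope.
limited = slow_envelope([0, 1, 5, 2, 0], max_step=2)
```

The two passes make the trade-off of the first architecture concrete: a smaller `max_step` means a lower buck switching frequency but a larger voltage drop (and loss) across the linear regulator.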

Relevance: 30.00%

Abstract:

Data centers are easily found in every sector of the worldwide economy. They are composed of thousands of servers, serving millions of users globally, 24-7. In the last years, e-Science applications such as e-Health or Smart Cities have experienced a significant development. The need to deal efficiently with the computational needs of next-generation applications, together with the increasing demand for resources in traditional applications, has facilitated the rapid proliferation and growth of data centers. A drawback of this capacity growth has been the rapid increase in the energy consumption of these facilities. In 2010, data center electricity use represented 1.3% of all the electricity used in the world. In 2012 alone, global data center power demand grew 63%, to 38 GW; a further rise of 17%, to 43 GW, was estimated for 2013. Moreover, data centers are responsible for more than 2% of total carbon dioxide emissions.

Relevance: 30.00%

Abstract:

Data centers are easily found in every sector of the worldwide economy. They consist of thousands of servers, serving millions of users globally, 24 hours a day, 365 days a year. In the last years, e-Science applications such as e-Health or Smart Cities have experienced a significant development. The need to deal efficiently with the computational needs of next-generation applications, together with the increasing demand for resources in traditional applications, has driven the rapid proliferation and growth of data centers. A drawback of this capacity growth has been the rapid increase in the energy consumption of these facilities. In 2010, data center electricity use represented 1.3% of all the electricity used in the world. In 2012 alone, global data center power demand grew 63%, to 38 GW, and a further rise of 17%, to 43 GW, was estimated for 2013. Moreover, data centers are responsible for more than 2% of total carbon dioxide emissions.

This PhD thesis addresses the energy challenge by proposing proactive and reactive thermal- and energy-aware optimization techniques that contribute to placing data centers on a more scalable curve. This work develops energy models and uses the knowledge about the energy demand of the workload to be executed, and about the computational and cooling resources available at the data center, to optimize energy consumption. Moreover, data centers are considered a crucial element within their application framework, optimizing not only the energy consumption of the facility but the global energy consumption of the application.

The main contributors to the energy consumption of a data center are the computing power drawn by the IT equipment and the cooling power needed to keep the servers within the temperature range that ensures safe operation. Because fan power grows with the cube of fan speed, solutions based on over-provisioning cold air to the servers usually lead to energy inefficiencies. On the other hand, higher chip temperatures lead to higher leakage power, because of the exponential dependence of leakage on temperature. Moreover, workload characteristics and allocation policies also have an important impact on the leakage-cooling tradeoffs. The first key contribution of this work is the development of power and temperature models that accurately describe the leakage-cooling tradeoffs at the server level, and the proposal of strategies to minimize server energy via joint cooling and workload management from a multivariate perspective.

When scaling to the data center level, a similar behavior in terms of leakage-temperature tradeoffs can be observed. As the room temperature rises, the efficiency of the data room cooling units improves; however, CPU temperatures rise as well, and so does the leakage power. Moreover, the thermal dynamics of a data room exhibit unbalanced patterns, due to both the workload allocation and the heterogeneity of the computing equipment. The second main contribution is the proposal of thermal- and heterogeneity-aware workload management techniques that jointly optimize the allocation of computation and cooling to servers. These strategies need to be backed up by flexible room-level models, able to work at runtime, that describe the system from a high-level perspective.

Within the framework of next-generation applications, decisions taken at the application level can have a dramatic impact on the energy consumption of lower abstraction levels, such as the data center facility. It is important to consider the relationships between all the computational agents involved in the problem, so that they can cooperate to achieve the common goal of reducing the energy consumed by the overall system. The third main contribution is the energy optimization of the overall application, achieved by evaluating the energy cost of performing part of the processing in any of the different abstraction layers, from the node to the data center, via workload management and off-loading techniques.

In summary, the work presented in this PhD thesis makes contributions on leakage- and cooling-aware server modeling and optimization, and on data center thermal modeling and heterogeneity-aware data center resource allocation, and develops mechanisms for the energy optimization of next-generation applications from a multi-layer perspective.
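The server-level leakage-cooling tradeoff described above can be illustrated with a toy model: cubic fan power versus exponentially temperature-dependent leakage. All coefficients below are invented for illustration; the thesis fits such models to real servers.

```python
# Toy model of the leakage-cooling tradeoff: fan power grows with the cube
# of fan speed, while a hotter chip (less airflow) leaks more power,
# exponentially in temperature. Coefficients are invented for illustration.
import math

def total_power(fan_speed):
    fan_power = 2e-6 * fan_speed ** 3           # cubic fan law (W)
    chip_temp = 40.0 + 4000.0 / fan_speed       # more airflow -> cooler chip
    leakage = 5.0 * math.exp(0.02 * chip_temp)  # exponential leakage (W)
    return fan_power + leakage

# Sweeping fan speed shows the optimum is interior: neither the slowest fan
# (leakage dominates) nor the fastest (fan power dominates) minimizes power.
best_speed = min(range(50, 401, 10), key=total_power)
```

This is exactly why over-provisioning cold air is wasteful: past the interior optimum, every extra unit of fan speed costs more (cubically) than the leakage it saves.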

Relevance: 30.00%

Abstract:

The parsec-scale properties of low-power radio galaxies are reviewed here, using the available data on 12 Fanaroff-Riley type I galaxies. The most frequent radio structure is an asymmetric parsec-scale morphology, i.e., a core and a one-sided jet, shared by 9 (possibly 10) of the 12 mapped radio galaxies. One (possibly 2) of the other galaxies shows two-sided jet emission. Two sources are known from published data to show proper motion; we present here evidence for proper motion in two more galaxies. Therefore, in the present sample we have 4 radio galaxies with a measured proper motion. One of these has a very symmetric structure and therefore should lie in the plane of the sky. The results discussed here agree with the predictions of the unified scheme models. Moreover, the present data indicate that the parsec-scale structure in low- and high-power radio galaxies is essentially the same.

Relevance: 30.00%

Abstract:

Response inhibition is the ability to suppress inadequate but automatically activated, prepotent or ongoing response tendencies. In the framework of motor inhibition, two distinct operating strategies have been described: “proactive” and “reactive” control modes. In the proactive modality, inhibition is recruited in advance by predictive signals, and actively maintained before its enactment. Conversely, in the reactive control mode, inhibition is phasically enacted after the detection of the inhibitory signal. To date, ample evidence points to a core cerebral network for reactive inhibition comprising the right inferior frontal gyrus (rIFG), the presupplementary motor area (pre-SMA) and the basal ganglia (BG). Moreover, fMRI studies showed that cerebral activations during proactive and reactive inhibition largely overlap. These findings suggest that at least part of the neural network for reactive inhibition is recruited in advance, priming cortical regions in preparation for the upcoming inhibition. So far, proactive and reactive inhibitory mechanisms have been investigated during tasks in which the requested response to be stopped or withheld was an “overt” action execution (AE) (i.e., a movement effectively performed). Nevertheless, inhibitory mechanisms are also relevant for motor control during “covert actions” (i.e., potential motor acts not overtly performed), such as motor imagery (MI). MI is the conscious, voluntary mental rehearsal of action representations without any overt movement. Previous studies revealed a substantial overlap of activated motor-related brain networks in premotor, parietal and subcortical regions during overtly executed and imagined movements. 
Notwithstanding this evidence for a shared set of cerebral regions involved in encoding actions, whether or not those actions are actually executed, the neural bases of motor inhibition during MI, which prevent the covert action from being overtly performed in spite of the activation of the motor system, remain to be fully clarified. Against this background, we performed a high-density EEG study evaluating the cerebral mechanisms, and their related sources, elicited during two types of cued Go/NoGo task requiring, respectively, the execution or withholding of an overt (Go) or a covert (MI) action. The EEG analyses were performed in two steps, with different aims: 1) analysis of the "response phase" of the cued overt and covert Go/NoGo tasks, to evaluate the reactive inhibitory control of overt and covert actions; 2) analysis of the "preparatory phase" of the cued overt and covert Go/NoGo EEG datasets, focusing on the cerebral activities time-locked to the preparatory signals, to evaluate the proactive inhibitory mechanisms and their related neural sources. For these purposes, a spatiotemporal analysis of the scalp electric fields was applied to the EEG data recorded during the overt and covert Go/NoGo tasks. The spatiotemporal approach provides an objective definition of the time windows for source analysis, relying on the statistical proof that the electric fields are different and thus generated by different neural sources. The analysis of the "response phase" revealed that key nodes of the inhibitory circuit underpinning inhibition of the overt movement during the NoGo response were also activated during the MI enactment. In both cases, inhibition relied on the activation of pre-SMA and rIFG, but with different temporal patterns of activation, in accord with the intended "covert" or "overt" modality of the motor performance.
During the NoGo condition, the pre-SMA and rIFG were sequentially activated, pointing to an early decisional role of the pre-SMA and to a later role of the rIFG in the enactment of inhibitory control of the overt action. Conversely, a concomitant activation of pre-SMA and rIFG emerged during the imagined motor response. This latter finding suggests that an inhibitory mechanism (likely underpinned by the rIFG) could be prewired into a prepared "covert modality" of motor response, as an intrinsic component of the MI enactment. This mechanism would allow the rehearsal of the imagined motor representations without any overt movement. The analyses of the "preparatory phase" confirmed, in both overt and covert Go/NoGo tasks, the priming of cerebral regions belonging to the putative inhibitory network that is reactively triggered in the following response phase. Nonetheless, differences in the preparatory strategies between the two tasks emerged, depending on the intended "overt" or "covert" modality of the possible incoming motor response. During the preparation of the overt Go/NoGo task, the cue primed the possible overt response programs in motor and premotor cortex and, through the preactivation of a pre-SMA-related decisional mechanism, triggered in parallel the preparation for successful response selection and/or inhibition during the subsequent response phase. Conversely, the preparatory strategy for the covert Go/NoGo task was centred on the goal-oriented priming of an inhibitory mechanism related to the rIFG that, being tuned to the instructed covert modality of the motor performance and instantiated during the subsequent MI enactment, allowed the imagined response to remain a potential motor act. Taken together, the results of the present study demonstrate a substantial overlap of the cerebral networks activated during the proactive recruitment and the subsequent reactive enactment of motor inhibition, in both overt and covert actions.
At the same time, our data show that the preparatory cues predisposed ab initio a different organization of the cerebral areas (in particular the pre-SMA and rIFG) involved in sensorimotor transformations and motor inhibitory control for executed and imagined actions. During the preparatory phases of our cued overt and covert Go/NoGo tasks, the different strategies adopted were tuned to the “how” of the motor performance, reflecting the intended overt or covert modality of the possible incoming action.
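The spatiotemporal scalp-field analysis mentioned above rests on two standard quantities that can be sketched briefly: global field power (GFP, the spatial standard deviation of the potentials across electrodes at each time point) and topographic dissimilarity between strength-normalized maps, where a non-zero dissimilarity implies a change in the underlying source configuration. This is a minimal illustrative sketch of those two quantities only, not the authors' analysis pipeline; array shapes and thresholds are assumptions.

```python
import numpy as np

def gfp(maps):
    """Global field power: spatial std across electrodes per time point.

    maps: (n_times, n_channels) array of scalp potentials; re-referenced
    to the average reference before computing the spatial std.
    """
    maps = maps - maps.mean(axis=1, keepdims=True)  # average reference
    return maps.std(axis=1)

def topographic_dissimilarity(map_a, map_b):
    """Dissimilarity between two GFP-normalized scalp maps.

    Returns 0 for identical topographies and 2 for polarity-inverted
    ones; intermediate values indicate different spatial distributions,
    and hence (under the usual interpretation) different generators.
    """
    a = map_a - map_a.mean()
    b = map_b - map_b.mean()
    a = a / a.std()  # normalize each map to unit GFP
    b = b / b.std()
    return np.sqrt(np.mean((a - b) ** 2))
```

In practice such dissimilarity values are tested point-wise against a randomization distribution (e.g. a TANOVA-style permutation test) to delimit the time windows that are then passed to source analysis.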

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Context. Accretion onto supermassive black holes is believed to occur mostly in obscured active galactic nuclei (AGN). Such objects are proving rather elusive in surveys of distant galaxies, including those at X-ray energies. Aims. Our main goal is to determine whether the revised IRAC criteria of Donley et al. (2012, ApJ, 748, 142; objects with an infrared (IR) power-law spectral shape) are effective at selecting X-ray type-2 AGN (i.e., absorbed, N_H > 10^22 cm^-2). Methods. We present the results of the X-ray spectral analysis of 147 AGN selected by cross-correlating the highest-spectral-quality ultra-deep XMM-Newton and Spitzer/IRAC catalogues in the Chandra Deep Field South; consequently, the sample is biased towards sources with high-S/N X-ray spectra. In order to measure the amount of intrinsic absorption in these sources, we adopt a simple X-ray spectral model that includes a power-law modified by intrinsic absorption at the redshift of each source and a possible soft X-ray component. Results. We find 21/147 sources to be heavily absorbed, but the uncertainties in their obscuring column densities do not allow us to confirm their Compton-thick nature without resorting to additional criteria. Although IR power-law galaxies are less numerous in our sample than IR non-power-law galaxies (60 versus 87, respectively), we find that the fraction of absorbed (N_H^intr > 10^22 cm^-2) AGN is significantly higher (at about the 3 sigma level) for IR power-law sources (~2/3) than for sources that do not meet this IR selection criterion (~1/2). This behaviour is most notable at low luminosities, but it appears to be present, although with marginal significance, at all luminosities. Conclusions. We therefore conclude that the IR power-law method is efficient at finding X-ray-absorbed sources. We would then expect the long-sought dominant population of absorbed AGN to be abundant among sources with IR power-law spectral shapes that are not detected in X-rays.
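The spectral model described in the Methods (a power-law modified by intrinsic absorption at the source redshift) can be sketched in its simplest form. This is a toy illustration, not the authors' fitting setup: the cross-section here is a crude sigma(E) ~ E^-3 scaling anchored at roughly 2.3e-22 cm^2 per H atom at 1 keV, whereas a real analysis would use tabulated photoelectric cross-sections (e.g. XSPEC's phabs/zphabs) and add the soft component the abstract mentions.

```python
import numpy as np

def absorbed_powerlaw(energy_kev, norm, gamma, nh_cm2, z):
    """Toy absorbed power-law photon spectrum.

    energy_kev : observed-frame energy in keV (scalar or array)
    norm       : power-law normalization (arbitrary units)
    gamma      : photon index of the power-law
    nh_cm2     : intrinsic equivalent hydrogen column density (cm^-2)
    z          : source redshift (absorption acts in the rest frame)
    """
    e_rest = np.asarray(energy_kev) * (1.0 + z)   # shift to rest frame
    sigma = 2.3e-22 * e_rest ** -3.0              # rough cross-section (cm^2/atom)
    return norm * np.asarray(energy_kev) ** -gamma * np.exp(-nh_cm2 * sigma)
```

The E^-3 scaling captures the qualitative behaviour driving the analysis: a column of N_H ~ 10^22 cm^-2 suppresses the observed flux strongly near 1 keV but hardly at all above a few keV, which is why obscured sources show hardened low-energy spectra.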

Relevância:

30.00% 30.00%

Publicador:

Resumo:

With no written record, the religious beliefs of the Pre-Columbian Mochica civilization are much of a mystery. This paper attempts to decipher the position of the deceased Mochicans, also known as ancestors, within the society as a whole. It discusses the ways in which we can use multiple sources of information, archaeological, iconographic, ethnohistoric and ethnographic, to learn about the various aspects of Mochican culture. Specifically, I will use these data-collection methods to examine how the Mochica viewed their deceased and to argue that part of the Mochica religious system granted their dead a supernatural ability to control human and agricultural fertility. This power would give Mochican ancestors a significant place within the society.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

This thesis originates from my interest in exploring how minorities are using social media to talk back to mainstream media. This study examines whether hashtags that trend on Twitter may impact how news stories related to minorities are covered in Canadian media. The Canadian Prime Minister Stephen Harper stated the niqab was “rooted in a culture that is anti-women” on 10 March 2015. The next day #DressCodePM trended in response to the PM’s niqab remarks. Using network gatekeeping theory, this study examines the types of sources quoted in the media stories published on 10 and 11 March 2015. The study’s goal is to explore whether using tweet quotes leads to the representation of a more diverse range of news sources. The study compares the types of sources quoted in stories that covered Harper’s comments without mentioning #DressCodePM versus stories that mention #DressCodePM. This study also uses Teun A. van Dijk’s methodology of asking “who is speaking, how often and how prominently?” in order to examine whose voices have been privileged and whose voices have been marginalized in covering the niqab in Canadian media from the 1970s until the days following the PM’s remarks. Network gatekeeping theory is applied in this study to assess whether the gated gained more power after #DressCodePM trended. The case study’s findings indicate that Caucasian male politicians were predominantly used as news sources in covering stories related to the niqab for the past 38 years in the Globe and Mail. The sourcing pattern of favouring politicians continued in Canadian print and online media on 10 March 2015 following Harper’s niqab comments. However, ordinary Canadian women, including Muslim women, were used more often than politicians as news sources in the stories about #DressCodePM that were published on 11 March 2015. The gated media users were able to gain power and attract Canadian media’s attention by widely spreading #DressCodePM.
This study draws attention to the lack of diversity of sources used in Canadian political news stories, yet it also shows that it is possible for gated media users to amplify their voices through hashtag activism.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

"Volume I contains all unclassified papers; Volume II contains only classified material."

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Vols. 1, 3-12, 14-16, 18-20, 22, 25-27 have no date on t. p.