941 results for Power quality indices


Relevance:

30.00%

Publisher:

Abstract:

The quality and reliability of the power generated by large grid-connected photovoltaic (PV) plants are negatively affected by the variability of the solar resource. This paper deals with the smoothing of power fluctuations that results from the geographical dispersion of PV systems. The fluctuation frequency and the maximum fluctuation registered at a PV plant ensemble are analyzed to study these effects. We propose an empirical expression to compare the fluctuation attenuation due to both the size and the number of PV plants grouped. The convolution of the frequency distribution functions of single PV plants has turned out to be a successful tool to statistically describe the behavior of an ensemble of PV plants and to determine their maximum output fluctuation. Our work is based on experimental 1-s data collected throughout 2009 from seven PV plants, 20 MWp in total, separated by distances of 6 to 360 km.
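The convolution idea can be made concrete: if each plant's power-fluctuation distribution is discretized as a probability mass function on a common bin grid, the ensemble distribution follows by convolution. A minimal NumPy sketch with hypothetical distributions (the real ones would be built from the 1-s measurements):

```python
import numpy as np

# Hypothetical discretized fluctuation distributions (probability mass
# functions) for two PV plants, on a common grid of fluctuation bins.
pmf_plant_a = np.array([0.05, 0.20, 0.50, 0.20, 0.05])
pmf_plant_b = np.array([0.10, 0.25, 0.30, 0.25, 0.10])

# Assuming the plants fluctuate independently, the fluctuation
# distribution of the two-plant ensemble (sum of the two outputs) is
# the convolution of the individual distributions.
pmf_ensemble = np.convolve(pmf_plant_a, pmf_plant_b)

print(pmf_ensemble.sum())  # still a valid PMF: sums to ~1.0
```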

Relevance:

30.00%

Publisher:

Abstract:

To date, the majority of quality controls performed at PV plants are based on the measurement of a small sample of individual modules. Consequently, there is very little representative data on the real Standard Test Conditions (STC) power output values for PV generators. This paper presents the power output values for more than 1300 PV generators having a total installed power capacity of almost 15.3 MW. The values were obtained by the INGEPER-UPNA group, in collaboration with the IES-UPM, through a study to monitor the power output of a number of PV plants from 2006 to 2009. This work has made it possible to determine, amongst other things, the power dispersion that can be expected amongst generators made by different manufacturers, amongst generators made by the same manufacturer but comprising modules of different nameplate ratings and also amongst generators formed by modules with the same characteristics. The work also analyses the STC power output evolution over time in the course of this 4-year study. The values presented here could be considered to be representative of generators with fault-free modules.
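The abstract does not detail how the measured powers are referred to STC, so the following is only a generic illustration of the usual first-order extrapolation (irradiance scaling plus a power temperature coefficient; -0.45 %/°C is a typical crystalline-silicon value, assumed here):

```python
def power_at_stc(p_meas_w, irradiance_w_m2, cell_temp_c, gamma_pct_per_c=-0.45):
    """First-order extrapolation of a measured DC power to Standard Test
    Conditions (1000 W/m2, 25 C). gamma_pct_per_c is the power
    temperature coefficient of the modules."""
    g_stc, t_stc = 1000.0, 25.0
    temp_factor = 1.0 + (gamma_pct_per_c / 100.0) * (cell_temp_c - t_stc)
    return p_meas_w * (g_stc / irradiance_w_m2) / temp_factor

# Example: 95 kW measured at 850 W/m2 and 48 C cell temperature
# extrapolates to roughly 125 kW at STC.
print(round(power_at_stc(95_000, 850, 48)))
```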

Relevance:

30.00%

Publisher:

Abstract:

The effects of the power and time conditions of an in situ N2 plasma treatment, applied prior to silicon nitride (SiN) passivation, were investigated on an AlGaN/GaN high-electron-mobility transistor (HEMT). These studies reveal that the N2 plasma power is a critical parameter controlling the SiN/AlGaN interface quality, which directly affects the 2-D electron gas density. A significant enhancement in the HEMT characteristics was observed when using a low-power N2 plasma pretreatment. In contrast, a marked gradual reduction in the maximum drain-source current density (IDS max) and maximum transconductance (gm max), as well as in fT and fmax, was observed as the N2 plasma power increased (up to a 40% decrease at 210 W). Different mechanisms are proposed to be dominant depending on the discharge power range. A good correlation was observed between the device electrical characteristics and the surface assessment by atomic force microscopy and Kelvin force microscopy.

Relevance:

30.00%

Publisher:

Abstract:

Traditionally, the use of data analysis techniques has been one of the main ways of discovering the knowledge hidden in large amounts of data collected by experts in different domains, and visualization techniques have also been used to enhance and facilitate this process. However, there are serious limitations in knowledge acquisition: it is often a slow, tedious and frequently fruitless process, owing to the difficulty humans have in understanding large datasets. Another major drawback, rarely considered by the experts who analyze large datasets, is the involuntary degradation to which they subject the data during analysis tasks, prior to drawing the final conclusions. Degradation means that the data can lose part of their original properties; it is usually caused by improper data reduction, which alters their original nature and often leads to erroneous interpretations and conclusions that could have serious implications. This fact gains transcendental importance when the data belong to the medical or biological domain and people's lives depend on the final decision-making, which is sometimes conducted improperly. This is the motivation of this thesis, which proposes a new visual framework, called MedVir, that combines the power of advanced visualization and data mining techniques to address these major problems in the process of discovering valid information. The main objective is to make the process of knowledge acquisition that experts face when working with large datasets in different domains easier, more understandable, more intuitive and faster. To achieve this, first, a strong reduction in the size of the data is carried out in order to make the data easier for the expert to manage, while preserving their original properties intact as far as possible. Then, effective visualization techniques are used to represent the reduced data, allowing the expert to interact with them easily and intuitively, to carry out different data analysis tasks and thus to stimulate comprehension visually. The underlying objective is to abstract the expert, as far as possible, from the complexity of the original data and to present a more understandable version, thereby facilitating and accelerating the task of knowledge discovery. MedVir has been successfully applied to, among other fields, magnetoencephalography (MEG), namely the prediction of rehabilitation outcomes after Traumatic Brain Injury (TBI). The results obtained demonstrate the effectiveness of the framework in accelerating and facilitating the process of knowledge discovery on real-world datasets.
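MedVir's concrete algorithms are not specified in the abstract; the reduce-then-visualize pattern it describes can nevertheless be sketched with stand-in choices (PCA for the reduction, a scatter plot for the visualization), all data hypothetical:

```python
import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

# Hypothetical high-dimensional dataset: 200 samples, 500 features
# (e.g., per-channel MEG statistics); y could be a TBI outcome label.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))
y = rng.integers(0, 2, size=200)

# Step 1: strong dimensionality reduction while trying to preserve the
# dominant structure of the data (PCA here is only a stand-in for
# whatever reduction MedVir actually uses).
X2 = PCA(n_components=2).fit_transform(X)

# Step 2: visualize the reduced data so the expert can inspect
# structure at a glance instead of facing 500 raw dimensions.
plt.scatter(X2[:, 0], X2[:, 1], c=y, cmap="coolwarm", s=15)
plt.xlabel("PC 1"); plt.ylabel("PC 2")
plt.title("Reduced view of the original 500-dimensional data")
plt.show()
```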

Relevance:

30.00%

Publisher:

Abstract:

Strict technical quality assurance procedures are essential for PV plant bankability. For large-scale PV plants, this is typically accomplished in three consecutive phases: an energy yield forecast, performed at the beginning of the project, typically by means of a simulation with dedicated software; a reception test campaign, performed at the end of commissioning, consisting of a set of tests for determining the efficiency and reliability of the PV plant devices; and a performance analysis of the first years of operation, which consists of comparing the real energy production with the production calculated from the recorded operating conditions, taking the maintenance records into account. In the last six years, IES-UPM has carried out both indoor and on-site quality control campaigns for more than 60 PV plants, with an accumulated power of more than 300 MW, in close contact with Engineering, Procurement and Construction contractors and financial entities. This paper presents the lessons learned from that experience.
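The abstract does not name the metric used in the performance-analysis phase; a standard choice for comparing real production against recorded operating conditions is the performance ratio, sketched here with illustrative numbers:

```python
def performance_ratio(e_real_kwh, irradiation_kwh_m2, p_stc_kw):
    """Standard performance ratio: real AC energy divided by the energy a
    loss-free plant of nominal power p_stc_kw would produce under the
    recorded in-plane irradiation (the reference yield)."""
    reference_yield_h = irradiation_kwh_m2 / 1.0  # hours at 1 kW/m2
    return e_real_kwh / (p_stc_kw * reference_yield_h)

# Example: 1.45 GWh produced in a year with 1800 kWh/m2 of in-plane
# irradiation by a 1 MWp plant gives a PR of about 0.81.
print(round(performance_ratio(1_450_000, 1800, 1000), 2))
```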

Relevance:

30.00%

Publisher:

Abstract:

In the nuclear fusion field, running in parallel to ITER (International Thermonuclear Experimental Reactor) as one of the complementary activities aimed at overcoming the remaining technological barriers of fusion, the IFMIF (International Fusion Material Irradiation Facility) project aims to provide an irradiation facility to qualify advanced materials resistant to extreme conditions like those expected in future fusion reactors such as DEMO (DEMOnstration Power Plant). IFMIF consists of two continuous wave deuteron accelerators, each delivering a 125 mA, 40 MeV beam, which collide on a lithium target to produce an intense neutron flux (10^17 neutrons/s) with a spectrum similar to that of fusion neutrons [1], [2]. This neutron flux is used to irradiate the candidate materials for future fusion reactors, and the samples are examined afterwards at the so-called post-irradiation facilities. As a first step in such an ambitious project, an engineering validation and engineering design phase called IFMIF-EVEDA (Engineering Validation and Engineering Design Activities) is currently under way. One of its activities is the construction and operation of an accelerator prototype named LIPAc (Linear IFMIF Prototype Accelerator), a high-intensity deuteron accelerator identical to the low-energy part of the IFMIF accelerators. The LIPAc components, which will be installed in Japan, are supplied by different European countries. The accelerator delivers a 9 MeV continuous wave deuteron beam with a power of 1.125 MW which, after being characterized by different instruments, has to be stopped safely. This requires a beam dump that absorbs the beam energy and transfers it to a heat sink. Spain has committed to supplying this component, and CIEMAT (Centro de Investigaciones Energéticas Medioambientales y Tecnológicas) is responsible for the task. The central piece of the beam dump, where the ion beam is stopped, is a copper cone with an angle of 3.5°, a length of 2.5 m and a thickness of 5 mm. This part is cooled by water flowing over its external surface through the channel formed between the copper cone and a concentric outer piece. This is the framework of the present thesis, whose objective is the design of the LIPAc beam dump cooling system. The design has been performed using a simplified one-dimensional model. The water parameters (pressure, flow, pressure loss) and the required annular channel geometry (width, roughness) have been obtained so as to guarantee the correct cooling of the beam dump. It has been verified that the design tolerates variations of the beam with respect to the nominal situation, the critical heat flux (CHF) being at least twice the nominal deposited heat flux. 3D fluid dynamic simulations with the ANSYS-CFX code have also been performed in those sections of the cooling channel that require a more thorough study. The beam dump will become activated as a consequence of the beam interaction, which makes any change or repair impossible once accelerator operation has started. The design therefore has to be very robust, and all the hypotheses used in it must be carefully checked.
A large part of the effort of the thesis is devoted to estimating the heat transfer coefficient, which is decisive for the results obtained and is also used as a boundary condition in the mechanical analysis. To this end, correlations whose applicability range suits the beam dump conditions (annular channel, water-wall temperature differences of tens of degrees) were first compiled. In a second step, the film coefficients obtained from the selected correlation (Petukhov-Gnielinski) were compared with those deduced from the fluid dynamic simulations, with satisfactory results. Finally, an experimental validation was performed using a prototype and a hydraulic circuit that supplies a water flow with the parameters required in the beam dump. After several attempts and improvements to the experiment, the film coefficients were obtained for different flows and heating powers. Taking the measurement uncertainty into account, the experimental values agree reasonably well (within about 15%) with those deduced from the correlations. For radiological reasons, the quality of the cooling water must be controlled and the corrosion of the copper minimized. After a bibliographic study, the most adequate water parameters were identified (conductivity, pH and dissolved oxygen concentration). As part of the thesis, a corrosion study of the beam dump cooling circuit was also carried out, with the double aim of determining whether corrosion could put the integrity of the component at risk and of estimating the corrosion rate in order to dimension the water purification system. The TRACT (TRansport and ACTivation) code was used, adapted to the beam dump case in collaboration with the person responsible for the code (Panos Karditsas) at Culham (UKAEA). The results confirm that copper corrosion under the selected conditions does not pose a problem. The thesis is structured as follows. The first chapter introduces the IFMIF and LIPAc projects within which this thesis is framed, and describes the beam dump, whose cooling system design is the main objective of the thesis. The second and third chapters summarize the theoretical basis and the different tools employed in the design of the cooling system. The fourth chapter presents the results concerning the cooling system, both those obtained from the one-dimensional study and those obtained from the 3D fluid dynamic simulations with the ANSYS-CFX code. The fifth chapter presents the results of the corrosion analysis of the beam dump cooling circuit. Chapter six describes the experimental set-up used to obtain the pressure loss and heat transfer coefficient values, together with the results of those experiments. Finally, an appendix describes a series of experiments carried out as intermediate steps towards the experimental film coefficient, and presents CHICA (Cooling and Heating Interaction and Corrosion Analysis), the computer code used for the one-dimensional analysis of the beam dump cooling system. The work developed in this thesis has resulted in the publication of three articles in JCR journals ("Journal of Nuclear Materials" and "Fusion Engineering and Design"), as well as presentations at more than four conferences and relevant meetings.
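The Gnielinski correlation named above is standard and can be evaluated directly; a minimal sketch, with illustrative water-flow values rather than the actual beam dump parameters:

```python
import math

def gnielinski_h(re, pr, k_w_mk, d_h_m):
    """Film coefficient from the Gnielinski correlation (valid roughly for
    3e3 < Re < 5e6 and 0.5 < Pr < 2000); f is the Petukhov friction
    factor for smooth tubes. For the beam dump this would be applied
    with the hydraulic diameter of the annular cooling channel."""
    f = (0.790 * math.log(re) - 1.64) ** -2
    nu = (f / 8.0) * (re - 1000.0) * pr / (
        1.0 + 12.7 * math.sqrt(f / 8.0) * (pr ** (2.0 / 3.0) - 1.0))
    return nu * k_w_mk / d_h_m  # h = Nu * k / D_h, in W/(m2 K)

# Illustrative water values: Re = 1e5, Pr = 4.3, k = 0.62 W/(m K),
# hydraulic diameter 10 mm -> h of roughly 30 kW/(m2 K).
print(round(gnielinski_h(1e5, 4.3, 0.62, 0.010)))
```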

Relevance:

30.00%

Publisher:

Abstract:

Machine learning and scientometrics are the scientific disciplines covered in this dissertation. Machine learning deals with the construction and study of algorithms that can learn from data, whereas scientometrics is mainly concerned with the analysis of science from a quantitative perspective. Nowadays, advances in machine learning provide the mathematical and statistical tools for properly working with the vast amount of scientometric data stored in bibliographic databases. In this context, the use of novel machine learning methods in scientometric applications is the focus of this dissertation, which proposes new machine learning contributions that shed light on the scientometrics area. These contributions are divided into three parts. First, several supervised cost-(in)sensitive models are learned to predict the scientific success of articles and researchers. Cost-sensitive models are not interested in maximizing classification accuracy, but in minimizing the expected total cost derived from classification errors. In this context, publishers of scientific journals could have a tool capable of predicting the citation count of an article before it is published, whereas promotion committees could predict the annual increase of a researcher's h-index over the first few years. These predictive models could pave the way for new assessment systems. Second, several probabilistic graphical models are learned to exploit and discover new relationships among the vast number of existing bibliometric indices. In this context, the scientific community could measure how some indices influence others in probabilistic terms, perform evidence propagation and abductive inference to answer bibliometric questions, and uncover which bibliometric indices have the highest predictive power. This is a multi-output regression problem in which the role of each variable, predictive or response, is unknown beforehand. The resulting indices could be very useful for prediction purposes: when their values are known, knowledge of any other index value provides no additional information for predicting other bibliometric indices. Third, a scientometric study of Spanish computer science research is performed under the publish-or-perish culture. This study is based on a cluster analysis methodology that characterizes research activity in terms of productivity, visibility, quality, prestige and international collaboration, and also analyzes the effects of collaboration on productivity and visibility under different circumstances.
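The cost-sensitive decision rule described above can be written down compactly: predict the class that minimizes the expected cost rather than the most probable one. A minimal sketch with a hypothetical two-class cost matrix:

```python
import numpy as np

def min_expected_cost_class(posteriors, cost_matrix):
    """Cost-sensitive decision rule: instead of picking the most probable
    class, pick the class j that minimizes the expected cost
    sum_i P(i|x) * C[i, j], where C[i, j] is the cost of predicting j
    when the true class is i."""
    expected_costs = posteriors @ cost_matrix
    return int(np.argmin(expected_costs))

# Hypothetical example: P(low-impact)=0.7, P(high-impact)=0.3, but
# missing a high-impact article is 10 times costlier than the reverse.
posteriors = np.array([0.7, 0.3])
cost = np.array([[0.0, 1.0],    # true low-impact
                 [10.0, 0.0]])  # true high-impact
print(min_expected_cost_class(posteriors, cost))  # predicts class 1
```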

Relevance:

30.00%

Publisher:

Abstract:

Dynamic and Partial Reconfiguration (DPR) allows a system to modify certain parts of itself at run-time. This feature gives rise to the capability of evolution: changing parts of the configuration according to the online evaluation of performance or other parameters. Evolution is achieved through a bio-inspired model in which the features of the system are identified as genes. The objective of the evolution need not be a single one; in this work, power consumption is taken into consideration together with the quality of filtering of a noisy image, used as the measure of performance. Pareto optimality is applied to the evolutionary process in order to find a representative set of solutions that are optimal with respect to performance and power consumption. The main contributions of this paper are implementing an evolvable system on a low-power Spartan-6 FPGA included in a Wireless Sensor Network node and, by making a real measure of power consumption available at run-time, achieving the capability of multi-objective evolution, which yields different optimal configurations; the one selected will depend on the relative "weights" given to performance and power consumption.
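Pareto optimality over the two objectives reduces to a dominance filter: keep the configurations for which no other configuration is at least as good in both filtering quality and power consumption and strictly better in one. A minimal sketch with hypothetical evaluated configurations:

```python
def pareto_front(candidates):
    """Return the non-dominated configurations among (quality, power)
    pairs, where higher filtering quality is better and lower power
    consumption is better. A candidate is dominated if another one is
    at least as good in both objectives and strictly better in one."""
    front = []
    for i, (q1, p1) in enumerate(candidates):
        dominated = any(
            (q2 >= q1 and p2 <= p1) and (q2 > q1 or p2 < p1)
            for j, (q2, p2) in enumerate(candidates) if j != i)
        if not dominated:
            front.append((q1, p1))
    return front

# Hypothetical evaluated configurations: (filtering quality, power mW).
configs = [(0.90, 120), (0.85, 80), (0.80, 85), (0.70, 60)]
print(pareto_front(configs))  # (0.80, 85) is dominated by (0.85, 80)
```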

Relevance:

30.00%

Publisher:

Abstract:

Torulaspora delbrueckii is a non-Saccharomyces yeast with interesting metabolic and physiological properties of potential use in oenology. This work examines the fermentative behaviour of five strains of T. delbrueckii in sequential fermentations with Saccharomyces cerevisiae, analysing the formation of aromatic compounds, polyalcohols and pigments. The fermentative power of these five strains ranged between 7.6 and 9.0% v/v ethanol; the associated volatile acidity was 0.2–0.7 g/l acetic acid. The production of glycerol was lower than that of S. cerevisiae alone. The mean 2,3-butanediol concentration reached in single-culture S. cerevisiae fermentations was 73% higher than in the five sequential T. delbrueckii/S. cerevisiae fermentations. However, these fermentations produced larger quantities of diacetyl, ethyl lactate and 2-phenylethyl acetate than single-culture S. cerevisiae fermentation, and 3-ethoxypropanol was produced only in the sequential fermentations. The five sequential fermentations produced smaller quantities of vitisins A and B than single-culture S. cerevisiae fermentation. In tests performed prior to the addition of S. cerevisiae in the sequential fermentations, none of the T. delbrueckii strains showed any extracellular hydroxycinnamate decarboxylase activity; they therefore produced no vinylphenolic pyranoanthocyanins.

Relevance:

30.00%

Publisher:

Abstract:

Context. Accretion onto supermassive black holes is believed to occur mostly in obscured active galactic nuclei (AGN). Such objects are proving rather elusive in surveys of distant galaxies, including those at X-ray energies. Aims. Our main goal is to determine whether the revised IRAC criteria of Donley et al. (2012, ApJ, 748, 142), which select objects with an infrared (IR) power-law spectral shape, are effective at selecting X-ray type-2 AGN (i.e., absorbed, N_H > 10^22 cm^-2). Methods. We present the results of the X-ray spectral analysis of 147 AGN selected by cross-correlating the highest-spectral-quality ultra-deep XMM-Newton and Spitzer/IRAC catalogues in the Chandra Deep Field South; the sample is consequently biased towards sources with high signal-to-noise X-ray spectra. In order to measure the amount of intrinsic absorption in these sources, we adopt a simple X-ray spectral model that includes a power law modified by intrinsic absorption at the redshift of each source, plus a possible soft X-ray component. Results. We find 21 of the 147 sources to be heavily absorbed, but the uncertainties in their obscuring column densities do not allow us to confirm their Compton-thick nature without resorting to additional criteria. Although IR power-law galaxies are less numerous in our sample than IR non-power-law galaxies (60 versus 87, respectively), we find that the fraction of absorbed (N_H^intr > 10^22 cm^-2) AGN is significantly higher (at about the 3-sigma level) for IR power-law sources (~2/3) than for the sources that do not meet this IR selection criterion (~1/2). This behaviour is particularly notable at low luminosities, but it appears to be present, although with marginal significance, at all luminosities. Conclusions. We therefore conclude that the IR power-law method is efficient at finding X-ray-absorbed sources. We would then expect the long-sought dominant population of absorbed AGN to be abundant among IR power-law spectral shape sources not detected in X-rays.
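The spectral model described in the Methods can be illustrated with a toy version: a power law attenuated by photoelectric absorption, exp(-N_H * sigma(E)). The cross-section below is a rough E^-3 approximation for illustration only; real analyses use tabulated cross-sections:

```python
import numpy as np

def absorbed_power_law(energy_kev, norm, gamma, nh_cm2):
    """Toy absorbed power-law X-ray spectrum: photon flux ~ E^-gamma
    suppressed by photoelectric absorption exp(-N_H * sigma(E)).
    sigma(E) ~ 2e-22 * E^-3 cm^2 is only a rough approximation around
    1 keV; real fits use tabulated cross-sections (e.g. phabs in XSPEC)."""
    sigma_cm2 = 2e-22 * energy_kev ** -3.0
    return norm * energy_kev ** -gamma * np.exp(-nh_cm2 * sigma_cm2)

energies = np.array([0.5, 1.0, 2.0, 5.0, 10.0])  # keV
# Absorption by N_H = 1e23 cm^-2 suppresses the soft end of the spectrum
# far more strongly than the hard end, which is what lets N_H be
# inferred from the spectral shape.
print(absorbed_power_law(energies, 1.0, 1.9, 1e23))
print(absorbed_power_law(energies, 1.0, 1.9, 0.0))
```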

Relevance:

30.00%

Publisher:

Abstract:

Purpose: To examine the visual performance of a single-optic accommodating intraocular lens (IOL) by correlating the defocus curve of implanted eyes with the intraocular aberrometric profile and with the impact on quality of life (QOL). Methods: Prospective consecutive case series study including a total of 25 eyes of 14 patients aged between 52 and 79 years. All cases underwent cataract surgery with implantation of the single-optic accommodating IOL Crystalens HD (Bausch & Lomb). Distance and near visual acuity outcomes, intraocular aberrations, the defocus curve and QOL (NEI VFQ-25) were evaluated 3 months after surgery. Results: A significant improvement in distance visual acuity was found postoperatively (p = 0.02). Mean postoperative LogMAR uncorrected near visual acuity was 0.44 ± 0.23 (20/30). 60% of eyes had a postoperative addition between 0 and 1.5 diopters (D). The defocus curve showed an area of maximum visual acuity for the levels of defocus corresponding to distance and intermediate vision (−1 to +0.5 D). Postoperative intermediate visual acuity correlated significantly with some QOL indices (r ≥ 0.51, p ≤ 0.03; difficulty in going down steps or in seeing how people react to things the patient says), as well as with the J0 component of the manifest cylinder. Postoperative distance-corrected near visual acuity correlated significantly with age (r = 0.65, p < 0.01). Conclusions: This accommodating IOL seems able to restore distance visual function and to provide an improvement in intermediate and near vision, with a significant impact on the patient's QOL, although limited by age and astigmatism. Future studies with larger sample sizes should confirm these trends.

Relevance:

30.00%

Publisher:

Abstract:

The Illinois Environmental Protection Agency (Illinois EPA) was asked by the Illinois General Assembly to examine whether the State should address further potential restrictions on power plant pollution. This request was made under Section 9-10 of the Environmental Protection Act (Act). This is a report of the Illinois EPA's findings to date, based on consideration of a broad spectrum of issues including health benefits, the impact on the reliability of the power grid, the impact on consumer utility rates and the impact on jobs and Illinois' economy. It provides an overview of the principal issues, reviews the information gathered to address those issues, lists information gaps and uncertainties and, finally, lists the work that remains to develop a solution that does not create unintended adverse economic consequences for the people of Illinois.

Relevance:

30.00%

Publisher:

Abstract:

Includes bibliographical references.

Relevance:

30.00%

Publisher:

Abstract:

Bibliography: p. 90-92.

Relevance:

30.00%

Publisher:

Abstract:

Architect: Smith, Hinchman & Grylls. Built 1914. Also called Power Plant, Power House or Heating Plant. Publisher: Geo. Wahr. On verso: Post Cards of Quality. - The Albertype Co., Brooklyn, N.Y.