980 results for Power Semiconductor Devices


Relevance:

20.00%

Publisher:

Abstract:

The mechanisms by which political influence is produced and reproduced have been an important area of study in political science in recent decades. Different theories compete in this field, from those that posit the predominant influence of power groups and corporate sectors over both state decisions and non-decisions, to those that argue that different interests contend within the State without any single group being predominant. Network analysis makes it possible to study this object by observing the structure of relations among the influential actors in provincial politics. Within this area of study, this project proposes to examine how political influence is produced and reproduced in the Province of Córdoba.

The project's hypotheses are the following: H1 - The structure of provincial socio-political power takes on a network configuration in which a core of actors represents organized traditional interests and allows only limited access to new organizations defending diffuse social interests. H2 - In the process of provincial socio-political influence, direct and indirect interpersonal mechanisms of influence (brokerage) operate that allow actors to reach and influence public decision-makers. H3 - A diversity of power resources intervenes in the process of socio-political influence, which actors use to influence public policies.

To address these hypotheses, the project sets the following objectives: 1 - Identify and analyze the structure of power and influence underlying provincial politics. 2 - Analyze the interests, actors and sectors included in and excluded from the structure of political influence. 3 - Analyze the mechanisms and resources of production and reproduction of power and influence. 4 - Analyze the policy areas of the provincial state that are targets of influence by the actors and sectors that make up the socio-political power structure. 5 - Analyze the collective decision system (policy domain) in two areas of provincial policy. 6 - Analyze the resources that enable actors to exercise power and influence in the policy areas studied.

For the empirical verification of the hypotheses, the research design includes the mapping and analysis of two different types of policy networks: the "provincial policy influence network" and the influence network in a "public policy area". The political networks will be reconstructed through semi-structured interviews with social and political actors, using a non-probabilistic "snowball" sample. The research aims to contribute to the understanding of political coordination and, in that sense, expects to achieve an adequate description and understanding of the processes of influence and of the structuring of power in the Province of Córdoba.
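
The record contains no code; the following is a minimal illustrative sketch in Python with networkx, using hypothetical actor names and ties, of how brokerage (H2) and a core of organized interests (H1) might be quantified on a reconstructed influence network.

```python
# Illustrative only: hypothetical actors and ties, not data from the project.
import networkx as nx

# Directed influence network: an edge A -> B means "A reports influencing B".
G = nx.DiGraph()
G.add_edges_from([
    ("union_fed", "governor"), ("agro_assoc", "governor"),
    ("chamber_commerce", "economy_ministry"), ("union_fed", "labor_ministry"),
    ("ngo_environment", "agro_assoc"),   # diffuse interest reaches the core only indirectly
    ("consultant_x", "governor"), ("agro_assoc", "economy_ministry"),
])

# Brokerage proxy (H2): actors with high betweenness sit on many indirect paths.
betweenness = nx.betweenness_centrality(G)

# Core of organized interests (H1): actors ranked by total degree.
core = sorted(G.degree(), key=lambda kv: kv[1], reverse=True)[:3]

print("betweenness:", betweenness)
print("highest-degree core:", core)
```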

Relevance:

20.00%

Publisher:

Abstract:

Today's advances in high-performance computing are driven by the parallel processing capabilities of available hardware architectures. These architectures enable the acceleration of algorithms when the algorithms are properly parallelized and exploit the specific processing power of the underlying architecture. Most current processors are targeted at general purposes and integrate several processor cores on a single chip, resulting in what is known as a Symmetric Multiprocessing (SMP) unit. Nowadays even desktop computers make use of multicore processors, and the industry trend is to increase the number of integrated processor cores as the technology matures. On the other hand, Graphics Processor Units (GPU), originally designed to handle only video processing, have emerged as interesting alternatives for algorithm acceleration; currently available GPUs are able to run from 200 to 400 threads in parallel. Scientific computing can be implemented on this hardware thanks to the programmability of new GPUs, which have come to be denoted General Processing Graphics Processor Units (GPGPU). However, GPGPUs offer little memory compared with that available to general-purpose processors, so the implementation of algorithms needs to be addressed carefully. Finally, Field Programmable Gate Arrays (FPGA) are programmable devices which can implement hardware logic with low latency, high parallelism and deep pipelines. These devices can be used to implement specific algorithms that need to run at very high speeds; however, programming them is harder than software approaches and debugging is typically time-consuming. In this context, where several alternatives for speeding up algorithms are available, our work aims at determining the main features of these architectures and developing the know-how required to accelerate algorithm execution on them. Starting from the characteristics of the hardware, we seek to determine the properties a parallel algorithm must have in order to be accelerated; these properties in turn indicate which type of hardware is best suited to its implementation. We look at identifying the algorithms that fit best on a given architecture, as well as at combining architectures so that they complement each other, taking into account the degree of data dependence, the need for synchronization during parallel processing, the size of the data to be processed, and the complexity of parallel programming on each type of hardware.
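
The abstract singles out data dependence and synchronization as the properties that decide where an algorithm fits. As a hedged illustration of the simplest case, the sketch below (Python, with a hypothetical worker function and data) shows a fully independent data-parallel workload, the kind of structure that maps most directly onto an SMP multicore.

```python
# Illustrative sketch: a data-parallel task with no inter-element dependence,
# which maps directly onto an SMP multicore processor.
from multiprocessing import Pool

def work(x: float) -> float:
    # Hypothetical per-element computation; independent of all other elements,
    # so no synchronization is needed during the parallel phase.
    return x * x + 1.0

if __name__ == "__main__":
    data = [float(i) for i in range(1_000_000)]
    with Pool() as pool:                      # one worker per available core by default
        results = pool.map(work, data, chunksize=10_000)
    print(sum(results))
```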

Relevance:

20.00%

Publisher:

Abstract:

The objective of this dissertation is to investigate the effect wind energy has on the Electricity Supply Industry in Ireland. Wind power generation is a source of renewable energy that is in abundant supply in Ireland and is fast becoming a resource that Ireland depends on for a diverse and secure supply of energy. However, wind is an intermittent resource and, coupled with a variable demand, there are integration issues in balancing demand and supply effectively. To maintain a secure supply of electricity to customers, wind power must be backed by an operational reserve for situations where there is low wind but high demand. This dissertation examines the effect of this integration by comparing wind generation with conventional generation in the national grid, in order to ascertain the cost benefits of wind power generation against a scenario with no wind generation. The analysis then examines whether wind power can meet the pillars of sustainability, looking at wind in a practical scenario to observe how it meets the criteria of environmental responsibility, displacement of conventional fuel, cost competitiveness and security of supply.

Relevance:

20.00%

Publisher:

Abstract:

Due to the global crisis of climate change, many countries throughout the world are installing renewable wind power in their electricity systems. Wind energy causes complications when it is being integrated into the electricity system due to its intermittent nature. Additionally, wind's intermittency can result in penalties being enforced because of deregulation in the electricity market. Wind power forecasting can play a pivotal role in easing the integration of wind energy. Wind power forecasts at 24 and 48 hours ahead of time are deemed the most crucial for determining an appropriate balance on the power system. In the electricity market, wind power forecasts can also assist market participants in applying a suitable bidding strategy or unit commitment, or have an impact on the value of the spot price. For these reasons, this study investigates the importance of wind power forecasts for players such as Transmission System Operators (TSOs) and Independent Power Producers (IPPs). The study also investigates, through various case studies, the impacts that wind power forecasts can have on the electricity market in relation to bidding strategies, spot price and unit commitment. The results of these case studies give a clear and insightful indication of the significance of availing of the information provided by wind power forecasts. The accuracy of a particular wind power forecast is also explored: forecast data are examined for both 24 and 48 hour horizons, and the accuracy of the forecasts is assessed through a variety of statistical approaches. The results of the investigation can assist market participants taking part in the electricity pool and also provide a platform that can be applied to any forecast when attempting to define its accuracy. This study contributes to the knowledge in the area of wind power forecasting by explaining its importance within the energy sector; its innovativeness and uniqueness lie in determining the accuracy of a particular wind power forecast that was previously unknown.
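
The record does not state which statistical measures were used. As a hedged sketch, the Python snippet below computes three commonly used forecast accuracy statistics (bias, MAE, RMSE) on hypothetical 24-hour-ahead forecast and measured series.

```python
# Illustrative only: hypothetical forecast/measured series, not data from the study.
import math

def forecast_errors(forecast, measured):
    """Common accuracy statistics for a wind power forecast (values in MW)."""
    errors = [f - m for f, m in zip(forecast, measured)]
    n = len(errors)
    bias = sum(errors) / n                              # systematic over/under-forecasting
    mae = sum(abs(e) for e in errors) / n               # mean absolute error
    rmse = math.sqrt(sum(e * e for e in errors) / n)    # penalizes large misses
    return {"bias": bias, "MAE": mae, "RMSE": rmse}

# Hypothetical 24 h ahead forecast vs. measured output for six settlement periods.
forecast_24h = [120.0, 135.0, 150.0, 140.0, 110.0, 95.0]
measured     = [112.0, 140.0, 160.0, 128.0, 105.0, 99.0]
print(forecast_errors(forecast_24h, measured))
```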

Relevance:

20.00%

Publisher:

Abstract:

Distribution systems, eigenvalue analysis, nodal admittance matrix, power quality, spectral decomposition
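
The record provides only keywords. As a hedged illustration of the spectral decomposition they mention, the sketch below (Python/numpy, with hypothetical 3-bus admittance values) performs an eigendecomposition of a small nodal admittance matrix and verifies the reconstruction.

```python
# Illustrative only: a tiny hypothetical 3-bus nodal admittance (Y-bus) matrix.
import numpy as np

# Off-diagonal entries are the negated branch admittances; diagonals are the sums
# of the connected branch admittances (shunt elements omitted for simplicity).
y12, y13, y23 = 5 - 15j, 2 - 8j, 4 - 12j
Y = np.array([
    [ y12 + y13, -y12,       -y13      ],
    [-y12,        y12 + y23, -y23      ],
    [-y13,       -y23,        y13 + y23],
])

# Spectral decomposition Y = V diag(lam) V^-1; the eigenvalues and eigenvectors
# characterize the network's modal behaviour.
lam, V = np.linalg.eig(Y)
recon = V @ np.diag(lam) @ np.linalg.inv(V)

print("eigenvalues:", lam)
print("max reconstruction error:", np.max(np.abs(recon - Y)))
```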

Relevance:

20.00%

Publisher:

Abstract:

Background: Circulatory power (CP) and ventilatory power (VP) are indices that have been used for the clinical evaluation of patients with heart failure; however, no study has evaluated these indices in patients with coronary artery disease (CAD) without heart failure. Objective: To characterize both indices in patients with CAD compared with healthy controls. Methods: Eighty-seven men [CAD group = 42 subjects and healthy control group (CG) = 45 subjects] aged 40–65 years were included. Cardiopulmonary exercise testing was performed on a treadmill and the following parameters were measured: 1) peak oxygen consumption (VO2), 2) peak heart rate (HR), 3) peak blood pressure (BP), 4) peak rate-pressure product (peak HR x peak systolic BP), 5) peak oxygen pulse (peak VO2/peak HR), 6) oxygen uptake efficiency slope (OUES), 7) carbon dioxide production efficiency (minute ventilation/carbon dioxide production slope), 8) CP (peak VO2 x peak systolic BP) and 9) VP (peak systolic BP/carbon dioxide production efficiency). Results: The CAD group had significantly lower values for peak VO2 (p < 0.001), peak HR (p < 0.001), peak systolic BP (p < 0.001), peak rate-pressure product (p < 0.001), peak oxygen pulse (p = 0.008), OUES (p < 0.001), CP (p < 0.001), and VP (p < 0.001), and significantly higher values for peak diastolic BP (p = 0.004) and carbon dioxide production efficiency (p < 0.001) compared with the CG. Stepwise regression analysis showed that CP was influenced by group (R2 = 0.44, p < 0.001) and VP was influenced by both group and the number of vessels with stenosis after treatment (interaction effects: R2 = 0.46, p < 0.001). Conclusion: The indices CP and VP were lower in men with CAD than in healthy controls.
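
CP and VP follow directly from the definitions given in the abstract; the sketch below (Python, with hypothetical peak values rather than data from the study) shows the arithmetic.

```python
# Hypothetical peak values for one subject (not data from the study).
peak_vo2 = 24.5            # mL·kg⁻¹·min⁻¹, peak oxygen consumption
peak_sbp = 180.0           # mmHg, peak systolic blood pressure
ve_vco2_slope = 32.0       # minute ventilation / CO2 production slope (dimensionless)

# Circulatory power: peak VO2 x peak systolic BP.
cp = peak_vo2 * peak_sbp            # mmHg·mL·kg⁻¹·min⁻¹

# Ventilatory power: peak systolic BP / VE-VCO2 slope.
vp = peak_sbp / ve_vco2_slope       # mmHg

print(f"CP = {cp:.0f} mmHg·mL·kg⁻¹·min⁻¹, VP = {vp:.1f} mmHg")
```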

Relevance:

20.00%

Publisher:

Abstract:

This article investigates VoIP transmission quality over Digital Power Line Carrier channels. Transmission quality is assessed using the E-model. The paper considers the possibility of jointly using Digital Power Line Carrier equipment with different architectures in one network. As a result of the research, a rule for constructing multi-segment Digital Power Line Carrier channels is formulated. This rule allows the transmission delay to be minimized while conserving the frequency resources of the high-voltage Power Line Carrier range.
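
The article itself is not reproduced here; as a hedged illustration of E-model-based assessment, the sketch below uses a commonly cited simplified form of the ITU-T G.107 rating (R = R0 - Id - Ie,eff, with the standard R-to-MOS conversion). The one-way delay and the codec impairment value are illustrative assumptions, not figures from the article.

```python
# Simplified E-model sketch (after ITU-T G.107); codec impairment value is assumed.

def r_factor(delay_ms: float, ie_eff: float, r0: float = 93.2) -> float:
    """Simplified transmission rating: R = R0 - Id - Ie_eff (advantage factor A = 0)."""
    # Delay impairment Id grows slowly up to ~177.3 ms one-way delay, then much faster.
    i_d = 0.024 * delay_ms
    if delay_ms > 177.3:
        i_d += 0.11 * (delay_ms - 177.3)
    return r0 - i_d - ie_eff

def mos(r: float) -> float:
    """Map an R-factor to an estimated MOS (standard G.107 conversion)."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

# Example: a multi-segment DPLC channel whose segments add up to 150 ms one-way delay,
# with a hypothetical codec impairment Ie_eff = 11.
r = r_factor(delay_ms=150.0, ie_eff=11.0)
print(f"R = {r:.1f}, estimated MOS = {mos(r):.2f}")
```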

Relevance:

20.00%

Publisher:

Abstract:

This article investigates channel efficiency for IP traffic transmission over Digital Power Line Carrier channels. The application of serial WAN connections and header compression as methods of increasing channel efficiency is considered. Based on the results of the research, an effective solution for network traffic transmission in DPLC networks is proposed.
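
As a back-of-the-envelope illustration of why header compression matters on a narrowband DPLC channel, the sketch below compares payload efficiency with and without compression. The 40-byte IPv4/UDP/RTP header is a standard figure; the 2-byte compressed header, the 20-byte voice frame and the 7-byte serial framing overhead are assumptions for illustration only.

```python
# Back-of-the-envelope payload efficiency on a narrowband serial link.
# Standard header sizes: IPv4 (20 B) + UDP (8 B) + RTP (12 B) = 40 B;
# header compression (e.g. cRTP/ROHC) typically reduces this to a few bytes.

def efficiency(payload_bytes: int, header_bytes: int, framing_bytes: int = 7) -> float:
    """Share of transmitted bytes that carry payload (serial framing overhead assumed)."""
    total = payload_bytes + header_bytes + framing_bytes
    return payload_bytes / total

payload = 20          # hypothetical 20 ms voice frame of 20 bytes
print(f"uncompressed: {efficiency(payload, 40):.0%}")   # roughly a third of the bytes are payload
print(f"compressed:   {efficiency(payload, 2):.0%}")    # roughly two thirds are payload
```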

Relevance:

20.00%

Publisher:

Abstract:

Magdeburg, Univ., Faculty of Economics, Diss., 2014

Relevance:

20.00%

Publisher:

Abstract:

Given the global use of FAM installations and systems, and their location, type and height, a lightning protection system is required that protects people and machines from the dangers of lightning. First, the development, the threat and the destructive potential of lightning are described, and the established normative calculation models and tables that result from them are presented. The product portfolio of FAM is then characterized, and demonstration models are selected from it: on the one hand, a model covering all portable devices, and on the other hand, a model representing the power plants. Subsequently, risk management and a strengths-and-weaknesses analysis are performed. With graphical and mathematical models, these weaknesses, as well as the functional equipotential bonding and grounding system and its dimensioning, are then investigated and solutions are demonstrated. This is followed by a coordination of the lightning protection standard with other directives and standards, in order to classify the resulting internal lightning protection and the protective measures against electromagnetic pulse, and to establish a uniform protection and application platform. Furthermore, problems concerning the economics of protective circuits are shown and a solution is given. Finally, the possible solutions indicated are consolidated into definitions. The main classification and structure of the lightning protection directive are presented and applied by way of example.

Relevance:

20.00%

Publisher:

Abstract:

Magdeburg, Univ., Faculty of Economics, Diss., 2014

Relevance:

20.00%

Publisher:

Abstract:

Magdeburg, Univ., Faculty of Computer Science, Diss., 2015

Relevance:

20.00%

Publisher:

Abstract:

The impact of a power plant cooling system in the Bahía Blanca estuary (Argentina) on the survival of target zooplanktonic organisms (copepods and crustacean larvae) and on overall mesozooplankton abundance was evaluated over time. Mortality rates were calculated for juveniles and adults of four key species in the estuary: Acartia tonsa Dana, 1849 and Eurytemora americana Williams, 1906 (native and invasive copepods), and larvae of the crab Chasmagnathus granulata Dana, 1851 and the invasive cirriped Balanus glandula Darwin, 1854. Mean total mortality values were up to four times higher at the water discharge site than at the intake, though for all four species significant differences were registered only in post-capture mortality. The findings show no evidence of greater larval sensitivity. As expected, the sharpest decrease in overall mesozooplankton abundance was found in areas close to the heated water discharge.