15 results for side effect

at Universidad Politécnica de Madrid


Relevance: 70.00%

Abstract:

It has been shown that it is possible to exploit Independent/Restricted And-parallelism in logic programs while retaining the conventional "don't know" semantics of such programs. In particular, it is possible to parallelize pure Prolog programs while maintaining the semantics of the language. However, when builtin side-effects (such as write or assert) appear in the program, if an identical observable behaviour to that of sequential Prolog implementations is to be preserved, such side-effects have to be properly sequenced. Previously proposed solutions to this problem are either incomplete (lacking, for example, backtracking semantics) or they force sequentialization of significant portions of the execution graph which could otherwise run in parallel. In this paper a series of side-effect synchronization methods are proposed which incur lower overhead and allow more parallelism than those previously proposed. Most importantly, and unlike previous proposals, they have well-defined backward execution behaviour and require only a small modification to a given (And-parallel) Prolog implementation.
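
The sequencing requirement the paper addresses can be illustrated with a small behavioural sketch (Python threads rather than a Prolog engine, with hypothetical names; backtracking is not modelled): an and-parallel goal may execute a side effect such as write only once every goal to its left has finished, which is what preserves the observable behaviour of a sequential execution.

```python
import threading

class SideEffectSequencer:
    """Toy sketch of side-effect sequencing for and-parallel goals:
    goal i may perform a side effect only after goals 0..i-1 have finished."""
    def __init__(self):
        self._cv = threading.Condition()
        self._finished = set()
        self._next = 0  # smallest goal index not yet finished

    def side_effect(self, goal_index, action):
        with self._cv:
            while self._next != goal_index:  # wait for all left siblings
                self._cv.wait()
            action()

    def goal_done(self, goal_index):
        with self._cv:
            self._finished.add(goal_index)
            while self._next in self._finished:
                self._next += 1
            self._cv.notify_all()

out = []
seq = SideEffectSequencer()

def goal(i):
    seq.side_effect(i, lambda: out.append(i))  # stands in for write/1
    seq.goal_done(i)

# Start the goals in reverse order to show the sequencer at work.
threads = [threading.Thread(target=goal, args=(i,)) for i in reversed(range(4))]
for t in threads: t.start()
for t in threads: t.join()
print(out)  # side effects appear in sequential order: [0, 1, 2, 3]
```

Goals without side effects still run fully in parallel; only the side-effecting steps synchronise, which is the source of the lower overhead the paper claims.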

Relevance: 60.00%

Abstract:

Management of certain populations requires the preservation of their pure genetic background. When, for different reasons, undesired alleles are introduced, the original genetic conformation must be recovered. The present study tested, through computer simulations, the power of recovery (the ability to remove the foreign information) from genealogical data. Simulated scenarios comprised different numbers of exogenous individuals taking part of the founder population and different numbers of unmanaged generations before the removal program started. Strategies were based on variables arising from classical pedigree analyses such as founders' contribution and partial coancestry. The efficiency of the different strategies was measured as the proportion of native genetic information remaining in the population. Consequences on the inbreeding and coancestry levels of the population were also evaluated. Minimisation of the exogenous founders' contributions was the most powerful method, removing the largest amount of foreign genetic information in just one generation. However, as a side effect, it led to the highest values of inbreeding. Scenarios with a large amount of initial exogenous alleles (i.e. a high percentage of non-native founders), or many generations of mixing, became very difficult to recover, pointing out the importance of being careful about introgression events in populations.
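
The founders'-contribution statistic underlying the strongest strategy can be computed directly from genealogical data. A minimal sketch (hypothetical toy pedigree; not the simulator used in the study): each individual's expected genome is half of each parent's, so founder contributions propagate recursively down the pedigree.

```python
# Toy pedigree: individual -> (sire, dam); founders map to None.
PEDIGREE = {
    "F1": None, "F2": None, "X1": None,  # X1 is an exogenous founder
    "A": ("F1", "F2"),
    "B": ("F1", "X1"),
    "C": ("A", "B"),
}
EXOGENOUS = {"X1"}

def founder_contributions(ind):
    """Expected proportion of `ind`'s genome contributed by each founder:
    an individual inherits half of each parent's founder contributions."""
    if PEDIGREE[ind] is None:
        return {ind: 1.0}
    out = {}
    for parent in PEDIGREE[ind]:
        for f, p in founder_contributions(parent).items():
            out[f] = out.get(f, 0.0) + 0.5 * p
    return out

def native_fraction(ind):
    """Share of the genome tracing back to native (non-exogenous) founders."""
    return sum(p for f, p in founder_contributions(ind).items()
               if f not in EXOGENOUS)

print(founder_contributions("C"))  # {'F1': 0.5, 'F2': 0.25, 'X1': 0.25}
print(native_fraction("C"))        # 0.75
```

A removal program in the spirit of the study would then select as parents the candidates with the highest native fraction, i.e. minimise the exogenous founders' contributions in the next generation.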

Relevance: 60.00%

Abstract:

This Thesis deals with the development and growth, by MOVPE (MetalOrganic Vapor Phase Epitaxy), of hybrid III-V semiconductor solar cells on silicon substrates. This integration aims to offer an alternative to current III-V cells which, although they hold the efficiency record among photovoltaic devices, are today too expensive to compete economically with conventional silicon cells. This project therefore seeks to combine the high-efficiency potential already demonstrated by III-V semiconductors in multijunction solar cell architectures with the low cost, availability and abundance of silicon. The integration of III-V semiconductors on silicon substrates can be approached in different ways. In this Thesis we have opted for the development of metamorphic GaAsP/Si dual-junction solar cells. In this technique, the transition between the lattice parameters of the two materials is achieved through the formation of crystallographic defects (mostly dislocations). The idea is to confine these defects during the growth of successive compositionally graded layers so that the final surface has, on the one hand, good structural quality and, on the other, the appropriate lattice parameter. Numerous research groups have directed their efforts in recent years towards developing a structure similar to the one proposed here. Most of them have focused on understanding the challenges associated with the growth of III-V materials, in order to obtain material of high crystallographic quality. However, practically none of these groups has paid special attention to the development and optimization of the bottom silicon cell, whose role is highly relevant to the performance of the complete cell.
Accordingly, and in order to complete the work done so far on the development of III-V-on-silicon cells, this Thesis focuses primarily on the design and optimization of the bottom silicon cell, to extract its maximum potential. The work is structured in six chapters, ordered according to the natural development of the bottom cell. After an introductory chapter on the growth of III-V semiconductors on Si, describing the different alternatives for their integration, we turn to the experimental part, beginning with an extensive description and characterization of the silicon substrates. Chapter 2 thus analyzes exhaustively the different treatments (both chemical and thermal) these substrates must undergo to guarantee an optimal surface on which to grow the rest of the structure epitaxially. Moving on to the design of the bottom cell, Chapter 3 addresses the formation of the p-n junction. First, we analyze which emitter configuration (in terms of doping and thickness) is best suited to extract maximum performance from the bottom cell. This first study compares the existing alternatives for creating the emitter, weighing the advantages and drawbacks each approach offers over the others. A theoretical model, implemented in the Silvaco software, is then presented, capable of simulating the diffusion of phosphorus into silicon in a MOVPE environment. With this model we can determine the experimental conditions needed to obtain an emitter with the selected design. Finally, these models are validated experimentally by characterizing p-n junctions with diffused emitters using analytical techniques (i.e. ECV or SIMS).
One of the main problems associated with emitter formation by phosphorus diffusion is the surface degradation of the substrate caused by its exposure to high concentrations of phosphine (the phosphorus source). Indeed, the roughness of the silicon must be carefully controlled, since it will serve as the base for subsequent epitaxial growth and must therefore present a pristine surface to avoid morphological and crystallographic degradation of the upper layers. Chapter 4 accordingly includes an exhaustive analysis of the morphological degradation of silicon substrates during emitter formation. Different alternatives for recovering the surface are also proposed, aimed at achieving sub-nanometre roughness that does not compromise the quality of the epitaxial growth. Finally, through theoretical work, a correlation is established between the experimentally observed morphological degradation and the diffusion profile of phosphorus in silicon, and hence the characteristics of the emitter. Once the formation of the p-n junction itself is complete, the problems associated with the growth of the GaP nucleation layer are addressed. On the one hand, this layer passivates the silicon subcell, so its growth must be regular and homogeneous so that the silicon surface is fully passivated and the surface recombination velocity at the GaP/Si interface is minimal. On the other hand, its growth must minimize the appearance of the defects typical of heteroepitaxy of a polar layer on a non-polar substrate, known as antiphase domains.
Chapter 5 explores different nucleation routines, within the wide range of existing possibilities, to obtain a GaP layer of good morphological and structural quality, which is analyzed by various microscopic characterization techniques. The last part of this Thesis is devoted to the study of the photovoltaic properties of the bottom cell. It analyzes the evolution of the minority-carrier lifetimes of the base during two key stages in the development of the III-V/Si structure: the formation of the bottom cell and the growth of the III-V layers. This study was carried out in collaboration with The Ohio State University, which has extensive experience in the growth of III-V materials on silicon. The Thesis closes by highlighting the overall conclusions of the work and proposing several lines of future work. ABSTRACT This thesis pursues the development and growth of hybrid solar cells (through Metal Organic Vapor Phase Epitaxy, MOVPE) formed by III-V semiconductors on silicon substrates. This integration aims to provide an alternative to current III-V cells which, despite holding the efficiency record for photovoltaic devices, are today too expensive to be economically competitive with conventional silicon cells. Accordingly, the target of this project is to link the already demonstrated efficiency potential of III-V semiconductor multijunction solar cell architectures with the low cost and unconstrained availability of silicon substrates. Within the existing alternatives for the integration of III-V semiconductors on silicon substrates, this thesis is based on the metamorphic approach for the development of GaAsP/Si dual-junction solar cells.
In this approach, the accommodation of the lattice mismatch is handled through the appearance of crystallographic defects (namely dislocations), which are confined through the incorporation of a graded buffer layer. The resulting surface has, on the one hand, good structural quality and, on the other, the desired lattice parameter. Different research groups have worked in recent years on a structure similar to the one described here, with most of their efforts directed towards the optimization of the heteroepitaxial growth of III-V compounds on Si, with the primary goal of minimizing the appearance of crystal defects. However, none of these groups has paid much attention to the development and optimization of the bottom silicon cell, which will in fact play an important role in the overall solar cell performance. In this respect, the idea of this thesis is to complete the work done so far in this field by focusing on the design and optimization of the bottom silicon cell, to harness its efficiency. This work is divided into six chapters, organized according to the natural progress of the bottom cell development. After a brief introduction to the growth of III-V semiconductors on Si substrates, pointing out the different alternatives for their integration, we move to the experimental part, which begins with an extensive description and characterization of the silicon substrates, the base of the III-V structure. In this chapter, a comprehensive analysis of the different treatments (chemical and thermal) required to prepare silicon surfaces for subsequent epitaxial growth is presented. The next step in the development of the bottom cell is the formation of the p-n junction itself, addressed in Chapter 3. First, the emitter configuration (in terms of doping and thickness) is optimized using analytical models.
This study includes a comparison between the different alternatives for emitter formation, evaluating the advantages and disadvantages of each approach. After the theoretical design of the emitter, a practical parameter space for its experimental implementation is defined through modeling of the P-in-Si diffusion process. The characterization of these emitters with different analytical tools (i.e. ECV or SIMS) validates and provides experimental support for the theoretical models. A side effect of emitter formation by P diffusion is the roughening of the Si surface. Accordingly, once the p-n junction is formed, it is necessary to ensure that the Si surface is smooth and clean enough for the subsequent phases. Indeed, the roughness of the Si must be carefully controlled, since it will be the basis for the epitaxial growth. After quantifying (experimentally and with theoretical models) the impact of phosphorus on the silicon surface morphology, different alternatives for the recovery of the surface are proposed in order to achieve a sub-nanometre roughness that does not endanger the quality of the incoming III-V layers. Moving a step further in the development of the III-V/Si structure means addressing the challenges associated with GaP-on-Si nucleation. On the one hand, this layer provides surface passivation to the emitter; its growth must therefore be homogeneous and continuous so that the Si emitter is fully passivated, giving a minimal surface recombination velocity at the interface. On the other hand, the growth should be such that the appearance of the defects typical of growing a polar layer on a non-polar substrate is minimized.
Chapter 5 includes an exhaustive study of the GaP-on-Si nucleation process, exploring different nucleation routines to achieve high morphological and structural quality, characterized by means of different microscopy techniques. Finally, an extensive study of the photovoltaic properties of the bottom cell and their evolution during key phases in the fabrication of an MOCVD-grown III-V-on-Si epitaxial structure (i.e. the formation of the bottom cell and the growth of the III-V layers) is presented in the last part of this thesis. This study was conducted in collaboration with The Ohio State University, which has extensive experience in the growth of III-V materials on silicon. This thesis concludes by highlighting the overall conclusions of the presented work and proposing different lines of work to be undertaken in the future.

Relevance: 60.00%

Abstract:

This paper presents a project for providing the students of Structural Engineering with the flexibility to learn outside classroom schedules. The goal is a framework for adaptive e-learning based on a repository of open educational courseware with a set of basic Structural Engineering concepts and fundamentals. These are paramount for students to expand their technical knowledge and skills in the structural analysis and design of tall buildings, arch-type structures and bridges. Concepts related to structural behaviour such as linearity, compatibility, stiffness and influence lines have traditionally been elusive for students. The objective is to provide the student with a teaching-learning process for acquiring the necessary intuitive knowledge, cognitive skills and the basis for further technological modules and professional development in this area. As a side effect, the system is expected to help the students improve their preparation for exams on the subject. In this project, a web-based open-source system for studying influence lines on continuous beams is presented. It encompasses a collection of interactive, user-friendly applications accessible via the Web, written in JavaScript using the jQuery and Dygraphs libraries, taking advantage of their efficiency and graphic capabilities. It is available in both Spanish and English. The student can set the geometric, topologic, boundary and mechanical layout of a continuous beam. While the loading and the support conditions are changed, the changes in the beam response appear on the screen, so that the effects of the several issues involved in structural analysis become apparent. This open interaction allows the student to simulate and virtually infer the structural response. Different levels of complexity can be handled, and ongoing help is at hand for each of them.
Students can freely boost their experiential learning on this subject at their own pace, in order to further share, process, generalize and apply the relevant essential concepts of Structural Engineering analysis. In addition, this collection is being added to the "Virtual Lab of Continuum Mechanics" of the UPM, launched in 2013 (http://serviciosgate.upm.es/laboratoriosvirtuales/laboratorios/medios-continuos-en-construcci%C3%B3n)
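
The influence-line concept that the applications visualize can be stated compactly. As a hedged illustration (a single simply supported span in Python, far simpler than the continuous beams the JavaScript tools handle): the influence line for the bending moment at a section c gives the moment produced there by a unit load placed at each position a along the span.

```python
def moment_influence(a, c, L):
    """Bending-moment influence-line ordinate at section c of a simply
    supported beam of span L, for a unit load at position a."""
    if not (0 <= a <= L and 0 <= c <= L):
        raise ValueError("positions must lie within the span")
    if a <= c:
        return a * (L - c) / L  # load to the left of the section
    return c * (L - a) / L      # load to the right of the section

L = 10.0
ordinates = [moment_influence(a, L / 2, L) for a in (0.0, 2.5, 5.0, 7.5, 10.0)]
print(ordinates)  # [0.0, 1.25, 2.5, 1.25, 0.0] -- peak value L/4 at midspan
```

Sweeping the load position and replotting the ordinates is exactly the interaction the web applications offer, extended to multiple spans and varying support conditions.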

Relevance: 40.00%

Abstract:

Plant trichomes play important protective functions and may have a major influence on leaf surface wettability. With the aim of gaining insight into trichome structure, composition and function in relation to water-plant surface interactions, we analyzed the adaxial and abaxial leaf surface of Quercus ilex L. (holm oak) as model. By measuring the leaf water potential 24 h after the deposition of water drops on to abaxial and adaxial surfaces, evidence for water penetration through the upper leaf side was gained in young and mature leaves. The structure and chemical composition of the abaxial (always present) and adaxial (occurring only in young leaves) trichomes were analyzed by various microscopic and analytical procedures. The adaxial surfaces were wettable and had a high degree of water drop adhesion in contrast to the highly unwettable and water repellent abaxial holm oak leaf sides. The surface free energy, polarity and solubility parameter decreased with leaf age, with generally higher values determined for the abaxial sides. All holm oak leaf trichomes were covered with a cuticle. The abaxial trichomes were composed of 8% soluble waxes, 49% cutin, and 43% polysaccharides. For the adaxial side, it is concluded that trichomes and the scars after trichome shedding contribute to water uptake, while the abaxial leaf side is highly hydrophobic due to its high degree of pubescence and different trichome structure, composition and density. Results are interpreted in terms of water-plant surface interactions, plant surface physical-chemistry, and plant ecophysiology.

Relevance: 30.00%

Abstract:

Infrared thermography (IRT) is a non-invasive, low-cost technique that allows, with the simple act of taking a photograph, the contactless recording of the energy radiated by the human body (Akimov & Son'kin, 2011; Merla et al., 2005; Ng et al., 2009; Costello et al., 2012; Hildebrandt et al., 2010). The technique began to be used in the medical field in the 1960s but, owing to poor results as a diagnostic tool and the lack of standardized protocols (Head & Elliot, 2002), it was abandoned in favour of more accurate diagnostic techniques. Nevertheless, the technological improvements of IRT in recent years have made its resurgence possible (Jiang et al., 2005; Vainer et al., 2005; Cheng et al., 2009; Spalding et al., 2011; Skala et al., 2012), opening the way to new applications no longer centred solely on diagnosis. Among these new applications, we highlight those in physical activity and sport, where it has recently been shown that the new advances in high-resolution imaging can provide very interesting information about the complex human thermoregulation system (Hildebrandt et al., 2010). Notable new applications include: quantifying the assimilation of physical training load (Čoh & Širok, 2007), assessing physical condition (Chudecka et al., 2010, 2012; Akimov et al., 2009, 2011; Merla et al., 2010), injury prevention and monitoring (Hildebrandt et al., 2010, 2012; Badža et al., 2012; Gómez Carmona, 2012) and even the detection of delayed-onset muscle soreness (Al-Nakhli et al., 2012). Under these circumstances, there is a growing need to broaden knowledge of the factors that influence the application of IRT to human beings, and to describe the response of skin temperature (Tsk) under normal conditions and under the influence of different types of exercise.
Accordingly, the first part of this study presents a literature review of the factors affecting the use of IRT on human beings and a proposed classification of them. We analysed the reliability of the Termotracker software, as well as the reproducibility of skin temperature in young, healthy, normal-weight subjects. Finally, the thermal response of the skin was analysed before endurance, speed and strength training sessions, immediately afterwards, and over an 8-hour recovery period. Regarding the literature review, we proposed a classification organizing the factors into three main groups: environmental, individual and technical. The analysis and description of these influences should form the basis of further research so that IRT can be used under the best conditions. Regarding reproducibility, the results showed excellent values for consecutive images, although the reproducibility of Tsk decreased slightly for images separated by 24 hours, above all in the colder regions (i.e. distal areas and joints). Thermal asymmetries (normally used to follow the evolution of overloaded or injured areas) also showed excellent results, in this case with better values for the joints and central areas (i.e. knees, ankles, dorsal and pectoral regions) than for the regions of interest (ROIs) with warmer mean values (such as the thighs and hamstrings). The reliability results of the Termotracker software were excellent for all conditions and parameters. In the study of the effects of speed, endurance and strength training on Tsk, the results show specific responses depending on the type of training, the region of interest, the moment of assessment and the function of the areas analysed.
The results showed that most muscular ROIs remained significantly warmer 8 hours after training, indicating that the effect of exercise on Tsk persists for at least 8 hours in most of the areas analysed. IRT could thus be useful for quantifying assimilation and physical recovery after a physical workload. These results could be very useful for better understanding the complex human thermoregulation system and, therefore, for using IRT in a more objective, accurate and professional way, with a view to improving new thermographic applications in the physical activity and sport sector. ABSTRACT Infrared Thermography (IRT) is a safe, non-invasive and low-cost technique that allows the rapid and non-contact recording of the irradiated energy released from the body (Akimov & Son'kin, 2011; Merla et al., 2005; Ng et al., 2009; Costello et al., 2012; Hildebrandt et al., 2010). It has been used since the early 1960s but, due to poor results as a diagnostic tool and a lack of methodological standards and quality assurance (Head et al., 2002), it was rejected from the medical field. Nevertheless, the technological improvements of IRT in the last years have made possible a resurgence of this technique (Jiang et al., 2005; Vainer et al., 2005; Cheng et al., 2009; Spalding et al., 2011; Skala et al., 2012), paving the way to new applications not only focused on diagnostic uses.
Among the new applications, we highlight those in the physical activity and sport fields, where it has recently been proven that high-resolution thermal images can provide interesting information about the complex thermoregulation system of the body (Hildebrandt et al., 2010), information that can be used for: training workload quantification (Čoh & Širok, 2007), fitness and performance assessment (Chudecka et al., 2010, 2012; Akimov et al., 2009, 2011; Merla et al., 2010; Arfaoui et al., 2012), prevention and monitoring of injuries (Hildebrandt et al., 2010, 2012; Badža et al., 2012; Gómez Carmona, 2012) and even detection of delayed-onset muscle soreness (DOMS) (Al-Nakhli et al., 2012). In this context, there is a clear need to broaden the knowledge about the factors influencing the application of IRT on humans, and to better explore and describe the thermal response of skin temperature (Tsk) under normal conditions and under the influence of different types of exercise. Consequently, this study presents a literature review of the factors affecting the application of IRT on human beings and a proposed classification of them. We analysed the reliability of the software Termotracker®, and also its reproducibility of Tsk on young, healthy, normal-weight subjects. Finally, we examined the Tsk response before endurance, speed and strength training, immediately after, and during an 8-hour recovery period. Concerning the literature review, we proposed a classification organizing the factors into three main groups: environmental, individual and technical factors. Better exploring and describing these influence factors should form the basis of further investigations in order to use IRT under optimal conditions and improve its accuracy and results.
Regarding the reproducibility results, the outcomes showed excellent values for consecutive images, but the reproducibility of Tsk decreased slightly with time, above all in the colder regions of interest (ROIs) (i.e. distal and joint areas). The side-to-side differences (ΔT) (normally used to follow the evolution of injured or overloaded ROIs) also showed highly accurate results, in this case with better values for joints and central ROIs (i.e. knees, ankles, dorsal and pectoral) than for the warmest muscle ROIs (such as the thigh or hamstrings). The reliability results of the IRT software Termotracker® were excellent for all conditions and parameters. In the part of the study on the effects of aerobic, speed and strength training on Tsk, the results demonstrated specific responses depending on the type of training, the ROI, the moment of assessment and the function of the ROI considered. Most muscular ROIs remained significantly warmer 8 hours after training, indicating that the effect of exercise on Tsk lasts at least 8 hours in most ROIs, and that IRT could help quantify the recovery status of the athlete as a workload-assimilation indicator. These results could be very useful for better understanding the complex skin thermoregulation behaviour and, therefore, for using IRT in a more objective, accurate and professional way, improving the new IRT applications in the physical activity and sport sector.
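
The kind of test-retest comparison reported here can be sketched numerically (hypothetical ROI temperatures; the study itself used Termotracker® and formal reliability statistics): a mean absolute difference and a correlation between two sessions 24 h apart.

```python
from statistics import mean, pstdev

def pearson(x, y):
    """Pearson correlation: a simple agreement measure between sessions."""
    mx, my = mean(x), mean(y)
    cov = mean((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (pstdev(x) * pstdev(y))

# Hypothetical mean Tsk per ROI (deg C) on two days, 24 h apart.
day1 = [31.2, 30.8, 29.5, 28.9]
day2 = [31.0, 30.9, 29.1, 28.6]

mad = mean(abs(a - b) for a, b in zip(day1, day2))  # mean absolute difference
r = pearson(day1, day2)
print(round(mad, 2), round(r, 3))  # 0.25 0.991
```

A small mean absolute difference together with a high correlation is what "excellent reproducibility" amounts to in this kind of analysis; colder distal ROIs would show larger day-to-day differences.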

Relevance: 30.00%

Abstract:

Early propagation effect (EPE) is a critical problem in conventional dual-rail logic implementations against side-channel attacks (SCAs). Among previous EPE-resistant architectures, PA-DPL logic offers EPE-free capability at relatively low cost. However, its separate dual-core structure is a weakness when facing concentrated EM attacks, where a tiny EM probe can be precisely positioned close to one of the two cores. In this paper, we present a PA-DPL dual-core interleaved structure to strengthen the resistance against sophisticated EM attacks on Xilinx FPGA implementations. The main merit of the proposed structure is that the two routes in each signal pair are kept identical even though the dual cores are interleaved. By minimizing the distance between the complementary routings and instances of both cores, even a concentrated EM measurement cannot easily distinguish the minor EM field unbalance. In PA-DPL, EPE is avoided by compressing the evaluation phase to a small portion of the clock period; the speed is therefore inevitably limited. Regarding this, we made an improvement to extend the duty cycle of the evaluation phase to more than 40 percent, yielding a higher maximum working frequency. The detailed design flow is also presented. We validate the security improvement against EM attacks by implementing a simplified AES co-processor in a Virtex-5 FPGA.
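
The balancing idea behind dual-rail precharge logic can be shown in a few lines (a behavioural sketch, not the PA-DPL circuit itself): every bit travels on a true/false rail pair, so the number of rails set during evaluation, and hence the number of transitions out of the all-zero precharge state, is the same for any data value.

```python
def dual_rail(bits):
    """Encode each bit b as the complementary rail pair (b, 1 - b)."""
    return [(b, 1 - b) for b in bits]

def rails_set(pairs):
    """Rails at 1 after evaluation == transitions from the precharged
    all-zero state, i.e. the quantity a power/EM attacker observes."""
    return sum(t + f for t, f in pairs)

for word in ([0, 0, 0, 0], [1, 0, 1, 1], [1, 1, 1, 1]):
    print(word, rails_set(dual_rail(word)))  # always 4, independent of data
```

EPE and routing imbalance break this ideal picture in real hardware: if one rail of a pair switches earlier or drives a longer wire than its complement, the two encodings become distinguishable again, which is exactly what the interleaved PA-DPL structure counteracts.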

Relevance: 30.00%

Abstract:

A measurement investigation at ADIF's test site on the O Eixo viaduct, on the Spanish Santiago-Ourense high-speed railway line, has been carried out during the last year. The main goal of the investigation is to study the effect of cross-wind on railway overheads (catenaries) and the influence of the presence of windbreaks on the wind-induced motion of the railway overhead. A description of the O Eixo viaduct test site is presented in this paper, including the installed windbreaks, the sensors and the power supply systems. Three catenary spans have been instrumented at the centre point of the contact wire with one ultrasonic anemometer and two unidirectional accelerometers. Additionally, another ultrasonic anemometer placed in the central catenary span has been installed to provide reference wind data. Wind roses of the wind speed and of the standard deviation of the accelerations are presented. As expected, the four wind roses look very similar, and the two dominant directions, north and south, close to the perpendicular to the bridge longitudinal axis, have been identified. The wind roses of the standard deviation of the acceleration show that the acceleration of the catenary contact wire is related to the directions of the two dominant winds. The vertical standard deviation of the acceleration is higher than the horizontal one for the spans with windbreaks. It has also been observed that the presence of the windbreaks modifies the wind flow, leading to a wind-induced motion of the catenary contact wire with a higher variability than in the corresponding unprotected case. On the one hand, the baseline southerly wind configuration (south wind, windbreaks on the windward side and catenary on the leeward side) influences both the mean speed at the catenary and the turbulence intensity.
On the other hand, the northerly wind configuration, with windbreaks on the leeward side and catenary on the windward side, provides a reference for the response of an unprotected railway overhead and, as expected, the windbreak influence is much smaller than in the southerly wind configuration. Both the height of the windbreak and its eaves contribute to the increase in the turbulence intensity at the height of the catenary contact wire. The height of the windbreak plays a crucial role in the increase of turbulence intensity, much more so than the presence of the windbreak eave.
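
Turbulence intensity, the quantity the windbreaks are found to increase, is simply the ratio of the wind-speed standard deviation to its mean. A minimal sketch with made-up anemometer samples (illustrative only, not the site data):

```python
from statistics import mean, pstdev

def turbulence_intensity(u):
    """I = sigma_u / U_mean for a wind-speed time series (same units)."""
    return pstdev(u) / mean(u)

samples = [8.0, 10.0, 12.0, 10.0]  # hypothetical wind speeds, m/s
print(round(turbulence_intensity(samples), 3))  # 0.141
```

Applied to the reference anemometer and the anemometers behind the windbreaks, the same statistic quantifies how much the barrier height and eaves raise the turbulence level at the contact-wire height.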

Relevance: 30.00%

Abstract:

Multiple-input multiple-output (MIMO) systems have brought a great enhancement in wireless communications performance. The use of multiple antennas at each side of the radio link has been included in recent drafts and standards such as WLAN, WiMAX and DVB-T2. MIMO performance depends on the antenna array characteristics, and thus several aspects have to be taken into account when designing MIMO antennas. In the literature, many articles can be found on capacity or antenna design, but in this article, different types of antenna arrays for MIMO systems are measured in a reverberation chamber with and without a phantom acting as a user's head. The results show that MIMO performance is degraded by the user in terms of efficiency, diversity gain and capacity. Omnidirectional antennas such as monopoles with high radiation efficiency offer the highest performance for a rich-scattering, non-line-of-sight indoor environment.
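
The capacity gain that motivates MIMO is usually quantified with the standard equal-power formula C = log2 det(I + (SNR/Nt) H Hᴴ). For a diagonal channel matrix this reduces to a sum over independent eigenmodes, which keeps the sketch in pure Python (illustrative only; not the measurement procedure of the article):

```python
from math import log2

def mimo_capacity_diag(gains, snr):
    """Capacity (bit/s/Hz) of a MIMO link whose channel matrix is diagonal
    with per-antenna gains h_i, with equal power split over Nt antennas:
    C = sum_i log2(1 + (snr / Nt) * |h_i|**2)."""
    nt = len(gains)
    return sum(log2(1 + (snr / nt) * abs(h) ** 2) for h in gains)

snr = 10.0  # linear scale (10 dB)
print(round(mimo_capacity_diag([1.0], snr), 3))       # SISO: 3.459
print(round(mimo_capacity_diag([1.0, 1.0], snr), 3))  # ideal 2x2: 5.17
```

The user-induced degradation measured in the reverberation chamber shows up in this picture as reduced effective gains |h_i| (lower efficiency) and correlated branches, both of which shrink the capacity sum.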

Relevância:

30.00% 30.00%

Publicador:

Resumo:

The research is an exhaustive study of the microstructure and of the stress-strain curves of structural steel S460N at temperatures typical of a fire. It includes a fractographic study of the fracture surfaces of cylindrical specimens tensile-tested under different fire scenarios, explaining the relationship between the failure micromechanisms and temperature. The paper ends with a comparison between the experimentally obtained stress-strain curves and those proposed by Eurocode EC3, showing that, in the case of steel S460N, the code curves are on the side of safety.
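For reference, the Eurocode comparison relies on the elevated-temperature strength reduction curve of EN 1993-1-2 (Eurocode 3, structural fire design). The sketch below interpolates the commonly tabulated effective-yield-strength reduction factors; the values are quoted from memory and should be checked against the standard before any use:

```python
import numpy as np

# Effective-yield-strength reduction factors k_y,theta for carbon steel,
# as tabulated in EN 1993-1-2. Quoted from memory -- verify against the
# standard before any design use.
EC3_TEMP_C = np.array([20.0, 100.0, 200.0, 300.0, 400.0, 500.0, 600.0,
                       700.0, 800.0, 900.0, 1000.0, 1100.0, 1200.0])
EC3_KY = np.array([1.00, 1.00, 1.00, 1.00, 1.00, 0.78, 0.47,
                   0.23, 0.11, 0.06, 0.04, 0.02, 0.00])

def yield_strength_at(theta_c, fy_20c=460.0):
    # Linear interpolation between tabulated points; fy_20c defaults to
    # the nominal room-temperature yield strength of S460 (460 MPa).
    return float(fy_20c * np.interp(theta_c, EC3_TEMP_C, EC3_KY))
```

An experimental curve lying above this interpolated code curve at a given temperature is what "on the side of safety" means in practice.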

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Conventional dual-rail precharge logic suffers from difficult implementation of the dual-rail structure needed to obtain strict compensation between the counterpart rails. As a lightweight and high-speed dual-rail style, balanced cell-based dual-rail logic (BCDL) uses synchronised compound gates with a global precharge signal to provide high resistance against differential power or electromagnetic analyses. BCDL can be realised from generic field-programmable gate array (FPGA) design flows with constraints. However, routing remains a concern because of the limited flexibility of routing control, which results in bias between complementary nets in security-sensitive parts. In this article, based on a routing repair technique, novel verifications of the routing effect are presented. An 8-bit simplified Advanced Encryption Standard (AES) co-processor, constructed on block random access memory (RAM)-based BCDL, is implemented in Xilinx Virtex-5 FPGAs. Since imbalanced routings are the major defect in BCDL, the authors can rule out other influences and fairly quantify the security variations. A series of asymptotic correlation electromagnetic (EM) analyses is launched against a group of circuits with consecutive routing schemes in order to verify the routing impact on side-channel analyses. After repairing the non-identical routings, mutual information analyses are executed to further validate the concrete security increase obtained from identical routing pairs in BCDL.
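The correlation analyses mentioned above follow the usual recipe: correlate a hypothetical leakage model against the measured traces for every key guess. The sketch below uses a simplified Hamming-weight model of plaintext XOR guess on synthetic traces (a real attack would target, e.g., the AES S-box output); the secret value, trace counts, and noise levels are all illustrative assumptions:

```python
import numpy as np

HW = np.array([bin(v).count("1") for v in range(256)])  # Hamming weights

def correlation_scores(traces, plaintexts):
    # For each key guess, Pearson-correlate the hypothetical leakage
    # HW(plaintext XOR guess) against every trace sample and keep the
    # peak absolute correlation as that guess's score.
    t = traces - traces.mean(axis=0)
    t_norm = np.sqrt((t * t).sum(axis=0))
    scores = np.zeros(256)
    for guess in range(256):
        m = HW[plaintexts ^ guess].astype(float)
        m -= m.mean()
        corr = (m @ t) / (np.sqrt(m @ m) * t_norm)
        scores[guess] = np.max(np.abs(corr))
    return scores

# Synthetic traces: sample 7 leaks HW(p XOR secret) plus Gaussian noise.
rng = np.random.default_rng(2)
secret = 0x3C
pts = rng.integers(0, 256, 3000)
traces = rng.normal(0.0, 1.0, (3000, 20))
traces[:, 7] += 0.8 * HW[pts ^ secret]
scores = correlation_scores(traces, pts)
recovered = int(np.argmax(scores))
```

A well-balanced BCDL routing pushes the correct-key peak down toward the noise floor of the wrong guesses, which is exactly what the repaired-routing experiments quantify.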

Relevância:

30.00% 30.00%

Publicador:

Resumo:

This Thesis addresses the efficiency problems of electrical grids from the consumption point of view. In particular, efficiency is improved by smoothing the aggregated consumption curve. This smoothing objective entails two major improvements in the use of electrical grids: i) in the short term, a better use of the existing infrastructure, and ii) in the long term, a reduction of the infrastructure required to supply the same energy needs. In addition, this Thesis faces a new energy paradigm in which distributed generation, in particular photovoltaic (PV) generation, is widespread over the electrical grids. This kind of energy source affects the operation of the grid by increasing its variability, which implies that a high penetration rate of photovoltaic electricity is pernicious for grid stability. This Thesis seeks to smooth the aggregated consumption while taking this energy source into account. Therefore, not only is the efficiency of the electrical grid improved, but the penetration of photovoltaic electricity into the grid can also be increased. This proposal brings great benefits in the economic, social, and environmental fields.
The actions that influence the way consumers use electricity in order to achieve energy savings or higher efficiency are called Demand-Side Management (DSM). This Thesis proposes two different DSM algorithms to meet the aggregated-consumption smoothing objective. The difference between the two algorithms lies in the framework in which they operate: the local framework and the grid framework. Depending on the framework, the energy goal and the procedure to reach it are different.
In the local framework, the DSM algorithm uses only local information; it does not take into account other consumers or the aggregated consumption of the electrical grid. Although this may differ from the general definition of DSM, it makes sense in local facilities equipped with Distributed Energy Resources (DERs). In this case, DSM is focused on maximising the use of local energy, reducing dependence on the grid. The proposed DSM algorithm significantly improves the self-consumption of the local PV generator. Simulated and real experiments show that self-consumption is an important energy management strategy, reducing electricity transport and encouraging users to control their energy behaviour. However, despite all the advantages of increased self-consumption, it does not contribute to smoothing the aggregated consumption. The effects of local facilities on the electrical grid have been studied when the DSM algorithm is focused on self-consumption maximisation. This approach may have undesirable effects, increasing the variability of the aggregated consumption instead of reducing it, because in the local framework the algorithm considers only local variables. The results suggest that coordination between facilities is required: through this coordination, consumption should be modified taking other elements of the grid into account and seeking the smoothing of the aggregated consumption.
In the grid framework, the DSM algorithm takes into account both local and grid information. This Thesis develops a self-organised algorithm to manage the consumption of an electrical grid in a distributed way. The goal of this algorithm is the smoothing of the aggregated consumption, as in classical DSM implementations. The distributed approach means that DSM is performed from the consumers' side without following direct commands issued by a central entity. This Thesis therefore proposes a parallel management structure rather than a hierarchical one as in classical electrical grids, which implies that a coordination mechanism between facilities is required. This Thesis seeks to minimise the amount of information necessary for this coordination. To achieve this objective, two collective coordination techniques have been used: coupled oscillators and swarm intelligence. The combination of these techniques to coordinate a system with the characteristics of the electrical grid is in itself a novel approach, so this coordination objective is a contribution not only to the energy management field but also to the field of collective systems. Results show that the proposed DSM algorithm reduces the difference between the maximums and minimums of the electrical grid in proportion to the amount of energy controlled by the algorithm: the greater the amount of energy controlled, the greater the efficiency improvement of the electrical grid.
In addition to the advantages resulting from the smoothing of the aggregated consumption, other advantages arise from the distributed approach followed in this Thesis. They are summarised in the following features of the proposed DSM algorithm:
• Robustness: in a centralised system, a failure of the central node causes a malfunction of the whole system. Managing the grid from a distributed point of view implies that there is no central control node, so a failure in any facility does not affect the overall operation of the grid.
• Data privacy: the distributed topology means that there is no central node holding sensitive information about all consumers. This Thesis goes a step further: the proposed DSM algorithm does not use specific information about consumer behaviour, so the coordination between facilities is completely anonymous.
• Scalability: the proposed DSM algorithm operates with any number of facilities, allowing the incorporation of new facilities without affecting its operation.
• Low cost: the proposed DSM algorithm adapts to current grids without topological requirements. In addition, every facility computes its own management with low computational requirements, so no central node with high computational power is needed.
• Quick deployment: the scalability and low cost of the proposed DSM algorithm allow a quick deployment; no complex deployment schedule is required.
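As a toy illustration of the grid-framework idea (decentralised decisions driven only by a shared aggregate profile), the sketch below lets each facility shift a deferrable fraction of its load from the current aggregate-peak hour to the valley hour. This is a deliberately simplified stand-in, not the thesis's coupled-oscillator/swarm algorithm; the facility counts, profiles, and deferrable fraction are illustrative assumptions:

```python
import numpy as np

def smooth(loads, deferrable_frac=0.3, rounds=5):
    # Each facility, in turn, moves a small deferrable amount of energy
    # from the current aggregate-peak hour to the aggregate-valley hour,
    # reacting only to the shared aggregate profile (no central commands).
    loads = loads.astype(float)
    n_fac, n_hours = loads.shape
    for _ in range(rounds):
        for i in range(n_fac):
            agg = loads.sum(axis=0)
            src, dst = int(np.argmax(agg)), int(np.argmin(agg))
            movable = deferrable_frac * loads[i].sum() / n_hours
            shift = min(movable, loads[i, src])
            loads[i, src] -= shift
            loads[i, dst] += shift
    return loads

# Synthetic neighbourhood: 50 facilities, 24-hour profiles sharing a
# sinusoidal evening peak.
rng = np.random.default_rng(3)
base = (rng.uniform(0.5, 1.5, (50, 24))
        * (1.0 + np.sin(np.linspace(0.0, 2.0 * np.pi, 24))))
before = base.sum(axis=0)
after = smooth(base).sum(axis=0)
```

Total energy is conserved (loads are shifted, not shed), while the max-min gap of the aggregate shrinks with the amount of energy the algorithm controls, mirroring the proportionality observed in the thesis.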

Relevância:

30.00% 30.00%

Publicador:

Resumo:

MIMO techniques increase wireless channel performance by decreasing the BER and increasing the channel throughput, and are consequently included in current mobile communication standards. MIMO techniques are based on exploiting the multipath inherent in wireless communications through appropriate signal processing. The singular value decomposition (SVD) is a popular signal processing technique which, given perfect channel state information (PCSI) at both the transmitter and receiver sides, removes inter-antenna interference and improves channel performance. Nevertheless, the proximity of the multiple antennas at each front-end produces the so-called antenna correlation effect, due to the similarity of the various physical paths, and antenna correlation degrades MIMO channel performance. This investigation focuses on the analysis of a MIMO channel under transmitter-side antenna correlation. First, antenna correlation is analysed and characterised by the correlation coefficients. The analysis describes the relation between antenna correlation and the appearance of predominant layers which significantly affect the channel performance. Then, based on the SVD, pre- and post-processing is applied to remove inter-antenna interference. Finally, bit- and power-allocation strategies are applied to reach the best performance. The resulting BER reveals that the antenna correlation effect diminishes the channel performance and that not all MIMO layers necessarily need to be activated to obtain the best performance.
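The effect of transmit-side correlation on the layer gains can be sketched with the Kronecker channel model H = Hw·Rtx^(1/2): as the correlation coefficient grows, the singular-value spread widens and a predominant layer appears, which is why weak layers may be left unactivated. The uniform-correlation Rtx and the dimensions below are illustrative assumptions:

```python
import numpy as np

def correlated_channel(n, rho, rng):
    # Kronecker model with transmitter-side correlation: H = Hw @ Rtx^(1/2),
    # where Rtx has a uniform correlation rho between antenna pairs.
    rtx = np.full((n, n), rho, dtype=complex)
    np.fill_diagonal(rtx, 1.0)
    w, v = np.linalg.eigh(rtx)  # Rtx is Hermitian positive semidefinite
    rtx_sqrt = v @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T
    hw = (rng.standard_normal((n, n))
          + 1j * rng.standard_normal((n, n))) / np.sqrt(2.0)
    return hw @ rtx_sqrt

def mean_sv_spread(rho, trials=500):
    # Average ratio of largest to smallest singular value (layer gains):
    # a large spread signals one predominant SVD layer.
    rng = np.random.default_rng(4)
    ratios = []
    for _ in range(trials):
        s = np.linalg.svd(correlated_channel(4, rho, rng), compute_uv=False)
        ratios.append(s[0] / s[-1])
    return float(np.mean(ratios))
```

Comparing `mean_sv_spread(0.0)` against `mean_sv_spread(0.9)` reproduces the qualitative finding: strong transmit correlation concentrates the channel gain in a few predominant layers.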

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Amblyseius swirskii (Athias-Henriot) is a polyphagous predatory mite which feeds on pollen and small arthropod prey such as whiteflies, thrips and mites. This species is widely used in IPM programmes in greenhouses, so information on the non-target effects of the pesticides currently used in the crops where the mite is artificially released is essential for its success. This work describes a laboratory contact residual test for evaluating the lethal effects (mortality after 72-hour exposure to fresh residues) and sublethal effects (fecundity and fertility of the surviving mites) of eleven modern pesticides on adults of A. swirskii. Spiromesifen is a lipogenesis inhibitor; flonicamid a selective feeding inhibitor with a mode of action not fully known; flubendiamide a modulator of the ryanodine receptor; sulfoxaflor has a complex mode of action not fully ascertained; metaflumizone is a voltage-dependent sodium channel blocker; methoxyfenozide is an insect growth regulator (IGR); spirotetramat inhibits lipid biosynthesis; abamectin and emamectin activate the Cl- channel; spinosad is a neurotoxic naturalyte; and deltamethrin is a pyrethroid used as a positive standard. The selected pesticides are effective against different key pests present in horticultural crop areas and were always applied at the maximum field-recommended concentration in Spain if registered, or otherwise at the concentration recommended by the supplier. Of the tested pesticides, spiromesifen, flonicamid, flubendiamide, sulfoxaflor, metaflumizone, methoxyfenozide and spirotetramat were harmless to adults of the predatory mite (IOBC toxicity class 1). The remaining pesticides exhibited some negative effects: emamectin was slightly harmful (IOBC 2), deltamethrin moderately harmful (IOBC 3), and spinosad and abamectin harmful (IOBC 4). Further testing under more realistic conditions is needed for those pesticides showing harmful effects on the mite before deciding on their joint use.
Key words: Amblyseius swirskii, adults, laboratory, residual test, spiromesifen, flonicamid, flubendiamide, sulfoxaflor, metaflumizone, methoxyfenozide, spirotetramat, emamectin, deltamethrin, abamectin, spinosad.
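The IOBC class labels used above are conventionally derived from control-corrected mortality. The sketch below applies Abbott's correction and the commonly cited IOBC laboratory class boundaries; the thresholds are quoted from memory and should be verified against the IOBC/WPRS guidelines before use:

```python
def abbott_corrected_mortality(treated, control):
    # Abbott's correction: pesticide-attributable mortality (%), given
    # treated and control mortality expressed as fractions in [0, 1).
    return 100.0 * (treated - control) / (1.0 - control)

def iobc_class(corrected_pct):
    # Commonly cited IOBC laboratory boundaries (quoted from memory):
    # 1 harmless (<30%), 2 slightly harmful (30-79%),
    # 3 moderately harmful (80-99%), 4 harmful (>99%).
    if corrected_pct < 30.0:
        return 1
    if corrected_pct < 80.0:
        return 2
    if corrected_pct < 99.0:
        return 3
    return 4
```

For example, 25% treated mortality against 5% control mortality corrects to about 21%, which falls in class 1 (harmless), consistent with how the seven harmless products above would be classified.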