11 results for Law of large numbers
at Universidad Politécnica de Madrid
Abstract:
This paper reports on the IES-UPM experience from 2006 to 2010 in the characterization of PV arrays of large commercial PV plants installed in Spain within the profitable economic scenarios created by feed-in tariff laws. This experience extends to 200 MW and has provided valuable lessons for minimizing uncertainty, which plays a key role in quality assurance procedures. The paper deals not only with classic I–V measurements but also with watt-metering-based procedures. Particular attention is paid to the selection of irradiance and cell temperature sensors.
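As a rough illustration of the kind of watt-metering-based check mentioned above, the sketch below translates a measured power to Standard Test Conditions with a simple Osterwald-type correction; the sensor readings, coefficients and function names are assumptions for the example, not the procedures actually used in the paper.

```python
# Minimal sketch: translating a measured PV power to Standard Test Conditions (STC).
# Values and coefficients below are illustrative assumptions, not data from the paper.

G_STC = 1000.0   # W/m^2, irradiance at STC
T_STC = 25.0     # degC, cell temperature at STC

def power_at_stc(p_meas, g_poa, t_cell, gamma=-0.0045):
    """Osterwald-type correction of a measured power reading.

    p_meas : measured DC power (W)
    g_poa  : plane-of-array irradiance from the reference sensor (W/m^2)
    t_cell : cell temperature (degC)
    gamma  : power temperature coefficient (1/degC), negative for c-Si
    """
    return p_meas / ((g_poa / G_STC) * (1.0 + gamma * (t_cell - T_STC)))

# Example: 95 kW measured at 820 W/m^2 and a cell temperature of 48 degC
print(round(power_at_stc(95_000, 820, 48.0)))
```

The quality of such a translation depends directly on the irradiance and temperature sensors, which is why their selection receives particular attention in the paper.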
Abstract:
Wane in structural timber is evaluated according to the visual grading standard UNE 56544, but its application to existing structures and large cross sections is ineffective and leads to overly conservative estimates. This paper analyzes the influence of wane by comparing the bending strength of pieces with wane against properly squared pieces. A total of 218 pieces of Scotch pine with nominal dimensions of 150 x 200 x 4,200 mm were analyzed; 102 of them had a complete wane along their full length and the rest were properly squared. For the pieces with wane, the height of the cross section was measured every 30 cm (the height on each face and the maximum height). The bending strength of all pieces was determined according to standard EN 408. The strength of the pieces with wane, distinguishing whether the wane lies on the compressed or on the tensioned edge, was compared with that of the squared pieces. It can be concluded that the presence of wane reduces the bending strength except when the wane lies on the tensioned edge of the beam, in which case the results were similar to those of the squared pieces.
Abstract:
Large cross-section structural timber has been used in many structures over long periods of time and, thanks to its mechanical properties, still makes up an important part of the market; it is also frequently employed on new construction sites. This calls for a visual grading standard that assesses the quality of timber used in construction, since the material has to satisfy the requirements of current regulations. UNE 56544 is the Spanish visual grading standard for coniferous structural timber. Its 2007 version defined a new visual grade for large sections, termed Structural Large Timber (MEG). This research evaluates the new visual grade using 116 structural-size specimens of sawn Scotch pine (Pinus sylvestris L.) from Segovia, Spain. The pieces had a cross section of 150 by 200 mm and were visually graded according to UNE 56544:2007. Their mechanical properties were also obtained according to standard EN 408. The results show a very low grading yield, with an excessive percentage of rejected pieces (33%). The main reasons for rejection are fissures and twist.
Abstract:
In 1999 Spain approved the Building Act (LOE, from its Spanish initials) to regulate a sector, construction, that suffered from shortcomings from a legal point of view. The LOE has now been in force for 12 years and has changed the Spanish construction world, partly under the influence of internationalization. The LOE regulates the different agents involved in building construction: the project designer, the construction director, the developer, the builder, the director of execution of the works (an agent that exists only in Spain, similar to a construction engineer abroad), the control entities and the users. However, it lacks the figure of the Project Manager, who acts by delegation of the developer, helping to organize, direct and manage the process. This figure exists in the market and in contracts but is not legally regulated in Spain, so its role should be defined and regulated in the LOE. The Spanish translation of the term "Project Manager" is owed to Professor Rafael de Heredia in his book Integrated Project Management, as the agent acting on behalf of the organization and the developer and assuming control of the project, i.e. integrated project management. Spain already has AEDIP (the Spanish association of integrated construction project management), which comprises the major project management companies in Spain, and MeDIP (Master in Integrated Construction Project Management), the largest and most advanced programme in construction project management at the Polytechnic University of Madrid, which is also taught in Argentina. Integrated project management applied to the construction process is a methodological technique that helps to organize, control and manage the developer's resources during the building process. When resources are limited, which is the usual situation, managing them efficiently becomes very important; in that case comprehensive control and monitoring of them becomes not merely important but crucial. Relying on a specialised team that intervenes directly to ensure that scarce resources are used in the best possible way requires a specific methodology (DIP manuals, responsibility matrices, EDR and EDP breakdown structures, risk management and control, design management, etc.); this is the methodology used by project managers to ensure that the initial objectives of the developers or investors are met and that all actors in the process, from the designer to the construction company, keep the aim of the project in mind, so that their individual interests do not prevail over the interests of the project. Among the agents involved in the building process, the Project Manager, or DIPE (director of the comprehensive building process, a name proposed for possible incorporation into the LOE), is currently not listed as such in the LOE; it is an agent of the building process that is not regulated from a legal point of view and carries no obligations, unlike the project, the builder, the construction director and so on, which are required by law.
At present a DIPE is hired only by clients who already know the value of these services from previous experience; since there is no legal obligation, the market dictates whether this new figure is necessary, and if it is not hired it will eventually disappear from the building process. The aim of this article is therefore to regulate the role and establish the figure of the DIPE in the Spanish building law (LOE).
Abstract:
We establish a refined version of the Second Law of Thermodynamics for Langevin stochastic processes describing mesoscopic systems driven by conservative or non-conservative forces and interacting with thermal noise. The refinement is based on Monge-Kantorovich optimal mass transport and becomes relevant for processes far from the quasi-stationary regime. The general discussion is illustrated by a numerical analysis of the optimal memory erasure protocol for a model of a micron-size particle manipulated by optical tweezers.
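For orientation, the block below gives a schematic form of the kind of optimal-transport bound the abstract refers to, for an overdamped Langevin process driven between two densities in a finite time; units, prefactors and notation are assumptions chosen for illustration, not the paper's exact statement.

```latex
% Schematic refined Second Law for an overdamped Langevin process driven from an
% initial density \rho_0 to a final density \rho_{t_f} in a finite time t_f
% (mobility and k_B set to one; prefactors depend on conventions):
\langle W_{\mathrm{diss}} \rangle \;=\; \langle W \rangle - \Delta F
\;\ge\; \frac{W_2^2\!\left(\rho_0, \rho_{t_f}\right)}{t_f}
% where W_2 is the Monge--Kantorovich (Wasserstein-2) distance between the two
% densities. The right-hand side is strictly positive for fast protocols, so the
% bound refines the classical statement \langle W_{\mathrm{diss}} \rangle \ge 0
% far from the quasi-stationary regime.
```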
Abstract:
A new method to study large-scale neural networks is presented in this paper. It is based on the use of Feynman-like diagrams, which allow the analysis of collective and cooperative phenomena with a methodology similar to that employed in the many-body problem. The proposed method is applied to a very simple structure composed of a string of interacting neurons. It is shown that a new behavior appears at the end of the row, different from the initial dynamics of a single cell. When feedback is present, as in the case of the hippocampus, the situation becomes more complex, with a whole set of new frequencies different from the natural frequencies of the individual neurons. An application to an optical neural network is also reported.
Abstract:
Strict technical quality assurance procedures are essential for PV plant bankability. For large-scale PV plants, this is typically accomplished in three consecutive phases: an energy yield forecast, performed at the beginning of the project, typically as a simulation exercise with dedicated software; a reception test campaign, performed at the end of commissioning, consisting of a set of tests to determine the efficiency and reliability of the PV plant devices; and a performance analysis of the first years of operation, which compares the real energy production with the production calculated from the recorded operating conditions, taking the maintenance records into account. In the last six years, IES-UPM has carried out both indoor and on-site quality control campaigns for more than 60 PV plants, with an accumulated power of more than 300 MW, in close contact with Engineering, Procurement and Construction contractors and financial entities. This paper presents the lessons learned from this experience.
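The comparison between real and calculated production mentioned above is often summarised by a performance ratio; the snippet below shows a minimal version of that calculation. Variable names and example figures are assumptions, and real campaigns apply corrections (availability, maintenance records, sensor calibration) that are omitted here.

```python
# Minimal sketch of a performance-ratio check for a PV plant reporting period.
# All figures are illustrative assumptions.

def performance_ratio(e_ac_kwh, p_nom_kw, h_poa_kwh_m2, g_stc_kw_m2=1.0):
    """PR = delivered AC energy / energy expected from nominal power and in-plane irradiation.

    e_ac_kwh     : AC energy delivered in the period (kWh)
    p_nom_kw     : nominal (STC) power of the plant (kW)
    h_poa_kwh_m2 : in-plane irradiation over the period (kWh/m^2)
    """
    expected = p_nom_kw * h_poa_kwh_m2 / g_stc_kw_m2
    return e_ac_kwh / expected

# Example: a 10 MW plant delivering 1.45 GWh in a month with 180 kWh/m^2 in-plane irradiation
print(round(performance_ratio(1_450_000, 10_000, 180), 3))
```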
Abstract:
Large-scale circulation patterns (ENSO, NAO) have been shown to have a significant impact on seasonal weather and, therefore, on crop yield over many parts of the world (Garnett and Khandekar, 1992; Aasa et al., 2004; Rozas and Garcia-Gonzalez, 2012). In this study, we analyze the influence of large-scale circulation patterns and regional climate on the principal components of maize yield variability in the Iberian Peninsula (IP) using reanalysis datasets. Additionally, we investigate the modulation of these relationships by multidecadal patterns. The study analyzes long time series of maize yield that depend only on climate, computed with the crop model CERES-Maize (Jones and Kiniry, 1986), included in the Decision Support System for Agrotechnology Transfer (DSSAT v4.5).
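A minimal sketch of the statistical step described here, relating the leading principal components of a yield anomaly field to a circulation index, is given below; the input arrays, the choice of index and the number of components are assumptions for illustration, not the study's actual data or settings.

```python
# Minimal sketch: leading principal components of a yield anomaly field vs. a
# circulation index (e.g. a seasonal NAO or ENSO index). Inputs are synthetic.
import numpy as np

rng = np.random.default_rng(0)
years, sites = 50, 12
yield_anom = rng.standard_normal((years, sites))   # stand-in for detrended CERES-Maize yields
circ_index = rng.standard_normal(years)            # stand-in for a seasonal circulation index

# Principal components via SVD of the column-centred anomaly matrix.
anom = yield_anom - yield_anom.mean(axis=0)
u, s, vt = np.linalg.svd(anom, full_matrices=False)
pcs = u * s                                        # time series of the PCs
explained = s**2 / np.sum(s**2)

# Correlate the first two PCs with the circulation index.
for k in range(2):
    r = np.corrcoef(pcs[:, k], circ_index)[0, 1]
    print(f"PC{k+1}: {explained[k]:.0%} of variance, r with index = {r:+.2f}")
```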
Abstract:
With the rise of cloud computing, data processing applications have seen a surge in demand, and improving the efficiency of data centers has therefore become important. The goal of this work is to obtain tools to analyze the viability and profitability of designing data centers specialized in data processing, with adapted architectures, cooling systems, and so on. Some data processing applications benefit from software architectures, whereas for others a hardware architecture may be more efficient. Since software systems with very good results in graph processing already exist, such as XPregel, this project develops a hardware architecture in VHDL, implementing Google's PageRank algorithm in a scalable way. This algorithm was chosen because it may be more efficient on a hardware architecture, owing to specific characteristics indicated below. PageRank ranks pages by their relevance on the web using graph theory: each web page is a vertex of a graph and the links between pages are its edges. The project first reviews the state of the art. The implementation in XPregel, a graph processing system, is assumed to be one of the most efficient, so that implementation is studied. However, because XPregel processes graph algorithms in general, it does not take into account certain characteristics of PageRank, so its implementation is not optimal: in PageRank, storing all the data sent by the same vertex is an unnecessary waste of memory, since all the messages sent by a vertex are identical to each other and equal to its PageRank. The VHDL design takes this characteristic of the algorithm into account, avoiding storing identical messages several times. PageRank was chosen to be implemented in VHDL because current operating system architectures do not scale adequately, and the aim is to evaluate whether a different architecture yields better results. The design is built from scratch, using the automatically generated ROM IP core from Xilinx (a VHDL development environment). Four types of modules are considered so that processing can be done in parallel. The XPregel structure is simplified in order to exploit the aforementioned particularity of PageRank, which XPregel does not take full advantage of. The code is then written with a scalable structure, since millions of web pages are involved in the computation. Next, the code is synthesized and tested on an FPGA. The final step is an evaluation of the implementation and of possible improvements in terms of power consumption.
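As an illustration of the property the project exploits, the sketch below shows a Pregel-style PageRank iteration in Python (not the VHDL design described above): every message a vertex sends in a superstep is identical, its current rank divided by its out-degree, so storing one value per sender is enough. The toy graph, damping factor and iteration count are assumptions for the example.

```python
# Minimal Pregel-style PageRank sketch (Python, not the VHDL design described above).
# Graph, damping factor and iteration count are illustrative assumptions.

def pagerank(out_edges, damping=0.85, iterations=20):
    """out_edges: dict mapping each vertex to the list of vertices it links to."""
    vertices = list(out_edges)
    n = len(vertices)
    rank = {v: 1.0 / n for v in vertices}

    for _ in range(iterations):
        # Every "message" a vertex sends this superstep is identical: its current
        # rank divided by its out-degree. One stored value per sender suffices,
        # which is the redundancy the hardware design avoids.
        contribution = {
            v: rank[v] / len(out_edges[v]) if out_edges[v] else 0.0
            for v in vertices
        }
        dangling = sum(rank[v] for v in vertices if not out_edges[v])
        new_rank = {}
        for v in vertices:
            incoming = sum(contribution[u] for u in vertices if v in out_edges[u])
            new_rank[v] = (1 - damping) / n + damping * (incoming + dangling / n)
        rank = new_rank
    return rank

if __name__ == "__main__":
    toy_web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
    print(pagerank(toy_web))
```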
Abstract:
The exhaustion, absolute absence or simply the uncertainty about the amount of fossil fuel reserves, added to the variability of their prices and the increasing instability and difficulties in the supply chain, are strong incentives for the development of alternative energy sources and carriers. The attractiveness of hydrogen as an energy carrier is very high in a context that additionally comprehends strong public concerns about pollution and greenhouse gas emissions. Given its excellent environmental impact, the public acceptance of the new energy carrier will depend on the control of the risks associated with its handling and storage. Among these, the danger of a severe explosion appears as the major drawback of this alternative fuel. This thesis investigates the numerical modeling of large-scale explosions, focusing on the simulation of turbulent combustion in large computational domains where the achievable resolution is severely limited. The introduction gives a general description of the explosion process and concludes that the restrictions on resolution make it necessary to model the turbulence and combustion processes. Subsequently, a critical review of the methodologies available for both turbulence and combustion is carried out, pointing out their strengths, deficiencies and suitability. The conclusion of this review is that, given the existing limitations, the only viable strategy for combustion modeling is the use of an expression for the turbulent burning velocity, as a function of several parameters, to close a balance equation for the combustion progress variable; such models are known as turbulent flame speed models. It is also concluded that, depending on the resolution restrictions and the geometry of each particular problem, using different turbulence methodologies, LES or RANS, is the most adequate solution. Based on these findings, a combustion model is created within the turbulent flame speed framework that is able to overcome the deficiencies of the available models for problems requiring calculations at moderate or low resolution. In particular, the model uses a heuristic algorithm to keep the thickness of the flame brush under control, a serious deficiency of the well-known Zimont model. Under this approach, the emphasis of the analysis lies in the accurate determination of the burning velocity, both laminar and turbulent. The laminar burning velocity is determined through a newly developed correlation able to describe the simultaneous influence of the equivalence ratio, temperature, pressure and steam dilution; the formulation obtained is valid over a larger domain of temperature, pressure and steam dilution than any previously available formulation. The turbulent burning velocity is computed through correlations that express it as a function of several parameters; to select the most suitable one, several expressions were compared against experiments, with the outcome that the formulation due to Schmidt is the most adequate for the conditions studied. Subsequently, the role of flame instabilities in the propagation of combustion fronts is assessed. Their significance is important for lean mixtures in which the turbulence intensity remains moderate, conditions that are typical of accidents in nuclear power plants. Therefore, a model is created to account for the effect of the instabilities, and specifically the acoustic-parametric instability, on the flame propagation velocity. This includes the mathematical derivation of the heuristic formulation of Bauwebs et al. for the calculation of the burning velocity enhancement due to flame instabilities, as well as the analysis of the stability of flames with respect to a cyclic velocity perturbation; the results are combined to complete the model of the acoustic-parametric instability. The model developed was then applied to several problems relevant to industrial safety, and the results were analyzed and compared with the corresponding experimental data. Specifically, explosions in tunnels and in large containers, with and without concentration gradients and venting, were simulated. As a general outcome, the model is validated, confirming its suitability for these problems. As a final task, a thorough analysis of the Fukushima-Daiichi catastrophe was carried out. The aim of this analysis is to determine the amount of hydrogen that exploded in reactor one, in contrast with other studies that have focused on the amount of hydrogen generated during the accident. As a result of the investigation, the most probable amount of hydrogen consumed during the explosion was determined to be 130 kg. It is remarkable that the combustion of such a relatively small quantity of hydrogen can cause such significant damage, which is an indication of the importance of this type of investigation. The industrial branches for which the developed model is of interest span the whole future hydrogen economy (fuel cells, vehicles, energy storage, etc.), with a special impact on the transport and nuclear energy sectors, for both fission and fusion technologies.
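For reference, the block below gives a schematic form of a turbulent flame speed closure of the general type the thesis builds on (a Zimont-like transport equation for the Favre-averaged progress variable); the notation is assumed for illustration, and the thesis' actual model, with its flame-thickness control and instability terms, is more elaborate.

```latex
% Schematic turbulent flame speed closure for the Favre-averaged progress
% variable \tilde{c} (\tilde{c}=0 unburnt, \tilde{c}=1 burnt); notation assumed:
\frac{\partial \left(\bar{\rho}\,\tilde{c}\right)}{\partial t}
 + \nabla \cdot \left(\bar{\rho}\,\tilde{\mathbf{u}}\,\tilde{c}\right)
 = \nabla \cdot \left(\bar{\rho}\, D_t \,\nabla \tilde{c}\right)
 + \rho_u\, S_T\, \lvert \nabla \tilde{c} \rvert
% where \rho_u is the unburnt-gas density, D_t a turbulent diffusivity and S_T
% the turbulent burning velocity supplied by a correlation (in terms of the
% laminar burning velocity, turbulence intensity, length scales, ...).
```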
Abstract:
A generalized methodology to design low-profile transmitarray (TA) antennas made of several stacked layers with non-resonant printed phasing elements is presented. A study of the unit-cell bandwidth, phase-shift range and tolerances has been conducted for different numbers of layers. A structure with three metalized layers carrying capacitive and inductive elements, enabling a phase range of nearly 360° with low insertion loss, is introduced. A study of the four-layer structure shows an improvement of the unit-cell bandwidth from 2% to more than 20% and complete phase coverage. Transmitarrays with a progressive phase shift, implemented on a flexible substrate and operating at 19 GHz, are used for validation.
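As a small illustration of the progressive phase distribution used for validation, the sketch below computes the transmission phase each unit cell must provide to collimate the field of a focal feed into a broadside beam at 19 GHz; the aperture size, focal distance and cell spacing are assumptions for the example, not the dimensions of the prototypes.

```python
# Minimal sketch: required transmission phase per transmitarray cell so that a
# spherical wave from a focal feed leaves the aperture as a broadside plane wave.
# Geometry values are illustrative assumptions.
import numpy as np

c0 = 3e8
f = 19e9                       # operating frequency (Hz)
k0 = 2 * np.pi * f / c0        # free-space wavenumber (rad/m)

cell = 5e-3                    # unit-cell spacing (m)
n = 20                         # cells per side (20 x 20 aperture)
focal = 0.15                   # feed-to-aperture distance (m)

xs = (np.arange(n) - (n - 1) / 2) * cell
x, y = np.meshgrid(xs, xs)
r_feed = np.sqrt(x**2 + y**2 + focal**2)    # feed-to-cell path length

# Phase each cell must add to compensate the spherical path delay (modulo 360 deg).
phase_deg = np.degrees(np.mod(k0 * r_feed, 2 * np.pi))
print(phase_deg[:3, :3].round(1))           # corner of the phase map
```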