887 results for full-scale testing
Abstract:
The threat of impact or explosive loads is regrettably a scenario to be taken into account in the design of lifeline or critical civilian buildings, which are often made of concrete and not specifically designed for military threats. Numerical simulation of such cases may be undertaken with state-of-the-art explicit dynamic codes; however, several difficult challenges are inherent to such models: material modeling of the anisotropic failure of concrete, consideration of reinforcement bars and important structural details, adequate modeling of pressure waves from explosions in complex geometries, and efficient solution of models of complete buildings that can realistically assess failure modes. In this work we employ LS-DYNA for the calculations, with Lagrangian finite elements and explicit time integration. Reinforced concrete may be represented fairly accurately with recent models such as the CSCM model [1] and segregated rebars constrained within the continuum mesh. However, such models cannot realistically be employed for complete models of large buildings, owing to limitations of time and computer resources. The use of structural beam and shell elements for this purpose is the obvious alternative, with a much lower computational cost. However, this modeling requires careful calibration in order to adequately reproduce the highly nonlinear response of structural concrete members, including bending with and without compression, cracking and plastic crushing, plastic deformation of the reinforcement, erosion of failed elements, etc. The main objective of this work is to provide a strategy for modeling such scenarios based on structural elements, using available material models for structural elements [2] and techniques to include the reinforcement in a realistic way. These models are calibrated against fully three-dimensional models and shown to be sufficiently accurate. At the same time, they provide the basis for realistic simulation of impact and explosion on full-scale buildings.
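As an illustration of the calibration step described above, the following Python sketch compares a load-displacement curve exported from a detailed solid-element model with that of a candidate beam/shell model and reports a relative RMS error; the file names, column layout and acceptance threshold are hypothetical and only indicate how such a comparison might be automated, not the authors' actual procedure.

import numpy as np

def relative_rms_error(f_ref, f_test):
    """Relative RMS deviation between two force histories sampled at the same points."""
    return np.sqrt(np.mean((f_test - f_ref) ** 2)) / np.sqrt(np.mean(f_ref ** 2))

# Hypothetical curve exports (column 0: displacement, column 1: reaction force).
solid = np.loadtxt("solid_model_curve.txt")       # reference fully 3-D model
beam = np.loadtxt("beam_shell_model_curve.txt")   # calibrated structural-element model

# Interpolate the structural-model force onto the reference displacement points.
f_beam = np.interp(solid[:, 0], beam[:, 0], beam[:, 1])

err = relative_rms_error(solid[:, 1], f_beam)
print(f"relative RMS error between models: {err:.1%}")  # e.g. accept calibration below ~10 %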
Abstract:
In the present uncertain global context of pursuing social stability and a steadily thriving economy, power demand is expected to grow, and global electricity generation could nearly double from 2005 to 2030. Fossil fuels will remain a significant contributor to this energy mix up to 2050, with an expected share of around 70% of global and ca. 60% of European electricity generation, and coal will remain a key player. Hence, under a business-as-usual scenario for CO2 emissions, concentrations are forecast to reach roughly three times present values, up to 1,200 ppm, by the end of this century. The Kyoto Protocol was the first approach to taking global responsibility for CO2 emissions, with monitoring and cap targets for 2012 relative to 1990. Some of the principal CO2 emitters did not ratify the reduction targets, although the USA and China are taking their own actions and parallel reduction measures. More efficient combustion processes that consume less fuel, while a significant contribution of the electricity generation sector towards lower CO2 concentration levels, might not be sufficient. Carbon Capture and Storage (CCS) technologies have gained importance since the beginning of the decade, with research and funding emerging to drive their practical deployment. After the first research projects and initial-scale testing, three principal capture processes are available today, with first figures showing up to 90% CO2 removal in standard applications to coal-fired power stations. Regarding the last part of the CO2 reduction chain, two options can be considered worthwhile: reuse (EOR & EGR) and storage. The study evaluates the state of CO2 capture technology development, and the availability and investment cost of the different technologies, with few operating cost analyses possible at this time. The main findings and the abatement potential for coal applications are presented. DOE, NETL, MIT, European universities and research institutions, key technology enterprises and utilities, and key technology suppliers are the main sources of this study. A vision of the technology deployment is presented.
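To make the 90% removal figure quoted above concrete, the short sketch below estimates the residual emissions of a coal-fired plant fitted with capture; the specific emission factor and the energy penalty used here are illustrative assumptions, not values taken from the study.

# Illustrative back-of-the-envelope abatement estimate (assumed inputs).
emission_factor = 0.90   # t CO2 per MWh for an unabated coal plant (assumed)
capture_rate = 0.90      # up to 90 % removal, as quoted for current capture processes
energy_penalty = 0.25    # extra fuel burned per delivered MWh due to capture (assumed)

# Emissions per delivered MWh once the capture plant's own consumption is included.
gross_emissions = emission_factor * (1.0 + energy_penalty)
residual = gross_emissions * (1.0 - capture_rate)

print(f"unabated:  {emission_factor:.2f} t CO2/MWh")
print(f"with CCS:  {residual:.2f} t CO2/MWh "
      f"({100 * (1 - residual / emission_factor):.0f} % net reduction)")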
Abstract:
This project studies and analyses the different digital signal processing techniques applied to accelerometers, using a DSP-based prototyping board to carry out the tests. The project consists mainly of applying digital filtering to signals from a specific accelerometer, the 1201F, whose main field of application is the automotive industry. After studying the processing theory and the characteristics of the filters, an application is designed with the target operating environment in mind. The design is described in its successive phases: computer design (Matlab), implementation of the filters on the DSP (C), tests on the DSP without the accelerometer, calibration of the accelerometer, and final tests with the accelerometer. The tools used are the Analog Devices 21-161N evaluation kit (with the VisualDSP++ 4.5 development environment), the 1201F accelerometer, the Spektra CS-18-LF accelerometer calibration system, and the MATLAB 7.5 and CoolEdit Pro 2.0 software packages. Only second-order IIR filters are implemented, of all the standard types (Butterworth, Chebyshev I and II, and elliptic). Narrow-band band-pass and band-stop filters are designed within the full scale allowed by the accelerometer. Once all the tests, both simulated and physical, are completed, the filters with the best performance are selected and analysed in order to draw conclusions. Since a suitable environment is available, the filters are also combined in several ways to obtain higher-order filters (parallel structure); thus, starting from band-pass filters, other configurations can be obtained that provide greater flexibility. The aim of this project is not only to obtain good filtering results, but also to exploit the facilities of the environment and the available tools to produce the most efficient design possible.
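As a sketch of the kind of filters described above, the following Python/SciPy fragment designs second-order band-pass sections of the four types mentioned and combines two of them in parallel to obtain a higher-order response; the sampling rate, band edges and ripple/attenuation figures are illustrative assumptions, not the values used in the project (which was implemented in Matlab and in C on the DSP).

import numpy as np
from scipy import signal

fs = 8000.0               # assumed sampling rate, Hz
band = (400.0, 600.0)     # assumed narrow pass band, Hz

# N=1 band-pass designs give second-order IIR sections of the four classic types.
designs = {
    "butterworth": signal.iirfilter(1, band, btype="bandpass", ftype="butter", fs=fs),
    "chebyshev1":  signal.iirfilter(1, band, btype="bandpass", ftype="cheby1", rp=1, fs=fs),
    "chebyshev2":  signal.iirfilter(1, band, btype="bandpass", ftype="cheby2", rs=40, fs=fs),
    "elliptic":    signal.iirfilter(1, band, btype="bandpass", ftype="ellip", rp=1, rs=40, fs=fs),
}

# Parallel combination: filter the same signal with several sections and sum the
# outputs, which yields a higher-order composite response.
def parallel(x, sections):
    return sum(signal.lfilter(b, a, x) for b, a in sections)

x = np.random.randn(int(fs))  # one second of test noise
y = parallel(x, [designs["butterworth"], designs["elliptic"]])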
Abstract:
We present experimental and numerical results on intense-laser-pulse-produced fast electron beams transport through aluminum samples, either solid or compressed and heated by laser-induced planar shock propagation. Thanks to absolute K� yield measurements and its very good agreement with results from numerical simulations, we quantify the collisional and resistive fast electron stopping powers: for electron current densities of � 8 � 1010 A=cm2 they reach 1:5 keV=�m and 0:8 keV=�m, respectively. For higher current densities up to 1012 A=cm2, numerical simulations show resistive and collisional energy losses at comparable levels. Analytical estimations predict the resistive stopping power will be kept on the level of 1 keV=�m for electron current densities of 1014 A=cm2, representative of the full-scale conditions in the fast ignition of inertially confined fusion targets.
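For orientation, the resistive contribution quoted above can be related to the beam current density through the Ohmic field set up by the neutralizing return current; the relation below is a standard order-of-magnitude estimate, not a formula taken from the paper:

\left(\frac{dE}{ds}\right)_{\mathrm{res}} \simeq e\,E_{\mathrm{Ohm}} = e\,\eta\,j_b ,

where \eta is the background resistivity and j_b the fast electron current density (assumed locally balanced by the return current). With an assumed \eta \sim 10^{-6}\,\Omega\,\mathrm{m}, typical of warm dense aluminum, and j_b \approx 8\times 10^{10}\,\mathrm{A/cm^2}, this gives about 0.8 keV/µm, consistent with the resistive figure quoted above.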
Abstract:
Underground coal mine explosions generally arise from the ignition of a methane/air mixture, which can also trigger a subsequent coal dust explosion. Traditionally, such explosions have been fought by eliminating one or several of the factors needed for the explosion to take place. Although several preventive measures are taken to avoid explosions, other measures should be considered to reduce their effects or even to extinguish the flame front. Unlike other protection methods, which remove one or two of the explosion triangle elements (the ignition source, the oxidizing agent and the fuel), explosion barriers act on all of them: they reduce the quantity of coal in suspension, cool the flame front, and the steam generated by vaporization displaces the oxygen present in the flame. The present paper is essentially based on a comprehensive state of the art of protective systems in underground coal mines, and particularly on the application of explosion barriers to improve the safety level in the Spanish coal mining industry. After an exhaustive study of the EN 14591 series of standards covering explosion prevention and protection in underground mines, the authors have proven the effectiveness of explosion barriers in underground galleries by means of full-scale tests performed in the Polish Barbara experimental mine, showing that the barriers can reduce the effects of methane and/or flammable coal dust explosions to a satisfactory safety level.
Abstract:
Experimental research on imposed deformations is generally conducted in small-scale laboratory experiments. The attractiveness of field research lies in the possibility of comparing results obtained from full-scale structures with theoretical predictions. Unfortunately, measurements obtained from real structures are rarely described in the literature. The structural response of integral structures depends significantly on stiffness changes and restraints. The New Airport Terminal Barajas in Madrid, Spain, provides large integral modules: partially post-tensioned concrete frames, cast monolithically over three floor levels with an overall length of approximately 80 m. The field campaign described in this article covers the instrumentation of one of these frames, focusing on the influence of imposed deformations such as creep, shrinkage and temperature. The monitoring equipment included embedded strain gauges, thermocouples, DEMEC measurements and simple displacement measurements. Data were collected throughout construction and during two years of service, and a complete data range of five years is presented and analysed. The results are compared with a simple approach for predicting the long-term shortening of this concrete structure, and both analytical and experimental results are discussed.
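A simple approach of the kind mentioned above typically decomposes the total imposed strain into elastic, creep, shrinkage and thermal contributions; the expression below is a generic code-type formulation (in the spirit of Eurocode 2), given for orientation only and not necessarily the specific model used by the authors:

\varepsilon_{tot}(t) = \frac{\sigma_c}{E_c}\left[1 + \varphi(t,t_0)\right] + \varepsilon_{cs}(t) + \alpha_T\,\Delta T ,

where \sigma_c is the sustained compressive stress, E_c the modulus of elasticity, \varphi(t,t_0) the creep coefficient, \varepsilon_{cs}(t) the shrinkage strain and \alpha_T \approx 10\times 10^{-6}\,/^\circ\mathrm{C} the thermal expansion coefficient of concrete. The long-term shortening of a member of length L then follows as \delta(t) \approx \varepsilon_{tot}(t)\,L.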
Abstract:
The 6-cylinder servo-hydraulic loading system of CEDEX's track box (250 kN, 50 Hz) has recently been complemented with a new piezoelectric loading system (±20 kN, 300 Hz), allowing low-amplitude, high-frequency dynamic load time histories to be added to the high-amplitude, low-frequency quasi-static load time histories used so far in CEDEX's track box to assess the inelastic long-term behavior of ballast under mixed traffic on conventional and high-speed lines. This presentation discusses the results obtained in the first long-duration test performed in CEDEX's track box using both loading systems simultaneously, to simulate the pass-by of 6000 freight vehicles (1M of 225 kN axle loads) travelling at a speed of 120 km/h over a line with vertical irregularities corresponding to a medium-quality line level. The superstructure of the track, tested at full scale, consisted of E 60 rails, stiff rail pads (> 450 kN/mm), B90.2 sleepers with USP 0.10 N/mm and a 0.35 m thick ballast layer of ADIF first class. A shear wave velocity of 250 m/s can be assumed for the different layers of the track sub-base. The ballast long-term settlements are compared with those obtained in a previous long-duration quasi-static test performed on the same track for the RIVAS (EU co-funded) project, in which no dynamic loads were considered. The results provided by a large-diameter cyclic triaxial cell in which ballast is tested at full size are also commented on. Finally, the progress made at CEDEX's Geotechnical Laboratory in reproducing numerically the long-term behavior of ballast is discussed.
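To indicate why a 300 Hz actuator bandwidth is relevant at the test speed, the short Python sketch below estimates the main parametric excitation frequencies seen by the track; the sleeper spacing and bogie axle spacing are typical values assumed for illustration, not data from the test description.

# Illustrative estimate of track excitation frequencies at the test speed.
v = 120.0 / 3.6            # train speed, m/s (120 km/h, as in the test)
sleeper_spacing = 0.60     # m, typical value (assumed)
bogie_axle_spacing = 1.80  # m, typical freight bogie wheelbase (assumed)

f_sleeper = v / sleeper_spacing   # sleeper-passing frequency
f_axle = v / bogie_axle_spacing   # axle-passing frequency within a bogie

print(f"sleeper-passing frequency: {f_sleeper:.0f} Hz")  # ~56 Hz, above the 50 Hz servo-hydraulic range
print(f"axle-passing frequency:    {f_axle:.0f} Hz")     # ~19 Hz, within the quasi-static range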
Abstract:
If reinforced concrete structures are to be safe under extreme impulsive loads such as explosions, a broad understanding of the fracture mechanics of concrete under such events is needed. Most buildings and infrastructures likely to be subjected to terrorist attacks are borne by a reinforced concrete (RC) structure. Until a few years ago, the traditional approach to studying the ability of RC structures to withstand explosions consisted of a choice between hand calculations, which are affordable but inaccurate and unreliable, and full-scale experimental tests involving explosions, which are expensive and not available to many civilian institutions. In this context, numerical simulation has emerged in recent years as the most effective method for analyzing structures under such events. However, accurate numerical simulations require reliable constitutive models. Assuming that the failure of concrete elements subjected to blast is primarily governed by their tensile behavior, a constitutive model has been built that accounts only for failure under tension, while behaving elastically without failure under compression. Failure under tension is based on the cohesive crack model. The constitutive model has been used to simulate the experimental structural response of reinforced concrete slabs subjected to blast, and the results of the numerical simulations show its ability to represent accurately the structural response of the RC elements under study. The simplicity of the model, which, as already mentioned, does not account for failure under compression, confirms that the ability of reinforced concrete structures to withstand blast loads is primarily governed by their tensile strength.
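For reference, in the cohesive crack model the stress transferred across a crack is a decreasing function of the crack opening w; a minimal linear softening law, shown here only as an illustration of the approach (the paper does not necessarily use this particular curve), reads:

\sigma(w) = f_t\left(1 - \frac{w}{w_c}\right), \quad 0 \le w \le w_c, \qquad G_F = \int_0^{w_c} \sigma(w)\,dw = \tfrac{1}{2} f_t\,w_c ,

where f_t is the tensile strength, w_c the critical opening at which the crack becomes stress-free, and G_F the fracture energy of the concrete.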
Abstract:
The Bolund experiment has been reproduced in a neutral boundary layer wind tunnel (WT) at a scale of 1:115 for two Reynolds numbers. All the results have been obtained for an incoming flow from the 270° wind direction (transect B in the Bolund experiment jargon). Vertical scans of the velocity field are obtained using non-time-resolved, two-component particle image velocimetry. Time-resolved velocity time series with a three-component hot-wire probe have also been measured along transects at 2 and 5 m height and along the vertical transects at the locations of met masts M6, M3 and M8. Special attention has been devoted to the detailed characterization of the inflow in order to reduce uncertainties in future comparisons with other physical and numerical simulations. Emphasis is placed on the analysis of the spectral functions of the undisturbed flow and of the flow above the island. The reproducibility and trustworthiness of the results have been addressed through redundant measurements using particle image velocimetry and two- and three-component hot-wire anemometry. The bias in the prediction of the mean speed is similar to that reported by the physical modellers during the Bolund experiment; however, a certain reduction of the bias in the estimation of the turbulent kinetic energy is achieved. The WT results for spectra and cospectra have revealed a behaviour similar to the full-scale measurements at some relevant locations, showing that WT modelling can contribute valid information about these important structural loading factors.
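The comparison metrics used in this kind of study (speed-up and increase of turbulent kinetic energy) are defined with respect to the undisturbed upstream flow; one common convention, given here for orientation rather than as the exact definition adopted in the Bolund benchmark, is:

\Delta S = \frac{U(z) - U_0(z)}{U_0(z)}, \qquad \Delta k = \frac{k(z) - k_0(z)}{U_0(z)^2} ,

where U and k are the mean speed and turbulent kinetic energy above the island and U_0, k_0 are the corresponding undisturbed inflow values at the same height above ground.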
Abstract:
This work presents an investigation into the behavior of reinforced concrete slabs subjected to explosions and the numerical simulation of this phenomenon by the finite element method. The study addresses the response of these structural elements by comparing the results of full-scale (1:1) field tests with calculations performed with computer models, a procedure which makes it possible to verify whether or not the latter are adequate. First, the mechanical behavior of the material models that may be used to simulate structures with the software employed in this research is described, together with the different ways of applying blast loads to structures modeled by the finite element method; the choices made in both cases are justified. Next, the available experimental tests are described, which took place at the Laboratorio de Balística de Efectos of the Instituto Tecnológico de la Marañosa (ITM), in Madrid, to study the behavior of full-scale reinforced concrete slabs subjected to real explosions. A method is proposed for interpreting the damage level of the slabs by means of the Schmidt hammer, which later allows the results to be compared with those of the computer models. Furthermore, an analytical method is proposed for determining the optimum mesh size in the simulations, based on the distribution of the internal energy of the system. It is well known that the behavior of the models can be strongly influenced by the mesh employed: with a coarse mesh, failure may not be reached, whereas with a fine one it may occur prematurely or excessively. Moreover, some material models include a regularization of the mesh size, but the present investigation shows that this procedure has a limited range of validity, and an optimum range of values is determined. Finally, the numerical models were built with the commercial software LS-DYNA, taking into account all the aspects outlined above, and the results of the simulations were compared with those of the full-scale tests. A very good correlation is observed between the two, demonstrating that the procedure proposed in this research is entirely suitable for the simulation of reinforced concrete slabs subjected to explosions.
Abstract:
We have analyzed the influence of the actual height of Bolund island above the water level on different full-scale statistics of the velocity field over the peninsula. Our analysis is focused on the database of 10-minute statistics provided by Risø-DTU for the Bolund Blind Experiment. We have considered 10-minute periods with near-neutral atmospheric conditions, mean wind speed values in the interval [5, 20] m/s, and westerly wind directions. As expected, statistics such as the speed-up, the normalized increase of turbulent kinetic energy and the probability of recirculating flow show a large dependence on the emerged height of the island for the locations close to the escarpment. For the published ensemble mean values of speed-up and normalized increase of turbulent kinetic energy at these locations, we propose that some amount of the uncertainty can be explained as a deterministic dependence of the flow field statistics upon the actual height of Bolund island above sea level.
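The selection of 10-minute periods described above can be expressed, for example, as a simple filter over the statistics database; the file and column names in the Python sketch below are hypothetical and only illustrate the kind of screening applied (near-neutral stability, 5-20 m/s mean speed, westerly directions).

import pandas as pd

# Hypothetical file and column names illustrating the screening criteria.
stats = pd.read_csv("bolund_10min_stats.csv")

near_neutral = stats["zL"].abs() < 0.05              # assumed near-neutral criterion, |z/L| < 0.05
speed_ok = stats["wind_speed"].between(5.0, 20.0)    # mean speed in [5, 20] m/s
westerly = stats["wind_dir"].between(255.0, 285.0)   # assumed sector around 270 deg

selected = stats[near_neutral & speed_ok & westerly]
print(f"{len(selected)} ten-minute periods retained")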
Abstract:
Passenger comfort in terms of acoustic noise levels is a key train design parameter, especially relevant in high-speed trains, where aerodynamic noise is dominant. The aim of the work described in this paper is to make progress in the understanding of the flow field around high-speed trains in an open field, a subject of interest for many researchers and with direct industrial applications; the critical configuration of the train inside a tunnel is also studied in order to evaluate the external loads arising from the noise sources of the train. The airborne noise coming from the wheels (wheel-rail interaction), which is the dominant source in a certain range of frequencies, is also investigated from the numerical and experimental points of view. The numerical prediction of the noise in the interior of the train is a very complex problem, involving many different parameters: complex geometries and materials, different noise sources, complex interactions among those sources, a broad range of frequencies over which the phenomenon is important, etc. In recent years a research plan has been under development at IDR/UPM (Instituto de Microgravedad Ignacio Da Riva, Universidad Politécnica de Madrid) involving numerical simulations, wind tunnel tests and full-scale tests to address this problem. Comparison of numerical simulations with experimental data is a key factor in this process.
Abstract:
Overtopping is defined as the transport of a significant volume of water over the crest of a structure into the sheltered area. It is therefore the phenomenon that, in general, determines the breakwater's crest level, depending on the amount of overtopping that is acceptable in view of the functional and structural constraints of the breakwater. In general, the amount of overtopping a breakwater can tolerate from the point of view of its structural integrity is much greater than the amount permissible from the point of view of its functionality; on the other hand, designing a breakwater with a very low or zero probability of overtopping would lead to designs incompatible with other considerations, such as aesthetics or cost. Wave overtopping of the crown walls of maritime works can be studied in different ways. The most usual are physical model tests and empirical or semi-empirical formulations; less usual are prototype instrumentation, artificial neural networks and numerical models. Physical model tests are the most accurate and reliable tool for the specific study of each case, owing to the complexity of the overtopping process and the multitude of physical phenomena and parameters involved. Physical models make it possible to understand the hydraulic and structural behaviour of the breakwater, identifying possible design flaws before construction and evaluating alternatives, with the consequent savings in construction costs through improvements to the initial design of the structure; however, they present some drawbacks derived from the error margins associated with scale and model effects. Empirical or semi-empirical formulations have the disadvantage that their use is limited by the applicability of the formulas, which are only valid for the range of environmental conditions and structural typologies reproduced in the tests on which they are based. The purpose of this doctoral thesis is to compare the formulations developed by different authors for overtopping of different types of breakwater. First, the existing formulations for estimating the overtopping rate on sloping and vertical breakwaters were compiled and analysed. These formulations were then compared with the results obtained in a series of tests performed at the Centre for Port and Coastal Studies of CEDEX. In addition, the neural network tool NN-OVERTOPPING2, developed in the European overtopping project CLASH ("Crest Level Assessment of Coastal Structures by Full Scale Monitoring, Neural Network Prediction and Hazard Analysis on Permissible Wave Overtopping"), was applied to the selected sloping breakwater tests, so that the overtopping rates measured in the tests could also be compared with this method based on the theory of neural networks. The influence of wind on overtopping was then analysed by means of a series of reduced-scale physical model tests, generating waves with and without wind, on the vertical section of the Levante Breakwater in Málaga. Finally, a critical analysis of the comparison of each of the formulations applied to the selected tests is presented, leading to the conclusions of this doctoral thesis.
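Many of the semi-empirical formulations compared in work of this kind share a common dimensionless structure; a typical exponential form, shown here only as an illustration of the family of equations involved, with a and b as empirical coefficients fitted to each dataset, is:

\frac{q}{\sqrt{g\,H_{m0}^{3}}} = a\,\exp\!\left(-b\,\frac{R_c}{H_{m0}}\right) ,

where q is the mean overtopping discharge per metre of crest, H_{m0} the incident significant wave height at the toe of the structure, R_c the crest freeboard and g the acceleration of gravity.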
Abstract:
The use of vegetal systems on facades contributes to reducing buildings' energy demand, attenuating the urban heat island (UHI) effect and filtering pollutants present in the air. Even so, knowledge about the effect of this type of system on the thermal performance of insulated facades is still limited. This article presents the results of an experimental study carried out on a vegetal facade located in a continental Mediterranean climate zone. The objective is to study the effect of a vegetal finish, formed by plants and substrate, on the thermal-energy performance of an insulated facade under summer conditions. To this end, the thermal data obtained from two full-scale experimental mock-ups of the same dimensions and envelope composition, differing only in the south facade, where one of them incorporates a vegetation layer, are compared and analysed. The results show that, in spite of the high thermal resistance of the envelope, the effect of the vegetation is very positive, particularly during the warmer hours of the day. Vegetal facades can therefore be used as a passive cooling strategy, reducing energy consumption for cooling and improving the comfort conditions of the users.