857 results for High-performance concrete (HPC)


Relevance:

100.00%

Abstract:

Recent technological advances and market trends are driving a convergence of the High-Performance Computing (HPC) and Embedded Computing (EC) domains. On one side, new kinds of HPC applications are being demanded by markets that need huge amounts of information processed within a bounded amount of time. On the other side, EC systems are increasingly required to provide high performance in real time, challenging the capabilities of current architectures. The advent of next-generation many-core embedded platforms offers the chance to intercept this converging need for predictable high performance, allowing HPC and EC applications to execute on efficient and powerful heterogeneous architectures that integrate general-purpose processors with many-core computing fabrics. To this end, it is of paramount importance to develop new techniques for exploiting the massively parallel computation capabilities of such platforms in a predictable way. P-SOCRATES will tackle this challenge by bringing together leading research groups from the HPC and EC communities. The time-criticality and parallelisation challenges common to both areas will be addressed through an integrated framework for executing workload-intensive applications with real-time requirements on next-generation commercial off-the-shelf (COTS) platforms based on many-core accelerated architectures. The project will investigate new HPC techniques that fulfil real-time requirements, identify the main sources of indeterminism, and propose efficient mapping and scheduling algorithms, along with the associated timing and schedulability analysis, to guarantee the real-time and performance requirements of the applications.
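The schedulability analysis mentioned here can be illustrated with a classical test. As a hedged sketch (the well-known Liu-Layland bound for rate-monotonic scheduling on a single core, not P-SOCRATES' own many-core analysis):

```python
# Illustrative only: the classical Liu-Layland schedulability test for
# rate-monotonic scheduling. P-SOCRATES targets many-core platforms, where
# the actual analysis is more involved; this sketch shows the basic idea.

def rm_utilization_bound(n: int) -> float:
    """Liu-Layland bound: n periodic tasks are schedulable under RM if
    total utilization does not exceed n * (2^(1/n) - 1)."""
    return n * (2 ** (1.0 / n) - 1)

def is_schedulable(tasks):
    """tasks: list of (worst_case_execution_time, period) pairs.
    Returns True if the sufficient utilization test passes."""
    utilization = sum(c / t for c, t in tasks)
    return utilization <= rm_utilization_bound(len(tasks))

# Example: three periodic tasks (WCET, period) in milliseconds.
tasks = [(1, 4), (1, 5), (2, 10)]
print(is_schedulable(tasks))  # True: utilization 0.65 <= bound ~0.7798
```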

Relevance:

100.00%

Abstract:

Ultra-high performance concrete (UHPC) is a cement-based material with a very dense microstructure, distinguished not only by its high compressive strength but also by its high resistance to every form of physical or chemical attack. Ductile post-failure behaviour under compression is usually achieved by adding thin, short fibres. In combination with conventional reinforcing or prestressing steel, UHPC enables the construction of very slender, long-span structures and at the same time opens up new fields of application, such as the surface overlay of bridge decks. The interaction of continuous reinforcement elements with discontinuously distributed short fibres leads to differences in behaviour under tensile loading compared with familiar reinforced and prestressed concrete. In the present work, a model for this is developed and validated by an extensive series of tests. The starting point is experimental and theoretical investigations of the bond behaviour of steel bars in a UHPC matrix and of the influence of fibre addition on the cracking and tensile load-bearing behaviour of UHPC. The model for UHPC tension members with mixed reinforcement of steel bars and fibres is based on the mechanisms at the discrete crack, which are therefore treated in great detail. For the elastic deformation range of the bar reinforcement (the serviceability range), the load-deformation behaviour of members with combined reinforcement can thus be described in a mechanically consistent way, taking into account the high shrinkage that is significant in UHPC. For practical application, an approximate method is derived through simplifications. Both the theoretical and the experimental investigations confirm that, when combined with continuous reinforcement elements, fibre-reinforced UHPC itself need not exhibit strain-hardening behaviour in order to achieve an overall strain-hardening response and thus distributed cracking with very small crack widths and crack spacings. These observations cannot be reproduced with the models available to date, which essentially superpose stress-strain relations determined separately for the fibre-reinforced concrete and the bare steel. As the author's investigations show, adequately dimensioned bar reinforcement can achieve benign tensile behaviour of UHPC in a targeted way and without uneconomically high fibre contents. Reliably limiting crack widths to well below 0.1 mm at the same time ensures durability even under unfavourable environmental conditions. By minimizing material and energy consumption, and given the expected long service life, components optimized in terms of sustainability can thus be realized.
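Purely as an illustration of the superposition-type baseline that the thesis shows to be insufficient, here is a minimal sketch with hypothetical material laws; all parameter values below are placeholders, not results from the thesis:

```python
# Hypothetical sketch of the baseline superposition model criticized in the
# thesis: the member response is taken as the sum of a separately measured
# fibre-concrete law and the bare-steel law. Parameters are placeholders.
import numpy as np

def fibre_concrete_stress(eps, f_ct=9.0, eps_cr=0.0002, sigma_pf=4.0):
    """Softening fibre-concrete law: linear up to cracking, then a
    residual post-cracking stress carried by the fibres (MPa)."""
    return np.where(eps < eps_cr, f_ct * eps / eps_cr, sigma_pf)

def steel_stress(eps, E_s=200_000.0, f_y=500.0):
    """Bilinear (elastic-perfectly-plastic) bare-steel law (MPa)."""
    return np.minimum(E_s * eps, f_y)

def member_stress(eps, rho=0.02):
    """Naive superposition over the gross section with reinforcement ratio
    rho; it ignores bond, tension stiffening and shrinkage, which is
    exactly why the thesis develops a discrete-crack model instead."""
    return (1 - rho) * fibre_concrete_stress(eps) + rho * steel_stress(eps)

eps = np.linspace(0, 0.003, 7)
print(member_stress(eps))
```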

Relevance:

100.00%

Abstract:

Owing to its composition, ultra-high performance concrete has a very high compressive strength of 150 to more than 200 N/mm² and an exceptionally dense, impermeable microstructure. This enables applications in highly stressed regions and where high demands are placed on the durability of the material. At the same time, UHPC behaves in a very brittle manner when its strength is reached. To prevent explosive failure, fibres are added to a UHPC mix or confinement with steel tubes is provided. Adding fibres to the concrete matrix influences not only the deformation capacity but also the load-bearing capacity of the UHPC. The failure of the fibres depends on fibre geometry, fibre content, bond behaviour and the tensile strength of the fibre, and is characterized by fibre pull-out or fibre rupture. To ensure the load-bearing capacity, conventional reinforcement therefore cannot be dispensed with, except in very thin members. Within priority programme SPP 1182 of the German Research Foundation (DFG), the research project underlying this thesis investigated how to describe the shear behaviour of UHPC members with combined shear reinforcement and whether existing shear models can be transferred to UHPC. Besides a comprehensive review of existing shear models for reinforced concrete members without shear reinforcement and with various types of shear reinforcement, experimental investigations of the shear behaviour of UHPC beams with different shear reinforcement form the starting point of this work. The experimental investigations comprised ten shear tests on UHPC beams. These beams were identical in dimensions and flexural reinforcement and differed only in the type of shear reinforcement, which comprised steel fibres alone, vertical stirrups alone, a combination of steel fibres and vertical stirrups, and one beam without any shear reinforcement. Although fibre contents were chosen that led to softening post-cracking behaviour of the fibre-reinforced concrete, the beam tests showed that adding steel fibres increased the shear capacity. Because the beams were otherwise identical and only the shear reinforcement configuration varied, a quantitative estimate of the individual load-bearing contributions could also be derived from the tests. The profiled cross-section showed a strong influence on the shear behaviour in the post-peak range; a relatively stable load level after the peak load could be attributed to Vierendeel action. On the basis of these test results and analytical considerations of existing shear models, an additive modelling approach was formulated to describe the shear behaviour of UHPC beams with combined shear reinforcement of steel fibres and vertical stirrups. Known approaches were used to formulate the contributions of the concrete cross-section and the conventional shear reinforcement, while the fibre contribution was based on the fibre effectiveness. The post-peak load level due to Vierendeel action follows from geometric considerations.
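As a hedged illustration of such an additive model (generic textbook expressions, not the calibrated formulation of the thesis), the total shear resistance can be assembled from concrete, stirrup and fibre terms:

```python
# Illustrative additive shear model V_R = V_c + V_s + V_f, sketching the
# approach the abstract describes. The expressions and parameters are
# generic textbook forms, not the calibrated model of the thesis.
from math import tan, radians

def shear_capacity(V_c, A_sw_per_s, f_yw, z, theta_deg, sigma_f, b_w):
    """V_c        : concrete contribution (kN), from a known model
    A_sw_per_s : stirrup area per unit length (mm^2/mm)
    f_yw       : stirrup yield strength (MPa)
    z          : inner lever arm (mm)
    theta_deg  : strut inclination (degrees)
    sigma_f    : fibre effectiveness stress (MPa), from fibre pull-out
    b_w        : web width (mm)"""
    V_s = A_sw_per_s * f_yw * z / tan(radians(theta_deg)) / 1000.0  # kN
    V_f = sigma_f * b_w * z / tan(radians(theta_deg)) / 1000.0      # kN
    return V_c + V_s + V_f

# Example with placeholder numbers:
print(shear_capacity(V_c=60.0, A_sw_per_s=0.2, f_yw=500.0, z=270.0,
                     theta_deg=40.0, sigma_f=2.0, b_w=60.0))
```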

Relevance:

100.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

100.00%

Abstract:

Since the 1970s, there has been growing degradation of concrete structures in Brazil. For that reason, much research has been conducted on the durability of these structures, aiming to improve quality and reduce maintenance and repair costs. This study evaluates the durability of high-performance concrete with additions, replacing part of the cement and aggregates with rice husk ash and tire rubber, respectively. Durability tests were carried out in which the concrete was subjected to several degradation processes, such as the action of water, temperature, salts and acid solution. The results indicated that the addition of active silica or rice husk ash, each combined with tire rubber, did not worsen the durability of the concrete. In fact, rubber proved very effective in preventing the action of chemical agents, high temperatures and the penetration of water. Rice husk ash, despite its larger particle diameter, gave results similar to those of the active silica.

Relevance:

100.00%

Abstract:

Graduate Program in Civil Engineering - FEIS

Relevance:

100.00%

Abstract:

This thesis explores the capabilities of heterogeneous multi-core systems based on multiple Graphics Processing Units (GPUs) in a standard desktop framework. Multi-GPU accelerated deskside computers are an appealing alternative to other high performance computing (HPC) systems: being composed of commodity hardware components fabricated in large quantities, their price-performance ratio is unparalleled in the world of high performance computing. Essentially bringing "supercomputing to the masses", this opens up new possibilities for application fields where investing in HPC resources had previously been considered unfeasible. One of these is the field of bioelectrical imaging, a class of medical imaging technologies that occupy a low-cost niche next to million-dollar systems like functional Magnetic Resonance Imaging (fMRI). In the scope of this work, several computational challenges encountered in bioelectrical imaging are tackled with this new kind of computing resource, striving to help these methods approach their true potential. Specifically, the following main contributions were made: Firstly, a novel dual-GPU implementation of parallel triangular matrix inversion (TMI) is presented, addressing a crucial kernel in the computation of multi-mesh head models for electroencephalographic (EEG) source localization. This includes not only a highly efficient implementation of the routine itself, achieving excellent speedups over an optimized CPU implementation, but also a novel GPU-friendly compressed storage scheme for triangular matrices. Secondly, a scalable multi-GPU solver for non-Hermitian linear systems was implemented. It is integrated into a simulation environment for electrical impedance tomography (EIT) that requires frequent solution of complex systems with millions of unknowns, a task this solution can perform within seconds. In terms of computational throughput, it outperforms not only a highly optimized multi-CPU reference but related GPU-based work as well. Finally, a GPU-accelerated graphical EEG real-time source localization software was implemented. Thanks to the acceleration, it can meet real-time requirements at unprecedented anatomical detail while running more complex localization algorithms. Additionally, a novel implementation to extract anatomical priors from static Magnetic Resonance (MR) scans has been included.
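For background only (this is the conventional packed layout, not the thesis' novel GPU-friendly scheme), compressed storage of a lower-triangular matrix maps 2-D indices into a 1-D array roughly as follows:

```python
# Background sketch (not the thesis' novel GPU scheme): conventional packed
# storage for a lower-triangular matrix. Only the n*(n+1)/2 nonzero entries
# are stored, roughly halving memory versus a dense layout.
import numpy as np

class PackedLowerTriangular:
    def __init__(self, n):
        self.n = n
        self.data = np.zeros(n * (n + 1) // 2)

    def _index(self, i, j):
        assert 0 <= j <= i < self.n, "only the lower triangle is stored"
        return i * (i + 1) // 2 + j  # row-major packed offset

    def get(self, i, j):
        return self.data[self._index(i, j)]

    def set(self, i, j, value):
        self.data[self._index(i, j)] = value

m = PackedLowerTriangular(4)
m.set(2, 1, 5.0)
print(m.get(2, 1), len(m.data))  # 5.0 10
```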

Relevance:

100.00%

Abstract:

Applications that operate on meshes are very popular in High Performance Computing (HPC) environments. In the past, many techniques have been developed to optimize memory accesses for these datasets. Different loop transformations and domain decompositions are commonly used for structured meshes. Unstructured grids, however, are more challenging: the memory accesses, based on the mesh connectivity, do not map well to the usual linear memory model. This work presents a method to improve memory performance that is suitable for HPC codes operating on meshes. We develop a method to adjust the sequence in which the data are used inside the algorithm, by traversing and sorting the mesh. The sorted mesh can be transferred sequentially to the lower memory levels and allows for minimum data transfer requirements. The method also reduces low-level memory requirements dramatically: up to 63% of the L1 cache misses are removed in a traditional cache system. We have obtained speedups of up to 2.58 on memory operations as measured on a general-purpose CPU. An improvement is also observed with sequential access memories, where we have observed reductions of up to 99% in the required low-level memory size.
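The abstract does not specify the traversal, so purely as an assumed sketch: a breadth-first traversal of the mesh connectivity followed by a permutation of the per-node data illustrates the idea of placing neighbours close together in memory.

```python
# Illustrative sketch: reorder mesh nodes by breadth-first traversal of the
# connectivity graph, then permute the data array so that neighbours end up
# adjacent in memory. The paper's actual traversal/sorting may differ.
from collections import deque

def bfs_ordering(adjacency, start=0):
    """adjacency: dict mapping node -> list of neighbouring nodes
    (assumed connected here, for brevity)."""
    order, seen, queue = [], {start}, deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nb in adjacency[node]:
            if nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return order

def reorder(data, order):
    """Permute per-node data into the new traversal order."""
    return [data[node] for node in order]

adjacency = {0: [2, 3], 1: [3], 2: [0, 3], 3: [0, 1, 2]}
order = bfs_ordering(adjacency)                 # [0, 2, 3, 1]
print(reorder(['a', 'b', 'c', 'd'], order))     # ['a', 'c', 'd', 'b']
```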

Relevance:

100.00%

Abstract:

Self-compacting concrete (SCC) is an important advance in concrete technology of recent decades. It is a new type of high performance concrete able to flow under its own weight without the need for vibration. Due to its specific fresh, or rheological, properties, such as filling ability, passing ability and segregation resistance, SCC can contribute to a significant improvement in the quality of concrete structures and open up new fields for the application of concrete. On the other hand, the usefulness of steel fibre-reinforced concrete (SFRC) in civil engineering applications is unquestionable: SFRC significantly improves hardened mechanical properties such as tensile strength, impact resistance, toughness and energy absorption capacity. Compared to SFRC, self-compacting steel fibre-reinforced concrete (SCSFRC) is a relatively new type of concrete with high flowability and good cohesiveness, offering very attractive economic and technical benefits thanks to the rheological properties of SCC, which can be extended further when combined with steel fibres to improve the mechanical characteristics. For the different concrete structural elements, however, a single concrete mix is usually selected, without attempting to adapt the fibre-reinforced concrete to the sectional stress-strain demands. This thesis focuses on the development of high performance cement-based structural composites made of SCC with and without steel fibres, and on their application for enhanced mechanical behaviour under different load types and configurations. It presents a new direction for tackling this mechanical problem. The approach adopted is based on the concept of functionally graded cementitious composites (FGCC), where part of the plain SCC is strategically replaced by SCSFRC to obtain laminated functionally graded self-compacting cementitious composites (laminated-FGSCC) in single structural elements such as beams, columns, slabs, etc. The approach also involves a suitable casting method that uses SCC technology to eliminate potentially sharp interlayers, forming a robust, regular and reproducible graded interlayer of 1-3 mm by controlling the rheology of the mixes and exploiting gravity, thereby encouraging the use of this powerful concept for designing better-performing and more cost-efficient structural systems. To reach this challenging aim, a wide experimental programme has been carried out involving two main steps: • The definition and development of a novel methodology for characterizing the main parameter associated with interface- or laminated-FGSCC solutions: the graded interlayer. Work in this first part includes: o the design considerations of the production method, innovative in the field of concrete, based on "rheology and gravity" for producing FG-SCSFRC (named FGSCC in the thesis), including the casting process and elements, o the design of a specific testing methodology, o the characterization of the interface-FGSCC using the designed testing methodology. • The characterization of different medium-size FGSCC specimens under different static and dynamic load patterns, to explore their potential use in structural elements such as beams, columns, slabs, etc. The results revealed the efficiency of the manufacturing methodology, which allows robust structural sections to be created, as well as the feasibility and cost-effectiveness of the proposed FGSCC solutions for different structural uses.
Notably, the different FGSCC elements improved flexural, compressive and impact load responses compared with monolithic SCSFRC elements of equal strength class containing at least twice the overall net fibre volume fraction (Vf).

Relevance:

100.00%

Abstract:

New cloud-based technologies, the Internet of Things and "as a service" trends are based on data storage and processing on remote servers. To guarantee the security of the data while communicating it to the remote server and handling it there, different cryptographic schemes are used. Traditionally, these cryptographic schemes focus on protecting the data while it does not need to be processed, that is, during communication and storage. However, once the encrypted data has to be processed on the remote server, it must first be decrypted, at which point an intruder on that server could access sensitive user data. Moreover, this traditional approach requires the server to be able to decrypt the data, so its integrity must be trusted not to compromise them. As a possible solution to these problems, fully homomorphic encryption (FHE) schemes have emerged. A fully homomorphic scheme does not require decrypting the data in order to operate on it; instead, it performs the operations directly on the encrypted data, maintaining a homomorphism between the ciphertext and the plaintext. In this way, an intruder in the system could steal nothing more than ciphertexts, making theft of the sensitive data impossible without also stealing the encryption keys. However, homomorphic encryption (HE) schemes are currently drastically slower than classical encryption schemes: a single operation in the plaintext ring can entail numerous operations in the ciphertext ring. For this reason, different approaches are emerging for accelerating these schemes towards practical use. One proposal is the use of High-Performance Computing (HPC) with FPGAs (Field Programmable Gate Arrays). An FPGA is an integrated circuit designed to be configured by a customer or designer after manufacturing, hence "field-programmable". Compiling for an FPGA generates a hardware circuit specific to the given algorithm, instead of relying on instructions executed by a universal machine, which is a major advantage over CPUs. FPGAs thus differ clearly from CPUs:
- Pipelined architecture, which allows successive outputs to be obtained in constant time.
- The possibility of multiple pipes for concurrent/parallel computation.
In this project:
- We present different implementations of FHE schemes on FPGA-based systems.
- We analyse and study the advantages and drawbacks of the implemented FHE schemes.
- We compare the implementations with related work.
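As a toy illustration of the homomorphic property described above (unpadded textbook RSA, which is multiplicatively homomorphic; this is not one of the FHE schemes implemented in the project and is insecure in this form):

```python
# Toy demonstration of a homomorphism between ciphertext and plaintext
# spaces: unpadded textbook RSA is multiplicatively homomorphic.
# This is NOT secure and is NOT an FHE scheme; FHE additionally supports
# addition and the evaluation of arbitrary circuits on ciphertexts.

p, q = 61, 53           # toy primes (far too small for real use)
n = p * q               # 3233
e, d = 17, 413          # public/private exponents, e*d = 1 mod lcm(p-1, q-1)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

m1, m2 = 7, 9
c_product = (encrypt(m1) * encrypt(m2)) % n   # multiply ciphertexts only
assert decrypt(c_product) == (m1 * m2) % n    # equals the plaintext product
print(decrypt(c_product))                     # 63
```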

Relevance:

100.00%

Abstract:

Most moveable bridges use open grid steel decks because these are factory assembled, lightweight and easy to install. Open grid steel decks, however, are not as skid resistant as solid decks; costly maintenance, high noise levels, poor riding comfort and susceptibility to vibrations are among their other disadvantages. The major objective of this research was to develop alternative deck systems that weigh no more than 25 lb/ft², have a solid riding surface, are no more than 4-5 in. thick and are able to withstand the prescribed loading. Three deck systems were considered in this study: an ultra-high performance concrete (UHPC) deck, an aluminum deck and a UHPC-fiber reinforced polymer (FRP) tube deck. The UHPC deck was the first alternative system developed as part of this project. Due to its ultra-high strength, this type of concrete allows thinner sections, which helps satisfy the strict self-weight limit. A comprehensive experimental and analytical evaluation of the system was carried out to establish its suitability. Both single and multi-unit specimens with one or two spans were tested under static and dynamic loading, and finite element models were developed to predict the deck behavior. The study led to the conclusion that the UHPC bridge deck is a feasible alternative to open grid steel decks. The aluminum deck was the second alternative system studied in this project. A detailed experimental and analytical evaluation was carried out: the experimental work included static and dynamic loading of the deck panels and connections, and the analytical work included detailed finite element modeling. Based on these in-depth evaluations, it was concluded that the aluminum deck is a suitable alternative to open grid steel decks and is ready for implementation. The UHPC-FRP tube deck was the third system developed in this research. Prestressed hollow core decks are commonly used, but the proposed type of steel-free deck is quite novel. Preliminary experimental evaluations of two simple-span specimens, one with a uniform section and the other with a tapered section, were carried out. The system showed good promise to replace conventional open grid decks; additional work, however, is needed before it can be recommended for field application.

Relevance:

100.00%

Abstract:

In this dissertation, we develop a novel methodology for characterizing and simulating nonstationary, full-field, stochastic turbulent wind fields.

In this new method, nonstationarity is characterized and modeled via temporal coherence, which is quantified in the discrete frequency domain by probability distributions of the differences in phase between adjacent Fourier components.

The empirical distributions of the phase differences can also be extracted from measured data, and the resulting temporal coherence parameters can quantify the occurrence of nonstationarity in empirical wind data.
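As an assumed sketch of such an extraction (the dissertation's exact estimator may differ), the phase differences between adjacent Fourier components of a measured record can be computed as follows:

```python
# Illustrative sketch: extract the phase differences between adjacent
# Fourier components of a measured wind speed record. The empirical
# distribution of these differences is what quantifies temporal coherence;
# the dissertation's estimator may differ in detail.
import numpy as np

def adjacent_phase_differences(u):
    """u: 1-D array of wind speed samples. Returns the wrapped phase
    differences between adjacent discrete Fourier components."""
    spectrum = np.fft.rfft(u - np.mean(u))
    phases = np.angle(spectrum)
    dphi = np.diff(phases)
    # Wrap to (-pi, pi] so the distribution is well defined.
    return (dphi + np.pi) % (2 * np.pi) - np.pi

rng = np.random.default_rng(0)
u = 10.0 + rng.standard_normal(1024)     # toy stationary record
dphi = adjacent_phase_differences(u)
print(dphi.shape, dphi.min(), dphi.max())
```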

This dissertation (1) implements temporal coherence in a desktop turbulence simulator, (2) calibrates empirical temporal coherence models for four wind datasets, and (3) quantifies the increase in lifetime wind turbine loads caused by temporal coherence.

The four wind datasets were intentionally chosen from locations around the world so that they had significantly different ambient atmospheric conditions.

The prevalence of temporal coherence and its relationship to other standard wind parameters were modeled through empirical joint distributions (EJDs), which involved fitting marginal distributions and calculating correlations.

EJDs have the added benefit of being able to generate samples of wind parameters that reflect the characteristics of a particular site.
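One standard way to realize such a joint model, offered here only as an assumed sketch (a Gaussian copula over fitted marginals; the dissertation's EJD construction may differ), is:

```python
# Hedged sketch: sample correlated wind parameters from fitted marginals
# via a Gaussian copula. This mirrors the EJD idea (marginals plus
# correlations) but is not necessarily the dissertation's exact method.
import numpy as np
from scipy import stats

def sample_joint(marginals, corr, size, seed=0):
    """marginals: list of frozen scipy.stats distributions (one per
    parameter); corr: target correlation matrix in Gaussian space."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(corr)
    z = rng.standard_normal((size, len(marginals))) @ L.T
    u = stats.norm.cdf(z)                   # correlated uniform marginals
    return np.column_stack([m.ppf(u[:, i]) for i, m in enumerate(marginals)])

marginals = [stats.weibull_min(c=2.0, scale=8.0),   # e.g. mean wind speed
             stats.lognorm(s=0.5, scale=0.1)]       # e.g. turbulence level
corr = np.array([[1.0, 0.4], [0.4, 1.0]])
samples = sample_joint(marginals, corr, size=1000)
print(samples.mean(axis=0))
```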

Lastly, to characterize the effect of temporal coherence on design loads, we created four models in the open-source wind turbine simulator FAST based on the WindPACT turbines, fit response surfaces to them, and used the response surfaces to calculate lifetime turbine responses to wind fields simulated with and without temporal coherence.

The training data for the response surfaces was generated from exhaustive FAST simulations that were run on the high-performance computing (HPC) facilities at the National Renewable Energy Laboratory.

This process was repeated for wind field parameters drawn from the empirical distributions and for wind samples drawn using the recommended procedure in the IEC wind turbine design standard.

The effect of temporal coherence was calculated as a percent increase in the lifetime load over the base value with no temporal coherence.

Relevance:

100.00%

Abstract:

The Model for Prediction Across Scales (MPAS) is a novel set of Earth system simulation components and consists of an atmospheric model, an ocean model and a land-ice model. Its distinct features are the use of unstructured Voronoi meshes and C-grid discretisation to address the shortcomings of global models on regular grids and of limited area models nested in a forcing data set, with respect to parallel scalability, numerical accuracy and physical consistency. This concept allows one to include the feedback of regional land use information on weather and climate at local and global scales in a consistent way, which is impossible to achieve with traditional limited area modelling approaches. Here, we present an in-depth evaluation of MPAS with regard to technical aspects of performing model runs and scalability for three medium-size meshes on four different high-performance computing (HPC) sites with different architectures and compilers. We uncover model limitations and identify new aspects of model optimisation that are introduced by the use of unstructured Voronoi meshes. We further demonstrate the model performance of MPAS in terms of its capability to reproduce the dynamics of the West African monsoon (WAM) and its associated precipitation in a pilot study. Constrained by available computational resources, we compare 11-month runs for two meshes with observations and a reference simulation from the Weather Research and Forecasting (WRF) model. We show that MPAS can reproduce the atmospheric dynamics on global and local scales in this experiment, but identify a precipitation excess for the West African region. Finally, we conduct extreme scaling tests on a global 3 km mesh with more than 65 million horizontal grid cells on up to half a million cores. We discuss the modifications of the model code necessary to improve its parallel performance in general and specifically for the HPC environment. We confirm good scaling (70 % parallel efficiency or better) of the MPAS model and provide numbers on the computational requirements for experiments with the 3 km mesh. In doing so, we show that global, convection-resolving atmospheric simulations with MPAS are within reach of current and next generations of high-end computing facilities.
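For reference, a minimal sketch of the strong-scaling parallel efficiency metric behind the 70 % figure, under the common definition relative to a baseline core count (the study's exact baseline is not given in the abstract):

```python
# Minimal sketch of strong-scaling parallel efficiency as commonly defined:
# E(p) = (T_ref * p_ref) / (T_p * p), relative to a baseline run. The
# baseline used in the MPAS study is not specified in the abstract.
def parallel_efficiency(t_ref, p_ref, t_p, p):
    return (t_ref * p_ref) / (t_p * p)

# Hypothetical example: baseline 16k cores at 1000 s, 256k cores at 80 s.
print(parallel_efficiency(1000.0, 16_384, 80.0, 262_144))  # 0.78125
```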

Relevance:

100.00%

Abstract:

In this paper, we develop a fast implementation of a hyperspectral coded aperture (HYCA) algorithm on different platforms using OpenCL, an open standard for parallel programming on heterogeneous systems that encompasses a wide variety of devices, from dense multicore systems by major manufacturers such as Intel or ARM to accelerators such as graphics processing units (GPUs), field programmable gate arrays (FPGAs), the Intel Xeon Phi and other custom devices. Our proposed implementation of HYCA significantly reduces its computational cost. Our experiments, conducted using simulated data, reveal considerable acceleration factors. Implementations in the same descriptive language across different architectures are very important in order to properly assess the potential of heterogeneous platforms for efficient hyperspectral image processing in real remote sensing missions.
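As a minimal hedged illustration of this portability argument (a generic vector-addition kernel via pyopencl, not the HYCA implementation), the same OpenCL kernel source runs unchanged on any available device:

```python
# Minimal portability sketch using pyopencl (not the HYCA code): the same
# kernel source compiles and runs on CPUs, GPUs or other OpenCL devices.
import numpy as np
import pyopencl as cl

kernel_src = """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
"""

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)

ctx = cl.create_some_context()           # picks any available OpenCL device
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags
buf_a = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
buf_b = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
buf_out = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prog = cl.Program(ctx, kernel_src).build()
prog.vadd(queue, a.shape, None, buf_a, buf_b, buf_out)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, buf_out)
assert np.allclose(out, a + b)
```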