882 results for Fluid-dynamic analysis
Abstract:
A postbuckling blade-stiffened composite panel was loaded in uniaxial compression until failure. During loading beyond initial buckling, the panel was observed to undergo a secondary instability characterised by a dynamic mode-shape change. Such abrupt changes cause considerable numerical difficulties for standard path-following quasi-static solution procedures in finite element analysis. Improved methods, such as arc-length-related procedures, do better at traversing certain critical points along an equilibrium path, but they may also encounter difficulties in highly non-linear problems. This paper presents a robust, modified explicit dynamic analysis for the modelling of postbuckling structures. The method was shown to predict the mode switch with good accuracy and is more efficient than standard explicit dynamic analysis.
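As a minimal sketch of the explicit time-stepping idea behind explicit dynamic analysis, the fragment below integrates one degree of freedom with a softening internal force that mimics a buckling-type stiffness drop. All stiffness, damping, and load values are invented for illustration; this is not the paper's modified scheme.

```python
# Minimal sketch of explicit dynamic time stepping for one DOF with a
# softening internal force. Illustrative values only.

def explicit_dynamics(f_int, f_ext, m, c, u0, v0, dt, n_steps):
    """Semi-implicit Euler stepping of m*a + c*v + f_int(u) = f_ext(t),
    equivalent to the central-difference (leapfrog) update for fixed dt."""
    u, v = u0, v0
    path = []
    for i in range(n_steps):
        a = (f_ext(i * dt) - c * v - f_int(u)) / m
        v += dt * a
        u += dt * v
        path.append(u)
    return path

# Softening spring: stiffness drops past a critical displacement
# (valid for the positive displacements reached in this example).
f_int = lambda u: 1e3 * u if u < 0.01 else 10.0 + 1e2 * (u - 0.01)
load = lambda t: 15.0 * min(t, 1.0)          # ramp the load, then hold
path = explicit_dynamics(f_int, load, m=1.0, c=5.0,
                         u0=0.0, v0=0.0, dt=1e-3, n_steps=2000)
```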
Abstract:
The inherent difficulty of thread-based shared-memory programming has recently motivated research in high-level, task-parallel programming models. Recent advances in task-parallel models add implicit synchronization, where the system automatically detects and satisfies data dependencies among spawned tasks. However, dynamic dependence analysis incurs significant runtime overheads, because the runtime must track task resources and use this information to schedule tasks while avoiding conflicts and races.
We present SCOOP, a compiler that effectively integrates static and dynamic analysis in code generation. SCOOP combines context-sensitive points-to, control-flow, escape, and effect analyses to remove redundant dependence checks at runtime. Our static analysis can work in combination with existing dynamic analyses and task-parallel runtimes that use annotations to specify tasks and their memory footprints. We use our static dependence analysis to detect non-conflicting tasks and an existing dynamic analysis to handle the remaining dependencies. We evaluate the resulting hybrid dependence analysis on a set of task-parallel programs.
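To make the runtime side concrete, here is a minimal sketch of conflict detection over declared task memory footprints, with a flag marking tasks that a static analysis has already proved independent so the dynamic check can be skipped. The class and function names are illustrative assumptions, not SCOOP's actual API.

```python
# Minimal sketch of footprint-based dynamic dependence checking,
# with a static-analysis escape hatch. Not SCOOP's implementation.

class Task:
    """A spawned task with its declared memory footprint."""
    def __init__(self, fn, reads=(), writes=(), statically_independent=False):
        self.fn = fn
        self.reads = frozenset(reads)
        self.writes = frozenset(writes)
        # True when static analysis proved no conflict is possible,
        # so the dynamic dependence check below is skipped.
        self.statically_independent = statically_independent

def conflicts(a, b):
    """Two tasks conflict if either writes a region the other touches."""
    return bool(a.writes & (b.reads | b.writes) or b.writes & a.reads)

def run(tasks):
    """Sequential stand-in for a dependence-aware scheduler."""
    issued = []
    for t in tasks:
        if not t.statically_independent:
            deps = [d for d in issued if conflicts(t, d)]
            # A real runtime would defer t until tasks in `deps` finish.
        t.fn()
        issued.append(t)
```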
Abstract:
Thermal comfort is defined as “that condition of mind which expresses satisfaction with the thermal environment” [1], [2]. Field studies have been completed in order to establish the governing conditions for thermal comfort [3]. These studies showed that the internal climate of a room was the strongest factor in establishing thermal comfort. Direct manipulation of the internal climate is necessary to retain an acceptable level of thermal comfort. For Building Energy Management System (BEMS) strategies to be utilised efficiently, it is necessary to be able to predict the effect that activating a heating/cooling source (radiators, windows and doors) will have on the room. Numerical modelling of the domain can be challenging due to the need to capture temperature stratification and/or different heat sources (radiators, computers and human beings). Computational Fluid Dynamics (CFD) models are usually used for this purpose because they provide the level of detail required. Although they provide the necessary accuracy, these models tend to be computationally expensive, especially when transient behaviour needs to be analysed; consequently, they cannot be integrated into a BEMS. This paper presents and validates a CFD-ROM method for real-time simulation of building thermal performance. The CFD-ROM method involves the automatic extraction and solution of reduced-order models (ROMs) from validated CFD simulations. The test case used in this work is a room of the Environmental Research Institute (ERI) Building at University College Cork (UCC). The ROMs have proved sufficiently accurate, with a total error of less than 1%, and successfully retain a satisfactory representation of the phenomena modelled. The number of zones in a ROM defines its size and complexity, and ROMs with a higher number of zones have been observed to produce more accurate results. As each ROM has a time to solution of less than 20 seconds, it can be integrated into the BEMS of a building, which opens the potential for real-time, physics-based building energy modelling.
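As an illustration of what a zonal ROM looks like once extracted, the sketch below treats each zone as a well-mixed node exchanging heat through lumped coefficients. The exchange matrix, sources, and capacities are invented placeholders, not coefficients produced by the paper's extraction method.

```python
# Minimal sketch of a zonal reduced-order thermal model.
# All coefficient values are illustrative placeholders.
import numpy as np

def step_rom(T, A, q, C, dt):
    """Advance zone temperatures one time step.
    T: zone temperatures [K], A: inter-zone exchange matrix [W/K],
    q: heat source per zone [W], C: zone thermal capacity [J/K]."""
    dTdt = (A @ T + q) / C
    return T + dt * dTdt

# Three-zone example; each row of A sums to zero (energy-conserving).
T = np.array([293.0, 295.0, 294.0])
A = np.array([[-30.0,  20.0, 10.0],
              [ 20.0, -35.0, 15.0],
              [ 10.0,  15.0, -25.0]])
q = np.array([500.0, 0.0, 100.0])   # radiator in zone 0, PC in zone 2
C = np.array([2.0e5, 2.5e5, 1.8e5])
for _ in range(600):                # ten minutes at 1 s steps
    T = step_rom(T, A, q, C, 1.0)
```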
Abstract:
The Computational Fluid Dynamics (CFD) toolbox OpenFOAM is used to assess the applicability of Reynolds-Averaged Navier-Stokes (RANS) solvers to the simulation of Oscillating Wave Surge Converters (OWSC) in significant waves. Simulating these flap-type devices requires the solution of the equations of motion and the representation of the OWSC's motion in a moving mesh. A new way to simulate the sea floor inside a section of the moving mesh, using a moving dissipation zone, is presented. To assess the accuracy of the new solver, experiments are conducted in regular and irregular wave traces for a full three-dimensional model. Results for acceleration and flow features are presented for numerical and experimental data. It is found that the new numerical model reproduces experimental results within the bounds of experimental accuracy.
Abstract:
This paper describes the results of non-linear elasto-plastic implicit dynamic finite element analyses that are used to predict the collapse behaviour of cold-formed steel portal frames at elevated temperatures. The collapse behaviour of a simple rigid-jointed beam idealisation and a more accurate semi-rigid jointed shell element idealisation are compared for two different fire scenarios. For the case of the shell element idealisation, the semi-rigidity of the cold-formed steel joints is explicitly taken into account through modelling of the bolt-hole elongation stiffness. In addition, the shell element idealisation is able to capture buckling of the cold-formed steel sections in the vicinity of the joints. The shell element idealisation is validated at ambient temperature against the results of full-scale tests reported in the literature. The behaviour at elevated temperatures is then considered for both the semi-rigid jointed shell and rigid-jointed beam idealisations. The inclusion of accurate joint rigidity and geometric non-linearity (second order analysis) are shown to affect the collapse behaviour at elevated temperatures. For each fire scenario considered, the importance of base fixity in preventing an undesirable outwards collapse mechanism is demonstrated. The results demonstrate that joint rigidity and varying fire scenarios should be considered in order to allow for conservative design.
Abstract:
N-gram analysis is an approach that investigates the structure of a program using bytes, characters or text strings. This research uses dynamic analysis to investigate malware detection using a classification approach based on N-gram analysis. A key issue with dynamic analysis is the length of time a program has to be run to ensure a correct classification. The motivation for this research is to find the optimum subset of operational codes (opcodes) that makes the best indicator of malware, and to determine how long a program has to be monitored to ensure an accurate support vector machine (SVM) classification of benign and malicious software. The experiments within this study represent programs as opcode density histograms obtained through dynamic analysis over different program run periods. An SVM is used as the program classifier to determine the ability of different program run lengths to correctly determine the presence of malicious software. The findings show that malware can be detected with different program run lengths using a small number of opcodes.
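A minimal sketch of the representation and classifier described here, using scikit-learn's SVC. The opcode list, traces, and labels are toy placeholders rather than the study's dataset.

```python
# Minimal sketch: opcode density histograms + SVM classification.
# Opcode list, traces, and labels are illustrative placeholders.
from collections import Counter
from sklearn.svm import SVC

OPCODES = ["mov", "push", "pop", "call", "ret", "jmp", "xor", "add"]

def density_histogram(trace):
    """Normalised opcode frequencies for one dynamic-analysis trace."""
    counts = Counter(op for op in trace if op in OPCODES)
    total = sum(counts.values()) or 1
    return [counts[op] / total for op in OPCODES]

traces = [
    ["mov", "push", "call", "ret", "mov", "add"],   # benign-looking toy trace
    ["xor", "xor", "jmp", "call", "xor", "jmp"],    # malicious-looking toy trace
]
labels = [0, 1]                                     # 0 = benign, 1 = malware
X = [density_histogram(t) for t in traces]
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict([density_histogram(["xor", "jmp", "xor", "call"])]))
```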
Abstract:
N-gram analysis is an approach that investigates the structure of a program using bytes, characters or text strings. This research uses dynamic analysis to investigate malware detection using a classification approach based on N-gram analysis. The motivation for this research is to find a subset of N-gram features that makes a robust indicator of malware. The experiments within this paper represent programs as N-gram density histograms obtained through dynamic analysis. A Support Vector Machine (SVM) is used as the program classifier to determine the ability of N-grams to correctly determine the presence of malicious software. The preliminary findings show that N-gram sizes of N=3 and N=4 present the best avenues for further analysis.
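For the N-gram variant, here is a minimal sketch of density-histogram extraction over a dynamic trace, shown for the N=3 case the paper highlights; the trace content is illustrative.

```python
# Minimal sketch of N-gram density features over a dynamic trace.
from collections import Counter

def ngram_density(trace, n):
    """Relative frequency of each length-n sequence in the trace."""
    grams = [tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)]
    counts = Counter(grams)
    total = len(grams) or 1
    return {g: c / total for g, c in counts.items()}

trace = ["mov", "push", "call", "mov", "push", "call", "ret"]
print(ngram_density(trace, 3))  # ('mov','push','call') appears with density 0.4
```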
Abstract:
This paper presents a study on concrete fracture and the associated mesh sensitivity using the finite element (FE) method with a local concrete model in both tension (Mode I) and compression. To enable the incorporation of dynamic loading, the FE model is developed using the transient dynamic analysis code LS-DYNA Explicit. A series of investigations has been conducted on typical fracture scenarios to evaluate the model's performance and calibrate the relevant parameters. The K&C damage model was adopted because it is a comprehensive local concrete model that allows the user to change the crack band width, fracture energy and rate dependency of the material. Compressive localisation in numerical modelling is also discussed in detail. Finally, an impact test specimen is modelled.
Abstract:
Double Skin Façades (DSFs) are becoming an increasingly popular architectural feature of commercial office buildings. Although DSFs are widely accepted to have the capacity to offer significant passive benefits and enable low-energy building performance, there remains a paucity of knowledge with regard to their operation. Identification of the most determinant architectural parameters of DSFs is the focus of ongoing research. This paper presents an experimental and simulation study of a DSF installed on a commercial building in Dublin, Ireland. The DSF is south-facing and acts to buffer the building from winter heat losses, but risks enhancing over-heating on sunny days. The façade was extensively monitored during the winter months. Computational Fluid Dynamics (CFD) models are used to simulate the convective operation of the DSF. This research concludes that DSFs are suited to passive, low-energy architecture in temperate climates such as Ireland's, but identifies issues in DSF design that require attention.
Abstract:
The research presented investigates the optimal set of operational codes (opcodes) that creates a robust indicator of malicious software (malware), and also determines the program execution duration required for accurate classification of benign and malicious software. The features extracted from the dataset are opcode density histograms captured during program execution. The classifier used is a support vector machine, configured to select the features that produce the optimal classification of malware over different program run lengths. The findings demonstrate that malware can be detected using dynamic analysis with relatively few opcodes.
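One plausible reading of the feature-selection step, sketched with scikit-learn; the ANOVA scoring, the value of k, and the toy data are assumptions, not the paper's configuration.

```python
# Minimal sketch: select the most discriminative opcode-density
# features before SVM classification. Scoring and data are assumed.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((20, 32))              # 20 program runs x 32 opcode densities
y = np.array([0] * 10 + [1] * 10)     # benign (0) / malicious (1)

pipe = make_pipeline(SelectKBest(f_classif, k=8), SVC(kernel="rbf"))
pipe.fit(X, y)
selected = pipe[0].get_support(indices=True)  # indices of retained opcodes
```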
Abstract:
Motivated by the central goal of contributing to the long-term construction of a complete text-to-speech system based on articulatory synthesis, we developed a linguistic model for European Portuguese (EP) built on the TADA system (TAsk Dynamic Application), aimed at automatically deriving articulator trajectories from input text. Achieving this goal required a set of tasks, namely: 1) the implementation and evaluation of two systems for automatic syllabification and phonetic transcription, to transform the input text into a format suitable for TADA; 2) the creation of a gestural dictionary for the sounds of EP, so that each phone produced by the grapheme-to-phone converter could be matched to a set of articulatory gestures adapted for EP; and 3) the analysis of the phenomenon of nasality in light of the dynamic principles of Articulatory Phonology (AP), based on an articulatory and perceptual study. The two automatic syllabification algorithms implemented and tested draw on phonological knowledge of syllable structure: the first is based on finite-state transducers, and the second is a faithful implementation of the proposals of Mateus & d'Andrade (2000). The performance of these algorithms, especially the second, proved similar to that of other systems with the same capabilities. For grapheme-to-phone conversion, we followed a methodology based on rewrite rules combined with a machine-learning technique. The evaluation results for this system motivated the subsequent exploration of other automatic methods, also assessing the impact of integrating syllabic information into the systems. The dynamic description of EP sounds, anchored in the theoretical and methodological principles of AP, was based essentially on the analysis of magnetic resonance imaging data, from which all measurements were made in order to obtain quantitative articulatory parameters. A first validation of the various proposed gestural configurations was attempted through a small perceptual test, which identified the main problems underlying the gestural proposal. This work yielded, for the first time for EP, an articulatory-based text-to-speech system. The dynamic description of the nasal vowels relied both on the magnetic resonance data, to characterise the oral gestures, and on data obtained through electromagnetic articulography (EMA), to study the dynamics of the velum and its relation to the other articulators. In addition, a perceptual test was carried out, using TADA and SAPWindows, to evaluate the sensitivity of Portuguese listeners to variations in velum height and changes in intergestural coordination. This study served as the basis for an abstract (gestural) interpretation of the EP nasal vowels and also clarified crucial aspects of their production and perception.
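To illustrate the rewrite-rule approach to grapheme-to-phone conversion mentioned above, here is a deliberately tiny sketch. The two context rules and the SAMPA-like output symbols are simplified illustrations for European Portuguese, not the system's actual rule set.

```python
# Minimal sketch of ordered rewrite rules for grapheme-to-phone
# conversion. Rules and output symbols are simplified illustrations.
import re

RULES = [
    (r"ch", "S"),   # digraph "ch" -> [ʃ] (SAMPA S)
    (r"nh", "J"),   # digraph "nh" -> [ɲ] (SAMPA J)
    (r"s$", "S"),   # word-final "s" -> [ʃ]
]

def g2p(word):
    """Apply each rewrite rule in order; ordering matters, since an
    earlier rule can consume context a later rule would have matched."""
    for pattern, phone in RULES:
        word = re.sub(pattern, phone, word)
    return word

print(g2p("chamas"))   # -> "SamaS" (illustrative output)
```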
Abstract:
Observation-based slicing is a recently-introduced, language-independent slicing technique based on the dependencies observable from program behaviour. Due to the well-known limits of dynamic analysis, we may only compute an under-approximation of the true observation-based slice. However, because the observation-based slice captures all possible dependences that can be observed, even such approximations can yield insight into the limitations of static slicing. For example, a static slice S that is strictly smaller than the corresponding observation-based slice is guaranteed to be unsafe. We present the results of three sets of experiments on 12 different programs, including benchmarks and larger programs, which investigate the relationship between static and observation-based slicing. We show that, in extreme cases, observation-based slices can find the true static minimal slice, where static techniques cannot. For more typical cases, our results illustrate the potential for observation-based slicing to highlight unsafe static slices. Finally, we report on the sensitivity of observation-based slicing to test quality.
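A minimal sketch of the observation loop behind observation-based slicing, reduced to single-line deletions: a deletion is kept only when the program still runs and produces the same observed behaviour. The `build_and_run` helper stands in for the real compile-execute-observe step, and real observation-based slicers delete windows of lines and iterate to a fixed point.

```python
# Minimal sketch of observation-based slicing via single-line deletion.
def observation_based_slice(lines, build_and_run, expected):
    """Keep deleting lines while observed behaviour stays unchanged."""
    changed = True
    while changed:
        changed = False
        for i in range(len(lines)):
            candidate = lines[:i] + lines[i + 1:]
            if build_and_run(candidate) == expected:
                lines, changed = candidate, True
                break
    return lines

# Toy stand-in for compile-and-observe: execute Python "statements"
# and observe the variable `out`; None signals a broken program.
def build_and_run(lines):
    env = {}
    try:
        exec("\n".join(lines), env)
    except Exception:
        return None
    return env.get("out")

prog = ["x = 2", "y = 99", "out = x * 3"]
print(observation_based_slice(prog, build_and_run, expected=6))
# -> ['x = 2', 'out = x * 3']  (y is sliced away)
```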
Abstract:
Damage assessment of structures with a mechanical non-linear model demands the representation of seismic action in terms of an accelerogram (dynamic analysis) or a response spectrum (pushover analysis). Stochastic ground-motion simulation is widely used in regions where strong-motion records are available in insufficient number. In this work we present a variation of the stochastic finite-fault method with dynamic corner frequency that includes geological site effects. The method was implemented in a computer program named SIMULSIS that generates time series (accelerograms) and response spectra. The program was tested against the MW = 7.3 Landers earthquake (June 28, 1992) and managed to reproduce its effects. In the present work we use it to reproduce the effects of the 1980 Azores earthquake (January 1, 1980) on several islands, under different possible local site conditions. For those places, the response spectra are presented and compared with the observed building damage.
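A minimal sketch of the stochastic simulation idea underlying this family of methods: shape enveloped Gaussian noise with an omega-squared source spectrum. The parameter values are illustrative, and the finite-fault subdivision, dynamic corner frequency, and site terms of the actual method are omitted.

```python
# Minimal sketch of stochastic accelerogram generation: enveloped
# noise shaped by an omega-squared spectrum. Illustrative values only.
import numpy as np

def stochastic_accelerogram(duration, dt, f_corner, seed=0):
    rng = np.random.default_rng(seed)
    n = int(duration / dt)
    t = np.arange(n) * dt
    # Enveloped white noise: rise-then-decay shaping in time.
    noise = rng.standard_normal(n) * t * np.exp(-t / (0.3 * duration))
    spec = np.fft.rfft(noise)
    f = np.fft.rfftfreq(n, dt)
    # Omega-squared acceleration spectrum: f^2 / (1 + (f/fc)^2).
    shape = f**2 / (1.0 + (f / f_corner) ** 2)
    acc = np.fft.irfft(spec * shape, n)
    return t, acc / np.abs(acc).max()   # peak-normalised accelerogram

t, acc = stochastic_accelerogram(duration=20.0, dt=0.01, f_corner=0.5)
```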
Abstract:
Final Master's Project submitted for the degree of Master in Mechanical Engineering
Abstract:
Dissertation submitted for the degree of Master in Civil Engineering, Specialisation in Structures