923 results for implicit dynamic analysis
Abstract:
With the increasing complexity of today's software, the software development process is becoming highly time- and resource-consuming. The growing number of software configurations, input parameters, usage scenarios, supported platforms, external dependencies, and versions drives up the cost of maintaining and repairing unforeseen software faults. To repair software faults, developers spend considerable time identifying the scenarios that lead to those faults and root-causing the problems. While software testing and verification have been substantially automated, software debugging remains largely manual. The goal of this research is to improve the software development process in general, and the software debugging process in particular, by devising techniques and methods for automated software debugging that leverage advances in automatic test case generation and replay. In this research, novel algorithms are devised to discover faulty execution paths in programs by utilizing existing software test cases, which can be either automatically or manually generated. The execution traces, or alternatively the sequence covers, of the failing test cases are extracted. Commonalities between these test case sequence covers are then extracted, processed, and analyzed, and presented to developers in the form of subsequences that may be causing the fault. The hypothesis is that code sequences shared by a number of test cases that fail for the same reason approximate the faulty execution path; hence, the search space for the faulty execution path can be narrowed down by using a large number of test cases. To achieve this goal, an efficient algorithm is implemented for finding common subsequences among a set of code sequence covers. Optimization techniques are devised to generate shorter and more logical sequence covers, and to select, from the set of all possible common subsequences, those with a high likelihood of containing the root cause. A hybrid static/dynamic analysis approach is designed to trace the common subsequences back from their end points to the root cause. A debugging tool is created to let developers use the approach and to integrate it with an existing Integrated Development Environment. The tool is also integrated with the environment's program editors so that developers can benefit from both the tool's suggestions and their source code counterparts. Finally, a comparison between the developed approach and state-of-the-art techniques shows that developers need to inspect only a small number of lines to find the root cause of a fault. Furthermore, experimental evaluation shows that the algorithm optimizations lead to better results in terms of both running time and output subsequence length.
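As an illustration of the core idea, here is a minimal sketch (not the thesis implementation; the traces and names are hypothetical) that intersects the contiguous subsequences of several failing tests' sequence covers and returns the longest ones shared by all, i.e. the candidates for the faulty execution path:

```python
# Minimal sketch: intersect the sets of length-k contiguous subsequences
# (n-grams) of each failing test's sequence cover, longest first.
from typing import List, Tuple

def ngrams(cover: List[str], k: int) -> set:
    """All contiguous length-k subsequences of one execution trace."""
    return {tuple(cover[i:i + k]) for i in range(len(cover) - k + 1)}

def common_subsequences(covers: List[List[str]]) -> List[Tuple[str, ...]]:
    """Longest contiguous subsequences shared by every failing cover."""
    max_k = min(len(c) for c in covers)
    for k in range(max_k, 0, -1):          # try the longest length first
        shared = ngrams(covers[0], k)
        for cover in covers[1:]:
            shared &= ngrams(cover, k)
        if shared:                          # candidates for the fault path
            return sorted(shared)
    return []

# Hypothetical sequence covers of three failing test cases:
t1 = ["open", "parse", "validate", "write", "close"]
t2 = ["open", "seek", "parse", "validate", "write", "flush"]
t3 = ["init", "parse", "validate", "write", "close"]
print(common_subsequences([t1, t2, t3]))   # [('parse', 'validate', 'write')]
```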
Abstract:
This thesis investigates the role of vegetation elements in the dynamics and dispersion of pollutants in the urban street canyon. In particular, the fluid-dynamic response of hedges of different heights, and of trees with varying crown porosity and trunk height, was analyzed. The model consists of two buildings of height and width H and length 10H, between which runs a street where a source representative of vehicular traffic was modelled, flanked on both sides by two rows of vegetation. The simulations were performed with ANSYS Fluent, a Computational Fluid Dynamics (CFD) package that made it possible to model the flow dynamics and to simulate the concentrations emitted by the CO source placed along the street. A RANS model with k-epsilon closure was employed, which parameterizes the second-order moments in the Navier-Stokes equations to make them easier to solve. The results are expressed in terms of velocity profiles and CO molar concentration, together with the exchange velocity, computed to quantify the exchanges between the street canyon and the outside. Regarding the influence of trunk height, a non-linear trend between trunk height and exchange velocity was found. For hedge height, instead, increasing the hedges was found to correspond unambiguously to a lowering of the exchange velocity. Finally, varying the permeability of the tree crowns revealed a non-monotonic variation relating the exchange velocity to the parameter C_2, which was interpreted through the different behaviours of the windward and leeward profiles. In conclusion, at the current stage of the research presented in this thesis, it is not yet possible to directly correlate the exchange velocity with any of the parameters analyzed.
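For reference, the exchange velocity used above is commonly defined as the net pollutant flux through the roof-level opening of the canyon, normalized by the concentration difference between the canyon and the overlying flow; one such form (the notation here is assumed, not taken from the thesis):

```latex
u_E = \frac{\Phi_{\mathrm{top}}}{A_{\mathrm{top}}\left(\overline{C}_{\mathrm{canyon}} - \overline{C}_{\mathrm{ext}}\right)}
```

where \Phi_{\mathrm{top}} is the pollutant mass flux through the roof-level plane of area A_{\mathrm{top}}, and \overline{C}_{\mathrm{canyon}}, \overline{C}_{\mathrm{ext}} are the mean concentrations inside the canyon and in the external flow, respectively.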
Abstract:
This paper describes the results of non-linear elasto-plastic implicit dynamic finite element analyses used to predict the collapse behaviour of cold-formed steel portal frames at elevated temperatures. The collapse behaviour of a simple rigid-jointed beam idealisation and of a more accurate semi-rigid jointed shell element idealisation are compared for two different fire scenarios. For the shell element idealisation, the semi-rigidity of the cold-formed steel joints is explicitly taken into account through modelling of the bolt-hole elongation stiffness. In addition, the shell element idealisation is able to capture buckling of the cold-formed steel sections in the vicinity of the joints. The shell element idealisation is validated at ambient temperature against the results of full-scale tests reported in the literature. The behaviour at elevated temperatures is then considered for both the semi-rigid jointed shell and rigid-jointed beam idealisations. Accurately modelled joint rigidity and geometric non-linearity (second-order analysis) are shown to affect the collapse behaviour at elevated temperatures. For each fire scenario considered, the importance of base fixity in preventing an undesirable outwards collapse mechanism is demonstrated. The results demonstrate that joint rigidity and varying fire scenarios should be considered in order to arrive at a conservative design.
Abstract:
A new C0 composite plate finite element based on Reddy's third-order theory is used for large-deformation dynamic analysis of delaminated composite plates. The inter-laminar contact is modeled with an augmented Lagrangian approach. Numerical results show that the widely used "unconditionally stable" Newmark-beta method exhibits instability in the transient simulation of delaminated composite plate structures undergoing large deformation. To overcome this instability, the energy- and momentum-conserving composite implicit time integration scheme presented by Bathe and Baig is used. It is found that proper selection of the penalty parameter is crucial in the contact simulation.
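For context, the Bathe-Baig composite scheme referred to above divides each time step Δt into two implicit sub-steps; in its usual form with equal sub-steps (standard equations from the literature, not reproduced from this paper) it reads:

```latex
% Sub-step 1: trapezoidal rule on [t, t + \Delta t/2]
{}^{t+\Delta t/2}U = {}^{t}U + \frac{\Delta t}{4}\left({}^{t}\dot{U} + {}^{t+\Delta t/2}\dot{U}\right), \qquad
{}^{t+\Delta t/2}\dot{U} = {}^{t}\dot{U} + \frac{\Delta t}{4}\left({}^{t}\ddot{U} + {}^{t+\Delta t/2}\ddot{U}\right)

% Sub-step 2: three-point backward Euler on [t, t + \Delta t]
{}^{t+\Delta t}\dot{U} = \frac{1}{\Delta t}\,{}^{t}U - \frac{4}{\Delta t}\,{}^{t+\Delta t/2}U + \frac{3}{\Delta t}\,{}^{t+\Delta t}U, \qquad
{}^{t+\Delta t}\ddot{U} = \frac{1}{\Delta t}\,{}^{t}\dot{U} - \frac{4}{\Delta t}\,{}^{t+\Delta t/2}\dot{U} + \frac{3}{\Delta t}\,{}^{t+\Delta t}\dot{U}
```

The trapezoidal sub-step conserves energy for linear response, while the backward-Euler sub-step supplies the numerical damping of spurious high-frequency modes that destabilize plain Newmark-beta in such contact simulations.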
Abstract:
Time-domain models of marine structures based on frequency-domain data are usually built upon the Cummins equation. This type of model is a vector integro-differential equation involving convolution terms. These convolution terms are inconvenient for the analysis and design of motion control systems. In addition, such models are inefficient in simulation time and awkward to implement in standard simulation packages. For these reasons, different methods have been proposed in the literature as approximate alternative representations of the convolutions. Because the convolution is a linear operation, different approaches can be followed to obtain an approximately equivalent linear system in the form of either a transfer function or a state-space model. This process involves the use of system identification, and several options are available depending on how the identification problem is posed. This raises the question of whether one method is better than the others. This paper therefore has three objectives. The first is to revisit some of the methods for replacing the convolutions that have been reported in different areas of marine systems analysis: hydrodynamics, wave energy conversion, and motion control systems. The second is to compare the different methods in terms of complexity and performance; for this purpose, a model of the vertical-plane response of a modern containership is considered. The third is to describe the implementation of the resulting model in the standard simulation environment Matlab/Simulink.
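For reference, the Cummins equation and the state-space replacement of its radiation convolution take the following generic form (notation assumed here: M_RB is the rigid-body inertia, A_∞ the infinite-frequency added mass, K(t) the retardation kernel, G the restoring matrix):

```latex
\left(M_{RB} + A_{\infty}\right)\ddot{\xi}(t)
  + \int_{0}^{t} K(t-\tau)\,\dot{\xi}(\tau)\,\mathrm{d}\tau
  + G\,\xi(t) = \tau_{\mathrm{exc}}(t)
```

System identification then replaces the memory term \mu(t) = \int_0^t K(t-\tau)\,\dot{\xi}(\tau)\,\mathrm{d}\tau with an identified linear state-space model \dot{x} = A\,x + B\,\dot{\xi}, \mu \approx C\,x, so the simulation no longer has to evaluate a convolution at every time step.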
Abstract:
In this paper, a new parallel algorithm for the nonlinear transient dynamic analysis of large structures is presented. An unconditionally stable Newmark-beta method (the constant average acceleration technique) is employed for time integration. The proposed parallel algorithm is devised within the broad framework of domain decomposition techniques. However, unlike most existing parallel algorithms for structural dynamic applications, which are derived using nonoverlapped domains, the proposed algorithm uses overlapped domains. The parallel overlapped domain decomposition algorithm is formulated by splitting the mass, damping, and stiffness matrices arising from the finite element discretisation of a given structure. A predictor-corrector scheme is formulated to iteratively improve the solution in each step. A computer program based on the proposed algorithm was developed and implemented with the Message Passing Interface as the software development environment, and the PARAM-10000 MIMD parallel computer was used to evaluate performance. Numerical experiments were conducted to validate the algorithm and to evaluate its performance, and comparisons were made with conventional nonoverlapped domain decomposition algorithms. The numerical studies indicate that the proposed algorithm outperforms the conventional domain decomposition algorithms.
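As background, a single step of the constant average acceleration Newmark method used above can be sketched as follows for a linear system; this is a generic single-domain illustration (not the paper's parallel implementation, which distributes the split matrices over overlapped subdomains):

```python
# One Newmark-beta step (constant average acceleration: beta=1/4, gamma=1/2)
# for the linear system  M a + C v + K u = f.
import numpy as np

def newmark_step(M, C, K, f_next, u, v, a, dt, beta=0.25, gamma=0.5):
    # Effective stiffness and effective load at t + dt
    K_eff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
    f_eff = (f_next
             + M @ (u / (beta * dt**2) + v / (beta * dt) + (1/(2*beta) - 1) * a)
             + C @ (gamma/(beta*dt) * u + (gamma/beta - 1) * v
                    + dt * (gamma/(2*beta) - 1) * a))
    u_next = np.linalg.solve(K_eff, f_eff)          # implicit solve
    a_next = ((u_next - u) / (beta * dt**2)
              - v / (beta * dt) - (1/(2*beta) - 1) * a)
    v_next = v + dt * ((1 - gamma) * a + gamma * a_next)
    return u_next, v_next, a_next

# Example: one step of an undamped single-DOF oscillator
M = np.array([[1.0]]); C = np.zeros((1, 1)); K = np.array([[4.0]])
u, v = np.array([1.0]), np.array([0.0])
a = -K @ u                                           # a0 from equilibrium
u, v, a = newmark_step(M, C, K, np.zeros(1), u, v, a, dt=0.1)
```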
Abstract:
In this paper, seismic slope stability analyses are performed for a typical section of a 44 m high water-retention-type tailings earthen dam located in eastern India, using both the conventional pseudo-static method and the more recent pseudo-dynamic method. The tailings dam is analyzed for different upstream reservoir conditions, such as filling with compacted or non-compacted dumped waste materials, and for different water levels in the tailings pond. The phreatic surface is generated using seepage analysis in the geotechnical software SEEP/W, and the same surface is used in the pseudo-static and pseudo-dynamic analyses to make the approach more realistic. The minimum factors of safety obtained using the pseudo-static and pseudo-dynamic methods are 1.18 and 1.09, respectively, for the chosen seismic zone in India. These values clearly show the demerits of the conventional pseudo-static analysis compared with the pseudo-dynamic analysis, which, in addition to the seismic accelerations, accounts for the duration and frequency of the earthquake, the body waves travelling through the dam during the earthquake, and amplification effects.
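For context, pseudo-dynamic analyses typically follow the Steedman-Zeng approach, in which the horizontal acceleration varies with depth and time as the shear wave travels up through the body of height H (a commonly used form, assumed here; k_h is the seismic coefficient, V_s the shear wave velocity, f_a the amplification factor):

```latex
a_h(z, t) = k_h\, g \left[1 + \frac{H - z}{H}\left(f_a - 1\right)\right]
            \sin\!\left[\omega\left(t - \frac{H - z}{V_s}\right)\right]
```

It is this explicit dependence on the excitation frequency ω, the wave travel time, and the amplification factor that the pseudo-static method, with its single constant seismic coefficient, cannot represent.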
Abstract:
In this paper we seek to shed light on the mismatch between income poverty and deprivation through a comparative and dynamic analysis of both forms of disadvantage. By extending the analysis over five waves of the ECHP, we are able to take into account the key dimensions characterizing poverty profiles over time. Our conclusions turn out to be remarkably stable across countries. While persistent income poverty measures are systematically related to both cross-sectional and longitudinal measures of deprivation, the scale of the mismatch is no smaller at the latter level than at the former. There is some evidence that, although the rates of volatility for income and deprivation measures are roughly similar, the processes of change themselves are somewhat different. Further light is shed on the underlying processes by cross-classifying the forms of deprivation. Those exposed to both types of deprivation are differentiated from others in terms of need and resource variables. Conclusions relating to the socio-demographic influences on risk levels are affected by the choice and combination of indicators. The results of our analysis confirm the need to devote considerably more attention than heretofore to the analysis of multi-dimensional poverty dynamics.
Abstract:
This paper presents the numerical simulation of the ultimate behaviour of 85 one-way and two-way spanning laterally restrained concrete slabs of varying thickness, span, reinforcement ratio, strength, and boundary conditions, reported in the literature by different authors. The developed numerical model is described and all of its assumptions are set out. ABAQUS, a finite element analysis (FEA) software suite, was employed, using its non-linear implicit static general analysis method. Other analysis methods, such as explicit dynamic analysis and the Riks method, are also discussed in general terms of application. The aim is to demonstrate the ability and efficacy of FEA in simulating the ultimate load behaviour of slabs with different material properties and boundary conditions. The authors present a numerical model that provides consistent predictions of the ultimate behaviour of laterally restrained slabs, which could be used as an alternative to expensive full-scale testing, as well as for the design of new structures and the assessment of existing ones. The enhanced strength of laterally restrained slabs relative to the predictions of conventional design methods is believed to be due to compressive membrane action (CMA), an inherent phenomenon of laterally restrained concrete beams and slabs. The numerical predictions obtained from the developed model correlate well with the experimental results and with those obtained from the CMA method developed at Queen’s University Belfast, UK.
Abstract:
A large body of research analyzes the runtime execution of a system to extract abstract behavioral views. Those approaches primarily analyze control flow, by tracing method execution events, or they analyze object graphs of heap snapshots. However, they do not capture how objects are passed through the system at runtime. We refer to this exchange of objects as the object flow, and we claim that analyzing object flow is necessary if we are to understand the runtime of an object-oriented application. We propose and detail Object Flow Analysis, a novel dynamic analysis technique that takes this new information into account. To evaluate its usefulness, we present a visual approach that allows a developer to study classes and components in terms of how they exchange objects at runtime. We illustrate our approach on three case studies.
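To make the notion of object flow concrete, here is a minimal sketch (not the paper's infrastructure; the classes are hypothetical) that records which object instances flow into which receivers by wrapping methods with a tracer:

```python
# Record how object instances are passed between methods at runtime.
import functools

object_flow = []   # (source_class, target_class, object_id) events

def track_flow(method):
    """Log every non-primitive argument flowing into the decorated method."""
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        for arg in list(args) + list(kwargs.values()):
            if not isinstance(arg, (int, float, str, bool, type(None))):
                object_flow.append((type(arg).__name__,
                                    type(self).__name__, id(arg)))
        return method(self, *args, **kwargs)
    return wrapper

# Hypothetical classes to illustrate:
class Order: ...

class Shipping:
    @track_flow
    def dispatch(self, order):
        return f"dispatched {id(order)}"

Shipping().dispatch(Order())
print(object_flow)   # e.g. [('Order', 'Shipping', 140393344...)]
```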
Abstract:
This research explores Bayesian updating as a tool for estimating parameters probabilistically by dynamic analysis of data sequences. Two distinct Bayesian updating methodologies are assessed. The first focuses on Bayesian updating of failure rates for primary events in fault trees. A Poisson Exponentially Weighted Moving Average (PEWMA) model is implemented to carry out Bayesian updating of the failure rates of individual primary events in the fault tree. To provide a basis for testing the PEWMA model, a fault tree is developed based on the Texas City Refinery incident of 2005. A qualitative fault tree analysis is then carried out to obtain a logical expression for the top event. A dynamic fault tree analysis is performed by evaluating the top event probability at each Bayesian updating step, using Monte Carlo sampling from the posterior failure rate distributions. It is demonstrated that PEWMA modeling is advantageous over conventional conjugate Poisson-Gamma updating techniques when failure data are collected over long time spans. The second approach focuses on Bayesian updating of parameters in non-linear forward models; specifically, the technique is applied to the hydrocarbon material balance equation. To test the accuracy of the implemented Bayesian updating models, a synthetic data set is developed using the Eclipse reservoir simulator. Both structured-grid and MCMC-sampling-based solution techniques are implemented and shown to model the synthetic data set with good accuracy. Furthermore, a graphical analysis shows that the implemented MCMC model displays good convergence properties. A case study demonstrates that the likelihood variance affects the rate at which the posterior assimilates information from the measured data sequence. Error in the measured data significantly affects the accuracy of the posterior parameter distributions. Increasing the likelihood variance mitigates random measurement errors but causes the overall variance of the posterior to increase. Bayesian updating is shown to be advantageous over deterministic regression techniques as it allows for the incorporation of prior belief and full modeling of uncertainty over the parameter ranges. As such, the Bayesian approach to estimating the parameters of the material balance equation shows utility for incorporation into reservoir engineering workflows.
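As a baseline for comparison, the conventional conjugate Poisson-Gamma update that PEWMA is evaluated against can be sketched as follows (a generic illustration with made-up numbers, not the thesis code):

```python
# Conjugate Poisson-Gamma updating of a failure rate: with prior
# lambda ~ Gamma(alpha, beta) and k failures observed over exposure time T,
# the posterior is Gamma(alpha + k, beta + T).
def update_failure_rate(alpha, beta, k, T):
    return alpha + k, beta + T

alpha, beta = 1.0, 2.0      # hypothetical prior: mean rate alpha/beta = 0.5/yr
for k, T in [(2, 1.0), (0, 1.0), (1, 1.0)]:   # yearly failure counts
    alpha, beta = update_failure_rate(alpha, beta, k, T)
print(f"posterior mean rate: {alpha / beta:.3f} failures/yr")   # 0.800
```

Because this conjugate update weights all past exposure equally, it responds slowly when the true rate drifts over long time spans, which is the situation where the abstract reports PEWMA to be advantageous.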
Abstract:
Terrorists usually target high-occupancy iconic and public buildings using vehicle-borne incendiary devices in order to claim a maximum number of lives and cause extensive damage to public property. While the initial casualties are due to the direct shock of the explosion, the collapse of structural elements may greatly increase the total. Most such buildings have been, or are being, built without consideration of their vulnerability to these events. The vulnerability and residual capacity assessment of buildings subjected to deliberately detonated bombs is therefore important for developing mitigation strategies that protect the buildings' occupants and the property. Explosive loads and their effects on buildings have consequently attracted significant attention in the recent past, and comprehensive, economical design strategies must be developed for future construction. This research investigates the response and damage of reinforced concrete (RC) framed buildings, together with their load-bearing key structural components, under a near-field blast event. Finite element method (FEM) based analysis was used to investigate the structural framing system and its components for global stability, followed by a rigorous analysis of key structural components for damage evaluation, using the codes SAP2000 and LS-DYNA respectively. The research involved four important areas of structural engineering: blast load determination, numerical modelling with FEM techniques, material performance under high strain rates, and non-linear dynamic structural analysis. The response and damage of an RC framed building were investigated for different blast load scenarios. The blast influence region of a two-dimensional RC frame was investigated for different load conditions, and the critical region for each loading case was identified. Two design methods are recommended to give RC columns superior residual capacity: detailing with multi-layer steel reinforcement cages, and composite columns incorporating a central structural steel core. Both provide post-blast gravity load resistance, compared with a typical RC column, against catastrophic collapse. Overall, this research broadens the current knowledge of the blast and residual capacity analysis of RC framed structures and recommends methods to evaluate and mitigate blast impact on the key elements of multi-storey buildings.
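As background on the blast load determination step, near-field blast parameters are conventionally characterized through the Hopkinson-Cranz scaled distance (standard practice in the field, not specific to this thesis):

```latex
Z = \frac{R}{W^{1/3}}
```

where R is the standoff distance from the charge and W the equivalent TNT charge mass; peak overpressure and impulse are then obtained from empirical relations (e.g., Kingery-Bulmash) as functions of Z.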
Abstract:
With the increasing importance of Application Domain Specific Processor (ADSP) design, a significant challenge is to identify special-purpose operations for implementation as a customized instruction. While many methodologies have been proposed for this purpose, they all work for a single algorithm chosen from the target application domain. Such algorithm-specific approaches are not suitable for designing instruction sets applicable to a whole family of related algorithms. For an entire range of related algorithms, this paper develops a methodology for identifying compound operations, as a basis for designing “domain-specific” Instruction Set Architectures (ISAs) that can efficiently run most of the algorithms in a given domain. Our methodology combines three different static analysis techniques to identify instruction sequences common to several related algorithms: identification of (non-branching) instruction sequences that occur commonly across the algorithms; identification of instruction sequences nested within iterative constructs that are thus executed frequently; and identification of commonly-occurring instruction sequences that span basic blocks. Choosing different combinations of these results enables us to design domain-specific special operations with different desired characteristics, such as performance or suitability as a library function. To demonstrate our approach, case studies are carried out for a family of thirteen string matching algorithms. Finally, the validity of our static analysis results is confirmed through independent dynamic analysis experiments and performance improvement measurements.
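To illustrate the first of the three static analyses, the following sketch (illustrative only, not the paper's toolchain; the opcodes and instruction streams are hypothetical) splits each algorithm's instruction stream at branches and intersects the straight-line opcode pairs that occur in every algorithm:

```python
# Find straight-line instruction sequences common to several algorithms,
# splitting at branches so candidates stay within basic blocks.
BRANCHES = {"beq", "bne", "jmp", "call", "ret"}   # assumed branch opcodes

def straight_line_runs(instrs):
    """Split an instruction stream into non-branching runs."""
    run, runs = [], []
    for op in instrs:
        if op in BRANCHES:
            if run:
                runs.append(tuple(run))
            run = []
        else:
            run.append(op)
    if run:
        runs.append(tuple(run))
    return runs

def common_pairs(algorithms):
    """Opcode pairs occurring inside straight-line runs of every algorithm."""
    per_algo = []
    for instrs in algorithms:
        pairs = {run[i:i + 2]
                 for run in straight_line_runs(instrs)
                 for i in range(len(run) - 1)}
        per_algo.append(pairs)
    return set.intersection(*per_algo)

# Hypothetical streams from two string-matching algorithms:
a1 = ["load", "cmp", "beq", "load", "shift", "add", "store"]
a2 = ["load", "shift", "add", "bne", "load", "store"]
print(common_pairs([a1, a2]))   # {('load', 'shift'), ('shift', 'add')}
```

Pairs that survive the intersection are candidates for fusion into a compound "domain-specific" instruction; the same intersection extended to longer n-grams, loop bodies, and cross-block sequences corresponds to the other analyses described above.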
Abstract:
The details of the development of the stiffness matrix for a doubly curved quadrilateral element suited to the static and dynamic analysis of laminated anisotropic thin shells of revolution are reported. By expressing the assumed displacement state over the middle surface of the shell as products of one-dimensional first-order Hermite polynomials, it is possible to ensure that the displacement state of the assembled set of such elements is geometrically admissible. Monotonic convergence of the total potential energy is therefore possible as the modelling is successively refined. A systematic evaluation of the element's performance is conducted on various examples for which analytical or other solutions are available.
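For reference, the one-dimensional first-order Hermite interpolation polynomials underlying such an element take the standard cubic form on the normalized coordinate ξ ∈ [0, 1] of an element of length l (generic notation, assumed here):

```latex
H_{01}(\xi) = 1 - 3\xi^{2} + 2\xi^{3}, \qquad H_{11}(\xi) = l\left(\xi - 2\xi^{2} + \xi^{3}\right),
H_{02}(\xi) = 3\xi^{2} - 2\xi^{3}, \qquad H_{12}(\xi) = l\left(\xi^{3} - \xi^{2}\right)
```

H_{01} and H_{02} interpolate the nodal values while H_{11} and H_{12} interpolate the nodal slopes, so products of these polynomials match both displacements and their derivatives across element boundaries, which is what makes the assembled displacement state geometrically admissible.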