974 results for Dynamic testing


Relevance: 30.00%

Abstract:

This research explores Bayesian updating as a tool for estimating parameters probabilistically by dynamic analysis of data sequences. Two distinct Bayesian updating methodologies are assessed. The first approach focuses on Bayesian updating of failure rates for primary events in fault trees. A Poisson Exponentially Weighted Moving Average (PEWMA) model is implemented to carry out Bayesian updating of failure rates for individual primary events in the fault tree. To provide a basis for testing the PEWMA model, a fault tree is developed based on the Texas City Refinery incident of 2005. A qualitative fault tree analysis is then carried out to obtain a logical expression for the top event. A dynamic fault tree analysis is carried out by evaluating the top event probability at each Bayesian updating step through Monte Carlo sampling from the posterior failure rate distributions. It is demonstrated that PEWMA modeling is advantageous over conventional conjugate Poisson-Gamma updating techniques when failure data are collected over long time spans. The second approach focuses on Bayesian updating of parameters in non-linear forward models. Specifically, the technique is applied to the hydrocarbon material balance equation. To test the accuracy of the implemented Bayesian updating models, a synthetic data set is developed using the Eclipse reservoir simulator. Both structured-grid and MCMC sampling based solution techniques are implemented and are shown to model the synthetic data set with good accuracy. Furthermore, a graphical analysis shows that the implemented MCMC model displays good convergence properties. A case study demonstrates that the likelihood variance affects the rate at which the posterior assimilates information from the measured data sequence. Error in the measured data significantly affects the accuracy of the posterior parameter distributions. Increasing the likelihood variance mitigates random measurement errors but causes the overall variance of the posterior to increase. Bayesian updating is shown to be advantageous over deterministic regression techniques as it allows for the incorporation of prior belief and full modeling of uncertainty over the parameter ranges. As such, the Bayesian approach to estimating parameters in the material balance equation shows utility for incorporation into reservoir engineering workflows.
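
The conventional conjugate Poisson-Gamma update that the PEWMA approach is benchmarked against can be sketched in a few lines. The prior parameters, failure counts and exposure times below are hypothetical, and the PEWMA discounting itself is not reproduced here.

```python
# Conventional conjugate Poisson-Gamma Bayesian updating of a failure rate.
# A Gamma(alpha, beta) prior on the rate lambda, combined with k observed
# failures over exposure time t, yields a Gamma(alpha + k, beta + t) posterior.
# All numbers below are illustrative, not taken from the thesis.

def update_failure_rate(alpha, beta, failures, exposure_time):
    """One conjugate Bayesian updating step for a Poisson failure process."""
    return alpha + failures, beta + exposure_time

# Weakly informative prior
alpha, beta = 1.0, 1.0

# Hypothetical sequence of (failures, observation window in years)
for k, t in [(2, 1.0), (0, 1.0), (1, 1.0)]:
    alpha, beta = update_failure_rate(alpha, beta, k, t)

posterior_mean = alpha / beta  # E[lambda] of a Gamma(alpha, beta) distribution
print(posterior_mean)
```

A PEWMA-style scheme would, in addition, discount the accumulated (alpha, beta) at each step so that older observations carry less weight, which is what makes it attractive for data collected over long time spans.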

Relevance: 30.00%

Abstract:

With the importance of renewable energy well established worldwide, and targets for such energy quantified in many cases, there exists considerable interest in the assessment of wind and wave devices. While the individual components of these devices are often relatively well understood and the aspects of energy generation well researched, there seems to be a gap in the understanding of these devices as a whole, especially in the field of their dynamic responses under operational conditions. The mathematical modelling and estimation of their dynamic responses are more evolved, but research directed towards testing of these devices still requires significant attention. Model-free indicators of the dynamic responses of these devices are important since they reflect the as-deployed behaviour of the devices when the exposure conditions, along with the structural dimensions, are scaled reasonably correctly. This paper demonstrates how the Hurst exponent of the dynamic responses of a monopile exposed to different exposure conditions in an ocean wave basin can be used as a model-free indicator of various responses. The scaled model is exposed to Froude-scaled waves and tested under different exposure conditions. The analysis and interpretation are carried out in a model-free and output-only environment, with only some preliminary ideas regarding the input of the system. The analysis indicates how the Hurst exponent can be an interesting descriptor for comparing and contrasting various scenarios of dynamic response conditions.
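
The Hurst exponent used here as a model-free indicator can be estimated from a response record by classical rescaled-range (R/S) analysis. The sketch below, with hypothetical window sizes and a synthetic white-noise signal, illustrates the estimator only; it is not the paper's processing chain.

```python
import numpy as np

def hurst_rs(x, window_sizes=(8, 16, 32, 64)):
    """Estimate the Hurst exponent of a 1-D response signal by
    rescaled-range (R/S) analysis: the slope of log(R/S) vs log(n)."""
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_values = []
        for start in range(0, len(x) - n + 1, n):
            seg = x[start:start + n]
            dev = np.cumsum(seg - seg.mean())   # cumulative deviation from mean
            r = dev.max() - dev.min()           # range of the deviation
            s = seg.std()                       # standard deviation of segment
            if s > 0:
                rs_values.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_values)))
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope

rng = np.random.default_rng(0)
h = hurst_rs(rng.standard_normal(4096))
print(round(h, 2))  # theory: uncorrelated white noise has H near 0.5
```

Values of H above 0.5 indicate persistent (trend-reinforcing) response behaviour, below 0.5 anti-persistent behaviour, which is what makes the exponent useful for comparing exposure scenarios.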

Relevance: 30.00%

Abstract:

Tuned liquid column dampers have proven successful in mitigating the dynamic responses of civil infrastructure. There have been some recent applications of this concept to wind turbines, and this passive control system can help to mitigate responses of offshore floating platforms and wave devices. The control of dynamic responses of these devices is important for reducing loads on structural elements and facilitating operations and maintenance (O&M) activities. This paper outlines the use of a tuned single liquid column damper for the control of a tension leg platform supported wind turbine. Theoretical studies were carried out and a scaled model was tested in a wave basin to assess the performance of the damper. The tests on the model presented in this paper correspond to a platform with a very low natural frequency for surge, sway and yaw motions. For practical purposes, it was not possible to tune the liquid damper exactly to this frequency. The approach consequently taken and its efficiency are presented in this paper. Responses to waves of a single frequency are investigated along with responses obtained from wave spectra characterising typical sea states. The extent of control is quantified using peak and root-mean-squared dynamic responses. The tests present some guidelines and challenges for testing scaled devices with response control mechanisms included. Additionally, the results provide a basis for directing future research on tuned liquid column damper based control of floating platforms.
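
The peak and root-mean-squared measures used to quantify the extent of control can be sketched as follows. The sinusoidal responses and the assumed 30% attenuation are purely illustrative, not measured data from the wave basin.

```python
import numpy as np

def control_metrics(uncontrolled, controlled):
    """Quantify response reduction by peak and root-mean-squared values,
    as in comparisons of damped vs undamped responses."""
    peak_reduction = 1 - np.max(np.abs(controlled)) / np.max(np.abs(uncontrolled))
    rms = lambda r: np.sqrt(np.mean(r ** 2))
    rms_reduction = 1 - rms(controlled) / rms(uncontrolled)
    return peak_reduction, rms_reduction

# Hypothetical surge responses: the damper is assumed to attenuate by 30%
t = np.linspace(0, 60, 3000)
uncontrolled = np.sin(0.5 * t)
controlled = 0.7 * np.sin(0.5 * t)
peak_red, rms_red = control_metrics(uncontrolled, controlled)
print(round(peak_red, 2), round(rms_red, 2))
```

For broadband responses from realistic wave spectra, the two metrics generally differ: the peak reflects extreme loading while the RMS reflects sustained response energy.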

Relevance: 30.00%

Abstract:

People go through their lives making all kinds of decisions, and some of these decisions affect their demand for transportation: for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models, based on the MEV and mixed logit models, that allow paths to be correlated. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into the usual optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
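
The link between dynamic programming and choice probabilities can be illustrated with a recursive-logit style computation on a toy network: the value of a node is the logsum over its outgoing arcs of instantaneous utility plus downstream value, and logit arc probabilities follow directly from it. The network and utilities below are invented for illustration, not taken from the thesis.

```python
import math

# A tiny acyclic network: node -> list of (next_node, arc utility).
# 'D' is the destination. Utilities are illustrative (negative travel costs).
arcs = {
    'A': [('B', -1.0), ('C', -1.5)],
    'B': [('D', -1.0)],
    'C': [('D', -0.5)],
    'D': [],
}

def value(node, memo={}):
    """Expected maximum utility to the destination (logsum recursion)."""
    if node == 'D':
        return 0.0                      # destination has zero downstream cost
    if node not in memo:
        memo[node] = math.log(sum(math.exp(u + value(nxt))
                                  for nxt, u in arcs[node]))
    return memo[node]

def choice_probs(node):
    """Logit probability of each outgoing arc at a node."""
    v = value(node)
    return {nxt: math.exp(u + value(nxt) - v) for nxt, u in arcs[node]}

probs = choice_probs('A')
print({k: round(p, 3) for k, p in probs.items()})
```

Here both routes A-B-D and A-C-D have total utility -2, so each arc out of A gets probability 0.5. On cyclic networks the same recursion becomes a fixed-point system, which is where the estimation cost discussed above arises.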

Relevance: 30.00%

Abstract:

Thermal and fatigue cracking are two of the major pavement distress phenomena that contribute significantly to premature pavement failures in Ontario. This in turn puts a massive burden on provincial budgets, as the government spends huge sums of money on the repair and rehabilitation of roads every year. Governments therefore need to rethink and re-evaluate their current measures in order to prevent such failures in the future. The main objectives of this study include: the investigation of fatigue distress in 11 contract samples at 10 °C, 15 °C, 20 °C and 25 °C, and the use of crack-tip-opening-displacement (CTOD) requirements at temperatures other than 15 °C; the comparative investigation of thermal and fatigue distress in 8 Ministry of Transportation (MTO) recovered and straight asphalt samples through the double-edge-notched tension (DENT) test and extended bending beam rheometry (EBBR); chemical testing of all samples through X-ray fluorescence (XRF) and Fourier transform infrared (FTIR) analysis; dynamic shear rheometer (DSR) high- and intermediate-temperature grading; and a case study of a local Kingston road. The majority of the 11 contract samples showed satisfactory performance at all temperatures, with one exception. The study of CTOD at various temperatures found a strong correlation between the two variables. All recovered samples showed poor performance in terms of their ability to resist thermal and fatigue distress relative to their corresponding straight asphalt, as evident in the DENT test and EBBR results. XRF and FTIR testing of all samples showed the addition of waste engine oil (WEO) to be the root cause of pavement failures. DSR high-temperature grading showed superior performance of recovered binders relative to straight asphalt. The local Kingston road showed extensive signs of damage due to thermal and fatigue distress, as evident from the DENT test, EBBR results and pictures taken in the field. In light of these facts, the use of waste engine oil and recycled asphalt in pavements should be avoided, as these have been shown to cause premature failure in pavements. The existing DENT test CTOD requirements should be implemented at other temperatures in order to prevent the occurrence of premature pavement failures in the future.

Relevance: 30.00%

Abstract:

In recent years, vibration-based structural damage identification has been the subject of significant research in structural engineering. The basic idea of vibration-based methods is that damage induces changes in mechanical properties that cause anomalies in the dynamic response of the structure, measurements of which allow damage to be localized and its extent estimated. Measured vibration data, such as frequencies and mode shapes, can be used in finite element model updating in order to adjust structural parameters sensitive to damage (e.g. Young's modulus). The novel aspect of this thesis is the introduction into the objective function of accurate measures of strain mode shapes, evaluated through FBG sensors. After a review of the relevant literature, the case study, i.e. an irregular prestressed concrete beam intended for the roofing of industrial structures, is presented. The mathematical model was built through FE models, studying the static and dynamic behaviour of the element. Another analytical model was developed, based on the Ritz method, in order to investigate the possible interaction between the RC beam and the steel supporting table used for testing. Experimental data, recorded through the simultaneous use of different measurement techniques (optical fibres, accelerometers, LVDTs), were compared with theoretical data, allowing the best model to be identified, for which the settings for the updating procedure have been outlined.
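
The model-updating idea, adjusting a stiffness parameter so that model frequencies match measured ones, can be sketched as a one-parameter least-squares problem. The frequencies and modulus range below are hypothetical, and the thesis's actual objective additionally includes strain mode shapes from the FBG sensors.

```python
import numpy as np

# Minimal model-updating sketch: tune Young's modulus E so that model
# natural frequencies match "measured" ones. For a linear elastic structure,
# natural frequencies scale with sqrt(E), which this toy model exploits.
# All reference values are hypothetical, not taken from the thesis.

E_ref = 30e9                               # Pa, nominal concrete modulus
f_ref = np.array([4.2, 17.1, 38.5])        # Hz, model frequencies at E_ref
f_measured = np.array([3.9, 15.9, 35.8])   # Hz, measured frequencies

def objective(E):
    """Sum of squared relative frequency residuals."""
    f_model = f_ref * np.sqrt(E / E_ref)
    return np.sum(((f_model - f_measured) / f_measured) ** 2)

# Brute-force scan over a plausible modulus range
candidates = np.linspace(0.5 * E_ref, 1.5 * E_ref, 2001)
E_updated = candidates[np.argmin([objective(E) for E in candidates])]
print(round(E_updated / E_ref, 3))  # updated-to-nominal stiffness ratio
```

In a realistic updating run the scan would be replaced by a gradient-based optimizer, and mode-shape residuals (e.g. via MAC values) would be added to the objective alongside the frequency terms.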

Relevance: 30.00%

Abstract:

A rapidly changing business environment has necessitated most small and medium sized enterprises with international ambitions to reconsider their sources of competitive advantage. To survive in the face of a changing business environment, firms should utilize their dynamic organizational capabilities as well as their internationalization capabilities. Firms develop a competitive advantage if they can exploit their unique organizational competences in a new or foreign market, and also if they can acquire new capabilities as a result of engaging in foreign markets. The capabilities acquired in foreign locations enhance the existing capability portfolio of a firm with a desire to internationalize. The study combined the research streams of SME organizational dynamic capability and internationalization capability to build a complete picture of the existing knowledge. An intensive case study was used to empirically test the theoretical framework of the study and compare it with the literature on various organizational capability factors and internationalization capabilities. Sormay Oy was selected because it is a successful medium-sized company operating in Finland in the manufacturing industry with a high international profile. In addition, it has a rate of growth in sales sufficient to warrant international engagement in matters such as acquisitions, joint ventures and partnerships. The key findings of the study suggest that medium-sized manufacturing firms have a set of core competences arising from their organizational capabilities, identified here as employee know-how and relationships with stakeholders, which aid the firm in its quest for competitive advantage, ensure production flexibility and secure the benefits present in a network. In addition, internationalization capabilities were identified under both the RAT test and the CAT test. The primary findings suggest that the firms that outperform their competitors are those that produce products meeting specific customer and country requirements; that foresee the pitfalls of imitation by local foreign companies and by members of a particular network entered through joint ventures, acquisitions or partnerships; and that are capable of acquiring new capabilities in foreign markets and successfully using them to enhance or renew their capability portfolio for competitive advantage. A further significant finding under internationalization capabilities was that Sormay Oy was able to develop a new market space for its products despite the difficult institutional environment present in Russia.

Relevance: 30.00%

Abstract:

With the increasing complexity of today's software, the software development process is becoming highly time and resource consuming. The increasing number of software configurations, input parameters, usage scenarios, supported platforms, external dependencies, and versions plays an important role in expanding the costs of maintaining and repairing unforeseeable software faults. To repair software faults, developers spend considerable time identifying the scenarios leading to those faults and root-causing the problems. While software debugging remains largely manual, this is not the case with software testing and verification. The goal of this research is to improve the software development process in general, and the software debugging process in particular, by devising techniques and methods for automated software debugging which leverage the advances in automatic test case generation and replay. In this research, novel algorithms are devised to discover faulty execution paths in programs by utilizing already existing software test cases, which can be either automatically or manually generated. The execution traces, or alternatively the sequence covers, of the failing test cases are extracted. Afterwards, commonalities between these test case sequence covers are extracted, processed, analyzed, and then presented to the developers in the form of subsequences that may be causing the fault. The hypothesis is that code sequences shared between a number of test cases failing for the same reason resemble the faulty execution path, and hence the search space for the faulty execution path can be narrowed down by using a large number of test cases. To achieve this goal, an efficient algorithm is implemented for finding common subsequences among a set of code sequence covers. Optimization techniques are devised to generate shorter and more logical sequence covers, and to select the subsequences with a high likelihood of containing the root cause from the set of all possible common subsequences. A hybrid static/dynamic analysis approach is designed to trace the common subsequences back from the failure to the root cause. A debugging tool is created to enable developers to use the approach and to integrate it with an existing integrated development environment. The tool is also integrated with the environment's program editors so that developers can benefit from both the tool's suggestions and their source code counterparts. Finally, a comparison between the developed approach and state-of-the-art techniques shows that developers need to inspect only a small number of lines in order to find the root cause of a fault. Furthermore, experimental evaluation shows that the algorithm optimizations lead to better results in terms of both running time and output subsequence length.
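
The core narrowing-down step can be illustrated by intersecting the sequence covers of several failing tests with a pairwise longest-common-subsequence fold. This is a simplified stand-in for the thesis's algorithms, and the statement-level traces are invented.

```python
# Narrow down candidate faulty execution paths by intersecting the sequence
# covers of failing test cases: a pairwise longest common subsequence (LCS)
# is folded across all traces. The thesis uses more elaborate algorithms;
# this is only the basic idea.

def lcs(a, b):
    """Classic dynamic-programming longest common subsequence."""
    m, n = len(a), len(b)
    dp = [[()] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if a[i] == b[j]:
                dp[i + 1][j + 1] = dp[i][j] + (a[i],)
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j], key=len)
    return list(dp[m][n])

def common_subsequence(traces):
    """Fold pairwise LCS over all failing traces."""
    result = traces[0]
    for trace in traces[1:]:
        result = lcs(result, trace)
    return result

# Hypothetical statement-level traces of three failing test cases
failing = [
    ['init', 'read', 'parse', 'alloc', 'write', 'free'],
    ['init', 'parse', 'alloc', 'retry', 'write', 'free'],
    ['init', 'check', 'parse', 'alloc', 'write'],
]
print(common_subsequence(failing))
```

The surviving subsequence is the part of the execution shared by all failures and is presented to the developer as the candidate faulty path; with more failing tests, the candidate shrinks further.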

Relevance: 30.00%

Abstract:

In this thesis, tool support is addressed for the combined disciplines of model-based testing and performance testing. Model-based testing (MBT) utilizes abstract behavioral models to automate test generation, thus decreasing the time and cost of test creation. MBT is a functional testing technique, focusing on output, behavior, and functionality. Performance testing, however, is non-functional and is concerned with responsiveness and stability under various load conditions. MBPeT (Model-Based Performance evaluation Tool) is one such tool: it utilizes probabilistic models, representing dynamic real-world user behavior patterns, to generate synthetic workload against a system under test (SUT) and in turn carries out performance analysis based on key performance indicators (KPIs). Developed at Åbo Akademi University, the MBPeT tool currently comprises a downloadable command-line tool as well as a graphical user interface. The goal of this thesis project is two-fold: 1) to extend the existing MBPeT tool by deploying it as a web-based application, thereby removing the requirement of local installation, and 2) to design a user interface for this web application which adds new user interaction paradigms to the existing feature set of the tool. All phases of the MBPeT process are realized via this single web deployment location, including probabilistic model creation, test configuration, test session execution against a SUT with real-time monitoring of user-configurable metrics, and final test report generation and display. This web application (MBPeT Dashboard) is implemented in the Java programming language on top of the Vaadin framework for rich internet application development. The Vaadin framework handles the complicated web communication processes and front-end technologies, freeing developers to implement the business logic as well as the user interface in pure Java. A number of experiments are run in a case study environment to validate the functionality of the newly developed Dashboard application as well as the scalability of the implemented solution in handling multiple concurrent users. The results support a successful solution with regard to the functional and performance criteria defined, while improvements and optimizations are suggested to further increase both of these factors.
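
The probabilistic-model idea behind MBPeT-style load generation can be sketched as a Markov chain over user actions, walked to produce synthetic sessions. The states, transition probabilities and session structure below are illustrative and not taken from the tool.

```python
import random

# A Markov chain over hypothetical user actions; each walk from 'browse'
# to 'exit' yields one synthetic user session of the kind a load generator
# could replay against a system under test.
model = {
    'browse': [('browse', 0.5), ('search', 0.3), ('buy', 0.1), ('exit', 0.1)],
    'search': [('browse', 0.4), ('buy', 0.4), ('exit', 0.2)],
    'buy':    [('browse', 0.3), ('exit', 0.7)],
}

def simulate_user(rng, start='browse', max_steps=50):
    """Walk the model from `start` until 'exit', yielding one user session."""
    state, session = start, []
    while state != 'exit' and len(session) < max_steps:
        session.append(state)
        r, acc = rng.random(), 0.0
        for nxt, p in model[state]:
            acc += p
            if r < acc:
                state = nxt
                break
        else:
            state = model[state][-1][0]  # guard against float rounding
    return session

rng = random.Random(42)
sessions = [simulate_user(rng) for _ in range(100)]
print(sum(len(s) for s in sessions) / len(sessions))  # mean session length
```

Scaling the number of concurrent simulated users, and attaching think times and request payloads to each state, turns such walks into the synthetic workload evaluated against the KPIs.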

Relevance: 30.00%

Abstract:

In the landslide-prone area near the Nice international airport, southeastern France, an interdisciplinary approach is applied to develop realistic lithological/geometrical profiles and geotechnical/strength sub-seafloor models. Such models are indispensable for slope stability assessments using limit equilibrium or finite element methods. Regression analyses, based on the undrained shear strength (su) of intact gassy sediments, are used to generate a sub-seafloor strength model based on 37 short dynamic and eight long static piezocone penetration tests, and laboratory experiments on one Calypso piston core and 10 gravity cores. Significant strength variations were detected when comparing measurements from the shelf and the shelf break, with a significant drop in su to 5.5 kPa being interpreted as a weak zone at a depth between 6.5 and 8.5 m below seafloor (mbsf). Here, a 10% reduction of the in situ total unit weight compared to the surrounding sediments is found to coincide with coarse-grained layers that turn into a weak zone and detachment plane for former and present-day gravitational, retrogressive slide events, as seen in 2D chirp profiles. The combination of high-resolution chirp profiles and comprehensive geotechnical information allows us to compute enhanced 2D finite element slope stability analyses with undrained sediment response, compared to previous 2D numerical and 3D limit equilibrium assessments. Those models suggest that significant portions (detachment planes at 20 m or even 55 mbsf) of the Quaternary delta and slope apron deposits may be mobilized. Given that factors of safety are equal to or less than 1 when the effect of free gas is further considered, a high risk of a landslide event of considerable size off Nice international airport is identified.
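
The regression-based strength model can be sketched as a linear fit of su against depth, with depths falling well below the trend flagged as a candidate weak zone. The data below are synthetic and only mimic the reported drop to 5.5 kPa between 6.5 and 8.5 mbsf; they are not the Nice measurements.

```python
import numpy as np

# Regress undrained shear strength su against depth, then flag depths where
# measured su falls well below the fitted trend as a candidate weak zone.
depth = np.arange(0.5, 12.0, 0.5)                 # m below seafloor
su_trend = 2.0 + 1.1 * depth                      # hypothetical kPa trend
su = su_trend.copy()
su[(depth >= 6.5) & (depth <= 8.5)] = 5.5         # imposed weak zone, kPa

slope, intercept = np.polyfit(depth, su, 1)       # linear strength model
residual_ratio = su / (intercept + slope * depth)
weak = depth[residual_ratio < 0.7]                # more than 30% below trend
print(weak.min(), weak.max())
```

The fitted trend feeds the stability assessment, while the flagged interval marks where a potential detachment plane would be placed in the limit equilibrium or finite element model.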

Relevance: 30.00%

Abstract:

Thin film adhesion often determines microelectronic device reliability, and it is therefore essential to have experimental techniques that accurately and efficiently characterize it. Laser-induced delamination is a novel technique that uses laser-generated stress waves to load thin films at high strain rates and extract the fracture toughness of the film/substrate interface. The effectiveness of the technique in measuring the interface properties of metallic films has been documented in previous studies. The objective of the current effort is to model the effect of residual stresses on the dynamic delamination of thin films. Residual stresses can be high enough to affect the crack advance and the mode mixity of the delamination event, and must therefore be adequately modeled to make accurate and repeatable predictions of fracture toughness. The equivalent axial force and bending moment generated by the residual stresses are included in a dynamic, nonlinear finite element model of the delaminating film, and the impact of residual stresses on the final extent of the interfacial crack, the relative contribution of shear failure, and the deformed shape of the delaminated film is studied in detail. Another objective of the study is to develop techniques to address issues related to the testing of polymeric films. These types of films adhere well to silicon, and the resulting crack advance is often much smaller than for metallic films, making the extraction of the interface fracture toughness more difficult. The use of an inertial layer, which enhances the amount of kinetic energy trapped in the film and thus the crack advance, is examined. It is determined that the inertial layer does improve the crack advance, although in a relatively limited fashion. The high interface toughness of polymer films often causes the film to fail cohesively when the crack front leaves the weakly bonded region and enters the strong interface. The use of a tapered pre-crack region that provides a more gradual transition to the strong interface is examined. The tapered triangular pre-crack geometry is found to be effective in reducing the induced stresses, thereby making it an attractive option. We conclude by studying the impact of modifying the pre-crack geometry to enable the testing of multiple polymer films.

Relevance: 30.00%

Abstract:

In this work, we introduce a new class of numerical schemes for rarefied gas dynamics problems described by collisional kinetic equations. The idea consists in reformulating the problem using a micro-macro decomposition and then solving the microscopic part by using asymptotic-preserving Monte Carlo methods. We consider two types of decompositions, the first leading to the Euler system of gas dynamics and the second to the Navier-Stokes equations for the macroscopic part. In addition, the particle method which solves the microscopic part is designed in such a way that the global scheme becomes computationally less expensive as the solution approaches the equilibrium state, as opposed to standard methods for kinetic equations, whose computational cost increases with the number of interactions. At the same time, the statistical error due to the particle part of the solution decreases as the system approaches the equilibrium state. This causes the method to degenerate to the sole solution of the macroscopic hydrodynamic equations (Euler or Navier-Stokes) in the limit of an infinite number of collisions. In the last part, we show the behaviour of this new approach in comparison to standard Monte Carlo techniques for solving the kinetic equation, testing it on different problems which typically arise in rarefied gas dynamics simulations.

Relevance: 30.00%

Abstract:

Despite the development of improved performance test protocols by renowned researchers, there are still road networks which experience premature cracking and failure. One area of major concern in asphalt science and technology, especially in cold regions of Canada, is thermal (low temperature) cracking. Usually right after winter periods, severe cracks are seen on poorly designed road networks. Quality assurance tests based on improved asphalt performance protocols have been implemented by government agencies to ensure that roads being constructed are at the required standard, but asphalt binders that pass these quality assurance tests still crack prematurely. While it would be easy to question the competence of the quality assurance test protocols, it should be noted that the performance tests which are being used, and which were repeated in this study, namely the extended bending beam rheometer (EBBR) test, the double-edge-notched tension (DENT) test, the dynamic shear rheometer (DSR) test and X-ray fluorescence (XRF) analysis, have all been verified and proven to successfully predict asphalt pavement behaviour in the field. Hence this study sought to probe and test the quality and authenticity of the asphalt binders being used for road paving. The study covered thermal cracking and the physical hardening phenomenon by comparing results from testing asphalt binder samples obtained from the storage tank prior to paving (tank samples) and recovered samples for the same contracts, with the aim of explaining why asphalt binders that have passed quality assurance tests are still prone to premature failure. The study also attempted to find out whether the short testing time and automated procedure of torsion bar experiments can replace the established but tedious procedure of the EBBR. In the end, it was discovered that significant differences in performance and composition exist between tank and recovered samples for the same contracts. Torsion bar experimental data also showed some promise in predicting physical hardening.

Relevance: 20.00%

Abstract:

Current data indicate that the size of high-density lipoprotein (HDL) may be considered an important marker for cardiovascular disease risk. We established reference values for mean HDL size and volume in an asymptomatic, representative Brazilian population sample (n=590) and their associations with metabolic parameters by gender. Size and volume were determined in HDL isolated from plasma by polyethylene glycol precipitation of apoB-containing lipoproteins and measured using the dynamic light scattering (DLS) technique. Although the gender and age distributions agreed with other studies, the mean HDL size reference value was slightly lower than in some other populations. Both HDL size and volume were influenced by gender and varied according to age. HDL size was associated with age and HDL-C (total population); inversely with non-white ethnicity and CETP (females); and with HDL-C and PLTP mass (males). On the other hand, HDL volume was determined only by HDL-C (total population and both genders) and by PLTP mass (males). The reference values for mean HDL size and volume using the DLS technique were established in an asymptomatic and representative Brazilian population sample, as well as their related metabolic factors. HDL-C was a major determinant of HDL size and volume, which were differently modulated in females and in males.