178 results for Sheet-metal work - Simulation methods


Relevance: 30.00%

Publisher:

Abstract:

There are large uncertainties in the aerothermodynamic modelling of super-orbital re-entry which impact the design of spacecraft thermal protection systems (TPS). Aspects of the thermal environment of super-orbital re-entry flows can be simulated in the laboratory using arc-jet and plasma-jet facilities, and these devices are regularly used for TPS certification work [5]. Another laboratory device capable of simulating certain critical features of both the aerodynamic and thermal environments of super-orbital re-entry is the expansion tube; three such facilities have been operating at the University of Queensland in recent years [10]. Despite some success, wind tunnel tests do not achieve full simulation. A virtually complete physical simulation of particular re-entry conditions can, however, be obtained from dedicated flight testing, and the Apollo-era FIRE II flight experiment [2] is the premier example, which still forms an important benchmark for modern simulations. Dedicated super-orbital flight testing is generally considered too expensive today, and there is a reluctance to incorporate substantial instrumentation for aerothermal diagnostics into existing missions since it may compromise primary mission objectives. An alternative approach to on-board flight measurements, with demonstrated success particularly in the ‘Stardust’ sample return mission, is remote observation of spectral emissions from the capsule and shock layer [8]. JAXA’s ‘Hayabusa’ sample return capsule provides a recent super-orbital re-entry example through which we illustrate contributions in three areas: (1) physical simulation of super-orbital re-entry conditions in the laboratory; (2) computational simulation of such flows; and (3) remote acquisition of optical emissions from a super-orbital re-entry event.


Physiological pulsatile flow in a 3D model of arterial double stenosis, using the modified Power-law blood viscosity model, is investigated by applying the Large Eddy Simulation (LES) technique. The computational domain chosen is a simple channel with biological-type stenoses. The physiological pulsation is generated at the inlet of the model using the first four harmonics of the Fourier series of the physiological pressure pulse. In LES, a top-hat spatial grid-filter is applied to the Navier-Stokes equations of motion to separate the large-scale flows from the subgrid-scale (SGS) motions. The large-scale flows are then fully resolved, while the unresolved SGS motions are modelled using the localized dynamic model. The flow Reynolds numbers chosen in the present work are typical of those found in large human arteries. Transition to turbulence of the pulsatile non-Newtonian and Newtonian flows in the post-stenosis region is examined through the mean velocity, wall shear stress, mean streamlines and turbulent kinetic energy, and explained physically along with the relevant medical concerns.
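The inlet forcing described above can be sketched in a few lines; the harmonic amplitudes, phases and cardiac period below are hypothetical placeholders, not the fitted values used in the study:

```python
import numpy as np

# Hypothetical harmonic coefficients (amplitude, phase) -- the actual values
# would be fitted to a measured physiological pressure pulse.
A0 = 1.0                                                 # mean component
harmonics = [(0.52, -0.1), (0.31, 1.2), (0.12, 2.4), (0.05, -2.9)]
T = 0.9                                                  # cardiac period (s), assumed

def inlet_pulse(t):
    """First four harmonics of the Fourier series of the pressure pulse."""
    omega = 2.0 * np.pi / T
    p = A0
    for n, (amp, phase) in enumerate(harmonics, start=1):
        p += amp * np.cos(n * omega * t + phase)
    return p

t = np.linspace(0.0, 2 * T, 200)    # two cardiac cycles
p = inlet_pulse(t)                  # waveform applied at the model inlet
```

In a solver, `inlet_pulse` would be evaluated each time step to set the inlet boundary condition.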


Nanowires of different metal oxides (SnO2, ZnO) have been grown by an evaporation-condensation process. Their chemical composition has been investigated using XPS. Standard XPS quantification through the main photoelectron peaks, the modified Auger parameter and valence band spectra were examined for the accurate determination of the oxidation state of the metals in the nanowires. Morphological investigation was conducted by acquiring and analyzing SEM images. To simulate the working conditions of a sensor, the samples were annealed in ultra-high vacuum (UHV) up to 500°C and the XPS analysis was repeated after this treatment. Finally, the SnO2 nanowires were used to produce a novel gas sensor based on a Pt/oxide/SiC structure and operating as a Schottky diode. Copyright © 2008 John Wiley & Sons, Ltd.


In this work, a range of nanomaterials has been synthesised based on metal oxyhydroxides MO(OH), where M = Al, Co, Cr, etc. Through a self-assembly hydrothermal route, metal oxyhydroxide nanomaterials with various morphologies were successfully synthesised: one-dimensional boehmite (AlO(OH)) nanofibres, zero-dimensional indium hydroxide (In(OH)3) nanocubes and chromium oxyhydroxide (CrO(OH)) nanoparticles, as well as two-dimensional cobalt hydroxide and oxyhydroxide (Co(OH)2 & CoO(OH)) nanodiscs. To control nanomaterial morphology and growth, several factors were investigated, including cation concentration, temperature, hydrothermal treatment time and pH. Metal ion doping is a promising technique for modifying and controlling the properties of materials by intentionally introducing impurities or defects. Chromium was successfully applied as a dopant for fabricating doped boehmite nanofibres. The thermal stability of the boehmite nanofibres was enhanced by chromium doping, and photoluminescence was introduced into the chromium-doped alumina nanofibres. Doping proved to be an efficient method for modifying and functionalising nanomaterials. The synthesised nanomaterials were fully characterised by X-ray diffraction (XRD), transmission electron microscopy (TEM) combined with selected area electron diffraction (SAED), scanning electron microscopy (SEM), BET specific surface area analysis, X-ray photoelectron spectroscopy (XPS) and thermogravimetric analysis (TGA). Hot-stage Raman and infrared emission spectroscopy were applied to study the chemical reactions during dehydration and dehydroxylation. The advantage of these techniques is that changes in molecular structure can be followed in situ at elevated temperatures.


The behaviour of ion channels within cardiac and neuronal cells is intrinsically stochastic in nature. When the number of channels is small, this stochastic noise is large and can have an impact on the dynamics of the system, which is potentially an issue when modelling small neurons and drug block in cardiac cells. While exact methods correctly capture the stochastic dynamics of a system, they are computationally expensive, restricting their inclusion in tissue-level models, and so approximations to exact methods are often used instead. The other issue in modelling ion channel dynamics is that the transition rates are voltage dependent, adding a level of complexity as the channel dynamics are coupled to the membrane potential. By assuming that such transition rates are constant over each time step, it is possible to derive a stochastic differential equation (SDE), in the same manner as for biochemical reaction networks, that describes the stochastic dynamics of ion channels. While such a model is more computationally efficient than exact methods, we show that there are analytical problems with the resulting SDE as well as issues in using current numerical schemes to solve such an equation. We therefore make two contributions: we develop a different model of stochastic ion channel dynamics that analytically behaves in the correct manner, and we discuss numerical methods that preserve the analytical properties of the model.
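A minimal sketch of the kind of SDE described above, for a hypothetical two-state (closed/open) channel with rates held constant over each time step and integrated by Euler-Maruyama; the clipping step illustrates the boundary problem such naive schemes face. All rate values here are illustrative, not from the paper:

```python
import numpy as np

# Chemical-Langevin-style SDE for a two-state (closed <-> open) ion channel,
# integrated with Euler-Maruyama. Rates alpha, beta are held constant over
# each time step, matching the piecewise-constant assumption in the text.
rng = np.random.default_rng(0)

N = 100                   # number of channels (small N -> large noise)
alpha, beta = 2.0, 1.0    # opening / closing rates (1/ms), assumed constant
dt = 0.01
steps = 5000

x = 0.2                   # fraction of open channels
path = np.empty(steps)
for k in range(steps):
    drift = alpha * (1.0 - x) - beta * x
    diffusion = np.sqrt(max(alpha * (1.0 - x) + beta * x, 0.0) / N)
    x += drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal()
    # Naive fix for the boundary problem the abstract discusses: without
    # care, x can leave [0, 1], so we clip (a scheme-dependent choice).
    x = min(max(x, 0.0), 1.0)
    path[k] = x
```

The trajectory fluctuates around the deterministic steady state alpha/(alpha + beta); the need for the clipping line is exactly the analytical defect that motivates the paper's alternative model.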


Experimental and theoretical studies have shown the importance of stochastic processes in genetic regulatory networks and cellular processes. Cellular networks and genetic circuits often involve small numbers of key proteins such as transcription factors and signaling proteins. In recent years stochastic models have been used successfully for studying noise in biological pathways, and stochastic modelling of biological systems has become a very important research field in computational biology. One of the challenges in this field is reducing the huge computing time of stochastic simulations. Based on the system of the mitogen-activated protein kinase cascade that is activated by epidermal growth factor, this work gives a parallel implementation using OpenMP and parallelism across the simulation. Special attention is paid to the independence of the random numbers generated in parallel computing, which is a key criterion for the success of stochastic simulations. Numerical results indicate that parallel computers can be used as an efficient tool for simulating the dynamics of large-scale genetic regulatory networks and cellular processes.
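The independence requirement for parallel random streams can be illustrated in Python; the original work used OpenMP in a compiled language, and this sketch substitutes NumPy's SeedSequence spawning and a toy birth-death process for the MAPK cascade:

```python
import numpy as np

# Key requirement highlighted in the abstract: statistically independent
# random number streams for parallel stochastic simulation. SeedSequence
# spawning gives independent child streams, one per worker; each simulation
# below could then run on its own thread or process.
root = np.random.SeedSequence(2024)
child_seeds = root.spawn(8)                     # one stream per parallel task
streams = [np.random.default_rng(s) for s in child_seeds]

def birth_death(rng, k_plus=10.0, k_minus=1.0, t_end=50.0):
    """Gillespie simulation of a toy birth-death process (a stand-in for one
    protein species in a regulatory network); returns the final copy number."""
    x, t = 0, 0.0
    while t < t_end:
        a1, a2 = k_plus, k_minus * x            # reaction propensities
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)          # time to next reaction
        if rng.random() < a1 / a0:
            x += 1                              # birth
        else:
            x -= 1                              # death
    return x

finals = [birth_death(rng) for rng in streams]  # embarrassingly parallel
```

Because each trajectory consumes only its own stream, the runs can be distributed across workers without correlated draws corrupting the statistics.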


This paper gives a modification of a class of stochastic Runge–Kutta methods proposed in a paper by Komori (2007). The slight modification can reduce the computational costs of the methods significantly.


In recent years, the development of Unmanned Aerial Vehicles (UAVs) has become a significant growing segment of the global aviation industry. These vehicles are developed with the intention of operating in regions where the presence of onboard human pilots is either too risky or unnecessary. Their popularity with both the military and civilian sectors has seen UAVs used in a diverse range of applications, from reconnaissance and surveillance tasks for the military to civilian uses such as aid relief and monitoring tasks. Efficient energy utilisation on a UAV is essential to its functioning, often to achieve the operational goals of range, endurance and other specific mission requirements. Due to the limitations of the space available and the mass budget on the UAV, it is often a delicate balance between the onboard energy available (i.e. fuel) and achieving the operational goals. This thesis presents an investigation of methods for increasing the energy efficiency of UAVs. One method is the development of a Mission Waypoint Optimisation (MWO) procedure for a small fixed-wing UAV, focusing on improving onboard fuel economy. MWO deals with a pre-specified set of waypoints by modifying the given waypoints within certain limits to achieve its optimisation objectives of minimising/maximising specific parameters. A simulation model of a UAV was developed in the MATLAB Simulink environment, utilising the AeroSim Blockset and the in-built Aerosonde UAV block and its parameters. This simulation model was separately integrated with a multi-objective Evolutionary Algorithm (MOEA) optimiser and a Sequential Quadratic Programming (SQP) solver to perform single-objective and multi-objective optimisation of a set of real-world waypoints in order to minimise onboard fuel consumption. The results of both procedures show potential for reducing fuel consumption on a UAV in a flight mission.
Additionally, a parallel Hybrid-Electric Propulsion System (HEPS) incorporating an Ideal Operating Line (IOL) control strategy was developed for a small fixed-wing UAV. An IOL analysis of an Aerosonde engine was performed, and the most efficient points of operation for this engine (i.e. those providing the greatest torque output at the least fuel consumption) were determined. Simulation models of the components in a HEPS were designed and constructed in the MATLAB Simulink environment. It was demonstrated through simulation that a UAV with the current HEPS configuration was capable of achieving a fuel saving of 6.5% compared to the ICE-only configuration. These components form the basis for the development of a complete simulation model of a Hybrid-Electric UAV (HEUAV).
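The MWO idea can be sketched with a generic SQP solver. This toy substitutes SciPy's SLSQP for the actual Simulink/Aerosonde setup, path length for the fuel model, and invented waypoint coordinates and limits:

```python
import numpy as np
from scipy.optimize import minimize

# Toy Mission Waypoint Optimisation: intermediate waypoints may shift within
# a box around their nominal positions, and an SQP solver (SLSQP) minimises
# a crude fuel proxy -- total path length. All values are illustrative.
start = np.array([0.0, 0.0])
end = np.array([10.0, 0.0])
nominal = np.array([[2.0, 3.0], [5.0, -2.0], [8.0, 2.5]])   # given waypoints
delta = 1.5                                  # allowed shift per coordinate

def fuel_proxy(flat):
    """Total path length through start, the waypoints, and end."""
    wps = flat.reshape(-1, 2)
    pts = np.vstack([start, wps, end])
    return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))

bounds = [(c - delta, c + delta) for c in nominal.ravel()]
res = minimize(fuel_proxy, nominal.ravel(), method="SLSQP", bounds=bounds)

optimised = res.x.reshape(-1, 2)             # shifted waypoints
```

A real fuel model would come from the aircraft simulation rather than geometry, but the structure (decision variables = waypoint coordinates, box bounds = allowed deviation) is the same.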


Objective: In Australia and comparable countries, case management has become the dominant process by which public mental health services provide outpatient clinical services to people with severe mental illness. There is recognition that caseload size impacts on service provision and that management of caseloads is an important dimension of overall service management. There has been little empirical investigation, however, of caseload and its management. The present study was undertaken in the context of an industrial agreement in Victoria, Australia that required services to introduce standardized approaches to caseload management. The aims of the present study were therefore to (i) investigate caseload size and approaches to caseload management in Victoria's mental health services; and (ii) determine whether caseload size and/or approach to caseload management is associated with work-related stress or case manager self-efficacy among community mental health professionals employed in Victoria's mental health services. Method: A total of 188 case managers responded to an online cross-sectional survey with both purpose-developed items investigating methods of case allocation and caseload monitoring, and standard measures of work-related stress and case manager personal efficacy. Results: The mean caseload size was 20 per full-time case manager. Both work-related stress scores and case manager personal efficacy scores were broadly comparable with those reported in previous studies. Higher caseloads were associated with higher levels of work-related stress and lower levels of case manager personal efficacy. Active monitoring of caseload was associated with lower scores for work-related stress and higher scores for case manager personal efficacy, regardless of size of caseload. Although caseloads were most frequently monitored by the case manager, there was evidence that monitoring by a supervisor was more beneficial than self-monitoring. 
Conclusion: Routine monitoring of caseload, especially by a workplace supervisor, may be effective in reducing work-related stress and enhancing case manager personal efficacy.

Keywords: case management, caseload, stress


Scientific visualisations such as computer-based animations and simulations are increasingly a feature of high school science instruction. Visualisations are adopted enthusiastically by teachers and embraced by students, and there is good evidence that they are popular and well received. There is limited evidence, however, of how effective they are in enabling students to learn key scientific concepts. This paper reports the results of a quantitative study conducted in Australian physics and chemistry classrooms. In general there was no statistically significant difference between teaching with and without visualisations; however, there were intriguing differences related to student sex and academic ability.


Computational fluid dynamics (CFD) models for ultra-high velocity waterjets and abrasive waterjets (AWJs) are established using the Fluent 6 flow solver. Jet dynamic characteristics for the flow downstream from a very fine nozzle are then simulated under steady-state, turbulent, two-phase and three-phase flow conditions. Water and particle velocities in a jet are obtained under different input and boundary conditions to provide insight into the jet characteristics and a fundamental understanding of the kerf formation process in AWJ cutting. For the range of downstream distances considered, the results indicate that a jet is characterised by an initial rapid decay of the axial velocity at the jet centre while the cross-sectional flow evolves towards a top-hat profile downstream.


Corrosion is a common phenomenon and a critical aspect of structural steel applications. It affects day-to-day design, inspection and maintenance in structural engineering, especially in heavy and complex industrial applications where steel structures are subjected to harsh corrosive environments combined with high working stresses, often in open fields and/or high-temperature production environments. This paper presents the engineering application of advanced finite element methods to the prediction of structural integrity and robustness over the design service life of alumina-production furnaces, which operate under high temperatures and corrosive environments while rotating under high working stresses.


The Queensland University of Technology (QUT) allows the presentation of a thesis for the Degree of Doctor of Philosophy in the format of published or submitted papers, where such papers have been published, accepted or submitted during the period of candidature. This thesis is composed of seven published/submitted papers, of which one has been published, three have been accepted for publication and the other three are under review. This project is financially supported by an Australian Research Council (ARC) Discovery Grant, with the aim of proposing strategies for the performance control of Distributed Generation (DG) systems with digital estimation of power system signal parameters. Distributed Generation (DG) has recently been introduced as a new concept for the generation of power and the enhancement of conventionally produced electricity. The global warming issue calls for renewable energy resources in electricity production. Distributed generation based on solar energy (photovoltaic and solar thermal), wind, biomass and mini-hydro, along with the use of fuel cells and micro turbines, will gain substantial momentum in the near future. Technically, DG can be a viable solution to the issue of integrating renewable or non-conventional energy resources. Basically, DG sources can be connected to the local power system through power electronic devices, i.e. inverters or ac-ac converters. The interconnection of DG systems to the power system as a compensator or a power source with high-quality performance is the main aim of this study. Source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, distortion at the point of common coupling in weak-source cases, source current power factor, and synchronism of generated currents or voltages are the issues of concern. The interconnection of DG sources is carried out using power electronic switching devices that inject high-frequency components rather than the desired current.
Noise and harmonic distortion can also impact the performance of the control strategies. To mitigate the negative effects of high-frequency, harmonic and noise distortion and achieve satisfactory performance of DG systems, new methods of signal parameter estimation have been proposed in this thesis. These methods are based on processing digital samples of power system signals. Thus, proposing advanced techniques for the digital estimation of signal parameters, and methods for generating DG reference currents using the estimates provided, is the targeted scope of this thesis. An introduction to this research – including a description of the research problem, the literature review and an account of the research progress linking the research papers – is presented in Chapter 1. One of the main parameters of a power system signal is its frequency. The Phasor Measurement (PM) technique is one of the well-known advanced techniques used for the estimation of power system frequency. Chapter 2 presents an in-depth analysis of the PM technique to reveal its strengths and drawbacks. The analysis is followed by a new technique proposed to enhance the speed of the PM technique when the input signal is free of even-order harmonics. The other, novel techniques proposed in this thesis are compared with the PM technique comprehensively studied in Chapter 2. An algorithm based on the concept of Kalman filtering is proposed in Chapter 3. The algorithm is intended to estimate signal parameters such as amplitude, frequency and phase angle in online mode. The Kalman filter is modified to operate on the output signal of a Finite Impulse Response (FIR) filter designed by a plain summation. The frequency estimation unit is independent of the Kalman filter and uses the samples refined by the FIR filter. The estimated frequency is given to the Kalman filter to be used in building the transition matrices.
The initial settings for the modified Kalman filter are obtained through a trial-and-error exercise. Another algorithm, again based on the concept of Kalman filtering, is proposed in Chapter 4 for the estimation of signal parameters. The Kalman filter is also modified to operate on the output signal of the same FIR filter explained above. Nevertheless, the frequency estimation unit, unlike the one proposed in Chapter 3, is not segregated and interacts with the Kalman filter. The estimated frequency is given to the Kalman filter, and other parameters such as the amplitudes and phase angles estimated by the Kalman filter are fed to the frequency estimation unit. Chapter 5 proposes another algorithm based on the concept of Kalman filtering. This time, the state parameters are obtained through matrix arrangements in which the noise level on the sample vector is reduced. The purified state vector is used to obtain a new measurement vector for a basic Kalman filter. The Kalman filter used has a similar structure to a basic Kalman filter, except that the initial settings are computed through extensive mathematical analysis of the matrix arrangement utilised. Chapter 6 proposes another algorithm based on the concept of Kalman filtering, similar to that of Chapter 3. However, this time the initial settings required for better performance of the modified Kalman filter are calculated instead of being guessed by trial-and-error exercises. The simulation results for the estimated signal parameters are enhanced due to the correct settings applied. Moreover, an enhanced Least Error Square (LES) technique is proposed to take over the estimation when a critical transient is detected in the input signal. In fact, some large, sudden changes in the parameters of the signal at these critical transients are not tracked very well by Kalman filtering. The proposed LES technique, however, is found to be much faster in tracking these changes.
Therefore, an appropriate combination of the LES and modified Kalman filtering is proposed in Chapter 6. This time, the ability of the proposed algorithm is also verified on real data obtained from a prototype test object. Chapter 7 proposes another algorithm based on the concept of Kalman filtering, similar to those of Chapters 3 and 6. However, this time an optimal digital filter is designed instead of the simple summation FIR filter. New initial settings for the modified Kalman filter are calculated based on the coefficients of the digital filter applied. The ability of the proposed algorithm is again verified on real data obtained from a prototype test object. Chapter 8 uses the estimation algorithm proposed in Chapter 7 in the interconnection scheme of a DG to the power network. Robust estimates of the signal amplitudes and phase angles obtained by the estimation approach are used in the reference generation of the compensation scheme. Several simulation tests provided in this chapter show that the proposed scheme handles source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, and synchronism of generated currents or voltages very well. The proposed compensation scheme also prevents distortion in the voltage at the point of common coupling in weak-source cases, balances the source currents, and brings the supply-side power factor to a desired value.
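The amplitude and phase estimation step common to these chapters can be sketched as follows. This is a generic linear Kalman filter on an assumed-known frequency, not the thesis's modified filter or its FIR pre-filtering, and all numbers are illustrative:

```python
import numpy as np

# Kalman-filter estimation of a power signal's amplitude and phase, assuming
# the frequency is already known (in the thesis it would come from a separate
# frequency-estimation unit). State is the in-phase/quadrature pair
# x = [A cos(phi), A sin(phi)]; the measurement matrix rebuilds each sample
# from it, since A cos(wt + phi) = x[0] cos(wt) - x[1] sin(wt).
rng = np.random.default_rng(1)

f, fs = 50.0, 5000.0                  # signal frequency and sampling rate (Hz)
A_true, phi_true = 1.5, 0.7
n = 2000
t = np.arange(n) / fs
y = A_true * np.cos(2 * np.pi * f * t + phi_true) + 0.1 * rng.standard_normal(n)

x = np.zeros(2)                       # state estimate
P = np.eye(2)                         # state covariance
Q, R = 1e-8 * np.eye(2), 0.01         # process / measurement noise (tuning)

for k in range(n):
    wt = 2 * np.pi * f * t[k]
    H = np.array([[np.cos(wt), -np.sin(wt)]])
    P = P + Q                         # predict (state is nearly static)
    S = H @ P @ H.T + R               # innovation variance
    K = P @ H.T / S                   # Kalman gain
    x = x + (K * (y[k] - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P

A_est = np.hypot(x[0], x[1])          # recovered amplitude
phi_est = np.arctan2(x[1], x[0])      # recovered phase angle
```

With the frequency fixed the filter is linear, which is why the thesis's coupling of frequency estimation to the filter (Chapter 4) adds the real difficulty.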


As universities worldwide begin to appreciate the value of authentic learning experiences, so they struggle with methods of assessing the outcomes from such experiences. This chapter describes the application of an assessment matrix developed by Queensland University of Technology (QUT) in Australia to the assessment requirements and practices relating to work integrated learning at the University of Surrey in the UK. Despite the very different institutional contexts and the independent ways in which the assessment regimes have developed, it was found that the values and outcomes being assessed, and the methods used to assess them, were similar. The most important feature of assessing work integrated learning experiences is fitness for purpose; hence the learning objectives and assessment of outcomes for a WIL experience must be explicitly aligned to this objective.


The research objectives of this thesis were to contribute to Bayesian statistical methodology by contributing to risk assessment statistical methodology, and to spatial and spatio-temporal methodology, by modelling error structures using complex hierarchical models. Specifically, I hoped to consider two applied areas, and use these applications as a springboard for developing new statistical methods as well as undertaking analyses which might give answers to particular applied questions. Thus, this thesis considers a series of models, firstly in the context of risk assessments for recycled water, and secondly in the context of water usage by crops. The research objective was to model error structures using hierarchical models in two problems, namely risk assessment analyses for wastewater, and secondly, in a four-dimensional dataset, assessing differences between cropping systems over time and over three spatial dimensions. The aim was to use the simplicity and insight afforded by Bayesian networks to develop appropriate models for risk scenarios, and again to use Bayesian hierarchical models to explore the necessarily complex modelling of four-dimensional agricultural data. The specific objectives of the research were to develop a method for the calculation of credible intervals for the point estimates of Bayesian networks; to develop a model structure to incorporate all the experimental uncertainty associated with various constants, thereby allowing the calculation of more credible credible intervals for a risk assessment; to model a single day’s data from the agricultural dataset in a way which satisfactorily captured the complexities of the data; to build a model for several days’ data, in order to consider how the full data might be modelled; and finally to build a model for the full four-dimensional dataset and to consider the time-varying nature of the contrast of interest, having satisfactorily accounted for possible spatial and temporal autocorrelations.
This work forms five papers, two of which have been published, two submitted, and the final paper still in draft. The first two objectives were met by recasting the risk assessments as directed acyclic graphs (DAGs). In the first case, we elicited uncertainty for the conditional probabilities needed by the Bayesian net, incorporated these into a corresponding DAG, and used Markov chain Monte Carlo (MCMC) to find credible intervals for all the scenarios and outcomes of interest. In the second case, we incorporated the experimental data underlying the risk assessment constants into the DAG, and also treated some of that data as needing to be modelled as an ‘errors-in-variables’ problem [Fuller, 1987]. This illustrated a simple method for the incorporation of experimental error into risk assessments. In considering one day of the three-dimensional agricultural data, it became clear that geostatistical models or conditional autoregressive (CAR) models over the three dimensions were not the best way to approach the data. Instead, CAR models are used with neighbours only in the same depth layer. This gave flexibility to the model, allowing both the spatially structured and non-structured variances to differ at all depths. We call this model the CAR layered model. Given the experimental design, the fixed part of the model could have been modelled as a set of means by treatment and by depth, but doing so allows little insight into how the treatment effects vary with depth. Hence, a number of essentially non-parametric approaches were taken to assess the effects of depth on treatment, with the model of choice incorporating an errors-in-variables approach for depth in addition to a non-parametric smooth. The statistical contribution here was the introduction of the CAR layered model; the applied contribution was the analysis of moisture over depth and estimation of the contrast of interest together with its credible intervals.
These models were fitted using WinBUGS [Lunn et al., 2000]. The work in the fifth paper deals with the fact that with large datasets, the use of WinBUGS becomes more problematic because of its highly correlated term-by-term updating. In this work, we introduce a Gibbs sampler with block updating for the CAR layered model. The Gibbs sampler was implemented by Chris Strickland using pyMCMC [Strickland, 2010]. This framework is then used to consider five days’ data, and we show that moisture in the soil for all the various treatments reaches levels particular to each treatment at a depth of 200 cm and thereafter stays constant, albeit with variance increasing with depth. In an analysis across three spatial dimensions and across time, there are many interactions of time and the spatial dimensions to be considered. Hence, we chose to use a daily model and to repeat the analysis at all time points, effectively creating an interaction model of time by the daily model. Such an approach allows great flexibility. However, it does not allow insight into the way in which the parameter of interest varies over time. Hence, a two-stage approach was also used, with estimates from the first stage being analysed as a set of time series. We see this spatio-temporal interaction model as a useful approach to data measured across three spatial dimensions and time, since it does not assume additivity of the random spatial or temporal effects.
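The first objective, credible intervals for a Bayesian network's point estimate, can be sketched as follows. This toy uses plain Monte Carlo over elicited Beta distributions rather than the MCMC/DAG machinery of the thesis, and the two-node network and all parameters are invented for illustration:

```python
import numpy as np

# Credible intervals for a Bayesian network's point estimate, obtained by
# placing elicited distributions on the conditional probabilities and
# propagating them through the net. Toy two-node net for a recycled-water
# setting: Contamination -> Illness. All numbers are hypothetical.
rng = np.random.default_rng(7)
n = 100_000

# Elicited uncertainty on each conditional probability, encoded as Betas.
p_contam = rng.beta(2, 18, n)      # P(water contaminated)
p_ill_c = rng.beta(8, 12, n)       # P(illness | contaminated)
p_ill_nc = rng.beta(1, 99, n)      # P(illness | not contaminated)

# Marginal risk of illness for each joint draw of the network's parameters.
p_ill = p_contam * p_ill_c + (1 - p_contam) * p_ill_nc

estimate = np.mean(p_ill)
lo, hi = np.percentile(p_ill, [2.5, 97.5])   # 95% credible interval
```

A point-estimate Bayesian net would report only a single marginal risk; drawing the conditional probabilities from their elicited distributions is what turns that into an interval.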