644 results for Simplified Models.
at Queensland University of Technology - ePrints Archive
Abstract:
Even though titanium dioxide photocatalysis has been promoted as a leading green technology for water purification, many issues have hindered its application on a large commercial scale. For the materials scientist the main issues have centred on the synthesis of more efficient materials and the investigation of degradation mechanisms, whereas for the engineer the main issues have been the development of appropriate models and the evaluation of intrinsic kinetic parameters that allow the scale-up or re-design of efficient large-scale photocatalytic reactors. In order to obtain intrinsic kinetic parameters the reaction must be analysed and modelled considering the influence of the radiation field, pollutant concentrations and fluid dynamics. In this way, the obtained kinetic parameters are independent of the reactor size and configuration and can subsequently be used for scale-up purposes or for the development of entirely new reactor designs. This work investigates the intrinsic kinetics of phenol degradation over a titania film, given the practicality of a fixed-film configuration compared with a slurry. A flat plate reactor was designed to allow control of reaction parameters including the UV irradiance, flow rates, pollutant concentration and temperature. Particular attention was paid to the investigation of the radiation field over the reactive surface and to the issue of mass transfer limited reactions. The ability of different emission models to describe the radiation field was investigated and compared to actinometric measurements. The RAD-LSI model was found to give the best predictions over the conditions tested. Mass transfer issues often limit fixed-film reactors. The influence of this phenomenon was investigated with specifically planned sets of benzoic acid experiments and with the adoption of the stagnant film model. The phenol mass transfer coefficient in the system was calculated to be k_m,phenol = 8.5815 × 10⁻⁷ Re^0.65 (m s⁻¹). The data obtained from a wide range of experimental conditions, together with an appropriate model of the system, enabled the determination of intrinsic kinetic parameters. The experiments were performed at four different irradiance levels (70.7, 57.9, 37.1 and 20.4 W m⁻²), combined with three different initial phenol concentrations (20, 40 and 80 ppm), to give a wide range of final pollutant conversions (from 22% to 85%). The simple model adopted was able to fit the wide range of conditions with only four kinetic parameters: two reaction rate constants (one for phenol and one for the family of intermediates) and their corresponding adsorption constants. The intrinsic kinetic parameter values were determined as k_ph = 0.5226 mmol m⁻¹ s⁻¹ W⁻¹, k_I = 0.120 mmol m⁻¹ s⁻¹ W⁻¹, K_ph = 8.5 × 10⁻⁴ m³ mmol⁻¹ and K_I = 2.2 × 10⁻³ m³ mmol⁻¹. The flat plate reactor allowed the investigation of the reaction under two different light configurations: liquid-side and substrate-side illumination. The latter is of particular interest for real-world applications, where light absorption due to turbidity and pollutants contained in the water stream to be treated could represent a significant issue. The two light configurations allowed the investigation of the effects of film thickness and the determination of the optimal catalyst thickness.
The experimental investigation confirmed the predictions of a porous medium model developed to investigate the influence of diffusion, advection and photocatalytic phenomena inside the porous titania film, with the optimal thickness identified as 5 μm. The model used the intrinsic kinetic parameters obtained from the flat plate reactor to predict the influence of thickness and transport phenomena on the final observed phenol conversion without using any correction factor; the excellent match between predictions and experimental results provided further proof of the quality of the parameters obtained with the proposed method.
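For a concrete sense of scale, the quoted parameters can be dropped into a Langmuir-Hinshelwood-type rate expression of the kind typically used for photocatalytic phenol degradation. The functional form below is an assumption for illustration (the abstract does not state the thesis's exact rate law), and the ppm-to-molar conversion is approximate:

```python
# Sketch of a Langmuir-Hinshelwood-type rate using the parameter values quoted
# above. The exact rate expression used in the thesis is not given in the
# abstract, so this functional form is an assumption for illustration only.

k_ph = 0.5226   # mmol m^-1 s^-1 W^-1, phenol rate constant
k_I  = 0.120    # mmol m^-1 s^-1 W^-1, intermediates rate constant
K_ph = 8.5e-4   # m^3 mmol^-1, phenol adsorption constant
K_I  = 2.2e-3   # m^3 mmol^-1, intermediates adsorption constant

def phenol_rate(irradiance, c_ph, c_i):
    """Assumed L-H-type phenol degradation rate at a given UV irradiance
    (W m^-2) and concentrations of phenol and intermediates (mmol m^-3)."""
    return k_ph * irradiance * K_ph * c_ph / (1.0 + K_ph * c_ph + K_I * c_i)

def k_m_phenol(reynolds):
    """Mass transfer coefficient (m s^-1) from the abstract's correlation."""
    return 8.5815e-7 * reynolds**0.65

# 20 ppm phenol is roughly 20 g m^-3 / 94 g mol^-1 ~ 213 mmol m^-3
print(phenol_rate(70.7, 213.0, 0.0))   # rate with no intermediates formed yet
print(k_m_phenol(2000.0))              # k_m at an illustrative Re of 2000
```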
Abstract:
Demand for delivering high instantaneous power in a compressed form (pulse shape) has increased widely in recent decades. The flexible shapes and variable pulse specifications offered by pulsed power have made it a practical and effective supply method for an extensive range of applications. In particular, the release of basic subatomic particles (electrons, protons and neutrons) from an atom (the ionization process) and the synthesis of molecules to form ions or other molecules are among those reactions that necessitate large amounts of instantaneous power. In addition to such decomposition processes, there has recently been demand for pulsed power in other areas such as the combination of molecules (e.g. fusion, material joining), radiation generation (e.g. electron beams, lasers and radar), explosions (e.g. concrete recycling), and wastewater, exhaust gas and material surface treatments. These pulses are widely employed in the silent discharge process in all states of matter (gas, liquid and solid), in some cases to form a plasma and consequently accelerate the associated process. Due to this fast-growing demand for pulsed power in industrial and environmental applications, the need for more efficient and flexible pulse modulators is now receiving greater consideration. Sensitive applications, such as plasma fusion and laser guns, also require precisely produced repetitive pulses of higher quality. Many research studies are being conducted in areas that need a flexible pulse modulator capable of varying pulse features, in order to investigate the influence of these variations on the application. In addition, there is a need to prevent the waste of considerable amounts of energy caused by the arc phenomena that frequently occur after the plasma process. Control over power flow during the supply process is a critical capability that enables the pulse supply to halt delivery at any stage. Pulse modulators that utilise different energy accumulation techniques, including Marx generators (MG), magnetic pulse compressors (MPC), pulse forming networks (PFN) and multistage Blumlein lines (MBL), are currently employed to supply a wide range of applications. Gas/magnetic switching technologies (such as spark gaps and hydrogen thyratrons) have conventionally been used as switching devices in pulse modulator structures because of their high voltage ratings and considerably short rise times. However, they suffer from serious drawbacks such as low efficiency, reliability and repetition rate, and short life span; they are also bulky, heavy and expensive. Recently developed solid-state switching technology is an appropriate substitute for these switching devices due to the benefits it brings to pulse supplies. Besides being compact, efficient, affordable and reliable, with a long life span, its high-frequency switching capability allows repetitive operation of the pulsed power supply. The main concerns in using solid-state transistors are the voltage rating and rise time of available switches, which in some cases cannot satisfy the application's requirements. However, there are several power electronics configurations and techniques that make solid-state utilisation feasible for high voltage pulse generation.
Therefore, the design and development of novel methods and topologies with higher efficiency and flexibility for pulsed power generators has been the main scope of this research work. This aim is pursued through several innovative proposals that can be classified under two principal objectives:
• To innovate and develop novel solid-state based topologies for pulsed power generation.
• To improve available technologies that have the potential to accommodate solid-state technology, by revising, reconfiguring and adjusting their structure and control algorithms.
The quest for novel topologies suited to pulsed power production began with a deep and thorough review of conventional pulse generators and useful power electronics topologies. This study suggests that efficiency and flexibility are the most significant demands of plasma applications that have not been met by state-of-the-art methods. Many solid-state based configurations were considered and simulated in order to evaluate their potential for use in the pulsed power area. Parts of this literature review are documented in Chapter 1 of this thesis. Current source topologies demonstrate valuable advantages in supplying loads with capacitive characteristics, such as plasma applications. To investigate the influence of the switching transients associated with solid-state devices on the rise time of pulses, simulation-based studies were undertaken. A variable current source was considered, pumping different current levels into a capacitive load, and it was evident that dissimilar dv/dt values are produced at the output. The evidence acquired from this examination thus rules out transient effects on pulse rise time. A detailed report of this study is given in Chapter 6 of this thesis. This study inspired the design of a solid-state based topology that takes advantage of both current and voltage sources. A series of switch-resistor-capacitor units at the output splits the produced voltage into lower levels, so it can be shared by the switches. A smart but complicated switching strategy is also designed to discharge the residual energy after each supply cycle. To prevent reverse power flow and to reduce the complexity of the control algorithm in this system, the resistors in the common paths of the units are replaced with diode rectifiers (switch-diode-capacitor). This modification not only makes it feasible to stop the load supply process at any stage (and consequently save energy), but also enables the converter to operate in a two-stroke mode with asymmetrical capacitors. Component selection and energy-exchange calculations are carried out with respect to application specifications and demands. Both topologies were modelled simply, and simulation studies were carried out with the simplified models. Experimental assessments were also performed on implemented hardware, and the results verified the initial analysis. Details of both converters are thoroughly discussed in Chapters 2 and 3 of the thesis. Conventional MGs have recently been modified to use solid-state transistors (e.g. insulated gate bipolar transistors) instead of magnetic/gas switching devices. Resistive insulators previously used in their structures are replaced by diode rectifiers to give MGs proper voltage sharing.
However, despite the utilisation of solid-state technology in MG configurations, further design and control amendments can still be made to achieve improved performance with fewer components. Among a number of charging techniques, the resonant phenomenon is adopted in one proposal to charge the capacitors. In addition to charging the capacitors to twice the input voltage, triggering the switches at the moment the conducted current through them is zero significantly reduces the switching losses. Another configuration, based on commutation circuits that use a current source to charge the capacitors, is also introduced in this research for the Marx topology. In this design, diode-capacitor units, each including two Marx stages, are connected in cascade through solid-state devices and aggregate the voltages across the capacitors to produce a high voltage pulse. The polarity of the voltage across one capacitor in each unit is reversed in an intermediate mode by connecting the commutation circuit to the capacitor. Isolation of the input side from the load side is provided in this topology by disconnecting the load from the current source during the supply process. Furthermore, the number of fast switching devices required in both designs is reduced to half the number used in a conventional MG; they are replaced with slower switches (such as thyristors) that need simpler driving modules. In addition, the number of switches contributing to the discharging paths is halved, leading to a reduction in conduction losses. The associated models are simulated, and hardware tests are performed to verify the validity of the proposed topologies. Chapters 4, 5 and 7 of the thesis present all the relevant analysis and approaches for these topologies.
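The resonant charging step mentioned above has a simple closed form worth keeping in mind: in an ideal lossless LC charging circuit the capacitor voltage follows V_C(t) = V_in(1 − cos ω₀t), peaking at 2V_in exactly when the inductor current returns to zero, which is what makes zero-current switching possible at that instant. A minimal numerical sketch (component values are illustrative, not from the thesis):

```python
import numpy as np

# Ideal lossless resonant (LC) capacitor charging:
#   V_C(t) = V_in * (1 - cos(w0 t)),  i_L(t) = (V_in / Z0) * sin(w0 t),
# so V_C peaks at 2*V_in exactly when i_L returns to zero, allowing
# zero-current switching. Component values below are illustrative only.

V_in = 1000.0               # input voltage, V (hypothetical)
L, C = 10e-6, 1e-6          # inductance (H) and capacitance (F), hypothetical
w0 = 1.0 / np.sqrt(L * C)   # resonant angular frequency
Z0 = np.sqrt(L / C)         # characteristic impedance

t = np.linspace(0.0, np.pi / w0, 1000)   # one half resonant cycle
V_C = V_in * (1.0 - np.cos(w0 * t))      # capacitor voltage
i_L = (V_in / Z0) * np.sin(w0 * t)       # inductor (switch) current

print(f"Peak capacitor voltage: {V_C.max():.0f} V (= 2 x V_in)")
print(f"Switch current at that instant: {abs(i_L[-1]):.3e} A (~ zero)")
```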
Abstract:
Image representations derived from simplified models of the primary visual cortex (V1), such as HOG and SIFT, elicit good performance in a myriad of visual classification tasks including object recognition/detection, pedestrian detection and facial expression classification. A central question in the vision, learning and neuroscience communities concerns why these architectures perform so well. In this paper, we offer a unique perspective on this question by subsuming the role of V1-inspired features directly within a linear support vector machine (SVM). We demonstrate that a specific class of such features, in conjunction with a linear SVM, can be reinterpreted as inducing a weighted margin on the Kronecker basis expansion of an image. This new viewpoint on the role of V1-inspired features allows us to answer fundamental questions on the uniqueness and redundancies of these features, and to offer substantial improvements in terms of computational and storage efficiency.
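For readers unfamiliar with the pipeline under analysis, a minimal sketch of V1-inspired features feeding a linear SVM is shown below, using HOG from scikit-image and LinearSVC from scikit-learn; the toy data and parameter choices are illustrative and are not those of the paper:

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

# Minimal sketch of the kind of pipeline the paper analyses: V1-inspired
# features (here HOG) feeding a linear SVM. Data are random placeholders
# and the HOG parameters are illustrative.

rng = np.random.default_rng(0)
images = rng.random((40, 64, 64))        # 40 toy grayscale images
labels = rng.integers(0, 2, size=40)     # binary class labels

features = np.array([
    hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    for img in images
])

clf = LinearSVC(C=1.0).fit(features, labels)   # linear margin on HOG features
print("training accuracy:", clf.score(features, labels))
```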
Abstract:
Phylogenetic inference from sequences can be misled by both sampling (stochastic) error and systematic error (nonhistorical signals where reality differs from our simplified models). A recent study of eight yeast species using 106 concatenated genes from complete genomes showed that even small internal edges of a tree received 100% bootstrap support. This effective negation of stochastic error from large data sets is important, but longer sequences exacerbate the potential for biases (systematic error) to be positively misleading. Indeed, when we analyzed the same data set using the minimum evolution optimality criterion, an alternative tree received 100% bootstrap support. We identified a compositional bias as responsible for this inconsistency and showed that it is effectively reduced by coding the nucleotides as purines and pyrimidines (RY-coding), reinforcing the original tree. Thus, a comprehensive exploration of potential systematic biases is still required, even though genome-scale data sets greatly reduce sampling error.
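RY-coding itself is a one-line transformation; a minimal sketch (the purine/pyrimidine mapping is standard, while the handling of gaps and ambiguity codes here is a simple illustrative choice):

```python
# Minimal sketch of RY-coding: recode nucleotides as purines (R: A, G) and
# pyrimidines (Y: C, T), which suppresses compositional bias between
# sequences, as described in the abstract.

RY = {"A": "R", "G": "R", "C": "Y", "T": "Y"}

def ry_code(seq: str) -> str:
    """Recode a nucleotide sequence; characters such as gaps ('-') or
    ambiguity codes are left unchanged (an illustrative choice)."""
    return "".join(RY.get(base, base) for base in seq.upper())

print(ry_code("ATGCCGTA-N"))  # -> "RYRYYRYR-N"
```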
Abstract:
This paper is aimed at investigating the effect of web openings on the plastic bending behaviour and section moment capacity of a new cold-formed steel beam known as the LiteSteel beam (LSB) using numerical modelling. Different LSB sections with varying circular hole diameters and spacings were considered. A simplified but appropriate numerical modelling technique was developed for the modelling of monosymmetric sections such as LSBs subject to bending, and was used to simulate a series of section moment capacity tests of LSB flexural members with web openings. The buckling and ultimate strength behaviour was investigated in detail, and the modelling technique was further improved through a comparison of numerical and experimental results. This paper describes the simplified finite element modelling technique used in this study, which includes all the significant behavioural effects affecting the plastic bending behaviour and section moment capacity of LSB sections with web holes. Numerical and test results and associated findings are also presented.
Abstract:
In this paper, the problems of three-carrier phase ambiguity resolution (TCAR) and position estimation (PE) are generalized as real-time GNSS data processing problems for a large-scale, continuously observing network. In order to describe these problems, a general linear equation system is presented to unify the various geometry-free, geometry-based and geometry-constrained TCAR models, along with the state transition equations between observation epochs. With this general formulation, generalized TCAR solutions are given to cover different real-time GNSS data processing scenarios, together with various simplified integer solutions, such as geometry-free rounding and geometry-based LAMBDA solutions with single- and multiple-epoch measurements. In fact, the various ambiguity resolution (AR) solutions differ in their floating ambiguity estimation and integer ambiguity search processes, but they remain theoretically equivalent under the same observational system models and statistical assumptions. TCAR performance benefits outlined in the data analyses of recent literature are reviewed, showing profound implications for future GNSS development from both technology and application perspectives.
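Of the integer solutions mentioned, geometry-free rounding is the simplest: each float ambiguity estimate is fixed independently to the nearest integer, whereas LAMBDA searches integer candidates using the full float covariance matrix. A minimal sketch with placeholder values:

```python
import numpy as np

# Minimal sketch of the simplest integer ambiguity resolution mentioned in
# the abstract: geometry-free rounding, where each float carrier phase
# ambiguity estimate is fixed independently to the nearest integer.
# The float values below are placeholders, not real GNSS data.

float_ambiguities = np.array([5.12, -3.94, 12.48])   # cycles (hypothetical)
fixed = np.rint(float_ambiguities).astype(int)        # round to nearest integer
print(fixed)                                          # -> [ 5 -4 12]
```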
Abstract:
This thesis addresses computational challenges arising from Bayesian analysis of complex real-world problems. Many of the models and algorithms designed for such analysis are 'hybrid' in nature, in that they are a composition of components whose individual properties may be easily described, while the performance of the model or algorithm as a whole is less well understood. The aim of this research project is to offer a better understanding of the performance of hybrid models and algorithms. The goal of this thesis is to analyse the computational aspects of hybrid models and hybrid algorithms in the Bayesian context. The first objective of the research focuses on computational aspects of hybrid models, notably a continuous finite mixture of t-distributions. In the mixture model, an inference of interest is the number of components, as this may relate to both the quality of model fit to data and the computational workload. The analysis of t-mixtures using Markov chain Monte Carlo (MCMC) is described, and the model is compared to the Normal case based on goodness of fit. Through simulation studies, it is demonstrated that the t-mixture model can be more flexible and more parsimonious in terms of the number of components, particularly for skewed and heavy-tailed data. The study also reveals important computational issues associated with the use of t-mixtures, which have not been adequately considered in the literature. The second objective of the research focuses on computational aspects of hybrid algorithms for Bayesian analysis. Two approaches are considered: a formal comparison of the performance of a range of hybrid algorithms, and a theoretical investigation of the performance of one of these algorithms in high dimensions. For the first approach, the delayed rejection algorithm, the pinball sampler, the Metropolis adjusted Langevin algorithm, and the hybrid version of the population Monte Carlo (PMC) algorithm are selected as a set of examples of hybrid algorithms. The statistical literature shows that statistical efficiency is often the only criterion for an efficient algorithm. In this thesis the algorithms are also considered and compared from a more practical perspective. This extends to the study of how individual algorithms contribute to the overall efficiency of hybrid algorithms, and highlights weaknesses that may be introduced by the process of combining these components in a single algorithm. The second approach to considering computational aspects of hybrid algorithms involves an investigation of the performance of the PMC in high dimensions. It is well known that as a model becomes more complex, computation may become increasingly difficult in real time. In particular, importance sampling based algorithms, including the PMC, are known to be unstable in high dimensions. This thesis examines the PMC algorithm in a simplified setting, a single step of the general sampler, and explores a fundamental problem that occurs in applying importance sampling to a high-dimensional problem. The precision of the computed estimate from the simplified setting is measured by the asymptotic variance of the estimate under conditions on the importance function. Additionally, the exponential growth of the asymptotic variance with the dimension is demonstrated, and we illustrate that the optimal covariance matrix for the importance function can be estimated in a special case.
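The high-dimensional instability of importance sampling noted above is easy to reproduce numerically: for a standard normal target and a slightly wider normal proposal, the variance of the importance weights grows exponentially with dimension. A minimal sketch (the target/proposal pair is chosen for illustration, not taken from the thesis):

```python
import numpy as np

# Variance of importance weights for a N(0, I_d) target and a N(0, s^2 I_d)
# proposal grows exponentially with dimension d: per dimension the second
# moment of the weights is s^2 / sqrt(2 s^2 - 1) > 1, raised to the power d.
# The target/proposal pair is chosen for illustration only.

rng = np.random.default_rng(1)
s = 1.5          # proposal standard deviation per coordinate
n = 100_000      # number of importance samples

for d in (1, 5, 10, 20, 40):
    x = rng.normal(0.0, s, size=(n, d))            # draws from the proposal
    # log importance weight: log N(x; 0, I) - log N(x; 0, s^2 I)
    log_w = -0.5 * (x**2).sum(axis=1) * (1.0 - 1.0 / s**2) + d * np.log(s)
    w = np.exp(log_w)
    # variance estimates themselves become noisy as d grows -- itself a
    # symptom of the weight degeneracy being demonstrated
    print(f"d={d:3d}  mean(w)={w.mean():.3f}  var(w)={w.var():.3e}")
```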
Abstract:
In the exclusion-process literature, mean-field models are often derived by assuming that the occupancy status of lattice sites is independent. Although this assumption is questionable, it is the foundation of many mean-field models. In this work we develop methods to relax the independence assumption for a range of discrete exclusion process-based mechanisms motivated by applications from cell biology. Previous investigations that focussed on relaxing the independence assumption have been limited to studying initially uniform populations and have ignored spatial variations. By ignoring spatial variations, these previous studies were greatly simplified by the translational invariance of the lattice. Such corrected mean-field models could not be applied to many important problems in cell biology, such as invasion waves of cells, which are characterised by moving fronts. Here we propose generalised methods that relax the independence assumption for spatially inhomogeneous problems, leading to corrected mean-field descriptions of a range of exclusion process-based models that incorporate (i) unbiased motility, (ii) biased motility, and (iii) unbiased motility with agent birth and death processes. The corrected mean-field models derived here are applicable to spatially variable processes, including invasion wave type problems. We show that there can be large deviations between simulation data and traditional mean-field models based on invoking the independence assumption. Furthermore, we show that the corrected mean-field models give an improved match to the simulation data in all cases considered.
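For readers unfamiliar with the underlying stochastic model, the sketch below simulates the simplest ingredient discussed above: a one-dimensional exclusion process with unbiased motility, whose ensemble-averaged occupancy is the quantity that mean-field models approximate. Lattice size, time steps and the initial condition are illustrative:

```python
import numpy as np

# Minimal sketch of a 1D simple exclusion process: agents hop to a randomly
# chosen nearest-neighbour site, and the move is aborted if the target site
# is occupied. Averaging many realisations gives occupancy data of the kind
# compared against mean-field descriptions in the abstract.

rng = np.random.default_rng(2)
L, steps, reps = 200, 200, 50

density = np.zeros(L)
for _ in range(reps):
    occ = np.zeros(L, dtype=bool)
    occ[80:120] = True                      # initially occupied block
    for _ in range(steps):
        agents = np.flatnonzero(occ)
        rng.shuffle(agents)                 # random sequential update
        for site in agents:
            target = site + rng.choice((-1, 1))
            if 0 <= target < L and not occ[target]:   # exclusion rule
                occ[site], occ[target] = False, True
    density += occ
density /= reps
print("average occupancy at sites 60..140:", density[60:141:20].round(2))
```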
Abstract:
Recent fire research into the behaviour of light gauge steel frame (LSF) wall systems has developed fire design rules based on Australian and European cold-formed steel design standards, AS/NZS 4600 and Eurocode 3 Part 1.3. However, these design rules are complex since the LSF wall studs are subjected to non-uniform elevated temperature distributions when the walls are exposed to fire from one side. Therefore this paper proposes an alternative design method for routine predictions of the fire resistance rating of LSF walls. In this method, suitable equations are first recommended to predict the idealised stud time-temperature profiles of eight different LSF wall configurations subject to standard fire conditions, based on full scale fire test results. A new set of equations was then proposed to find the critical hot flange (failure) temperature for a given load ratio for the same LSF wall configurations with varying steel grades and thicknesses. These equations were developed based on detailed finite element analyses that predicted the axial compression capacities and failure times of LSF wall studs subject to non-uniform temperature distributions with varying steel grades and thicknesses. This paper proposes a simple design method in which the two sets of equations developed for time-temperature profiles and critical hot flange temperatures are used to find the failure times of LSF walls. The proposed method was verified by comparing its predictions with the results from full scale fire tests and finite element analyses. This paper presents the details of this study, including the finite element models of LSF wall studs, the results from relevant fire tests and finite element analyses, and the proposed equations.
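Structurally, the proposed method reduces to finding the time at which the stud's hot flange time-temperature profile reaches the critical hot flange temperature for the given load ratio. The sketch below shows only that structure; both functions are hypothetical placeholders, since the paper's fitted equations are not reproduced in the abstract:

```python
import numpy as np

# Sketch of the design method's structure: given (i) an idealised hot flange
# time-temperature profile T_hf(t) for a wall configuration and (ii) the
# critical hot flange temperature T_cr(LR) for a load ratio LR, the failure
# time is when T_hf(t) first reaches T_cr(LR). Both functions below are
# hypothetical placeholders, not the paper's fitted equations.

def T_hf(t_min):
    """Hypothetical hot flange temperature (deg C) vs time (minutes)."""
    return 20.0 + 600.0 * (1.0 - np.exp(-t_min / 40.0))

def T_cr(load_ratio):
    """Hypothetical critical hot flange temperature (deg C) vs load ratio."""
    return 700.0 - 400.0 * load_ratio

lr = 0.4
t = np.linspace(0.0, 240.0, 100_000)            # minutes
failure = t[np.argmax(T_hf(t) >= T_cr(lr))]     # first time T_hf >= T_cr
print(f"load ratio {lr}: failure time ~ {failure:.0f} min")
```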
Abstract:
Many model-based investigation techniques, such as sensitivity analysis, optimization, and statistical inference, require a large number of model evaluations to be performed at different input and/or parameter values. This limits the application of these techniques to models that can be implemented in computationally efficient computer codes. Emulators, by providing efficient interpolation between outputs of deterministic simulation models, can considerably extend the field of applicability of such computationally demanding techniques. So far, the dominant techniques for developing emulators have been based on priors in the form of Gaussian stochastic processes (GASP) conditioned on a design data set of inputs and corresponding model outputs. In the context of dynamic models, this approach has two essential disadvantages: (i) these emulators do not consider our knowledge of the structure of the model, and (ii) they run into numerical difficulties if there are a large number of closely spaced input points, as is often the case in the time dimension of dynamic models. To address both of these problems, a new concept for developing emulators for dynamic models is proposed. This concept is based on a prior that combines a simplified linear state space model of the temporal evolution of the dynamic model with Gaussian stochastic processes for the innovation terms as functions of model parameters and/or inputs. These innovation terms are intended to correct the error of the linear model at each output step. Conditioning this prior on the design data set is done by Kalman smoothing. This leads to an efficient emulator that, because it incorporates our knowledge of the dominant mechanisms built into the simulation model, can be expected to outperform purely statistical emulators, at least in cases in which the design data set is small. The feasibility and potential difficulties of the proposed approach are demonstrated by application to a simple hydrological model.
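The computational backbone of such an emulator is conditioning a linear-Gaussian state space model on data via Kalman filtering and smoothing. The sketch below shows that backbone only: the innovation terms are plain Gaussian noise rather than Gaussian processes in the parameters/inputs, and all matrices and data are illustrative placeholders:

```python
import numpy as np

# Linear-Gaussian state space model x_{t+1} = A x_t + w_t, y_t = H x_t + v_t,
# conditioned on data by a Kalman filter plus Rauch-Tung-Striebel smoother.
# All matrices and observations are illustrative placeholders.

rng = np.random.default_rng(3)
A, H = np.array([[0.95]]), np.array([[1.0]])
Q, R = np.array([[0.05]]), np.array([[0.2]])
T = 50
y = np.cumsum(rng.normal(0, 0.3, T))[:, None]      # toy observations

# forward Kalman filter
m, P = np.zeros(1), np.eye(1)
ms, Ps, mp, Pp = [], [], [], []
for t in range(T):
    m_pred, P_pred = A @ m, A @ P @ A.T + Q        # predict
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)            # Kalman gain
    m = m_pred + K @ (y[t] - H @ m_pred)           # update mean
    P = (np.eye(1) - K @ H) @ P_pred               # update covariance
    ms.append(m); Ps.append(P); mp.append(m_pred); Pp.append(P_pred)

# backward Rauch-Tung-Striebel smoothing pass
m_s, P_s = ms[-1], Ps[-1]
smoothed = [m_s]
for t in range(T - 2, -1, -1):
    G = Ps[t] @ A.T @ np.linalg.inv(Pp[t + 1])     # smoother gain
    m_s = ms[t] + G @ (m_s - mp[t + 1])
    P_s = Ps[t] + G @ (P_s - Pp[t + 1]) @ G.T
    smoothed.append(m_s)
smoothed.reverse()
print("smoothed state estimate at t=0:", smoothed[0])
```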
Abstract:
The acceptance of broadband ultrasound attenuation (BUA) for the assessment of osteoporosis suffers from a limited understanding of both ultrasound wave propagation through cancellous bone and its exact dependence upon material and structural properties. It has recently been proposed that ultrasound wave propagation in cancellous bone may be described by a concept of parallel sonic rays, the transit time of each ray being defined by the proportions of bone and marrow through which it propagates. A transit time spectrum (TTS) describes the proportion of sonic rays having a particular transit time, effectively describing the lateral inhomogeneity of transit times over the surface aperture of the receive ultrasound transducer. The aim of this study was to test the hypothesis that the solid volume fraction (SVF) of simplified bone:marrow replica models may be reliably estimated from the corresponding ultrasound transit time spectrum. Transit time spectra were derived via digital deconvolution of the experimentally measured input and output ultrasonic signals, and compared to predicted TTS based on the parallel sonic ray concept, demonstrating agreement in both position and amplitude of the spectral peaks. Solid volume fraction was calculated from the TTS; agreement of the true (geometric calculation) values with the predicted (computer simulation) and experimentally derived values was R² = 99.9% and R² = 97.3% respectively. It is therefore envisaged that ultrasound transit time spectroscopy (UTTS) offers the potential to reliably estimate bone mineral density and hence the established T-score parameter for clinical osteoporosis assessment.
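The deconvolution step can be illustrated compactly: if the received signal is modelled as the input pulse convolved with the TTS, dividing their spectra (with a small regularisation term to limit noise amplification) recovers the TTS. The sketch below uses synthetic signals; the sampling rate, pulse shape and regularisation choice are illustrative, not the study's:

```python
import numpy as np

# Sketch of deriving a transit time spectrum by digital deconvolution: the
# received signal is modelled as the input pulse convolved with the TTS, so
# spectral division (with Tikhonov-style regularisation) recovers the TTS.
# Signals below are synthetic placeholders, not experimental data.

fs = 50e6                                   # sampling rate, Hz (hypothetical)
n = 1024
t = np.arange(n) / fs
pulse = np.sin(2 * np.pi * 2e6 * t) * np.exp(-((t - 2e-6) ** 2) / (0.5e-6) ** 2)

tts_true = np.zeros(n)                      # two ray populations (toy TTS)
tts_true[100], tts_true[140] = 0.7, 0.3
received = np.convolve(pulse, tts_true)[:n] # forward model: convolution

P, Y = np.fft.rfft(pulse), np.fft.rfft(received)
eps = 1e-3 * np.max(np.abs(P)) ** 2         # regularisation parameter
tts_est = np.fft.irfft(Y * np.conj(P) / (np.abs(P) ** 2 + eps), n)

# local maxima appear at the true ray transit times; amplitudes are
# band-limited versions of the true 0.7 and 0.3 weights
print("TTS estimate at true peaks:", tts_est[100].round(3), tts_est[140].round(3))
print("away from peaks:", tts_est[300].round(3))
```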
Abstract:
Diffusive transport is a universal phenomenon throughout both the biological and physical sciences, and models of diffusion are routinely used to interrogate diffusion-driven processes. However, most models neglect the role of volume exclusion, which can significantly alter diffusive transport, particularly within biological systems where the diffusing particles might occupy a significant fraction of the available space. In this work we use a random walk approach to provide a means of reconciling models that incorporate crowding effects on different spatial scales. Our work demonstrates that coarse-grained models incorporating simplified descriptions of excluded volume can be used in many circumstances, but that care must be taken not to push the coarse-graining process too far.
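The basic effect of volume exclusion on diffusive transport can be seen in a few lines: on a crowded lattice, a tagged random walker's mean squared displacement falls well below its uncrowded value. A minimal sketch (densities and run lengths are illustrative):

```python
import numpy as np

# Volume exclusion slowing diffusive transport: mean squared displacement
# (MSD) of tagged random walkers on a periodic 1D lattice, at low and high
# occupancy. Densities and run lengths are illustrative.

rng = np.random.default_rng(4)
L, steps = 400, 400

def tagged_msd(occupied_fraction):
    n = int(L * occupied_fraction)
    pos = rng.choice(L, size=n, replace=False)     # distinct initial sites
    occ = np.zeros(L, dtype=bool); occ[pos] = True
    disp = np.zeros(n)                             # net displacement per agent
    for _ in range(steps):
        for i in rng.permutation(n):
            step = rng.choice((-1, 1))
            target = (pos[i] + step) % L
            if not occ[target]:                    # exclusion rule
                occ[pos[i]], occ[target] = False, True
                pos[i] = target
                disp[i] += step
    return (disp ** 2).mean()

for rho in (0.05, 0.5):
    print(f"density {rho}: tagged-particle MSD = {tagged_msd(rho):.1f}")
```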