778 results for NUMERICAL EVALUATION
Abstract:
Protecting slow sand filters (SSFs) from high-turbidity waters by pretreatment using pebble matrix filtration (PMF) has previously been studied in the laboratory at University College London, followed by pilot field trials in Papua New Guinea and Serbia. The first full-scale PMF plant was completed at a water-treatment plant in Sri Lanka in 2008, and during its construction, problems were encountered in sourcing the required size of pebbles and sand as filter media. Because sourcing uniform-sized pebbles may be problematic in many countries, the performance of alternative media has been investigated to support the sustainability of the PMF system. Hand-formed clay balls made at a 100-year-old brick factory in the United Kingdom appear to have satisfied the role of pebbles, and a laboratory filter column was operated using these clay balls together with recycled crushed glass as an alternative to sand media in the PMF. Results showed that in countries where uniform-sized pebbles are difficult to obtain, clay balls are an effective and feasible alternative to natural pebbles. Recycled crushed glass also performed as well as or better than silica sand as an alternative fine medium in the clarification process, although cleaning by drainage was more effective with sand media. In the tested filtration velocity range of 0.72–1.33 m/h and inlet turbidity range of 78–589 NTU, both sand and glass produced removal efficiencies above 95%. Head loss development during clogging was about 30% higher in sand than in glass media.
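For reference, the removal efficiencies quoted above follow the standard definition based on inlet and outlet turbidity; a minimal Python sketch (the 400 NTU inlet and 18 NTU outlet readings are hypothetical values chosen to fall inside the reported ranges):

```python
def removal_efficiency(inlet_ntu: float, outlet_ntu: float) -> float:
    """Turbidity removal efficiency in percent: 100 * (1 - outlet/inlet)."""
    return 100.0 * (1.0 - outlet_ntu / inlet_ntu)

# Hypothetical reading within the reported inlet range (78-589 NTU):
# an inlet of 400 NTU reduced to 18 NTU gives 95.5% removal.
print(removal_efficiency(400.0, 18.0))  # 95.5
```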
Abstract:
Maximum-likelihood estimates of the parameters of stochastic differential equations are consistent and asymptotically efficient, but unfortunately difficult to obtain if a closed-form expression for the transitional probability density function of the process is not available. As a result, a large number of competing estimation procedures have been proposed. This article provides a critical evaluation of the various estimation techniques. Special attention is given to the ease of implementation and comparative performance of the procedures when estimating the parameters of the Cox–Ingersoll–Ross and Ornstein–Uhlenbeck equations.
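The Ornstein–Uhlenbeck equation is one of the few diffusions whose transition density is available in closed form, so exact maximum likelihood is feasible there; a minimal Python sketch under that assumption (the parameter values, starting guess, and Euler-simulated test path are illustrative only, not taken from the article):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def ou_negloglik(params, x, dt):
    """Exact negative log-likelihood of an Ornstein-Uhlenbeck process
    dX = kappa*(theta - X)*dt + sigma*dW, using its Gaussian transition
    density over a fixed observation step dt."""
    kappa, theta, sigma = params
    if kappa <= 0 or sigma <= 0:
        return np.inf
    e = np.exp(-kappa * dt)
    mean = theta + (x[:-1] - theta) * e           # conditional mean
    var = sigma**2 * (1 - e**2) / (2 * kappa)     # conditional variance
    return -norm.logpdf(x[1:], loc=mean, scale=np.sqrt(var)).sum()

# Hypothetical usage on an Euler-simulated path (true params 2.0, 0.5, 0.3):
rng = np.random.default_rng(0)
dt, n = 0.01, 5000
x = np.empty(n); x[0] = 0.5
for i in range(n - 1):
    x[i + 1] = x[i] + 2.0 * (0.5 - x[i]) * dt + 0.3 * np.sqrt(dt) * rng.standard_normal()

res = minimize(ou_negloglik, x0=[1.0, 0.0, 0.5], args=(x, dt), method="Nelder-Mead")
print(res.x)  # estimates of (kappa, theta, sigma)
```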
Abstract:
Fire safety design of building structures has received greater attention in recent times due to the continuing loss of property and lives in fires. However, the fire performance of light gauge cold-formed steel structures is not well understood despite their increased usage in buildings. Cold-formed steel compression members are susceptible to various buckling modes, such as local and distortional buckling, and their ultimate strength behaviour is governed by these modes. Therefore, a research project based on experimental and numerical studies was undertaken to investigate the distortional buckling behaviour of light gauge cold-formed steel compression members under simulated fire conditions. Lipped channel sections with and without additional lips were selected in three thicknesses (0.6, 0.8, and 0.95 mm) and in both low and high strength steels (G250 and G550). More than 150 compression tests were first undertaken at ambient and elevated temperatures. Finite element models of the tested compression members were then developed, including the degradation of mechanical properties with increasing temperature. Comparison of finite element analysis and experimental results showed that the developed models were capable of simulating the distortional buckling and strength behaviour at ambient and elevated temperatures up to 800 °C. The validated model was used to determine the effects of mechanical properties, geometric imperfections, and residual stresses on the distortional buckling behaviour and strength of cold-formed steel columns. This paper presents the details of the numerical study and its results, and demonstrates the importance of using accurate mechanical properties at elevated temperatures in order to obtain reliable strength characteristics of cold-formed steel columns under fire conditions.
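One common way such degradation enters a finite element model is through temperature-dependent reduction factors applied to the ambient mechanical properties; a minimal Python sketch of that idea (the reduction-factor table below is an invented placeholder, not the measured G250/G550 data from this study):

```python
import numpy as np

# Hypothetical reduction-factor table: temperature (deg C) vs the ratio of
# elevated-temperature to ambient yield strength. These values are
# placeholders for illustration, NOT the measured G250/G550 factors.
temps = np.array([20, 200, 400, 600, 800])
yield_factors = np.array([1.00, 0.90, 0.65, 0.30, 0.10])

def reduced_yield_strength(fy_ambient: float, temperature: float) -> float:
    """Linearly interpolate the reduction factor at the given temperature
    and apply it to the ambient yield strength, as an FE material model
    might do at each thermal load step."""
    return fy_ambient * np.interp(temperature, temps, yield_factors)

print(reduced_yield_strength(250.0, 500.0))  # G250-like steel at 500 deg C
```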
Abstract:
In this work a novel hybrid approach is presented that uses a combination of time domain and frequency domain solution strategies to predict the power distribution within a lossy medium loaded in a waveguide. The problem of determining the electromagnetic fields evolving within the waveguide and the lossy medium is decoupled into two components: one for computing the fields in the waveguide including a coarse representation of the medium (the exterior problem), and one for a detailed resolution of the lossy medium (the interior problem). A previously documented cell-centred Maxwell's equations solver can be used to resolve the exterior problem accurately in the time domain. Thereafter, the discrete Fourier transform can be applied to the computed field data around the interface of the medium to estimate the frequency domain boundary condition information needed for closure of the interior problem. Since only the electric fields are required to compute the power distribution generated within the lossy medium, the interior problem can be resolved efficiently using the Helmholtz equation. A consistent cell-centred finite-volume method is then used to discretise this equation on a fine mesh, and the underlying large, sparse, complex matrix system is solved for the required electric field using the Krylov-subspace-based GMRES iterative solver. It will be shown that the hybrid solution methodology works well when a single frequency is considered in the evaluation of the Helmholtz equation in a single mode waveguide. A restriction of the scheme is that the material needs to be sufficiently lossy, so that any waves penetrating the material are absorbed.
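Both coupling steps are straightforward to sketch: a single-bin discrete Fourier transform converts time-domain interface samples into a frequency-domain boundary phasor, and the interior Helmholtz system is complex, sparse, and well suited to GMRES. A minimal one-dimensional Python illustration follows (the frequency, permittivity, and boundary phasors are hypothetical, and a finite-difference discretisation stands in for the paper's cell-centred finite-volume scheme):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

def phasor_at(samples, dt, f0):
    """Single-bin DFT: complex phasor (amplitude and phase) of a sampled
    time-domain field at the drive frequency f0."""
    t = np.arange(len(samples)) * dt
    return 2.0 / len(samples) * np.sum(samples * np.exp(-2j * np.pi * f0 * t))

# 1D Helmholtz interior problem d2E/dx2 + k0^2*eps_r*E = 0 on a fine grid,
# with Dirichlet data taken from the time-domain solution at the interface.
n, L = 400, 0.1                        # interior grid points, slab length (m)
h = L / (n + 1)
f0 = 2.45e9                            # hypothetical drive frequency (Hz)
k0 = 2 * np.pi * f0 / 3e8              # free-space wavenumber
eps_r = 50.0 - 15.0j                   # hypothetical lossy permittivity
E_left, E_right = 1.0 + 0.0j, 0.0j     # boundary phasors from the DFT step

main = np.full(n, -2.0 / h**2 + k0**2 * eps_r, dtype=complex)
off = np.full(n - 1, 1.0 / h**2, dtype=complex)
A = diags([off, main, off], [-1, 0, 1], format="csr")
b = np.zeros(n, dtype=complex)
b[0] -= E_left / h**2                  # fold Dirichlet data into the RHS
b[-1] -= E_right / h**2

E, info = gmres(A, b)                  # Krylov-subspace GMRES solve
# Dissipated power density: 0.5 * omega * eps0 * eps'' * |E|^2
power = 0.5 * (2 * np.pi * f0) * 8.854e-12 * abs(eps_r.imag) * np.abs(E)**2
```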
Abstract:
Many traffic situations require drivers to cross or merge into a stream with higher priority. Gap acceptance theory enables such processes to be modelled in order to analyse traffic operation. This discussion demonstrates that a numerical search, fine-tuned by statistical analysis, can be used to determine the most likely critical gap for a sample of drivers, based on each driver's largest rejected gap and accepted gap. The method shares some common features with the Maximum Likelihood Estimation technique (Troutbeck 1992), but lends itself well to contemporary analysis tools such as spreadsheets and is particularly transparent analytically. The method is considered not to bias the critical gap estimate as a result of very small or very large rejected gaps. However, it requires a sample large enough to give reasonable representation of largest rejected gap/accepted gap pairs within a fairly narrow highest-likelihood search band.
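A minimal Python sketch of this kind of estimation, assuming critical gaps are lognormally distributed across drivers and using a plain grid search of the sort a spreadsheet could reproduce (the gap pairs and the distributional assumption are illustrative; the text describes the method only in outline):

```python
import numpy as np
from scipy.stats import lognorm

# Each pair is (largest rejected gap, accepted gap) in seconds for one
# driver; these values are hypothetical illustration data.
pairs = np.array([(3.1, 5.2), (2.4, 4.0), (4.5, 6.1), (1.8, 3.9), (3.6, 5.5)])

def loglik(mu, sigma):
    """Log-likelihood that each driver's critical gap lies between that
    driver's largest rejected gap and accepted gap, assuming lognormal
    critical gaps across the population."""
    dist = lognorm(s=sigma, scale=np.exp(mu))
    p = dist.cdf(pairs[:, 1]) - dist.cdf(pairs[:, 0])
    return np.sum(np.log(np.clip(p, 1e-12, None)))

# Plain grid search -- the kind of numerical search a spreadsheet can do.
mus = np.linspace(0.5, 2.0, 151)
sigmas = np.linspace(0.05, 0.8, 76)
ll = np.array([[loglik(m, s) for s in sigmas] for m in mus])
i, j = np.unravel_index(np.argmax(ll), ll.shape)
print("mean critical gap (s):", np.exp(mus[i] + sigmas[j]**2 / 2))
```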
Abstract:
Isoindoline nitroxides are potentially useful probes for viable biological systems, exhibiting low cytotoxicity, moderate rates of biological reduction, and favorable electron paramagnetic resonance (EPR) characteristics. We have evaluated the anionic (5-carboxy-1,1,3,3-tetramethylisoindolin-2-yloxyl; CTMIO), cationic (5-(N,N,N-trimethylammonio)-1,1,3,3-tetramethylisoindolin-2-yloxyl iodide; QATMIO), and neutral (1,1,3,3-tetramethylisoindolin-2-yloxyl; TMIO) nitroxides and their isotopically labeled analogs (²H₁₂- and/or ²H₁₂-¹⁵N-labeled) as potential EPR oximetry probes. An active ester analog of CTMIO, designed to localize intracellularly, and the azaphenalene nitroxide 1,1,3,3-tetramethyl-2,3-dihydro-2-azaphenalen-2-yloxyl (TMAO) were also studied. While the EPR spectra of the unlabeled nitroxides exhibit high sensitivity to O₂ concentration, deuteration resulted in a loss of superhyperfine features and a consequent reduction in O₂ sensitivity. Labeling the nitroxides with ¹⁵N increased the signal intensity, which may be useful in lowering the detection limits for in vivo measurements. The active ester nitroxide showed approximately 6% intracellular localization and low cytotoxicity. The EPR spectra of the TMAO nitroxide indicated increased rigidity of the nitroxide ring due to dibenzo-annulation.
Abstract:
Operation in urban environments creates unique challenges for research in autonomous ground vehicles. Due to the presence of tall trees and buildings in close proximity to traversable areas, GPS outage is likely to be frequent and physical hazards pose real threats to autonomous systems. In this paper, we describe a novel autonomous platform developed by the Sydney-Berkeley Driving Team for entry into the 2007 DARPA Urban Challenge competition. We report empirical results analyzing the performance of the vehicle while navigating a 560-meter test loop multiple times in an actual urban setting with severe GPS outage. We show that our system is robust against failure of global position estimates and can reliably traverse standard two-lane road networks using vision for localization. Finally, we discuss ongoing efforts in fusing vision data with other sensing modalities.