691 results for analytical modelling
at Queensland University of Technology - ePrints Archive
Abstract:
The drying of fruit and vegetables is a subject of great importance. Dried fruit and vegetables have gained commercial importance, and their production on a commercial scale has become an important sector of the agricultural industry. However, food drying is one of the most energy-intensive of the major industrial processes, accounting for up to 15% of all industrial energy usage. Due to increasingly high electricity prices and environmental concerns, a dryer using traditional energy sources is no longer a feasible option, and an alternative/renewable energy source is needed. In this regard, an integrated solar drying system has been designed and constructed, comprising a highly efficient double-pass counter-flow v-groove solar collector, conical-shaped rock-bed thermal storage, an auxiliary heater, a centrifugal fan and a drying chamber. Mathematical models for the individual components, as well as an integrated model combining all components of the drying system, have been developed, and the governing equations were solved in MATLAB. This paper presents the analytical model and the key findings of the simulation.
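As a rough illustration of the kind of component model such an integrated system combines, the sketch below evaluates a steady-state energy balance for a flat-plate air collector in the standard Hottel-Whillier-Bliss form. All parameter values are illustrative assumptions, not those of the constructed system, and Python stands in for the MATLAB implementation.

```python
import numpy as np

# Minimal sketch of one component model: a steady-state energy balance
# for a flat-plate air collector (Hottel-Whillier-Bliss form). All
# parameter values below are illustrative assumptions, not the paper's.
A = 2.0           # collector area, m^2
F_R = 0.8         # heat removal factor, -
tau_alpha = 0.85  # transmittance-absorptance product, -
U_L = 6.0         # overall loss coefficient, W/(m^2 K)
m_dot = 0.05      # air mass flow rate, kg/s
c_p = 1005.0      # specific heat of air, J/(kg K)

def collector_outlet_temp(I, T_in, T_amb):
    """Outlet air temperature for irradiance I (W/m^2) and inlet/ambient
    temperatures (deg C), from the useful heat gain Q_u."""
    Q_u = max(0.0, F_R * A * (I * tau_alpha - U_L * (T_in - T_amb)))
    return T_in + Q_u / (m_dot * c_p)

# Example: midday irradiance, 25 C ambient, recirculated 30 C inlet air.
print(f"T_out = {collector_outlet_temp(900.0, 30.0, 25.0):.1f} C")
```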
Abstract:
Masonry under compression is affected by the properties of its constituents and their interfaces. In spite of extensive investigations of the behaviour of masonry under compression, the information in the literature cannot be regarded as comprehensive due to the ongoing invention of new-generation products, for example, polymer-modified thin-layer mortared masonry and drystack masonry. As comprehensive experimental studies are very expensive, an analytical model inspired by damage mechanics is developed in this paper and applied to the prediction of the compressive behaviour of masonry. The model incorporates a parabolic, progressively softening stress-strain curve for the units and a progressively stiffening stress-strain curve, up to a threshold strain, for the combined mortar and unit-mortar interfaces. The model simulates the mutual constraints imposed by each of these constituents through their respective tensile and compressive behaviour and volumetric changes. The advantage of the model is that it requires only the properties of the constituents, treats masonry as a continuum, and computes the average properties of composite masonry prisms/wallettes; unlike finite element methods, it does not require discretisation of the prism or wallette. The capability of the model in capturing the phenomenological behaviour of masonry, with an appropriate elastic response, stiffness degradation and post-peak softening, is presented through numerical examples. The fitting of the experimental data to the model parameters is demonstrated through calibration against selected test data on units and mortar from the literature; the calibrated model is shown to predict the experimentally determined responses of masonry built using the corresponding units and mortar quite well. Through a series of sensitivity studies, the model is also shown to predict the masonry strength appropriately for changes to the properties of the units and mortar, the mortar joint thickness, and the ratio of unit height to mortar joint thickness. The unit strength is shown to affect the masonry strength significantly. Although the mortar strength has only a marginal effect, reduction in mortar joint thickness is shown to have a profound effect on the masonry strength. The results obtained from the model are compared with the provisions of the Australian Masonry Structures Standard AS3700 (2011) and Eurocode 6.
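As a minimal sketch of the composite idea described above, the code below treats a prism as unit and mortar layers in series under a common compressive stress and computes the thickness-weighted average strain from a parabolic unit curve and a progressively stiffening mortar curve. The curve shapes and property values are illustrative assumptions, and the sketch omits the lateral-confinement interaction and post-peak softening that the damage-mechanics model captures.

```python
import numpy as np

# Sketch: masonry prism as unit and mortar layers in series under a common
# compressive stress; average strain is the thickness-weighted mean of the
# constituent strains. Curve shapes and values are illustrative assumptions.
f_u, eps_u0 = 20.0, 0.003   # unit strength (MPa) and strain at peak stress
a_m, b_m = 2.0, 300.0       # parameters of a stiffening mortar curve
h_u, t_m = 76.0, 10.0       # unit height and mortar joint thickness, mm

def eps_unit(sigma):
    # ascending branch of the parabola sigma = f_u*(2x - x^2), x = eps/eps_u0
    return eps_u0 * (1.0 - np.sqrt(1.0 - sigma / f_u))

def eps_mortar(sigma):
    # progressively stiffening curve sigma = a_m*(exp(b_m*eps) - 1), inverted
    return np.log(1.0 + sigma / a_m) / b_m

sigma = np.linspace(0.0, 0.99 * f_u, 50)
eps_avg = (h_u * eps_unit(sigma) + t_m * eps_mortar(sigma)) / (h_u + t_m)
print(f"initial composite modulus ~ {sigma[1] / eps_avg[1]:.0f} MPa")
```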
Abstract:
Capacity probability models of generating units are commonly used in many power system reliability studies at hierarchical level one (HLI). Analytical modelling of a generating system with many units, or with units having many derated states, can result in an extensive number of states in the capacity model. Limitations on the available memory and computational time of present computer facilities can pose difficulties for the assessment of such systems in many studies. A clustering procedure using the nearest centroid sorting method was previously applied to the IEEE-RTS load model, where it proved very effective in producing a highly similar model with substantially fewer states. This paper presents an extended application of the clustering method to the capacity probability representation. A series of sensitivity studies is illustrated using the IEEE-RTS generating system and load models. The loss of load expectation (LOLE) and loss of energy expectation (LOEE) are used as indicators to evaluate the application.
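A minimal sketch of the clustering idea, assuming a simple one-dimensional nearest-centroid (k-means-style) pass over a capacity outage table: states are merged into a few clusters whose capacities are probability-weighted centroids, so the condensed model preserves total probability and expected capacity within each cluster. The table and cluster count are illustrative, not the IEEE-RTS data.

```python
import numpy as np

# Sketch: condense a capacity outage probability table by nearest-centroid
# clustering on capacity. The state table and cluster count are illustrative.
def cluster_capacity_states(capacity, prob, k, iters=50):
    centroids = np.linspace(capacity.min(), capacity.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(capacity[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            m = labels == j
            if m.any():  # probability-weighted centroid preserves expected capacity
                centroids[j] = np.average(capacity[m], weights=prob[m])
    labels = np.argmin(np.abs(capacity[:, None] - centroids[None, :]), axis=1)
    merged_prob = np.array([prob[labels == j].sum() for j in range(k)])
    return centroids, merged_prob

capacity = np.array([0., 100, 200, 300, 400, 500, 600, 700, 800])   # MW
prob = np.array([.001, .004, .01, .05, .1, .2, .3, .235, .1])
c, p = cluster_capacity_states(capacity, prob, k=4)
print(np.round(c, 1), np.round(p, 3), p.sum())   # total probability stays 1
```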
Abstract:
This thesis applies Monte Carlo techniques to the study of X-ray absorptiometric methods of bone mineral measurement. These studies seek to obtain information that can be used in efforts to improve the accuracy of bone mineral measurements. A Monte Carlo computer code for X-ray photon transport at diagnostic energies has been developed from first principles. This development was undertaken as there was no readily available code which included electron binding energy corrections for incoherent scattering, and one of the objectives of the project was to study the effects of including these corrections in Monte Carlo models. The code includes the main Monte Carlo program plus utilities for dealing with input data. A number of geometrical subroutines which can be used to construct complex geometries have also been written. The accuracy of the Monte Carlo code has been evaluated against the predictions of theory and the results of experiments. The results show a high correlation with theoretical predictions, and in comparisons of model results with those of direct experimental measurements, agreement to within the model and experimental variances is obtained. The code is an accurate and valid modelling tool.
A study of the significance of including electron binding energy corrections for incoherent scatter in the Monte Carlo code has been made. The results show this significance to be very dependent upon the type of application; the most significant effect is a reduction of low-angle scatter flux for high atomic number scatterers.
To effectively apply the Monte Carlo code to the study of bone mineral density measurement by photon absorptiometry, the results must be considered in the context of a theoretical framework for the extraction of energy-dependent information from planar X-ray beams. Such a theoretical framework is developed, and the two-dimensional nature of tissue decomposition based on attenuation measurements alone is explained. This framework forms the basis for analytical models of bone mineral measurement by dual energy X-ray photon absorptiometry techniques.
Monte Carlo models of dual energy X-ray absorptiometry (DEXA) have been established and used to study the contribution of scattered radiation to the measurements. It has been demonstrated that the measurement geometry has a significant effect upon the scatter contribution to the detected signal; for the geometry of the models studied in this work, the scatter has no significant effect upon the results of the measurements. The model has also been used to study a proposed technique which involves dual energy X-ray transmission measurements plus a linear measurement of the distance along the ray path, designated here as the DPA(+) technique. The addition of the linear measurement enables the tissue decomposition to be extended to three components; bone mineral, fat and lean soft tissue are the components considered here. The results of the model demonstrate that the measurement of bone mineral using this technique is stable over a wide range of soft tissue compositions, and hence indicate its potential to overcome a major problem of the two-component DEXA technique. However, the results also show that the accuracy of the DPA(+) technique is highly dependent upon the composition of the non-mineral components of bone, and that it has poorer precision (approximately twice the coefficient of variation) than standard DEXA measurements. These factors may limit the usefulness of the technique.
These studies illustrate the value of Monte Carlo computer modelling of quantitative X-ray measurement techniques. The Monte Carlo models of bone densitometry measurement have:
1. demonstrated the significant effects of the measurement geometry upon the contribution of scattered radiation to the measurements,
2. demonstrated that the statistical precision of the proposed DPA(+) three-tissue-component technique is poorer than that of the standard DEXA two-tissue-component technique,
3. demonstrated that the proposed DPA(+) technique has difficulty providing accurate simultaneous measurement of body composition in terms of a three-component model of fat, lean soft tissue and bone mineral, and
4. provided a knowledge base for input to decisions about development (or otherwise) of a physical prototype DPA(+) imaging system.
The Monte Carlo computer code, data, utilities and associated models represent a set of significant, accurate and valid modelling tools for quantitative studies of physical problems in the fields of diagnostic radiology and radiography.
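As a rough illustration of the two-component decomposition that underlies the DEXA models discussed above, the sketch below solves the 2x2 linear system relating log-attenuations at two photon energies to the areal densities of bone mineral and soft tissue. The attenuation coefficients and the simulated phantom are illustrative assumptions, not values from the thesis.

```python
import numpy as np

# Sketch of the two-component DEXA decomposition: measured log-attenuations
# at two photon energies give a 2x2 linear system in the areal densities of
# bone mineral and soft tissue. Coefficients and measurements are illustrative.
# Mass attenuation coefficients (cm^2/g); rows = energies, cols = materials.
mu = np.array([[0.60, 0.25],    # low energy:  [bone, soft tissue]
               [0.30, 0.20]])   # high energy: [bone, soft tissue]

def decompose(log_att_low, log_att_high):
    """Solve ln(I0/I)_E = mu_bone(E)*t_bone + mu_soft(E)*t_soft for the
    areal densities t (g/cm^2)."""
    return np.linalg.solve(mu, np.array([log_att_low, log_att_high]))

# Forward-simulate a phantom: 1.0 g/cm^2 bone over 20 g/cm^2 soft tissue.
truth = np.array([1.0, 20.0])
log_att = mu @ truth
print(decompose(*log_att))  # recovers [1.0, 20.0]
```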
Analytical modeling and sensitivity analysis for travel time estimation on signalized urban networks
Abstract:
This paper presents a model for estimating the average travel time and its variability on signalized urban networks using cumulative plots. The plots are generated based on the available data: a) case-D, detector data only; b) case-DS, detector data and signal timings; and c) case-DSS, detector data, signal timings and saturation flow rate. The performance of the model for different degrees of saturation and different detector detection intervals is consistent for case-DSS and case-DS, whereas for case-D it is inconsistent. The sensitivity analysis for case-D indicates that the model is sensitive to the detection interval and to the signal timings within the interval. When the detection interval is an integral multiple of the signal cycle, both accuracy and reliability are low; when the detection interval is around 1.5 times the signal cycle, both are high.
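A minimal sketch of the cumulative-plots principle the model builds on: for a given vehicle count, the travel time is the horizontal gap between the upstream and downstream cumulative count curves. The counts below are synthetic, and the sketch ignores the signal-timing corrections that distinguish the three cases.

```python
import numpy as np

# Sketch: average travel time from cumulative plots. For count level n,
# travel time = time the downstream cumulative count reaches n minus the
# time the upstream count reached n (horizontal gap). Counts are synthetic.
t = np.arange(0, 300, 5)                      # detector poll times, s
up = 0.4 * t                                  # upstream cumulative count
down = np.clip(0.4 * (t - 45.0), 0.0, None)   # downstream, lagged ~45 s

def avg_travel_time(t, n_up, n_down):
    n = np.linspace(1.0, min(n_up[-1], n_down[-1]), 200)  # sampled counts
    t_up = np.interp(n, n_up, t)      # when each count level passes upstream
    t_down = np.interp(n, n_down, t)  # ... and downstream
    return np.mean(t_down - t_up)

print(f"average travel time ~ {avg_travel_time(t, up, down):.1f} s")
```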
Abstract:
Popular wireless network standards, such as IEEE 802.11/15/16, are increasingly adopted in real-time control systems. However, they are not designed for real-time applications. Therefore, the performance of such wireless networks needs to be carefully evaluated before the systems are implemented and deployed. While efforts have been made to model general wireless networks with completely random traffic generation, there is a lack of theoretical investigation into the modelling of wireless networks with periodic real-time traffic. Considering the widely used IEEE 802.11 standard, with a focus on its distributed coordination function (DCF), for soft-real-time control applications, this paper develops an analytical Markov model to quantitatively evaluate the network quality-of-service (QoS) performance in periodic real-time traffic environments. The performance indices evaluated include throughput capacity, transmission delay and packet loss ratio, which are crucial for real-time QoS guarantees in real-time control applications. They are derived under the critical real-time traffic condition, which is formally defined in this paper to characterize the marginal satisfaction of real-time performance constraints.
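For background, the sketch below solves the classic Bianchi-style fixed point for DCF under saturated random traffic, relating the per-station transmission probability tau to the collision probability p. This is the standard model that periodic-traffic analyses extend, not the paper's Markov model; W, m and n are illustrative parameters.

```python
# Sketch: Bianchi-style fixed point for IEEE 802.11 DCF under saturated
# random traffic (background to the paper's periodic-traffic model, not
# the paper's model itself). W = minimum contention window, m = maximum
# backoff stages, n = number of contending stations.
def tau_of_p(p, W=32, m=5):
    # per-station transmission probability from the backoff Markov chain
    return (2 * (1 - 2 * p)) / ((1 - 2 * p) * (W + 1) + p * W * (1 - (2 * p) ** m))

def solve_fixed_point(n, W=32, m=5, iters=200):
    p = 0.1
    for _ in range(iters):  # damped iteration of p = 1 - (1 - tau)^(n-1)
        tau = tau_of_p(p, W, m)
        p = 0.5 * p + 0.5 * (1 - (1 - tau) ** (n - 1))
    return tau, p

for n in (5, 10, 20):
    tau, p = solve_fixed_point(n)
    print(f"n={n:2d}: tau={tau:.4f}, collision probability p={p:.4f}")
```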
Abstract:
We develop a new analytical solution for a reactive transport model that describes the steady-state distribution of oxygen subject to diffusive transport and nonlinear uptake in a sphere. This model was originally reported by Lin (Journal of Theoretical Biology, 1976, v60, pp. 449-457) to represent the distribution of oxygen inside a cell, and has since been studied extensively by both the numerical analysis and formal analysis communities. Here we extend these previous studies by deriving an analytical solution to a generalized reaction-diffusion equation that encompasses Lin's model as a particular case. We evaluate the solution for the parameter combinations presented by Lin and show that the new solutions are identical to a grid-independent numerical approximation.
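A minimal numerical companion, assuming the dimensionless form of Lin's equation (diffusion with Michaelis-Menten uptake in a sphere): the boundary value problem is solved with a standard collocation solver, the kind of grid-refined approximation an analytical solution is compared against. The Thiele modulus and Michaelis constant values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Sketch: dimensionless form of Lin's oxygen model -- diffusion with
# Michaelis-Menten uptake in a sphere:
#   c'' + (2/r) c' = phi^2 * c / (K + c),  c'(0) = 0,  c(1) = 1.
# phi^2 (Thiele modulus squared) and K are illustrative values; the r -> 0
# singularity is avoided by starting the mesh at a small radius.
phi2, K = 4.0, 0.5

def rhs(r, y):                   # y[0] = c, y[1] = c'
    return np.vstack((y[1], phi2 * y[0] / (K + y[0]) - 2.0 * y[1] / r))

def bc(ya, yb):
    return np.array([ya[1], yb[0] - 1.0])   # c'(r0) = 0, c(1) = 1

r = np.linspace(1e-3, 1.0, 100)
sol = solve_bvp(rhs, bc, r, np.ones((2, r.size)) * [[1.0], [0.0]])
print(f"centre concentration c(0) ~ {sol.sol(r[0])[0]:.4f}")
```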
Abstract:
Problems involving the solution of advection-diffusion-reaction equations on domains and subdomains whose growth affects and is affected by these equations commonly arise in developmental biology. Here, a mathematical framework for these situations, together with methods for obtaining spatio-temporal solutions and steady states of models built from this framework, is presented. The framework and methods are applied to a recently published model of epidermal skin substitutes. Despite the use of Eulerian schemes, excellent agreement is obtained between the numerical spatio-temporal, numerical steady state, and analytical solutions of the model.
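A minimal sketch of the framework's core idea in one dimension: mapping a uniformly growing domain [0, L(t)] onto fixed coordinates turns domain growth into a dilution term alongside scaled diffusion and the reaction (here logistic). The explicit Eulerian scheme and all parameter values are illustrative, not those of the skin-substitute model.

```python
import numpy as np

# Sketch: 1D reaction-diffusion on a uniformly growing domain [0, L(t)],
# mapped to the fixed coordinate xi = x/L(t). Uniform growth leaves scaled
# diffusion, a dilution term -(L'/L)u, and the reaction:
#   u_t = (D / L^2) u_xixi - (L'/L) u + r u (1 - u).
# Explicit Euler scheme and parameter values are illustrative only.
D, r, growth = 0.01, 1.0, 0.05          # diffusivity, reaction rate, L'/L
nx, dt, nt = 101, 1e-3, 20000
xi = np.linspace(0.0, 1.0, nx); dxi = xi[1] - xi[0]
u = np.exp(-50 * (xi - 0.5) ** 2)        # initial bump of concentration
L = 1.0
for _ in range(nt):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dxi ** 2
    lap[0] = 2 * (u[1] - u[0]) / dxi ** 2     # zero-flux boundaries
    lap[-1] = 2 * (u[-2] - u[-1]) / dxi ** 2
    u += dt * (D / L ** 2 * lap - growth * u + r * u * (1 - u))
    L *= 1.0 + growth * dt               # exponential domain growth
print(f"final domain length L = {L:.2f}, mean u = {u.mean():.3f}")
```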
Abstract:
The emergence of highly chloroquine (CQ) resistant P. vivax in Southeast Asia has created an urgent need for an improved understanding of the mechanisms of drug resistance in these parasites, the development of robust tools for defining the spread of resistance, and the discovery of new antimalarial agents. The ex vivo Schizont Maturation Test (SMT), originally developed for the study of P. falciparum, has been modified for P. vivax. We retrospectively analysed the results from 760 parasite isolates assessed by the modified SMT to investigate the relationship between parasite growth dynamics and parasite susceptibility to antimalarial drugs. Previous observations of the stage-specific activity of CQ against P. vivax were confirmed, and shown to have profound consequences for the interpretation of the assay. Using a nonlinear model, we show that an increased assay duration and a higher proportion of ring stages in the initial blood sample were associated with decreased half-maximal effective concentration (EC50) values of CQ, and we identify a threshold beyond which these associations no longer hold. Thus, the starting composition of parasites in the SMT and the duration of the assay can have a profound effect on the calculated EC50 for CQ. Our findings indicate that EC50 values do not truly reflect the sensitivity of the parasite to CQ when the assay duration is less than 34 hours, or when the proportion of ring-stage parasites at the start of the assay does not exceed 66%. Application of this threshold modelling approach suggests that similar issues may occur in susceptibility testing of amodiaquine and mefloquine. The statistical methodology that has been developed also provides a novel means of detecting stage-specific drug activity for new antimalarials.
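For context, the sketch below shows the standard nonlinear dose-response fit from which EC50 values are obtained: a log-logistic (Hill) curve fitted to growth-inhibition data. The data are synthetic and the fit is the conventional analysis, not the paper's threshold model.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch: EC50 estimation by nonlinear fit of a log-logistic (Hill)
# dose-response curve to growth-inhibition data. The data are synthetic;
# this is the standard fit, not the paper's threshold model.
def hill(conc, ec50, slope):
    return 1.0 / (1.0 + (conc / ec50) ** slope)   # fraction surviving

rng = np.random.default_rng(1)
conc = np.logspace(0, 3, 10)                      # drug concentration, nM
y = hill(conc, ec50=80.0, slope=1.5) + rng.normal(0, 0.03, conc.size)

(ec50_hat, slope_hat), _ = curve_fit(hill, conc, y, p0=[50.0, 1.0])
print(f"EC50 ~ {ec50_hat:.1f} nM, slope ~ {slope_hat:.2f}")
```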
Abstract:
This chapter presents the analytical solution of the two-dimensional linear stretching sheet problem involving a non-Newtonian liquid and suction by (a) invoking the boundary layer approximation and (b) using this result to solve the stretching sheet problem without the boundary layer approximation. The basic boundary layer equations for momentum, which are non-linear partial differential equations, are converted into non-linear ordinary differential equations by means of a similarity transformation. The results reveal a new analytical procedure for solving the boundary layer equations arising in a linear stretching sheet problem involving a non-Newtonian liquid (Walters' liquid B). The present study sheds light on the analytical solution of a class of boundary layer equations arising in the stretching sheet problem.
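As a numerical cross-check of the similarity approach in its classical Newtonian limit (Crane's flow), the sketch below solves the reduced ordinary differential equation by shooting and recovers the known exact solution f = 1 - exp(-eta). The chapter's viscoelastic (Walters' liquid B) and suction terms are not included.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Sketch: shooting solution of the Newtonian similarity equation for the
# linear stretching sheet (Crane's flow):
#   f''' + f f'' - (f')^2 = 0,  f(0) = 0, f'(0) = 1, f'(inf) = 0,
# whose exact solution f = 1 - exp(-eta) gives f''(0) = -1 as a check.
# The chapter's Walters' liquid B and suction terms are omitted here.
eta_max = 6.0   # truncation of the semi-infinite domain

def rhs(eta, y):            # y = [f, f', f'']
    return [y[1], y[2], y[1] ** 2 - y[0] * y[2]]

def residual(fpp0):         # mismatch in the far-field condition f' -> 0
    sol = solve_ivp(rhs, [0.0, eta_max], [0.0, 1.0, fpp0], rtol=1e-10)
    return sol.y[1, -1]

fpp0 = brentq(residual, -1.05, -0.95)   # bracket near the known value
print(f"shooting gives f''(0) = {fpp0:.4f} (exact value: -1)")
```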
Abstract:
Computational fluid dynamics, analytical solutions, and mathematical modelling approaches are used to gain insights into the distribution of fumigant gas within farm-scale grain storage silos. Both fan-forced and tablet fumigation are considered in this work, which develops new models for use by researchers, primary producers and silo manufacturers to assist in the eradication of grain storage pests.
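A minimal sketch of one ingredient of such models, assuming 1D vertical diffusion of fumigant through the grain bulk with first-order loss to sorption and leakage, and a tablet source held at the base. The geometry, coefficients and explicit scheme are illustrative only, not the paper's CFD model.

```python
import numpy as np

# Sketch: 1D vertical diffusion of fumigant through a grain bulk with
# first-order loss to sorption/leakage and a tablet source at the base:
#   c_t = D_eff * c_zz - k c,  zero flux at the grain surface.
# Geometry, coefficients and the explicit scheme are illustrative only.
D_eff, k = 3e-6, 1e-6       # effective diffusivity (m^2/s), loss rate (1/s)
H, nz = 4.0, 101            # grain depth (m), grid points
dz = H / (nz - 1)
dt = 0.4 * dz ** 2 / D_eff  # within the explicit stability limit
c = np.zeros(nz); c[0] = 1.0        # normalised concentration, source at base
t, t_end = 0.0, 14 * 24 * 3600.0    # fumigate for 14 days
while t < t_end:
    lap = np.zeros_like(c)
    lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dz ** 2
    lap[-1] = 2 * (c[-2] - c[-1]) / dz ** 2   # zero flux at the grain surface
    c += dt * (D_eff * lap - k * c)
    c[0] = 1.0              # tablet source held at constant concentration
    t += dt
print(f"concentration at the grain surface after 14 days: {c[-1]:.3f}")
```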
Abstract:
This project provides a stepping stone towards understanding the mechanisms that govern particulate fouling in metal foam heat exchangers. The method is based on the development of an advanced computational fluid dynamics (CFD) model together with analytical validation. This novel method allows an engineer to better optimize heat exchanger designs, thereby mitigating fouling, reducing the energy consumption caused by fouling, lowering capital expenditure on heat exchanger maintenance, and reducing operational downtime. The robust model leads to the establishment of an alternative heat exchanger configuration with a lower pressure drop and a reduced propensity for particulate deposition.
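As a small illustration of the pressure-drop comparison mentioned above, the sketch below evaluates the Darcy-Forchheimer relation commonly used for flow through metal foams. The permeability and inertial coefficient are illustrative assumptions, not fitted values from the project.

```python
import math

# Sketch: Darcy-Forchheimer pressure gradient for air flow through a metal
# foam core, the relation typically used to compare configurations:
#   dp/dx = (mu / K) u + rho * C_F / sqrt(K) * u^2.
# Permeability K and inertial coefficient C_F below are illustrative.
mu, rho = 1.8e-5, 1.2          # air viscosity (Pa s) and density (kg/m^3)
K, C_F = 1e-7, 0.1             # foam permeability (m^2), inertial coeff. (-)

def pressure_gradient(u):
    """Pressure gradient (Pa/m) at superficial velocity u (m/s)."""
    return mu / K * u + rho * C_F / math.sqrt(K) * u ** 2

for u in (0.5, 1.0, 2.0):
    print(f"u = {u:.1f} m/s -> dp/dx = {pressure_gradient(u):.0f} Pa/m")
```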