932 results for flow modelling
Abstract:
A scaling analysis is reported for the natural convection boundary layer adjacent to an inclined semi-infinite plate subject to non-instantaneous heating, in the form of an imposed wall temperature that increases linearly up to a prescribed steady value over a prescribed time. The development of the boundary layer flow from start-up to steady state has been described based on scaling analyses and verified by numerical simulations. The analysis reveals that, if the period of temperature growth on the wall is sufficiently long, the boundary layer reaches a quasi-steady mode before the growth of the temperature is completed. In this mode the thermal boundary layer at first grows in thickness and then contracts with increasing time. However, if the imposed wall temperature growth period is sufficiently short, the boundary layer develops differently, but after the wall temperature growth is completed, the boundary layer develops as though the start-up had been instantaneous. The steady-state values of the boundary layer are ultimately the same for both cases.
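The ramped start-up described above is easy to state concretely. Below is a minimal Python sketch of such a boundary condition, assuming an illustrative function name and parameters (ambient temperature, prescribed steady value and ramp time); none of these names or values come from the paper:

```python
import numpy as np

def wall_temperature(t, T_ambient, T_steady, t_ramp):
    """Illustrative ramp boundary condition: the wall temperature rises
    linearly from the ambient value to the prescribed steady value over
    the ramp time t_ramp, then remains constant."""
    ramp_fraction = np.minimum(t / t_ramp, 1.0)
    return T_ambient + (T_steady - T_ambient) * ramp_fraction

# Example: a 300 K wall heated to 320 K over a 100 s ramp
times = np.linspace(0.0, 200.0, 5)
print(wall_temperature(times, 300.0, 320.0, 100.0))
```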
Abstract:
From the user's point of view, handling information overload online is a major challenge, especially when the number of websites is growing rapidly due to growth in e-commerce and other related activities. Personalization based on user needs is the key to solving the problem of information overload. Personalization methods help in identifying relevant information that a user may like. User profiles and object profiles are the key elements of a personalization system. When creating user and object profiles, most existing methods adopt two-dimensional similarity methods based on vector or matrix models in order to find inter-user and inter-object similarity. Moreover, for recommending similar objects to users, personalization systems use users-users, items-items and users-items similarity measures. In most cases, similarity measures such as Euclidean, Manhattan, cosine and many others based on vector or matrix methods are used to find the similarities. Web logs are high-dimensional datasets consisting of multiple users and multiple searches, each with many attributes. Two-dimensional data analysis methods often overlook latent relationships that may exist between users and items. In contrast to other studies, this thesis utilises tensors, which are high-dimensional data models, to build user and object profiles and to find the inter-relationships between users-users and users-items. To create an improved personalized Web system, this thesis proposes to build three types of profiles: individual user, group user and object profiles, utilising decomposition factors of tensor data models. A hybrid recommendation approach utilising group profiles (forming the basis of a collaborative filtering method) and object profiles (forming the basis of a content-based method) in conjunction with individual user profiles (forming the basis of a model-based approach) is proposed for making effective recommendations. A tensor-based clustering method is proposed that utilises the outcomes of popular tensor decomposition techniques such as PARAFAC, Tucker and HOSVD to group similar instances. An individual user profile, showing the user's highest interest, is represented by the top dimension values extracted from the component matrix obtained after tensor decomposition. A group profile, showing similar users and their highest interest, is built by clustering similar users based on tensor-decomposed values. A group profile is represented by the top association rules (containing various unique object combinations) derived from the searches made by the users of the cluster. An object profile is created to represent similar objects clustered on the basis of their similarity of features. Depending on the category of a user (known, anonymous or frequent visitor to the website), any of the profiles, or a combination of them, is used for making personalized recommendations. A ranking algorithm is also proposed that utilises the personalized information to order and rank the recommendations. The proposed methodology is evaluated on data collected from a real-life car website. Empirical analysis confirms the effectiveness of recommendations made by the proposed approach over other collaborative filtering and content-based recommendation approaches based on two-dimensional data analysis methods.
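As a rough illustration of the tensor-based profiling idea, the sketch below decomposes a toy users x searches x attributes tensor with PARAFAC and clusters users on their factor values. The tensorly and scikit-learn calls, the tensor shape, the rank and the number of clusters are all assumptions made for this example; they are not the thesis's actual pipeline or data:

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac
from sklearn.cluster import KMeans

# Toy users x searches x attributes tensor standing in for Web-log data
rng = np.random.default_rng(0)
log_tensor = tl.tensor(rng.random((20, 15, 8)))

# CP (PARAFAC) decomposition; the rank is a modelling choice
weights, factors = parafac(log_tensor, rank=3)
user_factors = factors[0]          # one latent row per user

# Individual profile: the latent dimension of highest interest per user
top_dimension = np.argmax(user_factors, axis=1)

# Group profiles: cluster similar users on their decomposed factor values
groups = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(user_factors)
print(top_dimension[:5], groups[:5])
```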
Abstract:
Mixture models are a flexible tool for unsupervised clustering that have found popularity in a vast array of research areas. In studies of medicine, the use of mixtures holds the potential to greatly enhance our understanding of patient responses through the identification of clinically meaningful clusters that, given the complexity of many data sources, may otherwise be intangible. Furthermore, when developed in the Bayesian framework, mixture models provide a natural means for capturing and propagating uncertainty in different aspects of a clustering solution, arguably resulting in richer analyses of the population under study. This thesis aims to investigate the use of Bayesian mixture models in analysing varied and detailed sources of patient information collected in the study of complex disease. The first aim of this thesis is to showcase the flexibility of mixture models in modelling markedly different types of data. In particular, we examine three common variants of the mixture model, namely finite mixtures, Dirichlet Process mixtures and hidden Markov models. Beyond the development and application of these models to different sources of data, this thesis also focuses on modelling different aspects relating to uncertainty in clustering. Examples of clustering uncertainty considered are uncertainty in a patient’s true cluster membership and accounting for uncertainty in the true number of clusters present. Finally, this thesis aims to address and propose solutions to the task of comparing clustering solutions, whether this be comparing patients or observations assigned to different subgroups or comparing clustering solutions over multiple datasets. To address these aims, we consider a case study in Parkinson’s disease (PD), a complex and commonly diagnosed neurodegenerative disorder. In particular, two commonly collected sources of patient information are considered. The first source of data comprises symptoms associated with PD, recorded using the Unified Parkinson’s Disease Rating Scale (UPDRS), and constitutes the first half of this thesis. The second half of this thesis is dedicated to the analysis of microelectrode recordings collected during Deep Brain Stimulation (DBS), a popular palliative treatment for advanced PD. Analysis of this second source of data centres on the problems of unsupervised detection and sorting of action potentials or "spikes" in recordings of multiple cell activity, providing valuable information on real-time neural activity in the brain.
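As a rough sketch of the Dirichlet Process mixture variant mentioned above, the example below fits a truncated DP Gaussian mixture to toy symptom-score data using scikit-learn's variational implementation, so that the effective number of clusters and each patient's membership probabilities are inferred from the data. The data, dimensions and component cap are invented for illustration, and the thesis's own models need not take this form:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Toy stand-in for patient symptom scores (e.g. UPDRS-like items)
rng = np.random.default_rng(1)
scores = np.vstack([rng.normal(0.0, 1.0, (60, 4)),
                    rng.normal(3.0, 1.0, (40, 4))])

# Truncated Dirichlet-process mixture: the effective number of clusters
# is inferred rather than fixed in advance
dpm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(scores)

labels = dpm.predict(scores)            # hard cluster assignments
membership = dpm.predict_proba(scores)  # uncertainty in each patient's membership
print(np.unique(labels), membership[0].round(2))
```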
Abstract:
The natural convection thermal boundary layer adjacent to an inclined flat plate and the inclined walls of an attic space subject to instantaneous and ramp heating and cooling is investigated. A scaling analysis has been performed to describe the flow behaviour and heat transfer. Major scales quantifying the flow velocity, flow development time, heat transfer and the thermal and viscous boundary layer thicknesses at different stages of the flow development are established. Scaling relations for the heating-up and cooling-down times and heat transfer rates have also been reported for the case of the attic space. The scaling relations have been verified by numerical simulations over a wide range of parameters. A periodic temperature boundary condition is also considered to show the flow features in the attic space over diurnal cycles.
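For the diurnal-cycle case, the periodic boundary condition can be sketched as a simple sinusoid. The function name, mean temperature and amplitude below are illustrative assumptions, not values from the study:

```python
import numpy as np

def diurnal_wall_temperature(t_hours, mean_temp, amplitude, period_hours=24.0):
    """Illustrative periodic boundary condition: the imposed wall temperature
    oscillates sinusoidally about a mean value over a diurnal cycle."""
    return mean_temp + amplitude * np.sin(2.0 * np.pi * t_hours / period_hours)

# One diurnal cycle sampled every six hours (temperatures in K)
print(diurnal_wall_temperature(np.array([0.0, 6.0, 12.0, 18.0, 24.0]), 295.0, 10.0))
```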
Abstract:
This paper is concerned with investigating the existing and potential scope of Dublin Core metadata in Knowledge Management contexts. Modelling knowledge is identified as a conceptual prerequisite in this investigation, principally for the purpose of clarifying scope prior to identifying the range of tasks associated with organising knowledge. A variety of models is presented, and the relationships between data, information and knowledge are discussed. It is argued that the two most common modes of organisation, hierarchies and networks, influence the effectiveness and flow of knowledge. A practical perspective is provided by reference to implementations and projects that provide evidence of how DC metadata is applied in such contexts. A sense-making model is introduced that can be used as a shorthand reference for identifying useful facets of knowledge that might be described using metadata. The discussion is aimed at presenting this model in a way that both validates current applications and points to potential novel applications.
Abstract:
Open pit mine operations are complex businesses that demand a constant assessment of risk. This is because the value of a mine project is typically influenced by many underlying economic and physical uncertainties, such as metal prices, metal grades, costs, schedules, quantities and environmental issues, among others, which are not known with much certainty at the beginning of the project. Hence, mining projects present a considerable challenge to those involved in the associated investment decisions, such as the owners of the mine and other stakeholders. In general terms, when an option exists to acquire a new or operating mining project, the owners and stockholders of the mine project need to know the value of the mining project, which is the fundamental criterion for the final decision on whether to commit capital to the venture. However, obtaining the mine project’s value is not an easy task. The reason for this is that sophisticated valuation and mine optimisation techniques, which combine advanced theories in geostatistics, statistics, engineering, economics and finance, among others, need to be used by the mine analyst or mine planner in order to assess and quantify the existing uncertainty and, consequently, the risk involved in the project investment. Furthermore, current valuation and mine optimisation techniques do not complement each other. That is, valuation techniques based on real options (RO) analysis assume an expected (constant) metal grade and ore tonnage during a specified period, while mine optimisation (MO) techniques assume expected (constant) metal prices and mining costs. These assumptions are not entirely correct, since both sources of uncertainty, that of the orebody (metal grade and mineral reserves) and that of the future behaviour of metal prices and mining costs, have a great impact on the value of any mining project. Consequently, the key objective of this thesis is twofold. The first objective consists of analysing and understanding the main sources of uncertainty in an open pit mining project, such as the orebody (in situ metal grade), mining cost and metal price uncertainties, and their effect on the final project value. The second objective consists of breaking down the wall of isolation between economic valuation and mine optimisation techniques in order to generate a novel open pit mine evaluation framework called the "Integrated Valuation/Optimisation Framework" (IVOF). One important characteristic of this new framework is that it incorporates the RO and MO valuation techniques into a single integrated process that quantifies and describes uncertainty and risk in a mine project evaluation process, giving a more realistic estimate of the project’s value. To achieve this, novel and advanced engineering and econometric methods are used to integrate financial and geological uncertainty into dynamic risk forecasting measures. The proposed mine valuation/optimisation technique is then applied to a real disseminated gold open pit mine deposit to estimate its value in the face of orebody, mining cost and metal price uncertainties.
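To give a flavour of how price uncertainty and operating flexibility interact in a real-options-style valuation, here is a minimal Monte Carlo sketch: the metal price follows geometric Brownian motion, and the mine is assumed to operate in a given year only if the margin is positive. Every figure (price, drift, volatility, production, cash cost, discount rate) is hypothetical, and this is not the thesis's IVOF framework, which also integrates orebody uncertainty and mine optimisation:

```python
import numpy as np

def simulate_gold_price(p0, mu, sigma, years, steps, n_paths, seed=0):
    """Geometric Brownian motion paths for the metal price (illustrative only)."""
    rng = np.random.default_rng(seed)
    dt = years / steps
    shocks = rng.normal((mu - 0.5 * sigma**2) * dt, sigma * np.sqrt(dt),
                        (n_paths, steps))
    return p0 * np.exp(np.cumsum(shocks, axis=1))

# Hypothetical figures: price in $/oz, annual production, cash cost, discount rate
paths = simulate_gold_price(p0=1800.0, mu=0.02, sigma=0.15, years=10,
                            steps=10, n_paths=5000)
ounces_per_year, cash_cost, rate = 100_000, 1200.0, 0.08
discount = (1.0 + rate) ** -np.arange(1, 11)

# Operating flexibility: produce in a year only if the margin is positive
cash_flows = np.maximum(paths - cash_cost, 0.0) * ounces_per_year
value = (cash_flows * discount).sum(axis=1).mean()
print(f"Expected project value: ${value:,.0f}")
```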
Abstract:
This paper argues for a renewed focus on statistical reasoning in the beginning school years, with opportunities for children to engage in data modelling. Results are reported from the first year of a 3-year longitudinal study in which three classes of first-grade children (6-year-olds) and their teachers engaged in data modelling activities. The theme of Looking after our Environment, part of the children’s science curriculum, provided the task context. The goals for the two activities addressed here included engaging children in core components of data modelling, namely, selecting attributes, structuring and representing data, identifying variation in data, and making predictions from given data. Results include the various ways in which children represented and re-represented collected data, including attribute selection, and the metarepresentational competence they displayed in doing so. The “data lenses” through which the children dealt with informal inference (variation and prediction) are also reported.
Abstract:
For many people, a relatively large proportion of daily exposure to a multitude of pollutants may occur inside an automobile. A key determinant of exposure is the amount of outdoor air entering the cabin (i.e. the air change or flow rate). We have quantified this parameter in six passenger vehicles ranging in age from 18 years to <1 year, at three vehicle speeds and under four different ventilation settings. Average infiltration into the cabin with all operable air entry pathways closed was between 1 and 33.1 air changes per hour (ACH) at a vehicle speed of 60 km/h, and between 2.6 and 47.3 ACH at 110 km/h, with these results representing the most (2005 Volkswagen Golf) and least air-tight (1989 Mazda 121) vehicles, respectively. Average infiltration into stationary vehicles parked outdoors varied between ~0 and 1.4 ACH and was moderately related to wind speed. Measurements were also performed under an air recirculation setting with low fan speed, while airflow rate measurements were conducted under two non-recirculating ventilation settings with low and high fan speeds. The windows were closed in all cases, and over 200 measurements were performed. The results can be applied to estimate pollutant exposure inside vehicles.
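One common way such air change rates feed into exposure estimates is a well-mixed single-zone mass balance for the cabin. The sketch below is an illustrative assumption about how the reported ACH values might be applied, not the measurement or modelling method used in the study:

```python
import numpy as np

def cabin_concentration(c_in0, c_out, ach, minutes):
    """Well-mixed single-zone mass balance, dC/dt = ACH * (C_out - C_in),
    giving the in-cabin pollutant concentration over time."""
    t_hours = np.asarray(minutes, dtype=float) / 60.0
    return c_out + (c_in0 - c_out) * np.exp(-ach * t_hours)

# Example: a clean cabin (5 ug/m3) driving into polluted air (50 ug/m3) at 33.1 ACH
print(cabin_concentration(5.0, 50.0, ach=33.1, minutes=[0, 1, 5, 15]))
```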
Abstract:
Three-dimensional wagon train models have been developed for crashworthiness analysis using a multi-body dynamics approach. The contribution of train size (number of wagons) to the frontal crash forces can be identified through the simulations. The effects of crash energy management (CEM) design and crash speed on train crashworthiness are examined. The CEM design can significantly improve train crashworthiness and the consequent vehicle stability performance, reducing derailment risks.
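As a much-simplified stand-in for the three-dimensional multi-body models, the sketch below treats a wagon consist striking a rigid wall as point masses joined by linear couplers, so the effect of train size on the frontal crash force can be seen. All masses, stiffnesses and speeds are invented, and no CEM behaviour is modelled:

```python
import numpy as np

def peak_wall_force(n_wagons, mass, stiffness, v0, dt=1e-3, t_end=5.0):
    """1-D lumped-mass sketch of a wagon train hitting a rigid wall at x = 0:
    wagons are point masses joined by linear couplers of 20 m natural length."""
    x = 1.0 + np.arange(n_wagons, dtype=float) * 20.0   # initial wagon positions
    v = np.full(n_wagons, -float(v0))                   # all wagons move towards the wall
    peak = 0.0
    for _ in range(int(t_end / dt)):
        f = np.zeros(n_wagons)
        rel = np.diff(x) - 20.0                         # coupler extension (+) / compression (-)
        f[:-1] += stiffness * rel
        f[1:] -= stiffness * rel
        if x[0] < 0.0:                                  # leading wagon in contact with the wall
            wall_force = -10.0 * stiffness * x[0]
            f[0] += wall_force
            peak = max(peak, wall_force)
        v += f / mass * dt                              # semi-implicit Euler integration
        x += v * dt
    return peak

# Compare the peak wall force for 4- and 8-wagon consists (train-size effect)
for n in (4, 8):
    print(n, "wagons:", f"{peak_wall_force(n, mass=5e4, stiffness=2e6, v0=10.0):,.0f} N")
```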
Abstract:
The growth of solid tumours beyond a critical size is dependent upon angiogenesis, the formation of new blood vessels from an existing vasculature. Tumours may remain dormant at microscopic sizes for some years before switching to a mode in which growth of a supportive vasculature is initiated. The new blood vessels supply nutrients, oxygen, and access to routes by which tumour cells may travel to other sites within the host (metastasize). In recent decades an abundance of biological research has focused on tumour-induced angiogenesis in the hope that treatments targeted at the vasculature may result in a stabilisation or regression of the disease: a tantalising prospect. The complex and fascinating process of angiogenesis has also attracted the interest of researchers in the field of mathematical biology, a discipline that is, for mathematics, relatively new. The challenge in mathematical biology is to produce a model that captures the essential elements and critical dependencies of a biological system. Such a model may ultimately be used as a predictive tool. In this thesis we examine a number of aspects of tumour-induced angiogenesis, focusing on growth of the neovasculature external to the tumour. Firstly, we present a one-dimensional continuum model of tumour-induced angiogenesis in which elements of the immune system or other tumour-cytotoxins are delivered via the newly formed vessels. This model, based on observations from experiments by Judah Folkman et al., is able to show regression of the tumour for some parameter regimes. The modelling highlights a number of interesting aspects of the process that may be characterised further in the laboratory. The next model we present examines the initiation positions of blood vessel sprouts on an existing vessel, in a two-dimensional domain. This model hypothesises that a simple feedback inhibition mechanism may be used to describe the spacing of these sprouts, with the inhibitor being produced by breakdown of the existing vessel's basement membrane. Finally, we have developed a stochastic model of blood vessel growth and anastomosis in three dimensions. The model has been implemented in C++, includes an OpenGL interface, and uses a novel algorithm for calculating the proximity of the line segments representing a growing vessel. This choice of programming language and graphics interface allows for near-simultaneous calculation and visualisation of blood vessel networks using a contemporary personal computer. In addition, the visualised results may be transformed interactively, and drop-down menus facilitate changes in the parameter values. Visualisation of results is of vital importance in the communication of mathematical information to a wide audience, and we aim to incorporate this philosophy in the thesis. As biological research further uncovers the intriguing processes involved in tumour-induced angiogenesis, we conclude with a comment from the mathematical biologist Jim Murray: mathematical biology is "... the most exciting modern application of mathematics".
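The stochastic three-dimensional model relies on a proximity test between the line segments that represent growing vessels (for detecting anastomosis). As a point of reference, here is the standard closest-distance computation between two 3-D segments in a Python sketch; the thesis's own algorithm and C++ implementation are not reproduced here:

```python
import numpy as np

def segment_distance(p1, q1, p2, q2, eps=1e-12):
    """Minimum distance between the 3-D line segments p1-q1 and p2-q2
    (standard clamped closest-point computation)."""
    d1, d2 = q1 - p1, q2 - p2          # segment direction vectors
    r = p1 - p2
    a, e, f = d1 @ d1, d2 @ d2, d2 @ r
    if a <= eps and e <= eps:          # both segments degenerate to points
        return float(np.linalg.norm(r))
    if a <= eps:
        s, t = 0.0, np.clip(f / e, 0.0, 1.0)
    else:
        c = d1 @ r
        if e <= eps:
            t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
        else:
            b = d1 @ d2
            denom = a * e - b * b
            s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > eps else 0.0
            t = (b * s + f) / e
            if t < 0.0:
                t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
            elif t > 1.0:
                t, s = 1.0, np.clip((b - c) / a, 0.0, 1.0)
    closest1, closest2 = p1 + s * d1, p2 + t * d2
    return float(np.linalg.norm(closest1 - closest2))

# Two parallel vessel segments one unit apart
print(segment_distance(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                       np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 1.0])))
```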
Abstract:
LiteSteel beam (LSB) is a new cold-formed steel hollow flange channel beam produced using a patented manufacturing process involving simultaneous cold-forming and dual electric resistance welding. It has the beneficial characteristics of torsionally rigid closed rectangular flanges combined with economical fabrication processes from a single strip of high strength steel. Although LSB sections are commonly used as flexural members, no research has been undertaken on the shear behaviour of LSBs. Therefore, experimental and numerical studies were undertaken to investigate the shear behaviour and strength of LSBs. In this research, finite element models of LSBs were developed to investigate their nonlinear shear behaviour, including their buckling characteristics and ultimate shear strength. They were validated by comparing their results with available experimental results. The models provided full details of the shear buckling and strength characteristics of LSBs, and showed that LSBs exhibit considerably improved web shear buckling behaviour and associated post-buckling strength. This paper presents the details of the finite element models of LSBs and the results. Both the finite element analyses and the experimental results showed that the current design rules in cold-formed steel codes are very conservative for the shear design of LSBs. The ultimate shear capacities from the finite element analyses confirmed the accuracy of the proposed shear strength equations for LSBs based on the North American specification and the DSM design equations. The developed finite element models were also used to investigate the reduction in the shear capacity of LSBs when full-height web side plates were not used or when only one web side plate was used, and these results are also presented in this paper.
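For orientation, the DSM-style shear design equations referred to above take the following general form, expressed here as a small sketch. The slenderness limits of 0.815 and 1.227 are those of the Direct Strength Method shear rules without tension field action, but the capacities in the example are hypothetical, and this is a paraphrase rather than the paper's calibrated equations for LSBs:

```python
import math

def dsm_shear_capacity(v_y, v_cr):
    """Nominal shear capacity from the DSM-style shear equations (no tension
    field action): v_y is the shear yield capacity and v_cr the elastic shear
    buckling capacity of the web."""
    slenderness = math.sqrt(v_y / v_cr)
    if slenderness <= 0.815:
        return v_y
    if slenderness <= 1.227:
        return 0.815 * math.sqrt(v_cr * v_y)
    return v_cr

# Example with hypothetical capacities in kN
print(dsm_shear_capacity(v_y=120.0, v_cr=100.0))
```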
Abstract:
An improved scaling analysis and direct numerical simulations are performed for the unsteady natural convection boundary layer adjacent to a downward-facing inclined plate with uniform heat flux. The development of the thermal or viscous boundary layers may be classified into three distinct stages: a start-up stage, a transitional stage and a steady stage, which can be clearly identified in the analytical as well as the numerical results. Previous work shows that the existing scaling laws for the boundary layer thickness, velocity and steady-state time of the natural convection flow on a heated plate with uniform heat flux provide a very poor prediction of the Prandtl number dependency of the flow, although they capture the Rayleigh number and aspect ratio dependencies very well. In this study, a modified Prandtl number scaling is developed using a triple-layer integral approach for Pr > 1. In comparison with the direct numerical simulations, the modified scaling performs considerably better than the previous scaling.
Abstract:
Recently, an innovative composite panel system was developed in which a thin insulation layer was used externally between two plasterboards to improve the fire performance of light gauge cold-formed steel frame walls. In this research, finite-element thermal models of both the traditional light gauge cold-formed steel frame wall panels with cavity insulation and the new light gauge cold-formed steel frame composite wall panels were developed to simulate their thermal behaviour under standard and realistic fire conditions. Suitable apparent thermal properties of gypsum plasterboard, insulation materials and steel were proposed and used. The developed models were then validated by comparing their results with available fire test results. This article presents the details of the developed finite-element models of small-scale non-load-bearing light gauge cold-formed steel frame wall panels and the results of the thermal analysis. It has been shown that accurate finite-element models can be used to simulate the thermal behaviour of small-scale light gauge cold-formed steel frame walls with varying configurations of insulation and plasterboards. The numerical results show that the use of cavity insulation was detrimental to the fire rating of light gauge cold-formed steel frame walls, while the use of external insulation offered them superior thermal protection. The effects of real fire conditions are also presented.
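To indicate what a thermal model of a wall panel computes, here is a deliberately minimal explicit finite-difference sketch of one-dimensional transient conduction through a single homogeneous layer with fixed surface temperatures. A model of the kind described in the article would instead include several layers, temperature-dependent apparent properties and radiation/convection boundary conditions; the layer thickness and material properties below are hypothetical:

```python
import numpy as np

def wall_temperature_profile(thickness, conductivity, density, specific_heat,
                             fire_temp, ambient_temp, duration, nodes=51):
    """Explicit finite-difference solution of 1-D transient conduction through a
    single homogeneous layer, with the fire-side and ambient-side surface
    temperatures held fixed."""
    dx = thickness / (nodes - 1)
    alpha = conductivity / (density * specific_heat)   # thermal diffusivity
    dt = 0.4 * dx**2 / alpha                           # stable explicit time step
    temps = np.full(nodes, ambient_temp, dtype=float)
    temps[0] = fire_temp                               # fire-side surface
    for _ in range(int(duration / dt)):
        temps[1:-1] += alpha * dt / dx**2 * (temps[2:] - 2.0 * temps[1:-1] + temps[:-2])
    return temps

# Hypothetical 16 mm plasterboard layer after 30 minutes of exposure at 900 C
profile = wall_temperature_profile(0.016, 0.25, 800.0, 1100.0,
                                   fire_temp=900.0, ambient_temp=20.0,
                                   duration=1800.0)
print(f"Mid-thickness temperature: {profile[len(profile) // 2]:.0f} C")
```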
Abstract:
The complex interaction of the bones of the foot has been explored in detail in recent years, which has led to the acknowledgement in the biomechanics community that the foot can no longer be considered a single rigid segment. With the advance of motion analysis technology it has become possible to quantify the biomechanics of the simplified units or segments that make up the foot. Advances in technology, coupled with falling hardware prices, have resulted in the uptake of more advanced tools for clinical gait analysis. The increased use of these techniques in clinical practice requires defined standards for the modelling and reporting of foot and ankle kinematics. This systematic review aims to provide a critical appraisal of commonly used foot and ankle marker sets designed to assess kinematics, and thus to provide a theoretical background for the development of modelling standards.
Abstract:
The Web Service Business Process Execution Language (BPEL) lacks a standard graphical notation. Various efforts have been undertaken to visualize BPEL using the Business Process Modelling Notation (BPMN). Although this is straightforward for the majority of concepts, it is tricky for the full BPEL standard, partly due to the insufficiently specified BPMN execution semantics. The upcoming BPMN 2.0 revision will provide this clearly specified semantics. In this paper, we show how the dead path elimination (DPE) capabilities of BPEL can be expressed with this new semantics and discuss the limitations. We provide a generic formal definition of DPE and discuss the resulting control-flow requirements independently of specific process description languages.
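To make the DPE behaviour concrete, the toy sketch below skips any activity whose join condition (here simply the OR of its incoming link values) is false and propagates false along its outgoing links, so that downstream joins can still be evaluated. The activity and link names are invented, and this is only an informal illustration, not the paper's generic formal definition:

```python
from collections import defaultdict

def dead_path_elimination(activities, links, link_status):
    """Toy BPEL-style dead path elimination over an acyclic flow.
    activities are assumed to be listed in topological order;
    links are (source, target, name) triples; link_status maps link
    names to already-evaluated transition conditions."""
    incoming, outgoing = defaultdict(list), defaultdict(list)
    for source, target, name in links:
        incoming[target].append(name)
        outgoing[source].append(name)

    executed, skipped = [], []
    for activity in activities:
        inbound = incoming[activity]
        joins = any(link_status[name] for name in inbound) if inbound else True
        (executed if joins else skipped).append(activity)
        for name in outgoing[activity]:
            if not joins:
                link_status[name] = False           # DPE: propagate false downstream
            else:
                link_status.setdefault(name, True)  # default transition condition
    return executed, skipped

# A branches to B (condition true) and C (condition false); both feed D
activities = ["A", "B", "C", "D"]
links = [("A", "B", "a2b"), ("A", "C", "a2c"), ("B", "D", "b2d"), ("C", "D", "c2d")]
status = {"a2b": True, "a2c": False}
print(dead_path_elimination(activities, links, status))   # (['A', 'B', 'D'], ['C'])
```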