927 results for turbulence modelling theory
Abstract:
A system is something that can be separated from its surroundings, but this definition leaves much scope for refinement. Starting with the notion of measurement, we explore increasingly contextual system behaviour and identify three major forms of contextuality that a system might exhibit: (a) between components; (b) between system and experimental method; and (c) between a system and its environment. Quantum Theory is shown to provide a highly useful formalism in which all three forms of contextuality can be analysed, offering numerous tests for contextual behaviour as well as modelling possibilities for systems that do indeed display it. I conclude by introducing a Contextualised General Systems Theory based upon an extension of this formalism.
Abstract:
Quantum-like models can be fruitfully used to model attitude change in a social context. The next steps require data and higher-dimensional models. Here, we discuss an exploratory study that demonstrates an order effect when three question sets about Climate Beliefs, Political Affiliation and Attitudes Towards Science are presented in different orders within a larger study of n=533 subjects. A quantum-like model seems possible, and we propose a new experiment that could be used to discriminate between three possible models for this scenario.
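A minimal numpy sketch of the quantum-like mechanism behind such order effects: two non-commuting projectors applied in sequence give different joint probabilities depending on the order of the questions. The state vector and basis angle below are illustrative choices, not values fitted to the study's data.

```python
import numpy as np

# Illustrative 2-D belief state and two projectors for "yes" answers to
# questions A and B; B's measurement basis is rotated relative to A's,
# so the projectors do not commute. All numbers are hypothetical.
psi = np.array([np.cos(0.3), np.sin(0.3)])   # unit-norm state vector
P_A = np.array([[1.0, 0.0], [0.0, 0.0]])     # "yes to A" projector
v = np.array([np.cos(np.pi / 5), np.sin(np.pi / 5)])
P_B = np.outer(v, v)                         # "yes to B" projector

def sequential_prob(first, second, state):
    """P(yes to `first`, then yes to `second`) under sequential projection."""
    return np.linalg.norm(second @ (first @ state)) ** 2

print(sequential_prob(P_A, P_B, psi))  # ~0.597
print(sequential_prob(P_B, P_A, psi))  # ~0.586 -- order matters
```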
Abstract:
Building information models are increasingly being utilised for the facility management of large facilities such as critical infrastructures. In such environments, it is valuable to exploit the vast amount of data contained within building information models to improve access control administration. The use of building information models in access control scenarios provides 3D visualisation of buildings as well as many other advantages, such as the automation of essential tasks including path finding, consistency detection and accessibility verification. However, there is no mathematical model of building information models that can be used to describe and compute these functions. In this paper, we show how graph theory can be utilised as a representation language for building information models and the proposed security-related functions. This graph-theoretic representation allows building information models to be represented mathematically and computations to be performed using these functions.
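As a minimal sketch of the graph-theoretic idea (the floor plan below is hypothetical), spaces become vertices, doors become edges, and functions such as path finding and accessibility verification reduce to standard graph search:

```python
from collections import deque

# Hypothetical building: vertices are spaces, edges are doors.
building = {
    "lobby": ["corridor"],
    "corridor": ["lobby", "office", "server_room"],
    "office": ["corridor"],
    "server_room": ["corridor"],
}

def find_path(graph, start, goal):
    """Breadth-first search for the shortest space-to-space path."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # unreachable: fails accessibility verification

print(find_path(building, "lobby", "server_room"))
# ['lobby', 'corridor', 'server_room']
```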
Abstract:
Solid-extracellular fluid interaction is believed to play an important role in the strain-rate dependent mechanical behaviors of shoulder articular cartilage. The kangaroo shoulder joint is anatomically and biomechanically similar to the human shoulder joint and is readily available in Australia; kangaroo humeral head cartilage was therefore used as the tissue for this study. Indentation tests from quasi-static (10⁻⁴ s⁻¹) to moderately high strain rates (10⁻² s⁻¹) were conducted on kangaroo humeral head cartilage tissues to investigate the strain-rate dependent behaviors. A finite element (FE) model was then developed, in which cartilage was conceptualized as a porous solid matrix filled with incompressible fluid. In this model, the solid matrix was modeled as an isotropic hyperelastic material and the percolating fluid follows Darcy's law. Using an inverse FE procedure, the constitutive parameters related to stiffness, compressibility of the solid matrix and permeability were obtained from the experimental results. The effect of solid-extracellular fluid interaction and drag force (the resistance to fluid movement) on strain-rate dependent behavior was investigated by comparing the influence of constant, strain-dependent and strain-rate dependent permeability on the FE model prediction. The newly developed porohyperelastic cartilage model with strain-rate dependent permeability was found to predict the strain-rate dependent behaviors of cartilage well.
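As a rough sketch of the constitutive ingredients named above (the paper's exact forms may differ), biphasic cartilage models typically combine Darcy flow through the solid matrix with a deformation-dependent permeability, for example

\[
\mathbf{w} = -k\,\nabla p, \qquad k = k_0\,e^{M\varepsilon_v},
\]

where \(\mathbf{w}\) is the fluid flux relative to the solid, \(p\) the pore pressure, \(k_0\) the permeability of the undeformed matrix, \(\varepsilon_v\) the volumetric strain and \(M\) a material constant; the model described here additionally lets the permeability depend on the strain rate, \(k = k(\varepsilon_v, \dot{\varepsilon})\).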
Abstract:
This paper presents a novel framework for the modelling of passenger facilitation in a complex environment. The research is motivated by the challenges in the airport complex system, where there are multiple stakeholders, differing operational objectives and complex interactions and interdependencies between different parts of the airport system. Traditional methods for airport terminal modelling do not explicitly address the need for understanding causal relationships in a dynamic environment. Additionally, existing Bayesian Network (BN) models, which provide a means for capturing causal relationships, only present a static snapshot of a system. A method to integrate a BN complex systems model with stochastic queuing theory is developed based on the properties of the Poisson and exponential distributions. The resultant Hybrid Queue-based Bayesian Network (HQBN) framework enables the simulation of arbitrary factors, their relationships, and their effects on passenger flow and vice versa. A case study implementation of the framework is demonstrated on the inbound passenger facilitation process at Brisbane International Airport. The predicted outputs of the model, in terms of cumulative passenger flow at intermediary and end points in the inbound process, are found to have R² goodness-of-fit values of 0.9994 and 0.9982, respectively, over a 10 h test period. The utility of the framework is demonstrated on a number of usage scenarios including causal analysis and 'what-if' analysis. This framework provides the ability to analyse and simulate a dynamic complex system, and can be applied to other socio-technical systems such as hospitals.
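A minimal sketch of the queuing half of such a hybrid model, with a single binary causal factor standing in for a full Bayesian network (all rates and probabilities are illustrative): arrivals are Poisson and service times exponential, and the arrival rate is conditioned on the upstream factor.

```python
import random

def simulate_queue(horizon_min=600.0, seed=1):
    """Event-driven M/M/1 queue whose arrival rate depends on a BN-style factor."""
    random.seed(seed)
    flight_wave = random.random() < 0.5        # upstream node: P(wave) = 0.5
    lam = 4.0 if flight_wave else 1.5          # arrivals per minute
    mu = 3.0                                   # services per minute
    t, queue_len, served = 0.0, 0, 0
    next_arr = random.expovariate(lam)
    next_dep = float("inf")
    while t < horizon_min:
        if next_arr < next_dep:                # arrival event
            t = next_arr
            queue_len += 1
            next_arr = t + random.expovariate(lam)
            if queue_len == 1:
                next_dep = t + random.expovariate(mu)
        else:                                  # departure event
            t = next_dep
            queue_len -= 1
            served += 1
            next_dep = t + random.expovariate(mu) if queue_len else float("inf")
    return served, queue_len

print(simulate_queue())  # (passengers processed, still queued)
```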
Abstract:
Process modeling – the design and use of graphical documentation of an organization's business processes – is a key method for documenting and using information about the operations of businesses. Still, despite current interest in process modeling, this research area faces essential challenges. Key unanswered questions concern the impact of process modeling in organizational practice and the mechanisms through which impacts develop. To answer these questions and to provide a better understanding of process modeling impact, I turn to the concept of affordances. Affordances describe the possibilities for goal-oriented action that a technical object offers to a user. This notion has received growing attention from IS researchers. The purpose of my research is to further develop the IS discipline's understanding of affordances and impacts of information objects, such as process models used by analysts for information systems analysis and design. Specifically, I seek to extend existing theory on the emergence, perception and actualization of affordances. I develop a research model that describes the process by which affordances emerge between an individual and an object, how affordances are perceived, and how they are actualized by the individual. The proposed model also explains the role of the information available to the individual and the influence of perceived actualization effort. I operationalize and test this research model empirically, using a full-cycle, mixed-methods study consisting of a case study and an experiment.
Abstract:
Many nations are highlighting the need for a renaissance in the mathematical sciences as essential to the well-being of all citizens (e.g., Australian Academy of Science, 2006, 2010; The National Academies, 2009). Indeed, the first recommendation of The National Academies' Rising Above the Gathering Storm (2007) was to vastly improve K–12 science and mathematics education. The subsequent report, Rising Above the Gathering Storm Two Years Later (2009), highlighted again the need to target mathematics and science from the earliest years of schooling: "It takes years or decades to build the capability to have a society that depends on science and technology . . . You need to generate the scientists and engineers, starting in elementary and middle school" (p. 9). Such pleas reflect the rapidly changing nature of the problem solving and reasoning needed in today's world, beyond the classroom. As The National Academies (2009) reported, "Today the problems are more complex than they were in the 1950s, and more global. They'll require a new educated workforce, one that is more open, collaborative, and cross-disciplinary" (p. 19). The implications for the problem solving experiences we implement in schools are far-reaching. In this chapter, I consider problem solving and modelling in the primary school, beginning with the need to rethink the experiences we provide in the early years. I argue for a greater awareness of the learning potential of young children and the need to provide stimulating learning environments. I then focus on data modelling as a powerful means of advancing children's statistical reasoning abilities, which they increasingly need as they navigate their data-drenched world.
Abstract:
Organizational and technological systems analysis and design practices such as process modeling have received much attention in recent years. However, while knowledge about related artifacts such as models, tools, or grammars has substantially matured, little is known about the actual tasks and interaction activities conducted as part of analysis and design acts. In particular, the key role of the facilitator has not been researched extensively to date. In this paper, we propose a new conceptual framework that can be used to examine facilitation behaviors in process modeling projects. The framework distinguishes four behavioral styles of facilitation (the driving engineer, the driving artist, the catalyzing engineer, and the catalyzing artist) that a facilitator can adopt. To distinguish between the four styles, we provide a set of ten behavioral anchors that underpin facilitation behaviors. We also report on a preliminary empirical exploration of our framework through interviews with experienced analysts in six modeling cases. Our research provides a conceptual foundation for an emerging theory describing and explaining the different behaviors associated with process modeling facilitation, offers the first preliminary empirical results about facilitation in modeling projects, and provides a fertile basis for examining facilitation in other conceptual modeling activities.
Abstract:
In an estuary, mixing and dispersion result from a combination of large-scale advection and small-scale turbulence, which are complex to estimate. Predictions of scalar transport and mixing are often inferred and rarely accurate, owing to an inadequate understanding of the contributions of these different scales to estuarine recirculation. A multi-device field study was conducted in a small sub-tropical estuary under neap tide conditions with near-zero freshwater discharge for about 48 hours. During the study, acoustic Doppler velocimeters (ADVs) were sampled at high frequency (50 Hz), while an acoustic Doppler current profiler (ADCP) and global positioning system (GPS) tracked drifters were used to obtain lower-frequency spatial distributions of the flow parameters within the estuary. The velocity measurements were complemented with continuous measurements of water depth, conductivity, temperature and other physicochemical parameters. Thorough quality control was carried out by applying relevant error-removal filters to each data set to intercept spurious data. A triple decomposition (TD) technique was introduced to assess the contributions of tides, resonance and 'true' turbulence to the flow field. The time series of mean flow measurements for both the ADCP and the drifters were consistent with the mean ADV data when sampled within a similar spatial domain. The tidal-scale fluctuations of velocity and water level were used to examine the response of the estuary to tidal inertial currents. The channel exhibited a mixed-type wave with a typical phase lag between 0.035π and 0.116π. A striking feature of the ADV velocity data was the slow fluctuations, which exhibited large amplitudes of up to 50% of the tidal amplitude, particularly in slack waters. Such slow fluctuations were simultaneously observed in a number of physicochemical properties of the channel. The ensuing turbulence field showed some degree of anisotropy. For all ADV units, the horizontal turbulence ratio ranged between 0.4 and 0.9 and decreased towards the bed, while the vertical turbulence ratio was on average unity at z = 0.32 m and approximately 0.5 for the upper ADV (z = 0.55 m). The statistical analysis suggested that the ebb-phase turbulence field was dominated by eddies that evolved from ejection-type processes, while that of the flood phase contained mixed eddies with a significant proportion related to sweep-type processes. Over 65% of the skewness values fell within the range expected of a finite Gaussian distribution, and the bulk of the excess kurtosis values (over 70%) fell between -0.5 and +2. The TD technique described herein allowed the characterisation of a broader temporal scale of fluctuations in the high-frequency data sampled over a few tidal cycles. The study characterises the ranges of fluctuation required for accurate modelling of shallow-water dispersion and mixing in a sub-tropical estuary.
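A minimal numpy sketch of the triple decomposition idea on a synthetic record (the sampling rate, cutoff windows and amplitudes are illustrative, not the thesis's values): two moving-average filters split a velocity series into tidal, slow-fluctuation and residual turbulence components.

```python
import numpy as np

def moving_average(x, window):
    """Centred moving average via cumulative sums (O(n))."""
    c = np.cumsum(np.insert(x, 0, 0.0))
    ma = (c[window:] - c[:-window]) / window
    pad_l = (x.size - ma.size) // 2
    return np.pad(ma, (pad_l, x.size - ma.size - pad_l), mode="edge")

fs = 10.0                                           # Hz (illustrative)
t = np.arange(0, 2 * 3600, 1 / fs)                  # 2 h synthetic record
u = (0.30 * np.sin(2 * np.pi * t / (12.42 * 3600))  # semi-diurnal tide
     + 0.05 * np.sin(2 * np.pi * t / 600)           # slow fluctuation (~10 min)
     + 0.02 * np.random.default_rng(0).standard_normal(t.size))  # "turbulence"

tidal = moving_average(u, int(1800 * fs))           # > 30 min: tidal component
slow = moving_average(u, int(10 * fs)) - tidal      # 10 s - 30 min band
turb = u - tidal - slow                             # residual "true" turbulence
print(round(turb.std(), 3))                         # ~0.02, the injected noise level
```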
Abstract:
Searching for efficient solid sorbents for CO2 adsorption and separation is important for developing emergent carbon reduction and natural gas purification technologies. This work investigates, for the first time, the adsorption of CO2 on the newly experimentally realized cage-like B40 fullerene (Zhai et al., 2014) based on density functional theory calculations. We find that the adsorption of CO2 on B40 fullerene involves a relatively large energy barrier (1.21 eV); however, this can be greatly decreased to 0.35 eV by introducing an extra electron. A practical way to realize a negatively charged B40 fullerene is then proposed: encapsulating a Li atom inside the B40 cage (Li@B40). Li@B40 is found to be highly stable and significantly enhances both the thermodynamics and kinetics of CO2 adsorption, while the adsorption of N2, CH4 and H2 on the Li@B40 fullerene remains weak in comparison. Since B40 fullerene has been successfully synthesized in a recent experiment, our results highlight a promising new material for CO2 capture and separation awaiting future experimental validation.
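For context, the quantity behind these comparisons is the DFT adsorption energy; a standard convention (the paper's precise definition may differ in sign) is

\[
E_{\mathrm{ads}} = E_{\mathrm{CO_2@B_{40}}} - E_{\mathrm{B_{40}}} - E_{\mathrm{CO_2}},
\]

with the kinetic barrier taken as the energy of the transition state relative to the initial physisorbed configuration; it is this barrier that drops from 1.21 eV on the neutral cage to 0.35 eV once the extra electron (or the encapsulated Li donor) is introduced.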
Abstract:
Modelling fluvial processes is an effective way to reproduce basin evolution and to recreate riverbed morphology. However, due to the complexity of alluvial environments, deterministic modelling of fluvial processes is often impossible. To address the related uncertainties, we derive a stochastic fluvial process model on the basis of the convective Exner equation that uses the statistics (mean and variance) of river velocity as input parameters. These statistics allow for quantifying the uncertainty in riverbed topography, river discharge and the position of the river channel. To couple the velocity statistics and the fluvial process model, the perturbation method is employed with a non-stationary spectral approach to decompose the Exner equation into two separate equations: a mean equation, which yields the mean sediment thickness, and a perturbation equation, which yields the variance of sediment thickness. The resulting solutions offer an effective tool for characterising alluvial aquifers formed by fluvial processes, which allows the stochasticity of the paleoflow velocity to be incorporated.
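As a schematic of the perturbation step (the notation is illustrative; the paper's exact formulation may differ), decompose the bed elevation and flow velocity into means and fluctuations and substitute into a convective Exner equation:

\[
\eta = \bar{\eta} + \eta', \quad u = \bar{u} + u', \qquad
\frac{\partial \eta}{\partial t} + c(u)\,\frac{\partial \eta}{\partial x} = 0 .
\]

Ensemble averaging gives the mean equation for \(\bar{\eta}\) (the mean sediment thickness); subtracting it from the full equation leaves a perturbation equation linking \(\eta'\) to the velocity fluctuation \(u'\), whose second moment yields the variance \(\overline{\eta'^2}\) of sediment thickness.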
Abstract:
A simple three-state model permitting two different configurational states for the solvent, together with one for the organic adsorbate, is analysed to derive the adsorption isotherm. The implications of this model regarding pseudo-two-state and pseudo-Frumkin adsorption isotherms are indicated. A critique of the earlier theory of Bockris, Devanathan and Müller is presented in brief.
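For reference, the Frumkin isotherm that the pseudo-Frumkin limit reduces to has the standard form

\[
\beta c = \frac{\theta}{1-\theta}\,e^{-2a\theta},
\]

where \(\theta\) is the surface coverage of the organic adsorbate, \(c\) its bulk concentration, \(\beta\) the adsorption equilibrium constant and \(a\) the lateral interaction parameter; the three-state model admits a second solvent configuration whose statistics modify this form.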
Abstract:
The importance of interlaminar stresses has prompted a fresh look at the theory of laminated plates. An important feature in modelling such laminates is the need to provide for continuity of some strains and stresses, while at the same time allowing for discontinuities in the others. A new modelling possibility is examined in this paper. The procedure allows for discontinuities in the in-plane stresses and transverse strains, and continuity in the in-plane strains and transverse stresses. This theory takes the form of a hierarchy of formulations, each representing an iterative step. Application of the theory is illustrated by considering the example of an infinite laminated strip subjected to sinusoidal loading.
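The continuity requirements driving the formulation can be stated compactly. Writing \([\,\cdot\,]\) for the jump in a quantity across the interface between adjacent laminae, the theory enforces

\[
[\sigma_{zz}] = [\tau_{xz}] = [\tau_{yz}] = 0, \qquad
[\varepsilon_{xx}] = [\varepsilon_{yy}] = [\gamma_{xy}] = 0,
\]

while the in-plane stresses \(\sigma_{xx}, \sigma_{yy}, \tau_{xy}\) and the transverse strains \(\varepsilon_{zz}, \gamma_{xz}, \gamma_{yz}\) are permitted to jump, since the elastic moduli change from ply to ply.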
Abstract:
The aim of this thesis was to develop measurement techniques and systems for measuring air quality and to provide information about air quality conditions and the amount of gaseous emissions from semi-insulated and uninsulated dairy buildings in Finland and Estonia. Specialization and intensification in livestock farming, such as dairy production, is usually accompanied by an increase in concentrated environmental emissions. In addition to high moisture, the presence of dust and corrosive gases, and widely varying gas concentrations in dairy buildings, Finland and Estonia experience winter temperatures below -40 °C and summer temperatures above +30 °C. The adoption of new technologies for long-term air quality monitoring and measurement remains relatively uncommon in dairy buildings because the construction and maintenance of accurate monitoring systems for long-term use are too expensive for the average dairy farmer to afford. Though accurate air quality measurement systems intended mainly for research purposes have been documented in the past, standardised methods, documentation of affordable systems and simple methods for performing air quality and emission measurements in dairy buildings are unavailable. In this study, we built three measurement systems: 1) a Stationary system with integrated affordable sensors for on-site measurements, 2) a Wireless system with affordable sensors for off-site measurements, and 3) a Mobile system consisting of expensive and accurate sensors for measuring air quality. In addition to assessing existing methods, we developed simplified methods for measuring ventilation and emission rates in dairy buildings. The three measurement systems were successfully used to measure air quality in uninsulated, semi-insulated and fully insulated dairy buildings between 2005 and 2007. When carefully calibrated, the affordable sensors in the systems gave reasonably accurate readings. The spatial air quality survey showed high variation in microclimate conditions in the dairy buildings measured. The average indoor air concentration was 950 ppm for carbon dioxide, 5 ppm for ammonia and 48 ppm for methane; average relative humidity was 70% and inside air velocity 0.2 m/s. The average winter and summer indoor temperatures during the measurement period were -7 °C and +24 °C for the uninsulated, +3 °C and +20 °C for the semi-insulated, and +10 °C and +25 °C for the fully insulated dairy buildings. The measurement results showed that the uninsulated dairy buildings had lower indoor gas concentrations and emissions than fully insulated buildings. Although occasionally exceeded, the ventilation rates and average indoor air quality in the dairy buildings were largely within recommended limits. We assessed the traditional heat balance, moisture balance, carbon dioxide balance and direct airflow methods for estimating ventilation rates. Direct velocity measurement for the estimation of ventilation rate proved impractical for naturally ventilated buildings. Two new methods were developed for estimating ventilation rates: the first is applicable in buildings in which the ventilation can be stopped or completely closed; the second is useful in naturally ventilated buildings with large openings and high ventilation rates, where spatial gas concentrations are heterogeneously distributed.
The two traditional methods (carbon dioxide and methane balances) and two newly developed methods (theoretical modelling using Fick's law and boundary layer theory, and the recirculation flux-chamber technique) were used to estimate ammonia emissions from the dairy buildings. Using the traditional carbon dioxide balance method, ammonia emissions per cow ranged from 7 g day⁻¹ to 35 g day⁻¹, and methane emissions per cow ranged from 96 g day⁻¹ to 348 g day⁻¹. The developed methods proved to be as accurate as the traditional methods: variation between the mean emissions estimated with the traditional and the developed methods was less than 20%. The developed modelling procedure provided a sound framework for examining the impact of production systems on ammonia emissions in dairy buildings.
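A minimal sketch of the carbon dioxide balance method used above, with illustrative numbers (the CO2 production figure, herd size and concentrations are assumptions, not the study's data): at steady state, the ventilation rate follows from the animals' CO2 production and the indoor/outdoor concentration difference, and an emission rate for any other gas then follows from the same ventilation rate.

```python
# Carbon dioxide balance method, minimal sketch with hypothetical inputs.

def ventilation_rate(co2_production_m3_h, c_in_ppm, c_out_ppm=400.0):
    """Q [m3/h] = CO2 source / concentration difference (ppm -> m3/m3)."""
    return co2_production_m3_h / ((c_in_ppm - c_out_ppm) * 1e-6)

def gas_emission(q_m3_h, c_in_ppm, c_out_ppm, molar_mass_g, molar_vol_l=24.0):
    """Emission [g/h]: gas volume carried out by ventilation, converted to mass."""
    dc = (c_in_ppm - c_out_ppm) * 1e-6          # m3 of gas per m3 of air
    return q_m3_h * dc * 1000.0 / molar_vol_l * molar_mass_g

q = ventilation_rate(co2_production_m3_h=0.35 * 40,  # ~0.35 m3/h per cow, 40 cows
                     c_in_ppm=950.0)                 # mean indoor CO2 from the study
nh3 = gas_emission(q, c_in_ppm=5.0, c_out_ppm=0.1, molar_mass_g=17.03)
print(f"Q = {q:.0f} m3/h, NH3 emission = {nh3:.0f} g/h")
```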
Abstract:
In order to assess the structural reliability of bridges, an accurate and cost-effective Non-Destructive Evaluation (NDE) technology is required to ensure their safe and reliable operation. Over 60% of the Australian National Highway System consists of prestressed concrete (PSC) bridges, according to the Bureau of Transport and Communication Economics (1997). Most of the in-service bridges are more than 30 years old and may experience heavier traffic loads than originally intended. The use of ultrasonic waves is continuously increasing for NDE and Structural Health Monitoring (SHM) in civil, aerospace, electrical and mechanical applications. Ultrasonic Lamb waves are becoming more popular for NDE because they can propagate long distances and reach hidden regions with little energy loss. The purpose of this study is to numerically quantify the prestress force (PSF) of a PSC beam using the fundamental theory of acoustoelasticity. A three-dimensional finite element modelling approach is set up to perform parametric studies in order to better understand how Lamb wave propagation in a PSC beam is affected by changes in the PSF level. Results from acoustoelastic measurements on a prestressed beam are presented, showing the feasibility of Lamb waves for PSF evaluation in PSC bridges.
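The acoustoelastic effect underlying the study relates wave speed to stress; to first order it takes the generic form

\[
\frac{v(\sigma) - v_0}{v_0} = K\,\sigma,
\]

where \(v_0\) is the Lamb mode velocity in the unstressed beam, \(\sigma\) the prestress and \(K\) an acoustoelastic constant fixed by the second- and third-order elastic moduli (the beam-specific value here would come from the FE parametric study). Inverting a measured velocity or phase shift of the Lamb mode then gives an estimate of the prestress force.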