718 results for choice modelling
Abstract:
Physical and chemical properties of biofuel are influenced by structural features of its fatty acids, such as chain length, degree of unsaturation and chain branching. A simple and reliable calculation method for estimating fuel properties is therefore needed to avoid experimental testing, which is difficult, costly and time consuming; in commercial biodiesel production such testing is typically done for every batch of fuel produced. In this study, nine algae species likely to be suitable for subtropical climates were selected. The fatty acid methyl esters (FAMEs) of all species were analysed and fuel properties such as cetane number (CN), cold filter plugging point (CFPP), kinematic viscosity (KV), density and higher heating value (HHV) were determined. The relationship of each fatty acid to each fuel property was analysed using multivariate and multi-criteria decision making (MCDM) software. The results showed that some fatty acids have a major influence on the fuel properties whereas others have minimal influence. Based on the fuel properties and lipid contents, a rank order was derived using PROMETHEE-GAIA, which helped to select the best algae species for biodiesel production in subtropical climates. Three species had fatty acid profiles that gave the best fuel properties, although only one of these (Nannochloropsis oculata) is considered the best choice because of its higher lipid content.
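The PROMETHEE ranking idea mentioned above can be sketched in a few lines. The following is a minimal, illustrative PROMETHEE II-style net-flow calculation with invented species names, criteria, weights and scores (the study itself used dedicated MCDM software); it only shows how weighted pairwise outranking yields a rank order.

```python
# Minimal PROMETHEE II-style ranking sketch. Species names, criteria,
# weights and scores are invented for illustration only.

species = ["Species A", "Species B", "Species C"]
criteria_max = [True, False, True]   # maximise lipid content & CN, minimise KV
weights = [0.5, 0.2, 0.3]
scores = [                           # rows: species, cols: criteria
    [30.0, 4.2, 55.0],
    [25.0, 3.8, 52.0],
    [35.0, 4.9, 50.0],
]

def preference(a, b, maximise):
    """'Usual' preference function: 1 if a is strictly better than b, else 0."""
    better = a > b if maximise else a < b
    return 1.0 if better else 0.0

n = len(species)
net_flow = [0.0] * n
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        # weighted preference of i over j, minus j over i
        pi_ij = sum(w * preference(scores[i][k], scores[j][k], m)
                    for k, (w, m) in enumerate(zip(weights, criteria_max)))
        pi_ji = sum(w * preference(scores[j][k], scores[i][k], m)
                    for k, (w, m) in enumerate(zip(weights, criteria_max)))
        net_flow[i] += (pi_ij - pi_ji) / (n - 1)

# Rank alternatives by descending net outranking flow
ranking = sorted(zip(species, net_flow), key=lambda t: -t[1])
print(ranking)
```

The net flow balances how often an alternative outranks the others against how often it is outranked; GAIA then visualises the same preference structure geometrically.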
Abstract:
Knowledge management (KM) continues to receive mounting interest within the construction industry due to its potential to offer solutions for organisations seeking competitive advantage. This paper presents a KM input-process-output conceptual model comprising unique and well-defined theoretical constructs representing KM practices and their internal and external determinants in the context of construction. The paper also presents the underlying principles used in operationally defining each construct using extant KM literature, and offers a number of testable hypotheses that capture the inter-relationships between the identified constructs.
Abstract:
As the world's population grows, so does the demand for agricultural products. However, natural nitrogen (N) fixation and phosphorus (P) availability cannot sustain the rising agricultural production, so the application of N and P fertilisers as additional nutrient sources is common. It is these anthropogenic activities that can contribute high amounts of organic and inorganic nutrients to both surface and groundwaters, resulting in degraded water quality and a possible reduction of aquatic life. In addition, runoff and sewage from urban and residential areas can contain high amounts of inorganic and organic nutrients which may also affect water quality. For example, blooms of the cyanobacterium Lyngbya majuscula along the coastline of southeast Queensland are an indicator of at least short-term decreases in water quality. Although Australian catchments, including those with intensive forms of land use, generally show a low export of nutrients compared to North American and European catchments, certain land use practices may still have a detrimental effect on the coastal environment. Numerous studies of nutrient cycling and associated processes at the catchment scale have been reported in the Northern Hemisphere. Comparable studies in Australia, in particular in subtropical regions, are, however, limited, and there is a paucity of data, in particular for inorganic and organic forms of nitrogen and phosphorus; these nutrients are important limiting factors in surface waters that promote algal blooms. Therefore, monitoring N and P and understanding the sources and pathways of these nutrients within a catchment is important in coastal zone management. Although Australia is the driest continent, in subtropical regions such as southeast Queensland rainfall patterns have a significant effect on runoff and thus on the nutrient cycle at the catchment scale. These rainfall patterns are becoming increasingly variable.
The monitoring of these climatic conditions and of the hydrological response of agricultural catchments is therefore also important for reducing anthropogenic effects on surface and groundwater quality. This study uses an integrated hydrological-hydrochemical approach to assess N and P in an environment with multiple land uses. The main aim is to determine the nutrient cycle within a representative coastal catchment in southeast Queensland, the Elimbah Creek catchment. In particular, the investigation confirms the influence of forestry and agriculture on N and P forms, sources, distribution and fate in the surface and groundwaters of this subtropical setting. In addition, the study determines whether N and P are subject to transport into the adjacent estuary and thus into the marine environment; also considered is the effect of local topography, soils and geology on N and P sources and distribution. The thesis is structured around four individually reported components. The first paper determines the controls of catchment settings and processes on stream water, riverbank sediment and shallow groundwater N and P concentrations, in particular during the extended dry conditions encountered during the study. Temporal and spatial factors such as seasonal changes, soil character, land use and catchment morphology are considered, as well as their control over the distribution of N and P in surface waters and associated groundwater. A total of 30 surface water and 13 shallow groundwater sampling sites were established throughout the catchment to represent the dominant soil types and the land use upstream of each sampling location. Sampling comprised five rounds and was conducted over one year, between October 2008 and November 2009. Surface water and groundwater samples were analysed for all major dissolved inorganic forms of N and for total N.
Phosphorus was determined in the form of dissolved reactive P (predominantly orthophosphate) and total P. In addition, extracts of stream bank sediments and soil grab samples were analysed for these N and P species. Findings show that major storm events, in particular after long periods of drought conditions, are the driving force of N cycling. This is expressed by higher inorganic N concentrations in the agricultural subcatchment compared to the forested subcatchment. Nitrate N is the dominant inorganic form of N in both the surface and groundwaters, and values are significantly higher in the groundwaters. Concentrations in the surface water range from 0.03 to 0.34 mg N L-1; organic N concentrations are considerably higher (average range: 0.33 to 0.85 mg N L-1), in particular in the forested subcatchment. Average NO3-N in the groundwater ranges from 0.39 to 2.08 mg N L-1, and organic N averages between 0.07 and 0.3 mg N L-1. The stream bank sediment extracts are dominated by organic N (range: 0.53 to 0.65 mg N L-1), and the dominant inorganic form of N is NH4-N, with values ranging between 0.38 and 0.41 mg N L-1. Topography and soils, however, were not found to have a significant effect on N and P concentrations in waters. Detectable phosphorus in the surface and groundwaters of the catchment is limited to several locations, typically in the proximity of areas with intensive animal use; in soils and sediments, P is negligible. In the second paper, the stable isotopes of N (14N/15N) and H2O (16O/18O and 2H/1H) in surface and groundwaters are used to identify sources of dissolved inorganic and organic N in these waters and to determine their pathways within the catchment; specific emphasis is placed on the roles of forestry and agriculture. Forestry is predominantly concentrated in the northern subcatchment (Beerburrum Creek) while agriculture is mainly found in the southern subcatchment (Six Mile Creek).
Results show that agriculture (horticulture, crops, grazing) is the main source of inorganic N in the surface waters of the agricultural subcatchment, and its isotopic signature shows a close link to evaporation processes that may occur during water storage in the farm dams used for irrigation. Groundwaters are subject to denitrification processes that may result in reduced dissolved inorganic N concentrations. Soil organic matter delivers most of the inorganic N to the surface water in the forested subcatchment. Here, precipitation and subsequent runoff are the main source of the surface waters. Groundwater in this area is affected by agricultural processes. The findings also show that the catchment can attenuate the effects of anthropogenic land use on surface water quality. Riparian strips of natural remnant vegetation, commonly 50 to 100 m in width, act as buffer zones along the drainage lines in the catchment and remove inorganic N from the soil water before it enters the creek. These riparian buffer zones are common in most agricultural catchments of southeast Queensland and appear to reduce the impact of agriculture on stream water quality and subsequently on the estuarine and marine environments. This reduction is expressed by a significant decrease in DIN concentrations from 1.6 mg N L-1 to 0.09 mg N L-1, and a decrease in the δ15N signatures from upstream surface water locations downstream to the outlet of the agricultural subcatchment. Further testing is, however, necessary to confirm these processes. Most importantly, the amount of N transported to the adjacent estuary is shown to be negligible. The third and fourth components of the thesis use a hydrological catchment modelling approach to determine the water balance of the Elimbah Creek catchment. The model is then used to simulate the effects of land use on the water balance and nutrient loads of the study area.
The tool used is the internationally widely applied Soil and Water Assessment Tool (SWAT). Knowledge of the water cycle of a catchment is imperative in nutrient studies because processes such as rainfall, surface runoff, soil infiltration and the routing of water through the drainage system are the driving forces of the catchment nutrient cycle. Long-term information about discharge volumes of creeks and rivers does not, however, exist for a number of agricultural catchments in southeast Queensland, and such information is necessary to calibrate and validate numerical models. Therefore, a two-step modelling approach was used in which parameter values calibrated and validated for a nearby gauged reference catchment served as starting values for the ungauged Elimbah Creek catchment. Transposing the monthly calibrated and validated parameter values from the reference catchment to the ungauged catchment significantly improved model performance, showing that the hydrological model of the catchment of interest is a strong predictor of the water balance. The model efficiency coefficient EF shows that 94% of the simulated discharge matches the observed flow, whereas only 54% of the observed streamflow was simulated by the SWAT model prior to using the validated values from the reference catchment. In addition, the hydrological model confirmed that total surface runoff contributes the majority of flow to the surface water in the catchment (65%); only a small proportion of the water in the creek is contributed by total baseflow (35%). This finding supports the results of the stable isotopes 16O/18O and 2H/1H, which show that the main source of water in the creeks is either local precipitation or irrigation water delivered by surface runoff; a contribution from the groundwater (baseflow) to the creeks could not be identified using 16O/18O and 2H/1H.
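The model efficiency coefficient EF referred to above is the Nash-Sutcliffe efficiency commonly used to evaluate SWAT simulations. A minimal sketch, with invented discharge values, shows how it is computed:

```python
# Nash-Sutcliffe model efficiency (EF), a standard goodness-of-fit measure
# for hydrological simulations. The discharge values below are invented.

def nash_sutcliffe(observed, simulated):
    """EF = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    EF = 1 is a perfect match; EF <= 0 means the model is no better
    than simply predicting the mean of the observations."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Hypothetical monthly discharges (m^3/s)
obs = [1.2, 0.8, 2.5, 3.1, 0.9, 0.4]
sim = [1.0, 0.9, 2.3, 3.4, 0.8, 0.5]
print(round(nash_sutcliffe(obs, sim), 3))
```

An EF of 0.94, as reported for the transposed-parameter model, would mean that 94% of the variance in the observed flow is reproduced by the simulation.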
In addition, the SWAT model calculated that around 68% of the rainfall occurring in the catchment is lost through evapotranspiration, reflecting the prevailing long-term drought conditions observed prior to and during the study. Stream discharge from the forested subcatchment was an order of magnitude lower than discharge from the agricultural Six Mile Creek subcatchment. A simulated change in land use from forestry to agriculture did not significantly change the catchment water balance; however, nutrient loads increased considerably. Conversely, a simulated change from agriculture to forestry resulted in a significant decrease in nitrogen loads. The findings of the thesis and the approach used are shown to be of value to catchment water quality monitoring on a wider scale, in particular regarding the implications of mixed land use for nutrient forms, distributions and concentrations. The study confirms that in the tropics and subtropics the water balance is affected by extended dry periods and seasonal rainfall with intensive storm events. In particular, the comprehensive data set of inorganic and organic N and P forms in the surface and groundwaters of this subtropical setting, acquired during the one-year sampling program, may be used in similar catchment hydrological studies where such detailed information is missing. The study also concludes that riparian buffer zones along the catchment drainage system attenuate the transport of nitrogen from agricultural sources in the surface water: concentrations of N decreased from upstream to downstream locations and were negligible at the outlet of the catchment.
Abstract:
The 2011 floods in Southeast Queensland had a devastating impact on many sectors including transport. Road and rail systems across all flooded areas of Queensland were severely affected and significant economic losses occurred as a result of roadway and railway closures. Travellers were compelled to take alternative routes because of road closures or deteriorated traffic conditions on their regular route. Extreme changes in traffic volume can occur under such scenarios which disrupts the network re-equilibrium and re-stabilisation in the recovery phase as travellers continuously adjust their travel options. This study explores how travellers respond to such a major network disruption. A comprehensive study was undertaken focusing on how bus riders reacted to the floods in Southeast Queensland by comparing the ridership patterns before, during and after the floods. The study outcomes revealed the evolving reactions of transit users to direct and indirect impacts of a natural disaster. A good understanding of this process is crucial for developing appropriate strategies to encourage modal shift of automobile users to public transit and also for modelling of travel behaviours during and after a major network disruption caused by natural disasters.
Abstract:
Meanings and descriptions of menopause have shifted focus over the past century and a half; in particular, the past sixty years have seen a shift from the descriptions of hormone decline and its relation to ageing, femininity and the symptoms of menopause that dominated from the 1960s, to the possibility for preventive medicine afforded by menopause. Medicine is not a static field in its construction of menopause. It has changed, not least through its engagement (positive or negative) with critique from both within (epidemiological) and without (feminist and social sciences). In this review we identify three recent changes: (1) increasing concern with women's decision-making; (2) the emergence from within medicine of a rejection of language that defines menopause as a condition of deficiency; and (3) new insights from postmodern and poststructural analyses of menopause that examine the epistemological foundations of medical and feminist concepts of menopause and contest fixed descriptions of the experience of menopause. Key aspects of a ‘medical menopause’ nevertheless remain constant: menopause is a loss of hormones that results in predictable effects and risks and may be ameliorated by hormone replacement therapy. A question therefore emerges: how, and to what effect, have medical practitioners engaged with critiques of the medical menopause?
Abstract:
Over the past two decades medical researchers and modernist feminist researchers have contested the meaning of menopause. In this article we examine various meanings of menopause in major medical and feminist literature, and the construction of menopause in a semi-structured interview study of general practitioners in rural South Australia. Three discursive themes are identified in these interviews: (i) the hormonal menopause – symptoms, risk, prevention; (ii) the informed menopausal woman; and (iii) decision-making and hormone replacement therapy. By using the discourse of prevention, general practitioners construct menopause in relation to women's health care choices, empowerment and autonomy. We argue that the ways in which these concepts are deployed by the general practitioners in this study both produce and constrain the options available to women. The implications of these general practitioner accounts are discussed in relation to the proposition that medical and feminist descriptions of menopause posit alternative but equally fixed truths about menopause, and in relation to the range of responses available to women at menopause. Social and cultural explanations of disease causality (cf. Germov 1998, Hardey 1998) are absent from the new menopause despite being an integral part of the framework of the women's health movement and health promotion drawn on by these general practitioners. Further, the shift of responsibility for health to the individual woman reinforces practitioners' claims to empower women, but oversimplifies power relations and constructs menopause as a site of self-surveillance. The use of concepts from the women's health movement and health promotion has nevertheless created change, both in the positioning of women as having ‘choices’ and in the positioning of some general practitioners in terms of greater information provision to women and attention to the woman's autonomy.
In conclusion, we propose that a new menopause has evolved from a discursive shift in medicine, and that within this new configuration, which claims the empowerment of women as an integral part of health care for menopause, there exists the possibility for change in medical practice that will broaden, strengthen and maintain this position.
Abstract:
Parametric and generative modelling methods are ways of making computer models more flexible and of formalising domain-specific knowledge. At present, no open standard exists for the interchange of parametric and generative information. The Industry Foundation Classes (IFC), an open standard for interoperability in building information models, are presented as the basis for an open standard in parametric modelling. The advantage of allowing parametric and generative representations is that the early design process can accommodate more iteration, and changes can be implemented more quickly than with traditional models. This paper begins with a formal definition of what constitutes parametric and generative modelling methods, and then describes an open standard in which the interchange of components could be implemented. As an illustrative example of generative design, Frazer’s ‘Reptiles’ project from 1968 is reinterpreted.
Abstract:
We study a political economy model which aims to understand the diversity in the growth and technology-adoption experiences in different economies. In this model the cost of technology adoption is endogenous and varies across heterogeneous agents. Agents in the model vote on the proportion of revenues allocated towards such expenditures. In the early stages of development, the political-economy outcome of the model ensures that a sub-optimal proportion of government revenue is used to finance adoption-cost reducing expenditures. This sub-optimality is due to the presence of inequality; agents at the lower end of the distribution favor a larger amount of revenue allocated towards redistribution in the form of lump-sum transfers. Eventually all individuals make the switch to the better technology and their incomes converge. The outcomes of the model therefore explain why public choice is more likely to be conservative in nature; it represents the majority choice given conflicting preferences among agents. Consequently, the transition path towards growth and technology adoption varies across countries depending on initial levels of inequality.
Abstract:
AR process modelling movie presented at the Gartner BPM Summit in Sydney, August 2011.
Abstract:
Video presented as part of a BPM2011 demonstration (France). In this video we show a prototype BPMN process modelling tool that uses Augmented Reality techniques to increase the sense of immersion when editing a process model. The avatar represents a remotely logged-in user and gives greater insight into the editing actions of a collaborator than present 2D web-based approaches to collaborative process modelling. We modified the Second Life client to integrate the ARToolkit in order to support pattern-based AR.
Abstract:
Hosted on YouTube and shown in various locations: a video showing members of the QUT BPM research group using a Mimio pen-based tabletop system for collaborative process modelling.
Abstract:
Controlled drug delivery is a key topic in modern pharmacotherapy, where controlled drug delivery devices are required to prolong the period of release, maintain a constant release rate, or release the drug with a predetermined release profile. In the pharmaceutical industry, the development of a controlled drug delivery device may be facilitated enormously by mathematical modelling of the drug release mechanisms, directly decreasing the number of necessary experiments. Such mathematical modelling is difficult because several mechanisms are involved in the drug release process. The main drug release mechanisms of a controlled release device depend on the device’s physicochemical properties and include diffusion, swelling and erosion. In this thesis, four controlled drug delivery models are investigated. These four models selectively involve solvent penetration into the polymeric device, swelling of the polymer, polymer erosion and drug diffusion out of the device, but all share two common key features. The first is that the solvent penetrating the polymer causes a transition of the polymer from a glassy state to a rubbery state. The interface between the two states of the polymer is modelled as a moving boundary, and the speed of this interface is governed by a kinetic law. The second feature is that drug diffusion only occurs in the rubbery region of the polymer, with a nonlinear diffusion coefficient dependent on the concentration of solvent. These models are analysed using both formal asymptotics and numerical computation; front-fixing methods and the method of lines with finite difference approximations are used to solve the models numerically. This numerical scheme is conservative, accurate and easily applied to moving boundary problems, and is thoroughly explained in Section 3.2.
From the small time asymptotic analysis in Sections 5.3.1, 6.3.1 and 7.2.1, these models exhibit the non-Fickian behaviour referred to as Case II diffusion, and an initial constant rate of drug release, which is appealing to the pharmaceutical industry because it indicates zero-order release. The numerical results of the models qualitatively confirm the experimental behaviour identified in the literature. The knowledge obtained from investigating these models can help in developing more complex multi-layered drug delivery devices to achieve sophisticated drug release profiles. A multi-layer matrix tablet, which consists of a number of polymer layers designed to provide sustained and constant drug release or bimodal drug release, is also discussed in this research. The moving boundary problem describing solvent penetration into the polymer also arises in melting and freezing problems, which have been modelled as the classical one-phase Stefan problem. The classical one-phase Stefan problem has unphysical singularities at the complete melting time. Hence we investigate the effect of including kinetic undercooling in the melting problem; the resulting problem is called the one-phase Stefan problem with kinetic undercooling. Interestingly, we discover that the unphysical singularities of the classical one-phase Stefan problem at the complete melting time are regularised, and the small time asymptotic analysis in Section 3.3 shows that the small time behaviour of the one-phase Stefan problem with kinetic undercooling differs from that of the classical problem. In the case of melting very small particles, it is known that surface tension effects are important. The effect of including surface tension in the melting problem for nanoparticles (without kinetic undercooling) has been investigated in the past; however, the one-phase Stefan problem with surface tension exhibits finite-time blow-up.
Therefore we investigate the effect of including both surface tension and kinetic undercooling in the melting problem for nanoparticles, and find that the solution continues to exist until complete melting. The investigation of including kinetic undercooling and surface tension in the melting problem reveals more insight into the regularisation of unphysical singularities in the classical one-phase Stefan problem. This investigation gives a better understanding of the melting of a particle, and contributes to the current body of knowledge related to melting and freezing due to heat conduction.
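The moving boundary formulation described in this abstract can be illustrated with a minimal front-fixing scheme for the classical one-phase Stefan (melting) problem. This sketch uses a simple explicit scheme with invented, nondimensional parameter values; the thesis itself uses a conservative scheme (Section 3.2), so this is only an illustration of the front-fixing idea, not the thesis's method.

```python
# Front-fixing sketch for the classical one-phase Stefan (melting) problem:
# u = 1 at x = 0, melting front at x = s(t) where u = 0, and the Stefan
# condition beta * ds/dt = -du/dx at the front. The moving domain [0, s(t)]
# is mapped to xi = x/s in [0, 1], giving
#   u_t = u_xixi / s^2 + (xi * sdot / s) * u_xi,
# solved with central differences in xi and explicit Euler in time.
# All parameter values are illustrative only.

N = 40                      # interior grid intervals in xi
dxi = 1.0 / N
beta = 1.0                  # Stefan number (latent heat parameter)
s = 0.1                     # small initial front position
dt = 1e-6                   # small step for explicit stability
u = [1.0 - i * dxi for i in range(N + 1)]   # linear initial profile, u(1) = 0

for step in range(20000):
    # one-sided difference for the temperature gradient at the front (xi = 1)
    grad = (u[N] - u[N - 1]) / dxi
    sdot = -grad / (beta * s)               # Stefan condition in xi variables
    un = u[:]
    for i in range(1, N):
        diff = (u[i + 1] - 2 * u[i] + u[i - 1]) / (dxi * dxi * s * s)
        adv = (i * dxi) * sdot / s * (u[i + 1] - u[i - 1]) / (2 * dxi)
        un[i] = u[i] + dt * (diff + adv)
    u = un
    s += dt * sdot          # advance the melting front

print(round(s, 3))          # the front has advanced from its initial position
```

Kinetic undercooling would modify the front temperature (u at xi = 1 becomes proportional to sdot rather than zero), which is precisely what removes the singular front speed at complete melting.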
Abstract:
Stigmergy is a biological term originally used in discussions of insect or swarm behaviour; it describes a model of environment-based communication that separates artefacts from agents. The phenomenon is demonstrated by ants, whose food foraging is supported by pheromone trails, and similarly by termites in their nest-building process. What is interesting about this mechanism is that highly organized societies form without an apparent central management function. We see design features in Web sites that mimic stigmergic mechanisms as part of the user interface, and we have created generalizations of these patterns. Software development and Web site development techniques have evolved significantly over the past 20 years. Recent progress in this area proposes languages for modelling web applications that accommodate the nuances specific to these developments. These modelling languages provide a suitable framework for building reusable components encapsulating our design patterns of stigmergy. We hypothesize that incorporating stigmergy as a separate feature of a site’s primary function will ultimately lead to enhanced user coordination.
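The stigmergic mechanism described above can be sketched with a toy simulation in which agents communicate only through a shared environment (a pheromone field), never directly with each other. All route names, qualities and numbers are invented; this is only an illustration of environment-based coordination.

```python
# Toy stigmergy simulation: agents pick routes with probability proportional
# to the pheromone on each route, deposit pheromone in proportion to route
# quality, and evaporation decays all trails. A dominant route emerges with
# no central coordinator. All values are invented for illustration.

import random

random.seed(1)
paths = [0, 1, 2]              # three alternative routes to "food"
pheromone = [1.0, 1.0, 1.0]    # shared environment state (the artefact)
quality = [0.2, 2.0, 0.5]      # route 1 is assumed to be the best
EVAPORATION = 0.05

def choose(pher):
    """Pick a path with probability proportional to its pheromone level."""
    total = sum(pher)
    r = random.uniform(0, total)
    acc = 0.0
    for p, level in zip(paths, pher):
        acc += level
        if r <= acc:
            return p
    return paths[-1]

for _ in range(2000):          # many agent trips
    p = choose(pheromone)
    pheromone[p] += quality[p]                      # deposit on the route taken
    pheromone = [(1 - EVAPORATION) * lvl for lvl in pheromone]  # trails decay

print(pheromone.index(max(pheromone)))  # the colony converges on one route
```

The positive feedback (stronger trails attract more agents, who strengthen them further) combined with evaporation is the same mechanism a Web site feature might mimic, e.g. surfacing the links that previous visitors used most.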
Abstract:
Light Gauge Steel Framing (LSF) walls are made of cold-formed, thin-walled steel lipped channel studs with plasterboard linings on both sides. However, these thin-walled steel sections heat up quickly and lose their strength under fire conditions despite the protection provided by plasterboards. A new composite wall panel was recently proposed to improve the fire resistance rating of LSF walls, where an insulation layer was used externally between the plasterboards on both sides of the wall frame instead of using it in the cavity. A research study using both fire tests and numerical studies was undertaken to investigate the structural and thermal behaviour of load bearing LSF walls made of both conventional and the new composite panels under standard fire conditions and to determine their fire resistance rating. This paper presents the details of finite element models of LSF wall studs developed to simulate the structural performance of LSF wall panels under standard fire conditions. Finite element analyses were conducted under both steady and transient state conditions using the time-temperature profiles measured during the fire tests. The developed models were validated using the fire test results of 11 LSF wall panels with various plasterboard/insulation configurations and load ratios. They were able to predict the fire resistance rating within five minutes. The use of accurate numerical models allowed the inclusion of various complex structural and thermal effects such as local buckling, thermal bowing and neutral axis shift that occurred in thin-walled steel studs under non-uniform elevated temperature conditions. Finite element analyses also demonstrated the improvements offered by the new composite panel system over the conventional cavity insulated system.
Abstract:
Deterministic computer simulation of physical experiments is now a common technique in science and engineering, as physical experiments are often too time consuming, expensive or impossible to conduct. The use of complex computer models, or codes, in place of physical experiments has led to the study of computer experiments, which are used to investigate many scientific phenomena. A computer experiment consists of a number of runs of the computer code with different input choices. The design and analysis of computer experiments is a rapidly growing area of statistical experimental design. This paper discusses some practical issues that arise when designing computer simulations and/or experiments for manufacturing systems. A case study approach is reviewed and presented.
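One common way to choose the input runs of a computer experiment is Latin hypercube sampling, which stratifies each input dimension so that a small number of runs covers the input space evenly. A minimal sketch for a hypothetical two-factor manufacturing simulation follows; the factor names and ranges are invented.

```python
# Latin hypercube design sketch: each factor's range is divided into n_runs
# strata, one point is drawn per stratum, and strata are shuffled across
# runs so every run gets a distinct stratum in every dimension.

import random

def latin_hypercube(n_runs, bounds, seed=0):
    """Return n_runs design points over the given (lo, hi) bounds per factor."""
    rng = random.Random(seed)
    columns = []
    for lo, hi in bounds:
        # one uniformly placed point per stratum, then shuffle strata
        col = [lo + (hi - lo) * (i + rng.random()) / n_runs
               for i in range(n_runs)]
        rng.shuffle(col)
        columns.append(col)
    return [tuple(c[i] for c in columns) for i in range(n_runs)]

# Hypothetical factors: machine speed (units/min) and buffer size
bounds = [(50.0, 200.0), (1.0, 20.0)]
runs = latin_hypercube(8, bounds)
for r in runs:
    print(r)
```

Each of the eight runs would then be fed to the simulation code; because every stratum of every factor is sampled exactly once, the design probes the whole input range with far fewer runs than a full grid.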