Abstract:
Doug Hargreaves has completed a year as President of Engineers Australia, a 90,000-strong membership-based organisation representing the engineering profession. In preparing for the year, Doug decided that the core of his own leadership is his values, and that the legacy he wanted to be remembered for at the end of his year was how his values underpinned everything he did. The framework for this values approach was a book he co-authored entitled 'Values Driven Leadership'. The essence of Doug's philosophy is that a leader who bases their leadership on a strong sense of values will create an environment where people have a strong sense of Belonging, Identity and Purpose. This paper reflects on Doug's year of leadership of Engineers Australia and offers insights and examples of where his values-driven leadership approach played out and contributed to various scenarios he encountered over the year. The paper shares Doug's approach to leadership and offers an understanding of how an effective leader actually does what he does. Too often leadership is seen as a nebulous capacity that people either have or do not have. In this paper, we identify the specific skills and abilities within a values framework that will allow any leader to be more effective in their role.
Abstract:
The interoperable and loosely coupled web services architecture, while beneficial, can be resource-intensive, and is thus susceptible to denial-of-service (DoS) attacks in which an attacker can use a relatively insignificant amount of resources to exhaust the computational resources of a web service. We investigate the effectiveness of defending web services from DoS attacks using client puzzles, a cryptographic countermeasure which provides a form of gradual authentication by requiring the client to solve some computationally difficult problems before access is granted. In particular, we describe a mechanism for integrating a hash-based puzzle into existing web services frameworks and analyse the effectiveness of the countermeasure using a variety of scenarios on a network testbed. Client puzzles are an effective defence against flooding attacks. They can also mitigate certain types of semantic-based attacks, although they may not be the optimal solution.
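The abstract does not specify the puzzle construction; a common hash-based scheme (a sketch under that assumption, not necessarily the paper's exact mechanism) has the server issue a random nonce and a difficulty, the client brute-force a counter until the hash of nonce and counter has enough leading zero bits, and the server verify the solution with a single hash:

```python
import hashlib
import os

def make_puzzle(difficulty_bits: int):
    """Server side: issue a fresh random nonce together with a difficulty level."""
    return os.urandom(16), difficulty_bits

def leading_zero_bits(digest: bytes) -> int:
    """Count the number of leading zero bits in a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def solve(nonce: bytes, difficulty_bits: int) -> int:
    """Client side: brute-force a counter until the hash meets the difficulty."""
    counter = 0
    while True:
        digest = hashlib.sha256(nonce + counter.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= difficulty_bits:
            return counter
        counter += 1

def verify(nonce: bytes, difficulty_bits: int, counter: int) -> bool:
    """Server side: a single hash suffices to check the solution."""
    digest = hashlib.sha256(nonce + counter.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= difficulty_bits
```

The asymmetry is the point of the countermeasure: the client's expected work doubles with each extra difficulty bit, while the server's verification cost stays at one hash, so flooding a service becomes expensive for the attacker but cheap to police.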
Abstract:
Background: Heat-related mortality is a matter of great public health concern, especially in the light of climate change. Although many studies have found associations between high temperatures and mortality, more research is needed to project the future impacts of climate change on heat-related mortality. Objectives: We conducted a systematic review of research and methods for projecting future heat-related mortality under climate change scenarios. Data sources and extraction: A literature search was conducted in August 2010, using the electronic databases PubMed, Scopus, ScienceDirect, ProQuest, and Web of Science. The search was limited to peer-reviewed journal articles published in English up to 2010. Data synthesis: The review included 14 studies that fulfilled the inclusion criteria. Most projections showed that climate change would result in a substantial increase in heat-related mortality. Projecting heat-related mortality requires understanding of the historical temperature-mortality relationships, and consideration of the future changes in climate, population and acclimatization. Further research is needed to provide a stronger theoretical framework for projections, including a better understanding of socio-economic development, adaptation strategies, land-use patterns, air pollution and mortality displacement. Conclusions: Scenario-based projection research will meaningfully contribute to assessing and managing the potential impacts of climate change on heat-related mortality.
Abstract:
This paper describes the development and evaluation of a new instrument - the Clinician Suicide Risk Assessment Checklist (CSRAC). The instrument assesses the clinician's competency in three areas: clinical interviewing, assessment of specific suicide risk factors, and formulating a management plan. A draft checklist was constructed by integrating information from 1) a literature review, 2) an expert clinician focus group and 3) consultation with experts. It was utilised in a simulated clinical scenario with clinician trainees and a trained actor in order to test for inter-rater agreement. Agreement was calculated and the checklist was re-drafted with the aim of maximising agreement. A second phase of simulated clinical scenarios was then conducted and inter-rater agreement was calculated for the revised checklist. In the first phase of the study, 18 of 35 items had inadequate inter-rater agreement (below 60%), while in the second phase, using the revised version, only 3 of 39 items failed to achieve adequate inter-rater agreement. Further evidence of reliability and validity is required. Continued development of the CSRAC will be necessary before it can be utilised to assess the effectiveness of risk assessment training programs.
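The abstract does not state which agreement statistic was used; a minimal sketch of per-item pairwise percent agreement, with the 60% adequacy threshold and all names being illustrative assumptions, might look like:

```python
from itertools import combinations

def item_agreement(ratings):
    """Pairwise percent agreement for one checklist item.

    ratings: one yes/no (1/0) score per rater for the same simulated scenario.
    """
    pairs = list(combinations(ratings, 2))
    agree = sum(1 for a, b in pairs if a == b)
    return agree / len(pairs)

def inadequate_items(item_ratings, threshold=0.6):
    """Indices of checklist items whose agreement falls below the threshold."""
    return [i for i, ratings in enumerate(item_ratings)
            if item_agreement(ratings) < threshold]
```

For example, four raters scoring one item as [1, 1, 1, 0] agree on 3 of 6 rater pairs, giving 50% agreement, which would flag the item for redrafting under the assumed threshold.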
Abstract:
Conventional planning and decision making, with their sectoral and territorial emphasis and flat-map-based processes, are no longer adequate or appropriate for the increased complexity confronting airport/city interfaces. These crowded and often contested governance spaces demand a more iterative and relational planning and decision-making approach. Emergent GIS-based planning and decision-making tools provide a mechanism that integrates and visually displays an array of complex data, frameworks and scenarios/expectations, often in ‘real time’ computations. In so doing, these mechanisms provide a common ground for decision making and facilitate a more ‘joined-up’ approach to airport/city planning. This paper analyses the contribution of the Airport Metropolis Planning Support System (PSS) to sub-regional planning in the Brisbane Airport case environment.
Abstract:
How do humans respond to their social context? This question is becoming increasingly urgent in a society where democracy requires that the citizens of a country help to decide upon its policy directions, and yet those citizens frequently have very little knowledge of the complex issues that these policies seek to address. Frequently, we find that humans make their decisions more with reference to their social setting than to the arguments of scientists, academics, and policy makers. It is broadly anticipated that the agent-based modelling (ABM) of human behaviour will make it possible to treat such social effects, but we take the position here that a more sophisticated treatment of context will be required in many such models. While notions such as historical context (where the past history of an agent might affect its later actions) and situational context (where the agent will choose a different action in a different situation) abound in ABM scenarios, we will discuss a case of a potentially changing context, where social effects can have a strong influence upon the perceptions of a group of subjects. In particular, we shall discuss a recently reported case where a biased worm in an election debate led to significant distortions in the reports given by participants as to who won the debate (Davis et al., 2011). Thus, participants in a different social context drew different conclusions about the perceived winner of the same debate, with associated significant differences between the two groups as to who they would vote for in the coming election. We extend this example to the problem of modelling the likely electoral responses of agents in the context of the climate change debate, and discuss the notion of interference between related questions that might be asked of an agent in a social simulation intended to simulate their likely responses.
A modelling technology which could account for such strong social contextual effects would benefit regulatory bodies which need to navigate between multiple interests and concerns, and we shall present one viable avenue for constructing such a technology. A geometric approach will be presented, where the internal state of an agent is represented in a vector space, and their social context is naturally modelled as a set of basis states that are chosen with reference to the problem space.
Abstract:
Taxes are an important component of investing that is commonly overlooked in both the literature and in practice. For example, many understand that taxes will reduce an investment’s return, but less well understood is the risk-sharing nature of taxes, which also reduces the investment’s risk. This thesis examines how taxes affect the optimal asset allocation and asset location decision in an Australian environment. It advances the model of Horan & Al Zaman (2008), improving the method by which the present value of tax liabilities is calculated, by using an after-tax risk-free discount rate and incorporating any new or reduced tax liabilities generated into its expected risk and return estimates. The asset allocation problem is examined for a range of different scenarios using Australian parameters, including different risk aversion levels, personal marginal tax rates, investment horizons, borrowing premiums, high or low inflation environments, and different starting cost bases. The findings support the Horan & Al Zaman (2008) conclusion that equities should be held in the taxable account. In fact, these findings are strengthened, with most of the efficient frontier maximising equity holdings in the taxable account instead of only half. Furthermore, these findings transfer to the Australian case, where it is found that taxed Australian investors should always invest in equities first through the taxable account before investing in super. However, untaxed Australian investors should invest in equities first through superannuation. With borrowings allowed in the taxable account (no borrowing premium), Australian taxed investors should hold 100% of the superannuation account in the risk-free asset, while undertaking leverage in the taxable account to achieve the desired risk-return trade-off. Introducing a borrowing premium decreases the likelihood of holding 100% of super in the risk-free asset for taxable investors.
The findings also suggest that the higher the marginal tax rate, the higher the borrowing premium required to overcome this effect. Finally, as the investor’s marginal tax rate increases, the overall allocation to equities should increase due to the increased risk- and return-sharing caused by taxation: to achieve the same risk/return level as at a lower tax rate, the investor must take on more equity exposure. The investment horizon has a minimal impact on the optimal allocation decision in the absence of factors such as mean reversion and human capital.
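The discounting idea the thesis describes can be sketched in a few lines; the function name and all parameter values below are illustrative assumptions rather than the thesis's actual formulation.

```python
def pv_tax_liability(unrealised_gain, tax_rate, risk_free_rate, years):
    """Present value of a deferred capital gains tax bill.

    Discounting uses the after-tax risk-free rate, as the abstract describes.
    All parameter values in the example below are illustrative only.
    """
    after_tax_rf = risk_free_rate * (1 - tax_rate)
    liability_at_sale = unrealised_gain * tax_rate
    return liability_at_sale / (1 + after_tax_rf) ** years

# A $100 unrealised gain taxed at 30%, realised ten years out, 5% risk-free rate:
pv = pv_tax_liability(100.0, 0.30, 0.05, 10)   # less than the $30 nominal bill
```

Because the tax is deferred, its present value is smaller than the nominal bill, and the discount rate itself is reduced by tax, which is one way the risk-sharing character of taxation enters the allocation problem.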
Abstract:
The research objectives of this thesis were to contribute to Bayesian statistical methodology, in the areas of risk assessment and of spatial and spatio-temporal modelling, by modelling error structures using complex hierarchical models. Specifically, I hoped to consider two applied areas, and use these applications as a springboard for developing new statistical methods as well as undertaking analyses which might give answers to particular applied questions. Thus, this thesis considers a series of models, firstly in the context of risk assessments for recycled water, and secondly in the context of water usage by crops. The research objective was to model error structures using hierarchical models in two problems, namely risk assessment analyses for wastewater, and secondly, in a four-dimensional dataset, assessing differences between cropping systems over time and over three spatial dimensions. The aim was to use the simplicity and insight afforded by Bayesian networks to develop appropriate models for risk scenarios, and again to use Bayesian hierarchical models to explore the necessarily complex modelling of four-dimensional agricultural data. The specific objectives of the research were to develop a method for the calculation of credible intervals for the point estimates of Bayesian networks; to develop a model structure to incorporate all the experimental uncertainty associated with various constants, thereby allowing the calculation of more credible credible intervals for a risk assessment; to model a single day’s data from the agricultural dataset in a way which satisfactorily captured the complexities of the data; to build a model for several days’ data, in order to consider how the full data might be modelled; and finally to build a model for the full four-dimensional dataset and to consider the time-varying nature of the contrast of interest, having satisfactorily accounted for possible spatial and temporal autocorrelations.
This work forms five papers, two of which have been published, two submitted, and the final paper still in draft. The first two objectives were met by recasting the risk assessments as directed acyclic graphs (DAGs). In the first case, we elicited uncertainty for the conditional probabilities needed by the Bayesian net, incorporated these into a corresponding DAG, and used Markov chain Monte Carlo (MCMC) to find credible intervals for all the scenarios and outcomes of interest. In the second case, we incorporated the experimental data underlying the risk assessment constants into the DAG, and also treated some of that data as needing to be modelled as an ‘errors-in-variables’ problem [Fuller, 1987]. This illustrated a simple method for the incorporation of experimental error into risk assessments. In considering one day of the three-dimensional agricultural data, it became clear that geostatistical models or conditional autoregressive (CAR) models over the three dimensions were not the best way to approach the data. Instead, CAR models were used with neighbours only in the same depth layer. This gave flexibility to the model, allowing both the spatially structured and non-structured variances to differ at all depths. We call this model the CAR layered model. Given the experimental design, the fixed part of the model could have been modelled as a set of means by treatment and by depth, but doing so allows little insight into how the treatment effects vary with depth. Hence, a number of essentially non-parametric approaches were taken to examine the effects of depth on treatment, with the model of choice incorporating an errors-in-variables approach for depth in addition to a non-parametric smooth. The statistical contribution here was the introduction of the CAR layered model; the applied contribution was the analysis of moisture over depth and the estimation of the contrast of interest together with its credible intervals.
These models were fitted using WinBUGS [Lunn et al., 2000]. The work in the fifth paper deals with the fact that with large datasets, the use of WinBUGS becomes more problematic because of its highly correlated term-by-term updating. In this work, we introduce a Gibbs sampler with block updating for the CAR layered model. The Gibbs sampler was implemented by Chris Strickland using pyMCMC [Strickland, 2010]. This framework is then used to consider five days’ data, and we show that moisture in the soil for all the various treatments reaches levels particular to each treatment at a depth of 200 cm and thereafter stays constant, albeit with increasing variances with depth. In an analysis across three spatial dimensions and across time, there are many interactions of time and the spatial dimensions to be considered. Hence, we chose to use a daily model and to repeat the analysis at all time points, effectively creating an interaction model of time by the daily model. Such an approach allows great flexibility. However, this approach does not allow insight into the way in which the parameter of interest varies over time. Hence, a two-stage approach was also used, with estimates from the first stage being analysed as a set of time series. We see this spatio-temporal interaction model as a useful approach to data measured across three spatial dimensions and time, since it does not assume additivity of the random spatial or temporal effects.
Abstract:
An SEI metapopulation model is developed for the spread of an infectious agent by migration. The model portrays two age classes on a number of patches connected by migration routes which are used as host animals mature. A feature of this model is that the basic reproduction ratio may be computed directly, using a scheme that separates topography, demography, and epidemiology. We also provide formulas for individual patch basic reproduction numbers and discuss their connection with the basic reproduction ratio for the system. The model is applied to the problem of spatial spread of bovine tuberculosis in a possum population. The temporal dynamics of infection are investigated for some generic networks of migration links, and the basic reproduction ratio is computed—its value is not greatly different from that for a homogeneous model. Three scenarios are considered for the control of bovine tuberculosis in possums where the spatial aspect is shown to be crucial for the design of disease management operations.
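The relationship between patch-level reproduction numbers and the system-level basic reproduction ratio can be illustrated with a generic next-generation-matrix sketch; this is not the paper's SEI model (whose separation scheme is not given in the abstract), and every rate below is an illustrative assumption.

```python
import numpy as np

# Hypothetical three-patch migration chain (patch 1 -> 2 -> 3); rates illustrative.
beta = np.array([0.8, 0.5, 0.3])   # per-patch transmission rates
gamma = 0.4                        # removal rate of infectious animals
m = 0.1                            # migration rate along the chain

F = np.diag(beta)                  # new infections arise within each patch
V = np.array([[gamma + m,  0.0,       0.0],    # removal plus onward migration
              [-m,         gamma + m, 0.0],
              [0.0,        -m,        gamma]]) # final patch: removal only

# System-level basic reproduction ratio: spectral radius of F V^{-1}
R0 = float(max(abs(np.linalg.eigvals(F @ np.linalg.inv(V)))))

# Individual patch reproduction numbers (secondary cases while in that patch)
patch_R0 = beta / np.diag(V)       # [1.6, 1.0, 0.75]
```

In this one-directional chain the system R0 coincides with the largest patch value, but with two-way migration the spectral radius can differ from every individual patch number, which is why the paper treats the two quantities separately.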
Abstract:
Windows are one of the most significant elements in the design of buildings. Whether they are small punched openings in the facade or a completely glazed curtain wall, windows are usually a dominant feature of the building's exterior appearance. From the energy use perspective, windows may also be regarded as thermal holes in a building. Therefore, window design and selection must take both aesthetics and serviceability into consideration. In this paper, using building computer simulation techniques, the effects of glass types on the thermal and energy performance of a sample air-conditioned office building in Australia are studied. It is found that a glass type with a lower shading coefficient will have a lower building cooling load and total energy use. Through the comparison of results between current and future weather scenarios, it is identified that the pattern found in the current weather scenario would also exist in the future weather scenario, although the scale of change would become smaller. The possible implication of glazing selection in the face of global warming is also examined. It is found that, compared with its influence on building thermal performance, the influence of glazing selection on building energy use is relatively small or insignificant.
Abstract:
Due to their large surface area, complex chemical composition and high alveolar deposition rate, ultrafine particles (UFPs) (< 0.1 µm) pose a significant risk to human health and their toxicological effects have been acknowledged by the World Health Organisation. Since people spend most of their time indoors, there is a growing concern about the UFPs present in some indoor environments. Recent studies have shown that office machines, in particular laser printers, are a significant indoor source of UFPs. The majority of printer-generated UFPs are organic carbon and it is unlikely that these particles are emitted directly from the printer or its supplies (such as paper and toner powder). Thus, it was hypothesised that these UFPs are secondary organic aerosols (SOA). Considering the widespread use of printers and human exposure to these particles, understanding the processes involved in particle formation is of critical importance. However, few studies have investigated the nature (e.g. volatility, hygroscopicity, composition, size distribution and mixing state) and formation mechanisms of these particles. In order to address this gap in scientific knowledge, a comprehensive study including state-of-the-art instrumental methods was conducted to characterise the real-time emissions from modern commercial laser printers, including particles, volatile organic compounds (VOCs) and ozone (O3). The morphology, elemental composition, volatility and hygroscopicity of generated particles were also examined. The large set of experimental results was analysed and interpreted to provide insight into: (1) Emissions profiles of laser printers: The results showed that UFPs dominated the number concentrations of generated particles, with a quasi-unimodal size distribution observed for all tests. These particles were volatile, non-hygroscopic and mixed both externally and internally.
Particle microanalysis indicated that semi-volatile organic compounds occupied the dominant fraction of these particles, with only trace quantities of particles containing Ca and Fe. Furthermore, almost all laser printers tested in this study emitted measurable concentrations of VOCs and O3. A positive correlation between submicron particle and O3 concentrations, and a contrasting negative correlation between submicron particle and total VOC concentrations, were observed during printing for all tests. These results proved that UFPs generated from laser printers are mainly SOAs. (2) Sources and precursors of generated particles: In order to identify the possible particle sources, particle formation potentials of both the printer components (e.g. fuser roller and lubricant oil) and supplies (e.g. paper and toner powder) were investigated using furnace tests. The VOCs emitted during the experiments were sampled and identified to provide information about particle precursors. The results suggested that all of the tested materials had the potential to generate particles upon heating. Nine unsaturated VOCs were identified from the emissions produced by paper and toner, which may contribute to the formation of UFPs through oxidation reactions with ozone. (3) Factors influencing the particle emission: The factors influencing particle emissions were also investigated by comparing two popular laser printers, one showing particle emissions three orders of magnitude higher than the other. The effects of toner coverage, printing history, type of paper and toner, and working temperature of the fuser roller on particle number emissions were examined. The results showed that the temperature of the fuser roller was a key factor driving the emission of particles.
Based on the results for 30 different types of laser printers, a systematic positive correlation was observed between temperature and particle number emissions for printers that used the same heating technology and had a similar structure and fuser material. It was also found that temperature fluctuations were associated with intense bursts of particles and may therefore have an impact on particle emissions. Furthermore, the results indicated that the type of paper and toner powder contributed to particle emissions, while no apparent relationship was observed between toner coverage and levels of submicron particles. (4) Mechanisms of SOA formation, growth and ageing: The overall hypothesis that UFPs are formed by reactions between the VOCs and O3 emitted from laser printers was examined. The results proved this hypothesis and suggested that O3 may also play a role in particle ageing. In addition, knowledge about the mixing state of generated particles was utilised to explore the detailed processes of particle formation for different printing scenarios, including warm-up, normal printing, and printing without toner. The results indicated that polymerisation may have occurred on the surface of the generated particles to produce thermoplastic polymers, which may account for the expandable characteristics of some particles. Furthermore, toner and other particle residues on the idling belt from previous print jobs were a very clear contributing factor in the formation of laser printer-emitted particles. In summary, this study not only improves scientific understanding of the nature of printer-generated particles, but also provides significant insight into the formation and ageing mechanisms of SOAs in the indoor environment. The outcomes will also be beneficial to governments, industry and individuals.
Abstract:
This paper describes a vision-based airborne collision avoidance system developed by the Australian Research Centre for Aerospace Automation (ARCAA) under its Dynamic Sense-and-Act (DSA) program. We outline the system architecture and the flight testing undertaken to validate the system performance under realistic collision course scenarios. The proposed system could be implemented in either manned or unmanned aircraft, and represents a step forward in the development of a “sense-and-avoid” capability equivalent to human “see-and-avoid”.
Abstract:
The purpose of this paper is to advance our understanding of what contextual factors influence the service bundling process in an organizational setting. Although previous literature contains insights into the mechanisms underlying bundling and the artefacts for performing the bundling task itself, the body of knowledge seems to lack a comprehensive framework for analysing the actual scenario in which the bundling process is performed. This is required, as the scenario will influence the bundling method and the IT support. We address this need by designing a morphological box for analysing bundling scenarios in different organizational settings. The factors featured in the box are systematised into a set of four categories of bundling layers which we identified from a review of the literature. The two core layers in the framework are service bundling on a type level and on an instance level (i.e. configuration). To demonstrate the applicability and utility of the proposed morphological box, we apply it to assess the underlying differences and commonalities of two different bundling scenarios from the B2B and G2C sectors, which stress the differences between bundling on a type and an instance level. In addition, we identify several prospects for future research that can benefit from the proposed morphological box.
Abstract:
In the long term, with development of skill, knowledge, exposure and confidence within the engineering profession, rigorous analysis techniques have the potential to become a reliable and far more comprehensive method for design and verification of the structural adequacy of OPS, write Nimal J Perera, David P Thambiratnam and Brian Clark. This paper explores the potential to enhance operator safety of self-propelled mechanical plant subjected to roll-over and impact of falling objects using the non-linear and dynamic response simulation capabilities of analytical processes to supplement quasi-static testing methods prescribed in International and Australian Codes of Practice for bolt-on Operator Protection Systems (OPS) that are post-fitted. The paper is based on research work carried out by the authors at the Queensland University of Technology (QUT) over a period of three years by instrumentation of prototype tests, scale model tests in the laboratory and rigorous analysis using validated Finite Element (FE) models. The FE codes used were ABAQUS for implicit analysis and LS-DYNA for explicit analysis. The rigorous analysis and dynamic simulation technique described in the paper can be used to investigate the structural response due to accident scenarios such as multiple roll-overs, impact of multiple objects and combinations of such events, and thereby enhance the safety and performance of Roll Over and Falling Object Protection Systems (ROPS and FOPS). The analytical techniques are based on sound engineering principles and well-established practice for investigation of dynamic impact on all self-propelled vehicles. They are used for many other similar applications where experimental techniques are not feasible.
Abstract:
This thesis presents a design investigation into how traditional technology-orientated markets can use design led innovation (DLI) strategies in order to achieve better market penetration of disruptive products. In a review of the Australian livestock industry, considering historical information and present-day trends, a lack of socio-cultural consideration was identified in the design and implementation of products and systems previously taken to market. Hence, the adoption of these novel products has been documented as extremely slow. Classical diffusion models have typically been used to implement these products. However, this thesis posits that it is through the strategic intent of design led innovation that heavily technology-orientated markets (such as the Australian livestock industry) can achieve better final adoption rates. By considering a range of external factors (business models, technology and user needs), rather than focusing design efforts solely on the technology, it is argued that a DLI approach will lead to disruptive innovations being easier to adopt in the Australian livestock industry. This thesis therefore explored two research questions: 1. What are the social inhibitors to the adoption of a new technology in the Australian livestock industry? 2. Can design be used to gain a significant feedback response to the proposed innovation? In order to answer these questions, this thesis used a design led innovation approach to investigate the livestock industry, centring on how design can be used early in the development of disruptive products being taken to market. This thesis used a three-stage data collection programme, combining methods of design thinking, co-design and participatory design. The first study found four key themes in the social barriers to technology adoption: Social attitudes to innovation, Market monitoring, Attitude to 3D imaging and Online processes.
These themes were built upon through a design thinking/co-design approach to create three ‘future scenarios’ to be tested in participant workshops. The analysis of the data collection found four key socio-cultural barriers that inhibited the adoption of a disruptive innovation in the Australian livestock industry: Lack of Education, Culture of Innovation, Lack of Engagement and Communication Barriers. This thesis recommends five key areas to be focused upon in the subsequent design of a new product in the Australian livestock industry. These recommendations are made to business and design managers looking to introduce disruptive innovations in this industry. Moreover, the thesis presents three design implications relating to stakeholder attitudes, practical constraints and technological restrictions of innovations within the industry.