937 results for Multiple scale
Abstract:
This thesis is a comparative study of the modelling of the mechanical behaviour of the F-actin cytoskeleton, an important structural component of living cells. A new granular model of the F-actin cytoskeleton was developed based on the concept of multiscale modelling. This framework overcomes the difficulties encountered in conventional continuum mechanics models of the cytoskeleton, as well as the computational cost of all-atom molecular dynamics simulation. The thermostat algorithm was further modified to better predict the thermodynamic properties of the F-actin cytoskeleton. This multiscale modelling framework was applied to explain the physical mechanisms by which the cytoskeleton responds to external mechanical loads.
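A minimal sketch of the kind of thermostat update a particle-based cytoskeleton model might use; the thesis does not specify the algorithm, so a standard Langevin thermostat is assumed here purely for illustration, and every name and parameter below is hypothetical:

```python
import numpy as np

def langevin_velocity_step(v, f, m, dt, gamma, kT, rng):
    # One Langevin-thermostat velocity update (illustrative sketch, not the
    # thesis's modified algorithm). v, f: (N, 3) velocities and forces;
    # m: (N, 1) masses; gamma: friction; kT: target thermal energy.
    c = np.exp(-gamma * dt)  # friction decay over one timestep
    # Noise amplitude from fluctuation-dissipation balance, so the
    # stationary velocity distribution corresponds to temperature kT.
    sigma = np.sqrt(kT * (1.0 - c**2) / m)
    return c * v + (1.0 - c) * f / (gamma * m) + sigma * rng.standard_normal(v.shape)

rng = np.random.default_rng(0)
v = langevin_velocity_step(np.zeros((4, 3)), np.ones((4, 3)),
                           np.ones((4, 1)), dt=1e-3, gamma=1.0, kT=1.0, rng=rng)
```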
Abstract:
The autonomous capabilities of collaborative unmanned aircraft systems are growing rapidly. Without appropriate transparency, the effectiveness of the future multiple Unmanned Aerial Vehicle (UAV) management paradigm will be significantly limited by the human agent's cognitive abilities, with the operator's Cognitive Workload (CW) and Situation Awareness (SA) becoming disproportionate. This poses a challenge in evaluating the impact of robot autonomous capability feedback, which gives the human agent, in a supervisory role, greater transparency into the robot's autonomous status. This paper presents the motivation, aim, related work, experiment theory, methodology, results and discussion, and the future work succeeding this preliminary study. The results illustrate that, with greater transparency of a UAV's autonomous capability, an overall improvement in the subjects' cognitive abilities was evident: at the 95% confidence level, the test subjects' mean CW showed a statistically significant reduction, while their mean SA showed a statistically significant increase.
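The abstract does not name the statistical test behind the 95% confidence claim; as a hedged sketch, a paired t-test at the 5% level is assumed below, with entirely hypothetical data:

```python
from scipy import stats

# Hypothetical per-subject mean cognitive workload (CW) scores under
# low- and high-transparency conditions (illustrative numbers only).
cw_low_transparency = [72, 68, 75, 80, 66, 71, 77, 69]
cw_high_transparency = [61, 60, 70, 72, 58, 65, 69, 62]

# Paired t-test: is the reduction in mean CW statistically significant?
t_stat, p_value = stats.ttest_rel(cw_low_transparency, cw_high_transparency)
if p_value < 0.05:
    print(f"Significant CW reduction (t = {t_stat:.2f}, p = {p_value:.3f})")
```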
Abstract:
The quality of environmental decisions should be gauged according to managers' objectives. Management objectives generally seek to maximize quantifiable measures of system benefit, for instance, population growth rate. Reaching these goals often requires a certain degree of learning about the system. Learning can occur by using management actions in combination with a monitoring system. Furthermore, actions can be chosen strategically to obtain specific kinds of information. Formal decision-making tools can choose actions to favor such learning in two ways: implicitly, via the optimization algorithm that is used when there is a management objective (for instance, when using adaptive management); or explicitly, by quantifying knowledge and using it as the fundamental project objective, an approach new to conservation. This paper outlines three conservation project objectives: a pure management objective, a pure learning objective, and an objective that is a weighted mixture of the two. We use eight optimization algorithms to choose actions that meet project objectives and illustrate them in a simulated conservation project. The algorithms provide a taxonomy of decision-making tools in conservation management when there is uncertainty surrounding competing models of system function. The algorithms build upon each other such that their differences are highlighted and practitioners may see where their decision-making tools can be improved. © 2010 Elsevier Ltd.
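A minimal sketch of the weighted-mixture objective described above; the paper's eight algorithms are not reproduced, and the function names, actions, and linear weighting are assumptions for illustration:

```python
def mixed_objective(management_value, learning_value, w):
    # Weighted mixture of a management objective and a learning objective.
    # w = 1.0 recovers the pure management objective; w = 0.0 the pure
    # learning objective; intermediate w trades one off against the other.
    assert 0.0 <= w <= 1.0
    return w * management_value + (1.0 - w) * learning_value

def best_action(actions, management, learning, w):
    # Choose the action maximizing the mixed objective (hypothetical helper).
    return max(actions, key=lambda a: mixed_objective(management[a], learning[a], w))

actions = ["burn", "graze", "monitor_only"]
management = {"burn": 1.2, "graze": 1.0, "monitor_only": 0.8}  # e.g. growth rate
learning = {"burn": 0.2, "graze": 0.5, "monitor_only": 0.9}    # e.g. info gained
print(best_action(actions, management, learning, w=0.5))
```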
Abstract:
Self-gifting consumer behaviour (SGCB) is on the rise as consumers seek reward and therapeutic benefits from their shopping experiences. SGCB is defined as personally symbolic self-communication through special indulgences, which tend to be premeditated and highly context-bound. Prior research into the measurement of this growing behavioural phenomenon has been fragmented because of differences in conceptualisation. This research builds upon the prior literature and, through a series of qualitative and quantitative studies, develops a valid, multidimensional measure of SGCB that will be useful for future quantitative inquiry into self-gifting consumption.
Abstract:
Despite being in use since 1976, the Delusions-Symptoms-States Inventory/states of Anxiety and Depression (DSSI/sAD) has not yet been validated for use among people with diabetes. The aim of this study was to examine the validity of the personal disturbance scale (DSSI/sAD) among women with diabetes using Mater-University of Queensland Study of Pregnancy (MUSP) cohort data. The DSSI subscales were compared against DSM-IV disorders, the Mental Component Score of the Short Form 36 (SF-36 MCS), and the Center for Epidemiologic Studies Depression Scale (CES-D). Factor analyses, odds ratios, receiver operating characteristic (ROC) analyses, and diagnostic efficiency tests were used to report findings. Exploratory factor analysis and fit indices confirmed the hypothesized two-factor model of the DSSI/sAD. We found significant variation in the DSSI/sAD domain scores that could be explained by the CES-D (DSSI-Anxiety: 55%, DSSI-Depression: 46%) and the SF-36 MCS (DSSI-Anxiety: 66%, DSSI-Depression: 56%). The DSSI subscales predicted DSM-IV-diagnosed depression and anxiety disorders. The ROC analyses show that, although the DSSI symptoms and DSM-IV disorders were measured concurrently, the estimates of concordance remained only moderate. The findings demonstrate that the DSSI/sAD items have similar relationships to one another in both the diabetes and non-diabetes data sets, suggesting that they have similar interpretations in both groups.
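A minimal sketch of the kind of ROC analysis reported above; scikit-learn and the data below are assumptions for illustration, not the study's actual materials:

```python
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical data: DSM-IV depression diagnosis (0/1) against DSSI
# depression subscale scores for ten participants.
dsm_iv_depression = [0, 0, 1, 0, 1, 1, 0, 1, 0, 1]
dssi_depression_score = [2, 1, 6, 3, 5, 7, 2, 4, 1, 6]

# Area under the ROC curve: how well the DSSI subscale discriminates cases.
auc = roc_auc_score(dsm_iv_depression, dssi_depression_score)
fpr, tpr, thresholds = roc_curve(dsm_iv_depression, dssi_depression_score)
print(f"AUC = {auc:.2f}")  # 0.5 = chance level, 1.0 = perfect concordance
```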
Abstract:
Water-to-air methane emissions from freshwater reservoirs can be dominated by sediment bubbling (ebullitive) events. Previous work to quantify methane bubbling from a number of Australian sub-tropical reservoirs has shown that this pathway can contribute as much as 95% of total emissions. These bubbling events are controlled by a variety of factors, including water depth, surface and internal waves, wind seiching, atmospheric pressure changes, and water level changes. Key to quantifying the magnitude of this emission pathway is estimating both the bubbling rate and the areal extent of bubbling. Both are seldom constant and require persistent monitoring over extended time periods before true estimates can be generated. In this paper we present a novel system for persistent monitoring of both bubbling rate and areal extent, using multiple robotic surface chambers and adaptive sampling (grazing) algorithms to automate the quantification process. Individual chambers are self-propelled and self-guided and communicate with each other without the need for supervised control. They can maintain station at a sampling site for a desired incubation period and continuously monitor, record, and report fluxes during the incubation. To exploit the methane sensor's detection capabilities, a chamber can be automatically lowered to decrease the headspace and increase the concentration. The grazing algorithms assign a hierarchical order to chambers within a preselected zone. Chambers then converge on the individual recording the highest 15-minute bubbling rate. Individuals maintain a specified distance from each other during each sampling period, after which all individuals move to new locations chosen by a sampling algorithm (systematic or adaptive) that exploits prior measurements. This system has been field-tested on a large subtropical reservoir, Little Nerang Dam, over monthly timescales. Using this technique, localised bubbling zones on the water storage were found to produce over 50,000 mg m⁻² d⁻¹, and the areal extent ranged from 1.8 to 7% of the total reservoir area. The drivers behind these changes, as well as lessons learnt from the system implementation, are presented. The system exploits relatively cheap materials, sensing, and computing, and can be applied to a wide variety of aquatic and terrestrial systems.
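A minimal sketch of the convergence step of such a grazing algorithm; the paper's actual control logic is not reproduced, and the structure and names below are assumptions for illustration:

```python
import math

def grazing_step(positions, rates, min_separation, step):
    # Move each chamber one step toward the chamber recording the highest
    # 15-minute bubbling rate, while keeping a minimum separation.
    # positions: {chamber_id: (x, y)}; rates: {chamber_id: bubbling rate}.
    leader = max(rates, key=rates.get)      # hierarchical order: top rate leads
    lx, ly = positions[leader]
    new_positions = {}
    for cid, (x, y) in positions.items():
        if cid == leader:
            new_positions[cid] = (x, y)     # leader maintains station
            continue
        dx, dy = lx - x, ly - y
        dist = math.hypot(dx, dy)
        if dist > min_separation:           # converge only while separated
            x += step * dx / dist
            y += step * dy / dist
        new_positions[cid] = (x, y)
    return new_positions

positions = {"c1": (0.0, 0.0), "c2": (40.0, 10.0), "c3": (10.0, 50.0)}
rates = {"c1": 120.0, "c2": 3400.0, "c3": 85.0}  # hypothetical mg m-2 d-1
print(grazing_step(positions, rates, min_separation=5.0, step=2.0))
```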
Abstract:
Player experiences and expectations are connected. The presumptions players have about how they control their gameplay interactions may shape the way they play and perceive videogames. A successfully engaging player experience might rest on the way controllers meet players' expectations. We studied player interaction with novel controllers on the Sony PlayStation Wonderbook, an augmented reality (AR) gaming system. Our goal was to understand player expectations regarding game controllers in AR game design. Based on this preliminary study, we propose several interaction guidelines for hybrid input from both augmented reality and physical game controllers.
Abstract:
PURPOSE: This paper describes dynamic agent composition, used to support the development of flexible and extensible large-scale agent-based models (ABMs). This approach was motivated by a need to extend and modify, with ease, an ABM with an underlying networked structure as more information becomes available. Flexibility was also sought so that simulations can be set up with ease, without the need to program. METHODS: The dynamic agent composition approach consists of having agents, whose implementation has been broken into atomic units, come together at runtime to form the complex system representation on which simulations are run. These components capture information at a fine level of detail and provide a vast range of combinations and options for a modeller creating ABMs. RESULTS: A description of dynamic agent composition is given in this paper, as well as details of its implementation within MODAM (MODular Agent-based Model), a software framework applied to the planning of the electricity distribution network. Illustrations of the implementation of dynamic agent composition are given for that domain throughout the paper. It is, however, expected that this approach will be beneficial to other problem domains, especially those with a networked structure, such as water or gas networks. CONCLUSIONS: Dynamic agent composition has many advantages over the way agent-based models are traditionally built, for users and developers, as well as for agent-based modelling as a scientific approach. Developers can extend the model without the need to access or modify previously written code; they can develop groups of entities independently and add them to those already defined to extend the model. Users can mix and match already-implemented components to form large-scale ABMs, allowing them to quickly set up simulations and easily compare scenarios without the need to program. Dynamic agent composition provides a natural simulation space over which ABMs of networked structures are represented, facilitating their implementation; verification and validation of models are also facilitated by the ability to quickly set up alternative simulations.
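A minimal sketch of the runtime-composition idea described above; the class and method names are hypothetical illustrations, not MODAM's actual API:

```python
class Component:
    # An atomic unit of agent behaviour (hypothetical base class).
    def step(self, agent, world):
        raise NotImplementedError

class Agent:
    # An agent assembled at runtime from atomic components.
    def __init__(self, *components):
        self.components = list(components)

    def add(self, component):
        # Extend the agent without modifying previously written code.
        self.components.append(component)

    def step(self, world):
        for c in self.components:
            c.step(self, world)

class ConsumeElectricity(Component):
    # Example atomic behaviour: add household demand to the network load.
    def step(self, agent, world):
        world["load_kw"] = world.get("load_kw", 0.0) + 1.5

# Mix and match components at runtime to form the simulated system.
household = Agent(ConsumeElectricity())
world = {}
household.step(world)
print(world)  # {'load_kw': 1.5}
```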
Abstract:
Linear assets are engineering infrastructure, such as pipelines, railway lines, and electricity cables, that span long distances and can be divided into different segments. Optimal management of such assets is critical for asset owners, as the assets normally involve significant capital investment. Currently, Time Based Preventive Maintenance (TBPM) strategies are commonly used in industry to improve the reliability of such assets, as they are easy to implement compared with reliability- or risk-based preventive maintenance strategies. Linear assets are normally of large scale, and their preventive maintenance is therefore costly. Their owners and maintainers are always seeking to optimize TBPM outcomes by minimizing total expected costs over a long horizon involving multiple maintenance cycles. These costs include repair costs, preventive maintenance costs, and production losses. A TBPM strategy defines when Preventive Maintenance (PM) starts, how frequently PM is conducted, and which segments of a linear asset are operated on in each PM action. A number of factors, such as required minimal mission time, customer satisfaction, human resources, and acceptable risk levels, need to be considered when planning such a strategy. However, in current practice, TBPM decisions are often made based on decision makers' expertise or historical industry practice, and lack a systematic analysis of the effects of these factors. To address this issue, we investigate the characteristics of TBPM for linear assets and develop an effective multiple-criteria decision-making approach for determining an optimal TBPM strategy. We develop a recursive optimization equation which makes it possible to evaluate the effect of different maintenance options for linear assets, such as the best partitioning of the asset into segments and the maintenance cost per segment.
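A minimal sketch of the kind of recursive cost evaluation described above; the paper's actual recursive optimization equation is not reproduced, and the cost structure, constants, and names below are assumptions for illustration:

```python
from functools import lru_cache

N_SEGMENTS = 10      # hypothetical linear asset divided into 10 segments
WEAR_PER_CYCLE = 2   # segments assumed to degrade to 'worn' each cycle

def pm_cost(k):
    return 100.0 * k   # preventive maintenance cost for k segments

def repair_cost(w):
    return 180.0 * w   # expected repair + production loss for w worn segments

@lru_cache(maxsize=None)
def min_cost(cycles_left, worn):
    # Minimum expected cost over the remaining maintenance cycles.
    # State: number of worn segments; decision: how many to maintain now.
    if cycles_left == 0:
        return repair_cost(worn)   # terminal penalty for leaving segments worn
    best = float("inf")
    for k in range(worn + 1):      # maintain k of the worn segments this cycle
        still_worn = worn - k
        next_worn = min(N_SEGMENTS, still_worn + WEAR_PER_CYCLE)
        cost = (pm_cost(k) + repair_cost(still_worn)
                + min_cost(cycles_left - 1, next_worn))
        best = min(best, cost)
    return best

print(min_cost(cycles_left=6, worn=4))
```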
Abstract:
Epigenetic changes correspond to heritable modifications of chromosome structure which do not involve alteration of the DNA sequence but do affect gene expression. These mechanisms play an important role in normal cell differentiation, but their aberration is also associated with several diseases, including cancer and neural disorders. Consequently, despite intensive study in recent years, the contribution of such modifications remains largely unquantified due to overall system complexity and insufficient data. Computational models can provide powerful auxiliary tools to experimentation, not least because they can span scales from the sub-cellular through cell populations (or networks of genes). In this paper, the challenges in developing realistic cross-scale models are discussed and illustrated with respect to current work.
Abstract:
This thesis in software engineering presents a novel automated framework for identifying similar operations used by multiple algorithms that solve related computing problems. It provides a new, effective way to perform multi-application algorithm analysis, employing fundamentally lightweight static analysis techniques compared with state-of-the-art approaches. Significant performance improvements are achieved across the target algorithms by enhancing the efficiency of the identified similar operations, targeting discrete application domains.
Abstract:
The purpose of this study was to examine the main and interactive effects of four dimensions of professional commitment on strain (i.e., depression, anxiety, perceived health status, and job dissatisfaction) for a sample of 176 law professionals. The study utilized a two-wave design in which professional commitment and strain were measured at Time 1, and strain was measured again at Time 2 (T2), two months later. A significant two-way interaction indicated that high affective commitment was related to less T2 job dissatisfaction only for lawyers with low accumulated costs. A significant four-way interaction indicated that high affective professional commitment was related to fewer symptoms of T2 anxiety only for lawyers with high normative professional commitment and both low limited alternatives and low accumulated costs. A similar pattern of results emerged for T2 perceived health status. The theoretical and practical implications of these results for career counselors are discussed.
Abstract:
QUT has enacted a university-wide Peer Programs Strategy which aims to improve student success and graduate outcomes. A component of this strategy is a training model providing relevant, quality-assured and timely training for all students who take on leadership roles. The training model is designed to meet the needs of the growing scale and variety of peer programs, and to recognise the multiple roles and programs in which students may be involved during their peer-leader journey. The model builds peer-leader capacity by offering centralised beginning and ongoing training modules, delivered by in-house providers, covering topics which prepare students to perform their role safely, inclusively, accountably and skilfully. The model also provides efficiencies by differentiating between 'core competency' and 'program-specific' modules, thus avoiding training duplication across multiple programs and enabling training to be individually and flexibly formatted to suit the specific and unique needs of each program.
Abstract:
The world and its peoples are facing multiple, complex challenges and we cannot continue as we are (Moss, 2010). Earth's "natural capital" - nature's ability to provide essential ecosystem services that stabilize world climate systems, maintain water quality, support secure food production, supply energy needs, moderate environmental impacts, and ensure social harmony and equity - is seriously compromised (Gough, 2005; Hawkins, Lovins & Lovins, 1999). In short, current rates of resource consumption by the global human population are unsustainable (Kitzes, Peller, Goldfinger & Wackernagel, 2007), both for human and non-human species and for future generations. Further, continuing growth in world population and the global political commitment to growth economics compound these demands. Despite growing recognition of the serious consequences for people and planet, little consideration is given, within most nations, to the social and environmental issues that economic growth brings. For example, Australia is recognised as one of the developed countries most vulnerable to the impacts of climate change. Yet, to date, responses (such as carbon pricing) have been small-scale and fragmented, and their worth has been disputed, even ridiculed. This is at a time referred to as 'the critical decade' (Hughes & McMichael, 2011), when the world's peoples must make strong choices if we are to avert the worst impacts of climate change.
Abstract:
Pilot- and industrial-scale dilute acid pretreatment data can be difficult to obtain due to the significant infrastructure investment required. Consequently, models of dilute acid pretreatment must, of necessity, use laboratory-scale data to determine kinetic parameters and make predictions about optimal pretreatment conditions at larger scales. For these recommendations to be meaningful, the ability of laboratory-scale models to predict pilot- and industrial-scale yields must be investigated. A mathematical model of the dilute acid pretreatment of sugarcane bagasse has previously been developed by the authors. This model successfully reproduced the experimental yields of xylose and short-chain xylooligomers obtained at the laboratory scale. In this paper, the ability of the model to reproduce pilot-scale yield and composition data is examined. It was found that, in general, the model over-predicted the pilot-scale reactor yields by a significant margin. Models that appear very promising at the laboratory scale may have limitations when predicting yields at pilot or industrial scale. It is difficult to comment on whether there are any consistent trends in optimal operating conditions between reactor-scale and laboratory-scale hydrolysis due to the limited reactor datasets available. Further investigation is needed to determine whether the model has some efficacy when the kinetic parameters are re-evaluated by fitting to reactor-scale data; however, this requires the compilation of larger datasets. Alternatively, laboratory-scale mathematical models may have enhanced utility for predicting larger-scale reactor performance if bulk mass transport and fluid flow considerations are incorporated into the fibre-scale equations. This work reinforces the need for appropriate attention to be paid to pilot-scale experimental development when moving from laboratory to pilot and industrial scales for new technologies.
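A minimal sketch of the kind of first-order hydrolysis kinetics on which such laboratory-scale pretreatment models are commonly built; the authors' actual model is not reproduced, and the rate constants below are hypothetical:

```python
from scipy.integrate import solve_ivp

# Hypothetical first-order rate constants (1/min) at fixed temperature
# and acid loading (illustrative values only).
K_XYLAN_TO_OLIGOMER = 0.05   # xylan -> short-chain xylooligomers
K_OLIGOMER_TO_XYLOSE = 0.08  # xylooligomers -> xylose
K_XYLOSE_DEGRADATION = 0.01  # xylose -> degradation products

def rhs(t, y):
    xylan, oligomer, xylose = y
    return [
        -K_XYLAN_TO_OLIGOMER * xylan,
        K_XYLAN_TO_OLIGOMER * xylan - K_OLIGOMER_TO_XYLOSE * oligomer,
        K_OLIGOMER_TO_XYLOSE * oligomer - K_XYLOSE_DEGRADATION * xylose,
    ]

# Integrate over a 60-minute pretreatment, starting from pure xylan.
sol = solve_ivp(rhs, (0.0, 60.0), [1.0, 0.0, 0.0])
print(f"Predicted xylose yield after 60 min: {sol.y[2][-1]:.2f}")
```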