957 results for Chemo-spectrophotometric evolution models
Abstract:
A range of authors from the risk management, crisis management, and crisis communications literature have proposed different models as a means of understanding components of crisis. A common element across these sources has been a focus on preparedness practices before disturbance events and response practices during events. This paper provides a critical analysis of three key explanatory models of how crises escalate, highlighting the strengths and limitations of each approach. The paper introduces an optimised conceptual model utilising components from the previous work under the four phases of pre-event, response, recovery, and post-event. Within these four phases, a ten-step process is introduced that can enhance understanding of the progression of distinct stages of disturbance for different types of events. This crisis evolution framework is examined as a means to provide clarity and applicability to a range of infrastructure failure contexts and a path for further empirical investigation in this area.
Abstract:
Guardianship laws in most Western societies provide decision-making mechanisms for adults with impaired capacity. Since the inception of these laws, the principle of autonomy and recognition of human rights for those coming within guardianship regimes has gained prominence. A new legal model has emerged, which seeks to incorporate ‘assisted decision-making’ models into guardianship laws. Such models legally recognise that an adult’s capacity may be maintained through assistance or support provided by another person, and provide formal recognition of the person in that ‘assisting’ role. This article situates this latest legal innovation within a historical context, examining the social and legal evolution of guardianship laws and determining whether modern assisted decision-making models remain consistent with guardianship reform thus far. It identifies and critically analyses the different assisted decision-making models which exist internationally. Finally, it discusses a number of conceptual, legal and practical concerns that remain unresolved. These issues require serious consideration before assisted decision-making models are adopted in guardianship regimes in Australia.
Abstract:
This dissertation seeks to define and classify potential forms of Nonlinear structure and explore the possibilities they afford for the creation of new musical works. It provides the first comprehensive framework for the discussion of Nonlinear structure in musical works and provides a detailed overview of the rise of nonlinearity in music during the 20th century. Nonlinear events are shown to emerge through significant parametrical discontinuity at the boundaries between regions of relatively strong internal cohesion. The dissertation situates Nonlinear structures in relation to linear structures and unstructured sonic phenomena and provides a means of evaluating Nonlinearity in a musical structure through the consideration of the degree to which the structure is integrated, contingent, compressible and determinate as a whole. It is proposed that Nonlinearity can be classified as a three-dimensional space described by three continua: the temporal continuum, encompassing sequential and multilinear forms of organization; the narrative continuum, encompassing processual, game structure and developmental narrative forms; and the referential continuum, encompassing stylistic allusion, adaptation and quotation. The use of spectrograms of recorded musical works is proposed as a means of evaluating Nonlinearity in a musical work through the visual representation of parametrical divergence in pitch, duration, timbre and dynamic over time. Spectral and structural analysis of repertoire works is undertaken as part of an exploration of musical nonlinearity and the compositional and performative features that characterize it. The contribution of cultural, ideological, scientific and technological shifts to the emergence of Nonlinearity in music is discussed and a range of compositional factors that contributed to the emergence of musical Nonlinearity is examined. The evolution of notational innovations from the mobile score to the screen score is plotted and a novel framework for the discussion of these forms of musical transmission is proposed. A computer-coordinated performative model is discussed, in which a computer synchronises screening of notational information, provides temporal coordination of the performers through click-tracks or similar methods and synchronises the audio processing and synthesized elements of the work. It is proposed that such a model constitutes a highly effective means of realizing complex Nonlinear structures. A creative folio comprising 29 original works that explore nonlinearity is presented, discussed and categorised utilising the proposed classifications. Spectrograms of these works are employed where appropriate to illustrate the instantiation of parametrically divergent substructures and examples of structural openness through multiple versioning.
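To illustrate the spectrogram-based evaluation proposed above, the following minimal sketch renders a time-frequency image in which parametric divergence in pitch, timbre and dynamics appears as visible discontinuities between sections. The file name "work.wav", the SciPy/Matplotlib toolchain and the window settings are illustrative assumptions, not part of the dissertation.

```python
# Minimal sketch: render a spectrogram of a recorded work so that abrupt
# parametric discontinuities between sections show up as vertical seams.
# "work.wav" and the STFT parameters are illustrative placeholders.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

fs, audio = wavfile.read("work.wav")          # sample rate and samples
if audio.ndim > 1:                            # fold stereo to mono
    audio = audio.mean(axis=1)

f, t, Sxx = spectrogram(audio, fs, nperseg=4096, noverlap=3072)
Sxx_db = 10 * np.log10(Sxx + 1e-12)           # log scale for perceptual relevance

plt.pcolormesh(t, f, Sxx_db, shading="gouraud")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("Spectrogram: discontinuities mark candidate nonlinear boundaries")
plt.colorbar(label="Power (dB)")
plt.show()
```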
Abstract:
Agent-based modelling (ABM), like other modelling techniques, is used to answer specific questions about real-world systems that would otherwise be expensive or impractical to answer. Its recent gain in popularity can be attributed to some degree to its capacity to use information at a fine level of detail of the system, both geographically and temporally, and generate information at a higher level, where emerging patterns can be observed. This technique is data-intensive, as explicit data at a fine level of detail are used, and computer-intensive, as many interactions between agents, which can learn and have a goal, are required. With the growing availability of data and the increase in computer power, these concerns are, however, fading. Nonetheless, being able to update or extend the model as more information becomes available can become problematic, because of the tight coupling of the agents and their dependence on the data, especially when modelling very large systems. One large system to which ABM is currently applied is electricity distribution, where thousands of agents representing the network and the consumers’ behaviours interact with one another. A framework that aims at answering a range of questions regarding the potential evolution of the grid has been developed and is presented here. It uses agent-based modelling to represent the engineering infrastructure of the distribution network and has been built with flexibility and extensibility in mind. What distinguishes the method presented here from the usual ABMs is that this ABM has been developed in a compositional manner. This encompasses not only the software tool, whose core is named MODAM (MODular Agent-based Model), but also the model itself. Using such an approach enables the model to be extended as more information becomes available or modified as the electricity system evolves, leading to an adaptable model. Two well-known modularity principles in the software engineering domain are information hiding and separation of concerns. These principles were used to develop the agent-based model on top of OSGi and Eclipse plugins, which have good support for modularity. Information regarding the model entities was separated into a) assets, which describe the entities’ physical characteristics, and b) agents, which describe their behaviour according to their goal and previous learning experiences. This approach diverges from the traditional approach where both aspects are often conflated. It has many advantages in terms of reusability of one or the other aspect for different purposes as well as composability when building simulations. For example, the way an asset is used on a network can greatly vary while its physical characteristics are the same – this is the case for two identical battery systems whose usage will vary depending on the purpose of their installation. While any battery can be described by its physical properties (e.g. capacity, lifetime, and depth of discharge), its behaviour will vary depending on who is using it and what their aim is. The model is populated using data describing both aspects (physical characteristics and behaviour) and can be updated as required depending on what simulation is to be run. For example, data can be used to describe the environment to which the agents respond – e.g. weather for solar panels, or to describe the assets and their relation to one another – e.g. the network assets.
Finally, when running a simulation, MODAM calls on its module manager, which coordinates the different plugins, automates the creation of the assets and agents using factories, and schedules their execution, which can be done sequentially or in parallel for faster execution. Building agent-based models in this way has proven fast when adding new complex behaviours, as well as new types of assets. Simulations have been run to understand the potential impact of changes on the network in terms of assets (e.g. installation of decentralised generators) or behaviours (e.g. response to different management aims). While this platform has been developed within the context of a project focussing on the electricity domain, the core of the software, MODAM, can be extended to other domains such as transport, which is part of future work with the addition of electric vehicles.
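The separation of physical assets from behavioural agents described above can be illustrated with a short sketch. This is not the MODAM API (which is built on Java, OSGi and Eclipse plugins); the classes, attributes and the peak-shaving/self-consumption behaviours below are hypothetical, chosen only to show how two identical battery assets can be driven by different agents.

```python
# Illustrative sketch of the asset/agent separation described above.
# Not the MODAM API; class and attribute names are hypothetical.
from dataclasses import dataclass

@dataclass
class BatteryAsset:
    """Physical characteristics only: identical for identical hardware."""
    capacity_kwh: float
    depth_of_discharge: float
    lifetime_cycles: int

class PeakShavingAgent:
    """Behaviour: discharge the battery when demand exceeds a threshold."""
    def __init__(self, asset: BatteryAsset, demand_threshold_kw: float):
        self.asset = asset
        self.threshold = demand_threshold_kw
        self.state_of_charge = asset.capacity_kwh  # start full

    def step(self, demand_kw: float, dt_h: float = 1.0) -> float:
        """Return the power (kW) supplied by the battery this time step."""
        usable = self.state_of_charge - (1 - self.asset.depth_of_discharge) * self.asset.capacity_kwh
        if demand_kw > self.threshold and usable > 0:
            supplied = min(demand_kw - self.threshold, usable / dt_h)
            self.state_of_charge -= supplied * dt_h
            return supplied
        return 0.0

class SelfConsumptionAgent:
    """Alternative behaviour for the *same* asset: store local solar surplus."""
    def __init__(self, asset: BatteryAsset):
        self.asset = asset
        self.state_of_charge = 0.0

    def step(self, solar_surplus_kw: float, dt_h: float = 1.0) -> float:
        stored = min(solar_surplus_kw * dt_h, self.asset.capacity_kwh - self.state_of_charge)
        self.state_of_charge += stored
        return stored

# Two identical battery assets, used differently depending on the agent attached.
battery = BatteryAsset(capacity_kwh=10.0, depth_of_discharge=0.8, lifetime_cycles=5000)
print(PeakShavingAgent(battery, demand_threshold_kw=4.0).step(demand_kw=6.0))
print(SelfConsumptionAgent(battery).step(solar_surplus_kw=3.0))
```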
Abstract:
Many model-based investigation techniques, such as sensitivity analysis, optimization, and statistical inference, require a large number of model evaluations to be performed at different input and/or parameter values. This limits the application of these techniques to models that can be implemented in computationally efficient computer codes. Emulators, by providing efficient interpolation between outputs of deterministic simulation models, can considerably extend the field of applicability of such computationally demanding techniques. So far, the dominant technique for developing emulators has been to use priors in the form of Gaussian stochastic processes (GASP) conditioned on a design data set of inputs and corresponding model outputs. In the context of dynamic models, this approach has two essential disadvantages: (i) these emulators do not consider our knowledge of the structure of the model, and (ii) they run into numerical difficulties if there are a large number of closely spaced input points, as is often the case in the time dimension of dynamic models. To address both of these problems, a new concept for developing emulators for dynamic models is proposed. This concept is based on a prior that combines a simplified linear state space model of the temporal evolution of the dynamic model with Gaussian stochastic processes for the innovation terms as functions of model parameters and/or inputs. These innovation terms are intended to correct the error of the linear model at each output step. Conditioning this prior on the design data set is done by Kalman smoothing. This leads to an efficient emulator that, due to the consideration of our knowledge about dominant mechanisms built into the simulation model, can be expected to outperform purely statistical emulators, at least in cases in which the design data set is small. The feasibility and potential difficulties of the proposed approach are demonstrated by application to a simple hydrological model.
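For context, the baseline approach mentioned above – a Gaussian stochastic process prior conditioned on a design data set and then used for cheap interpolation – can be sketched as follows. The toy simulator, squared-exponential kernel and hyper-parameter values are illustrative assumptions; the paper's own state-space/Kalman-smoothing construction is not reproduced here.

```python
# Minimal sketch of a baseline GASP-style emulator: a Gaussian process prior
# conditioned on a small design data set of (parameter, output) pairs, then used
# to interpolate the simulator output at new parameter values.
# The "simulator" and kernel settings below are illustrative assumptions.
import numpy as np

def simulator(theta):
    """Stand-in for an expensive deterministic model run."""
    return np.sin(3 * theta) + 0.5 * theta

def sq_exp_kernel(a, b, variance=1.0, length=0.3):
    """Squared-exponential covariance between 1-D input vectors a and b."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length) ** 2)

# Design data set: a handful of expensive model evaluations.
theta_design = np.linspace(0.0, 2.0, 8)
y_design = simulator(theta_design)

# Condition the GP prior on the design data (small nugget for stability).
K = sq_exp_kernel(theta_design, theta_design) + 1e-8 * np.eye(len(theta_design))
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_design))

# Emulate (interpolate) at many new parameter values -- cheap to evaluate.
theta_new = np.linspace(0.0, 2.0, 200)
K_star = sq_exp_kernel(theta_new, theta_design)
y_emulated = K_star @ alpha

print("max emulation error:", np.max(np.abs(y_emulated - simulator(theta_new))))
```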
Abstract:
Most existing motorway traffic safety studies using disaggregate traffic flow data aim at developing models for identifying real-time traffic risks by comparing pre-crash and non-crash conditions. One serious shortcoming of those studies is that non-crash conditions are arbitrarily selected and hence not representative: the selected non-crash data might not be properly comparable with the pre-crash data, and the non-crash/pre-crash ratio is arbitrarily decided, neglecting the abundance of non-crash over pre-crash conditions. Here, we present a methodology for developing a real-time MotorwaY Traffic Risk Identification Model (MyTRIM) using individual vehicle data, meteorological data, and crash data. Non-crash data are clustered into groups called traffic regimes. Thereafter, pre-crash data are classified into regimes to match with relevant non-crash data. Of the eight traffic regimes obtained, four highly risky regimes were identified, and three regime-based Risk Identification Models (RIM) with sufficient pre-crash data were developed. MyTRIM memorizes the latest risk evolution identified by RIM to predict near-future risks. Traffic practitioners can decide MyTRIM’s memory size based on the trade-off between detection and false alarm rates. Decreasing the memory size from 5 to 1 increases the detection rate from 65.0% to 100.0% and the false alarm rate from 0.21% to 3.68%. Moreover, critical factors in differentiating pre-crash and non-crash conditions are recognized and usable for developing preventive measures. MyTRIM can be used by practitioners in real time as an independent tool for online decision making, or integrated with existing traffic management systems.
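A schematic sketch of the regime-based structure described in this abstract is given below: non-crash observations are clustered into traffic regimes, pre-crash observations are matched to regimes, and a per-regime risk identification model is fitted. The synthetic data, feature names, k-means/logistic-regression choices and thresholds are hypothetical stand-ins, not the actual MyTRIM implementation.

```python
# Schematic sketch: cluster non-crash traffic observations into regimes, assign
# pre-crash observations to their nearest regime, and fit one risk classifier
# per regime. Data, features and number of regimes are hypothetical placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features per observation: [mean speed, speed variance, density].
non_crash = rng.normal([90, 5, 20], [15, 3, 8], size=(2000, 3))
pre_crash = rng.normal([70, 12, 35], [15, 5, 10], size=(60, 3))

# Step 1: traffic regimes from non-crash data only.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(non_crash)

# Step 2: match pre-crash observations to regimes.
pre_crash_regime = kmeans.predict(pre_crash)

# Step 3: one risk identification model per regime with enough pre-crash data.
rims = {}
for regime in range(8):
    X_nc = non_crash[kmeans.labels_ == regime]
    X_pc = pre_crash[pre_crash_regime == regime]
    if len(X_pc) < 10:            # skip regimes with too few pre-crash samples
        continue
    X = np.vstack([X_nc, X_pc])
    y = np.r_[np.zeros(len(X_nc)), np.ones(len(X_pc))]
    rims[regime] = LogisticRegression(max_iter=1000).fit(X, y)

# Online use: route a new observation to its regime, then score its risk.
obs = np.array([[72.0, 11.0, 33.0]])
regime = int(kmeans.predict(obs)[0])
if regime in rims:
    print("risk score:", rims[regime].predict_proba(obs)[0, 1])
```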
Abstract:
There is a wide variety of drivers for business process modelling initiatives, ranging from business evolution and process optimisation through compliance checking and process certification to process enactment. That, in turn, results in models that differ in content due to serving different purposes. In particular, processes are modelled on different abstraction levels and assume different perspectives. Vertical alignment of process models aims at handling these deviations. While the advantages of such an alignment for inter-model analysis and change propagation are beyond question, a number of challenges still have to be addressed. In this paper, we discuss three main challenges for vertical alignment in detail. Against this background, the potential application of techniques from the field of process integration is critically assessed. Based thereon, we identify specific research questions that guide the design of a framework for model alignment.
Abstract:
This paper addresses the problem of determining optimal designs for biological process models with intractable likelihoods, with the goal of parameter inference. The Bayesian approach is to choose a design that maximises the expected utility, where the utility is a function of the posterior distribution; estimating the utility therefore requires likelihood evaluations. However, many problems in experimental design involve models with intractable likelihoods, that is, likelihoods that are neither analytically available nor computable in a reasonable amount of time. We propose a novel solution using indirect inference (II), a well-established method in the literature, and the Markov chain Monte Carlo (MCMC) algorithm of Müller et al. (2004). Indirect inference employs an auxiliary model with a tractable likelihood in conjunction with the generative model, the assumed true model of interest, which has an intractable likelihood. Our approach is to estimate a map between the parameters of the generative and auxiliary models, using simulations from the generative model. An II posterior distribution is formed to expedite utility estimation. We also present a modification to the utility that allows the Müller algorithm to sample from a substantially sharpened utility surface, with little computational effort. Unlike competing methods, the II approach can handle complex design problems for models with intractable likelihoods on a continuous design space, with possible extension to many observations. The methodology is demonstrated using two stochastic models: a simple tractable death process used to validate the approach, and a motivating stochastic model for the population evolution of macroparasites.
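The indirect-inference mapping step can be sketched for the simple death process mentioned above: simulate from the generative model over a grid of parameter values, fit a tractable auxiliary model to each batch of simulations, and smooth the resulting map. The observation time, parameter grid, Gaussian auxiliary model and polynomial smoother below are illustrative assumptions rather than the paper's exact choices.

```python
# Illustrative sketch of the indirect-inference (II) mapping step: simulate from
# the generative model (a simple death process), fit a tractable auxiliary model,
# and estimate a smooth map from generative to auxiliary parameters.
# Observation time, grid and auxiliary choice are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
N0, t_obs = 50, 1.0                       # initial population, observation time

def simulate_death_process(b, n_sims):
    """Each individual survives to time t_obs with probability exp(-b * t_obs)."""
    return rng.binomial(N0, np.exp(-b * t_obs), size=n_sims)

def fit_auxiliary(counts):
    """Auxiliary model: Gaussian; its tractable MLEs are the sample moments."""
    return np.mean(counts), np.std(counts) + 1e-9

# Estimate the parameter map on a grid of generative parameters b.
b_grid = np.linspace(0.1, 2.0, 30)
aux = np.array([fit_auxiliary(simulate_death_process(b, 500)) for b in b_grid])

# Smooth map from b to auxiliary parameters via low-order polynomial regression.
map_mean = np.polynomial.Polynomial.fit(b_grid, aux[:, 0], deg=3)
map_sd = np.polynomial.Polynomial.fit(b_grid, aux[:, 1], deg=3)

def ii_log_likelihood(y, b):
    """Auxiliary (Gaussian) log likelihood evaluated at the mapped parameters --
    cheap to compute inside a design-optimisation or MCMC loop."""
    mu, sd = map_mean(b), max(map_sd(b), 1e-6)
    return np.sum(-0.5 * ((y - mu) / sd) ** 2 - np.log(sd) - 0.5 * np.log(2 * np.pi))

y_obs = simulate_death_process(0.7, 10)   # pretend these are the observed data
print("II log-likelihood at b=0.7:", ii_log_likelihood(y_obs, 0.7))
print("II log-likelihood at b=1.5:", ii_log_likelihood(y_obs, 1.5))
```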
Abstract:
This paper investigates compressed sensing using hidden Markov models (HMMs) and hence provides an extension of recent single-frame, bounded-error sparse decoding problems into a class of sparse estimation problems containing both temporal evolution and stochastic aspects. It presents two optimal estimators for compressed HMMs. The impact of measurement compression on HMM filtering performance is experimentally examined in the context of an important image-based aircraft target tracking application. Surprisingly, tracking of dim, small-sized targets (as small as 5–10 pixels, with local detectability/SNR as low as −1.05 dB) was only mildly impacted by compressed sensing down to 15% of the original image size.
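A minimal sketch of filtering a hidden Markov target state from compressed measurements is given below. The 1-D scene, single-pixel target template, 15% random-projection compression and noise level are illustrative assumptions, not the estimators or the aircraft-tracking data used in the paper.

```python
# Schematic sketch of HMM filtering from compressed measurements: a target
# occupies one of N positions (the hidden Markov state), the full measurement is
# an N-pixel noisy image, and only M << N random projections are observed.
# Scene size, SNR and compression ratio are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
N, M, T, sigma = 100, 15, 40, 1.0          # pixels, compressed dim (15%), steps, noise

# Transition model: target moves -1/0/+1 pixel with equal probability.
A = np.zeros((N, N))
for i in range(N):
    for j in (i - 1, i, i + 1):
        A[i, j % N] += 1 / 3

# Templates: the full image expected for each target position (one bright pixel).
templates = 3.0 * np.eye(N)

# Compression operator with orthonormal rows, so compressed noise stays white.
Phi = np.linalg.qr(rng.normal(size=(N, M)))[0].T          # shape (M, N)

def forward_filter(Y):
    """HMM forward recursion using compressed observations Y (T x M)."""
    belief = np.full(N, 1.0 / N)
    estimates = []
    for y in Y:
        belief = A.T @ belief                               # predict
        resid = y[None, :] - templates @ Phi.T              # residual per state
        loglik = -0.5 * np.sum(resid ** 2, axis=1) / sigma ** 2
        belief *= np.exp(loglik - loglik.max())             # update
        belief /= belief.sum()
        estimates.append(int(np.argmax(belief)))
    return estimates

# Simulate a track and its compressed, noisy measurements.
states, s = [], 50
for _ in range(T):
    s = (s + rng.integers(-1, 2)) % N
    states.append(s)
Y = np.array([Phi @ (templates[s] + sigma * rng.normal(size=N)) for s in states])

est = forward_filter(Y)
print("mean absolute position error:", np.mean(np.abs(np.array(est) - np.array(states))))
```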
Abstract:
Recent natural disasters such as the Haitian earthquake of 2010, the South East Queensland floods of 2011, the Japanese earthquake and tsunami of 2011 and Hurricane Sandy in the United States of America in 2012 have seen social media platforms changing the face of emergency management communications, not only in times of crisis but also during business-as-usual operations. With social media being such an important and powerful communication tool, especially for emergency management organisations, the question arises as to whether the use of social media in these organisations emerged by considered strategic design or more as a reactive response to a new and popular communication channel. This paper provides insight into how the social media function has been positioned, staffed and managed in organisations throughout the world, with a particular focus on how these factors influence the style of communication used on social media platforms. This study has identified that the social media function falls on a continuum between two polarised models, namely the authoritative one-way communication approach and the more interactive approach that seeks to network and engage with the community through multi-way communication. Factors such as the size of the organisation; dedicated resourcing of the social media function; organisational culture and mission statement; the presence of a social media champion within the organisation; and management style and knowledge about social media play a key role in determining where on the continuum organisations sit in relation to their social media capability. This review, together with a forthcoming survey of Australian emergency management organisations and local governments, will fill a gap in the current body of knowledge about the evolution, positioning and usage of social media in organisations working in the emergency management field in Australia. These findings will be fed back to industry for potential inclusion in future strategies and practices.
Abstract:
Using established strategic management and business model frameworks, we map the evolution of universities in the context of their value proposition to students as consumers of their products. We argue that, in the main, universities have over time transitioned from a value-based business model to an efficiency-based business model that, for numerous reasons, is rapidly becoming unsustainable. We further argue that future university business models would benefit from a reconfiguration towards a network-value-based model. This approach requires a revised set of perceived benefits, better aligned with current and future expectations, and an alternative approach to the delivery of those benefits to learner/consumers.
Abstract:
Phenols are well-known noxious compounds, which are often found in various water sources. A novel analytical method has been researched and developed based on the properties of hemin–graphene hybrid nanosheets (H–GNs). These nanosheets were synthesized using a wet-chemical method, and they have peroxidase-like activity. Also, in the presence of H2O2, the nanosheets are efficient catalysts for the oxidation of the substrate, 4-aminoantipyrine (4-AP), and the phenols. The products of such an oxidation reaction are the colored quinone-imine dyes. Importantly, these products enabled the differentiation of the three common phenols – pyrocatechol, resorcin and hydroquinone – with the use of a novel spectroscopic method, which was developed for the simultaneous determination of the above three analytes. This spectroscopic method produced linear calibrations for the pyrocatechol (0.4–4.0 mg L−1), resorcin (0.2–2.0 mg L−1) and hydroquinone (0.8–8.0 mg L−1) analytes. In addition, kinetic and spectral data, obtained from the formation of the colored quinone-imine products, were used to establish multivariate calibrations for the prediction of the three phenol analytes found in various kinds of water; partial least squares (PLS), principal component regression (PCR) and artificial neural network (ANN) models were used, and the PLS model performed best.
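The multivariate-calibration step can be sketched with partial least squares on synthetic kinetic-spectral data. The concentration ranges follow those reported above, while the response profiles, noise level and number of latent components are illustrative assumptions standing in for real kinetic and spectral readings.

```python
# Minimal sketch of the multivariate-calibration step: predict the three phenol
# concentrations from kinetic-spectral measurements with partial least squares.
# The synthetic data only stand in for real kinetic and spectral readings.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_samples, n_channels = 80, 120            # mixtures x (wavelength, time) features

# True concentrations drawn within the reported linear ranges (mg L-1).
conc = np.column_stack([
    rng.uniform(0.4, 4.0, n_samples),      # pyrocatechol
    rng.uniform(0.2, 2.0, n_samples),      # resorcin
    rng.uniform(0.8, 8.0, n_samples),      # hydroquinone
])

# Synthetic "pure component" response profiles and noisy mixture responses.
profiles = np.abs(rng.normal(size=(3, n_channels)))
X = conc @ profiles + 0.01 * rng.normal(size=(n_samples, n_channels))

X_train, X_test, y_train, y_test = train_test_split(X, conc, random_state=0)
pls = PLSRegression(n_components=3).fit(X_train, y_train)
pred = pls.predict(X_test)

rmse = np.sqrt(np.mean((pred - y_test) ** 2, axis=0))
print("RMSE (mg L-1) for pyrocatechol, resorcin, hydroquinone:", rmse)
```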
Abstract:
Access to nutritious, safe and culturally appropriate food is a basic human right (Mechlem, 2004). Food sovereignty defines this right through the empowerment of the people to redefine food and agricultural systems, and through ecologically sustainable production methods. At the heart of the food sovereignty movement are the interests of producers, distributors and consumers, rather than the interests of markets and corporations, which dominate the current globalized food system (Hinrichs, 2003). Food sovereignty challenges designers to enable people to innovate the food system. We are yet to develop economically viable solutions for scaling projects and providing citizens, governments and business with tools to develop and promote projects to innovate food systems and promote food sovereignty (Meroni, 2011; Murray, Caulier-Grice and Mulgan, 2010). This article examines how a design-led approach to innovation can assist in the development of new business models and ventures for local food systems: this is presented through an emerging field of research, ‘Design-Led Food Communities’. Design-Led Food Communities enables citizens, governments and business to innovate local food projects through the application of design. This article reports on the case study of the Docklands Food Hub Project in Melbourne, Australia. Preliminary findings demonstrate valued outcomes, but also a deficiency in the design process for generating food solutions collaboratively between government, business and citizens.
Abstract:
In the present study, a two-dimensional model is first developed to show the behaviour of dense non-aqueous phase liquids (DNAPLs) within a rough fracture. To represent the roughness, variable apertures are imposed along the fracture plane. It is found that DNAPL follows preferential pathways. In the next part of the study, the above model is further extended to non-isothermal DNAPL flow and the DNAPL-water interphase mass transfer phenomenon. These two models are then coupled with joint deformation due to normal stresses. The primary focus of these models is to elucidate the influence of joint alteration due to external stress and fluid pressures on flow-driven energy transport and interphase mass transfer. For this, it is assumed that the critical value for joint alteration is associated with the external stress and the average of the water and DNAPL pressures in the multiphase system, and the temporal and spatial evolution of joint alteration is determined for its further influence on energy transport and miscible phase transfer. The developed model has been studied to show the influence of deformation on DNAPL flow. Further, this preliminary study demonstrates the influence of joint deformation on heat transport and phase miscibility via the multiphase flow velocities. It is seen that the temperature profile changes and shows higher diffusivity due to deformation, and although the interphase miscibility value decreases, the lateral dispersion increases to a considerably higher extent.
Abstract:
The precise timing of individual signals in response to those of signaling neighbors is seen in many animal species. Synchrony is the most striking of the resultant timing patterns. One of the best examples of acoustic synchrony is in katydid choruses, where males produce chirps with a high degree of temporal overlap. Cooperative hypotheses that speculate on the evolutionary origins of acoustic synchrony include the preservation of the species-specific call pattern, reduced predation risks, and increased call intensity. An alternative suggestion is that synchrony evolved as an epiphenomenon of competition between males in response to a female preference for chirps that lead other chirps. Previous models investigating the evolutionary origins of synchrony focused only on intrasexual competitive interactions. We investigated both competitive and cooperative hypotheses for the evolution of synchrony in the katydid Mecopoda ‘Chirper’ using physiologically and ecologically realistic simulation models incorporating the natural variation in call features, ecology, female preferences, and spacing patterns, specifically aggregation. We found that although a female preference for leading chirps enables synchronous males to have some selective advantage, it is the female preference for the increased intensity of aggregations of synchronous males that enables synchrony to evolve as an evolutionarily stable strategy.