99 results for temperature-based models
Abstract:
This paper develops analytical distributions of the temperature indices on which temperature derivatives are written. If the deviations of daily temperatures from their expected values are modelled as an Ornstein-Uhlenbeck process with time-varying variance, then the distribution of the temperature index on which the derivative is written is the sum of truncated, correlated Gaussian deviates. The key result of this paper is an analytical approximation to the distribution of this sum, thus allowing the accurate computation of payoffs without the need for any simulation. A data set comprising average daily temperatures spanning more than a hundred years for four Australian cities is used to demonstrate the efficacy of this approach for estimating the payoffs to temperature derivatives. It is demonstrated that expected payoffs computed directly from historical records are a particularly poor approach to the problem when there are trends in underlying average daily temperature, and that the proposed analytical approach is superior to historical pricing.
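The setup described above can be illustrated with a short, purely hypothetical sketch: deviations from a seasonal mean are simulated as an Ornstein-Uhlenbeck process with time-varying volatility, the index is formed as a sum of truncated deviates, and a Monte Carlo estimate of the payoff is computed as the simulation-based baseline that the paper's analytical approximation is designed to avoid. All parameters, contract terms, and seasonal functions below are invented for illustration.

```python
# Illustrative sketch (not the paper's analytical approximation): simulate daily
# temperature deviations as an Ornstein-Uhlenbeck process with seasonal variance,
# then estimate a cooling-degree-day (CDD) index payoff by Monte Carlo as a
# reference point. All parameter values below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

n_days, n_paths = 90, 20_000          # contract period and number of simulated paths
kappa = 0.25                          # mean-reversion speed (hypothetical)
base_temp = 18.0                      # CDD base temperature in deg C
strike, tick = 150.0, 20.0            # hypothetical contract terms (AUD per CDD)

t = np.arange(n_days)
seasonal_mean = 22.0 + 4.0 * np.sin(2 * np.pi * t / 365.0)      # expected daily temperature
sigma = 2.5 + 0.8 * np.sin(2 * np.pi * t / 365.0 + 1.0)         # time-varying volatility

# Euler scheme for the OU deviation X_t: dX = -kappa * X dt + sigma(t) dW, dt = 1 day
x = np.zeros((n_paths, n_days))
for i in range(1, n_days):
    x[:, i] = x[:, i - 1] - kappa * x[:, i - 1] + sigma[i] * rng.standard_normal(n_paths)

temps = seasonal_mean + x                                # simulated daily temperatures
cdd = np.clip(temps - base_temp, 0.0, None).sum(axis=1)  # index = sum of truncated deviates
payoff = tick * np.clip(cdd - strike, 0.0, None)         # call-style payoff on the index
print(f"Monte Carlo expected payoff: {payoff.mean():.2f}")
```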
Abstract:
The identification of the primary drivers of stock returns has been of great interest to both financial practitioners and academics alike for many decades. Influenced by classical financial theories such as the CAPM (Sharpe, 1964; Lintner, 1965) and the APT (Ross, 1976), a linear relationship is conventionally assumed between company characteristics, as derived from their financial accounts, and forward returns. Whilst this assumption may be a fair approximation to the underlying structural relationship, it is often adopted for the purpose of convenience. It is actually quite rare that the assumptions of distributional normality and a linear relationship are explicitly assessed in advance, even though this information would help to inform the appropriate choice of modelling technique. Non-linear models have nevertheless been applied successfully to the task of stock selection in the past (Sorensen et al., 2000). However, their take-up by the investment community has been limited despite the fact that researchers in other fields have found them to be a useful way to express knowledge and aid decision-making...
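As a rough illustration of the kind of diagnostic the abstract says is rarely performed, the sketch below (using synthetic data and arbitrary parameter choices, not the paper's dataset or method) tests a characteristic and forward returns for normality and compares a linear fit against a simple non-linear alternative.

```python
# Minimal sketch of the kind of diagnostic the abstract argues is rarely done:
# test a company characteristic and forward returns for normality, and compare a
# linear fit against a simple non-linear alternative. Data here are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
characteristic = rng.normal(size=500)                       # e.g. a valuation ratio (synthetic)
forward_return = 0.02 * np.tanh(characteristic) + rng.normal(scale=0.05, size=500)

# Distributional normality of each variable (Shapiro-Wilk test)
for name, x in [("characteristic", characteristic), ("forward return", forward_return)]:
    stat, p = stats.shapiro(x)
    print(f"{name}: Shapiro-Wilk p = {p:.3f}")

# Linear vs. cubic fit: a large R^2 gain from the cubic terms hints at non-linearity
lin = np.polyfit(characteristic, forward_return, 1)
cub = np.polyfit(characteristic, forward_return, 3)

def r2(coeffs):
    resid = forward_return - np.polyval(coeffs, characteristic)
    return 1 - resid.var() / forward_return.var()

print(f"R^2 linear = {r2(lin):.3f}, cubic = {r2(cub):.3f}")
```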
Abstract:
Assessing and prioritising cost-effective strategies to mitigate the impacts of traffic incidents and accidents on non-recurrent congestion on major roads represents a significant challenge for road network managers. This research examines the influence of numerous factors associated with incidents of various types on their duration. It presents a comprehensive traffic incident data mining and analysis by developing an incident duration model based on twelve months of incident data obtained from the Australian freeway network. Parametric accelerated failure time (AFT) survival models of incident duration were developed, including log-logistic, lognormal, and Weibull models, considering both fixed and random parameters, as well as a Weibull model with gamma heterogeneity. The Weibull AFT models with random parameters were appropriate for modelling the duration of incidents arising from crashes and hazards, while a Weibull model with gamma heterogeneity was most suitable for modelling the duration of stationary-vehicle incidents. Significant variables affecting incident duration include characteristics of the incidents (severity, type, towing requirements, etc.) as well as the location, time of day, and traffic characteristics at the incident site. Moreover, the findings reveal no significant effects of infrastructure and weather on incident duration. A significant and unique contribution of this paper is the finding that the durations of each incident type are distinct and respond to different factors. The results of this study are useful for traffic incident management agencies in implementing strategies to reduce incident duration, leading to reduced congestion, secondary incidents, and the associated human and economic losses.
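A minimal sketch of fitting one of the model families named above, a Weibull AFT model, is shown below using the lifelines Python package. The column names, synthetic data, and covariates are placeholders rather than the study's dataset, and the random-parameter and gamma-heterogeneity extensions are not reproduced.

```python
# Hedged sketch of fitting a parametric AFT model to incident-duration data with the
# lifelines package; the data and column names are synthetic placeholders, not the
# study's dataset, and random-parameter / gamma-heterogeneity extensions are omitted.
import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter

rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({
    "duration_min": rng.weibull(1.3, n) * 45,        # incident duration in minutes
    "observed": 1,                                   # 1 = duration fully observed
    "severity": rng.integers(1, 4, n),               # incident severity category
    "towing_required": rng.integers(0, 2, n),        # 1 if a tow truck was needed
    "peak_hour": rng.integers(0, 2, n),              # 1 if during peak traffic
})

aft = WeibullAFTFitter()
aft.fit(df, duration_col="duration_min", event_col="observed")
aft.print_summary()   # coefficients show how each factor accelerates or decelerates duration
```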
Abstract:
Agent-based modelling (ABM), like other modelling techniques, is used to answer specific questions about real world systems that could otherwise be expensive or impractical to study. Its recent gain in popularity can be attributed in part to its capacity to use information at a fine level of detail of the system, both geographically and temporally, and to generate information at a higher level, where emerging patterns can be observed. The technique is data-intensive, as explicit data at a fine level of detail is used, and it is computationally intensive, as many interactions between agents, which can learn and have goals, are required. With the growing availability of data and the increase in computing power, these concerns are, however, fading. Nonetheless, being able to update or extend the model as more information becomes available can become problematic, because of the tight coupling of the agents and their dependence on the data, especially when modelling very large systems. One large system to which ABM is currently applied is electricity distribution, where thousands of agents representing the network and the consumers’ behaviours interact with one another. A framework that aims at answering a range of questions regarding the potential evolution of the grid has been developed and is presented here. It uses agent-based modelling to represent the engineering infrastructure of the distribution network and has been built with flexibility and extensibility in mind. What distinguishes the method presented here from usual ABMs is that this ABM has been developed in a compositional manner. This encompasses not only the software tool, whose core is named MODAM (MODular Agent-based Model), but also the model itself. Using such an approach enables the model to be extended as more information becomes available, or modified as the electricity system evolves, leading to an adaptable model. Two well-known modularity principles in the software engineering domain are information hiding and separation of concerns. These principles were used to develop the agent-based model on top of OSGi and Eclipse plugins, which have good support for modularity. Information regarding the model entities was separated into (a) assets, which describe the entities’ physical characteristics, and (b) agents, which describe their behaviour according to their goals and previous learning experiences. This approach diverges from the traditional approach, where both aspects are often conflated. It has many advantages in terms of reusability of one or the other aspect for different purposes, as well as composability when building simulations. For example, the way an asset is used on a network can vary greatly while its physical characteristics stay the same – this is the case for two identical battery systems whose usage will vary depending on the purpose of their installation. While any battery can be described by its physical properties (e.g. capacity, lifetime, and depth of discharge), its behaviour will vary depending on who is using it and what their aim is. The model is populated using data describing both aspects (physical characteristics and behaviour) and can be updated as required depending on what simulation is to be run. For example, data can be used to describe the environment to which the agents respond (e.g. weather for solar panels), or to describe the assets and their relation to one another (e.g. the network assets).
Finally, when running a simulation, MODAM calls on its module manager, which coordinates the different plugins, automates the creation of the assets and agents using factories, and schedules their execution, which can be done sequentially or in parallel for faster execution. Building agent-based models in this way has proven to be fast when adding new complex behaviours, as well as new types of assets. Simulations have been run to understand the potential impact of changes on the network in terms of assets (e.g. installation of decentralised generators) or behaviours (e.g. response to different management aims). While this platform has been developed within the context of a project focussing on the electricity domain, the core of the software, MODAM, can be extended to other domains, such as transport, which is part of future work with the addition of electric vehicles.
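The asset/agent separation and factory-based composition described above can be sketched conceptually as follows. The sketch is written in Python for brevity, although MODAM itself is built on OSGi and Eclipse plugins, and the class names, parameters, and behaviour are illustrative rather than MODAM's actual API.

```python
# Conceptual sketch (in Python, although MODAM itself is built on OSGi/Eclipse plugins)
# of the separation the abstract describes: an asset holds physical characteristics,
# an agent holds behaviour, and a factory composes the two at simulation build time.
# Class and parameter names are illustrative, not MODAM's actual API.
from dataclasses import dataclass

@dataclass
class BatteryAsset:
    capacity_kwh: float          # physical characteristics only
    depth_of_discharge: float

class PeakShavingAgent:
    """Behaviour: discharge the battery when demand exceeds a threshold."""
    def __init__(self, asset: BatteryAsset, threshold_kw: float):
        self.asset, self.threshold_kw = asset, threshold_kw

    def step(self, demand_kw: float) -> float:
        usable = self.asset.capacity_kwh * self.asset.depth_of_discharge
        return min(usable, max(demand_kw - self.threshold_kw, 0.0))

class AgentFactory:
    """Builds agents from asset data plus a chosen behaviour, as a module manager might."""
    def build(self, asset_record: dict, behaviour: str):
        asset = BatteryAsset(asset_record["capacity_kwh"], asset_record["dod"])
        if behaviour == "peak_shaving":
            return PeakShavingAgent(asset, threshold_kw=asset_record["threshold_kw"])
        raise ValueError(f"unknown behaviour: {behaviour}")

factory = AgentFactory()
agent = factory.build({"capacity_kwh": 10.0, "dod": 0.8, "threshold_kw": 3.0}, "peak_shaving")
print(agent.step(demand_kw=5.0))   # the same asset could be reused with a different behaviour
```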
Abstract:
Social networking sites (SNSs), with their large numbers of users and large information base, seem to be perfect breeding grounds for exploiting the vulnerabilities of people, the weakest link in security. Deceiving, persuading, or influencing people to provide information or to perform an action that will benefit the attacker is known as “social engineering.” While technology-based security has been addressed by research and may be well understood, social engineering is more challenging to understand and manage, especially in new environments such as SNSs, owing to certain characteristics of SNSs that reduce users' ability to detect an attack and increase attackers' ability to launch one. This work contributes to the knowledge of social engineering by presenting the first two conceptual models of social engineering attacks in SNSs. Phase-based and source-based models are presented, along with an intensive and comprehensive overview of different aspects of social engineering threats in SNSs.
Abstract:
Agent-based modeling and simulation (ABMS) may fit well with entrepreneurship research and practice because the core concepts and basic premises of entrepreneurship coincide with the characteristics of ABMS. However, it is difficult to find cases where ABMS has been applied to entrepreneurship research. To apply ABMS to entrepreneurship and organization studies, designing a conceptual model is important; thus, to design a conceptual model effectively, various mixed method approaches are being attempted. As a new mixed method approach to ABMS, this study proposes a bibliometric approach to designing agent-based models, which establishes and analyzes a domain corpus. This study presents an example of the venture creation process using the bibliometric approach. This example shows that the results of multi-agent simulations of the venturing process based on the bibliometric approach are close to each nation’s survey data on venturing activities. In conclusion, with the bibliometric approach proposed in this study, all the agents and their behaviors related to a phenomenon can be extracted effectively, and a conceptual model for ABMS can be designed from the agents and their behaviors. This study contributes to entrepreneurship and organization studies by promoting the application of ABMS.
Abstract:
PURPOSE: This paper describes dynamic agent composition, used to support the development of flexible and extensible large-scale agent-based models (ABMs). This approach was motivated by a need to extend and modify, with ease, an ABM with an underlying networked structure as more information becomes available. Flexibility was also sought so that simulations can be set up with ease, without the need to program. METHODS: The dynamic agent composition approach consists of having agents, whose implementation has been broken into atomic units, come together at runtime to form the complex system representation on which simulations are run. These components capture information at a fine level of detail and provide a vast range of combinations and options for a modeller to create ABMs. RESULTS: A description of dynamic agent composition is given in this paper, as well as details about its implementation within MODAM (MODular Agent-based Model), a software framework which is applied to the planning of the electricity distribution network. Illustrations of the implementation of dynamic agent composition are given for that domain throughout the paper. It is, however, expected that this approach will be beneficial to other problem domains, especially those with a networked structure, such as water or gas networks. CONCLUSIONS: Dynamic agent composition has many advantages over the way agent-based models are traditionally built, for the users, the developers, and for agent-based modelling as a scientific approach. Developers can extend the model without the need to access or modify previously written code; they can develop groups of entities independently and add them to those already defined to extend the model. Users can mix and match already implemented components to form large-scale ABMs, allowing them to quickly set up simulations and easily compare scenarios without the need to program. Dynamic agent composition provides a natural simulation space over which ABMs of networked structures are represented, facilitating their implementation, while verification and validation of models are facilitated by quickly setting up alternative simulations.
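A toy sketch of the "mix and match without programming" idea follows: atomic components are registered by name and a simulation is assembled at runtime from a plain configuration, so scenarios can be compared by editing data rather than code. The registry, component names, and configuration format are invented for illustration and are not MODAM's.

```python
# Sketch of runtime composition from atomic components: each component is registered
# under a name and a scenario is assembled from a plain configuration, so a user can
# mix and match components without programming. Names and config format are illustrative.
COMPONENT_REGISTRY = {}

def component(name):
    """Decorator registering an atomic agent component under a name."""
    def register(cls):
        COMPONENT_REGISTRY[name] = cls
        return cls
    return register

@component("solar_panel")
class SolarPanel:
    def __init__(self, peak_kw): self.peak_kw = peak_kw
    def output(self, sunshine): return self.peak_kw * sunshine

@component("household_load")
class HouseholdLoad:
    def __init__(self, base_kw): self.base_kw = base_kw
    def output(self, _): return -self.base_kw

def compose(config):
    """Instantiate every component listed in the scenario configuration."""
    return [COMPONENT_REGISTRY[c["type"]](**c["params"]) for c in config]

scenario = [
    {"type": "solar_panel", "params": {"peak_kw": 5.0}},
    {"type": "household_load", "params": {"base_kw": 1.2}},
]
agents = compose(scenario)
print(sum(a.output(0.7) for a in agents))   # net power for one time step
```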
Abstract:
This thesis presents a novel approach to building large-scale agent-based models of networked physical systems, using a compositional approach to provide extensibility and flexibility in building the models and simulations. A software framework (MODAM - MODular Agent-based Model) was implemented for this purpose and validated through simulations. These simulations allow assessment of the impact of technological change on the electricity distribution network by examining the trajectories of electricity consumption at key locations over many years.
Abstract:
Individual-based models describing the migration and proliferation of a population of cells frequently restrict the cells to a predefined lattice. An implicit assumption of this type of lattice-based model is that a proliferative population will always eventually fill the lattice. Here we develop a new lattice-free individual-based model that incorporates cell-to-cell crowding effects. We also derive approximate mean-field descriptions for the lattice-free model in two special cases motivated by commonly used experimental setups. Lattice-free simulation results are compared to these mean-field descriptions and to a corresponding lattice-based model. Data from a proliferation experiment are used to estimate the parameters for the new model, including the cell proliferation rate, showing that the model fits the data well. An important aspect of the lattice-free model is that the confluent cell density is not predefined, as with lattice-based models, but is an emergent model property. As a consequence of the more realistic, irregular configuration of cells in the lattice-free model, the population growth rate is much slower at high cell densities and the population cannot reach the same confluent density as an equivalent lattice-based model.
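A minimal lattice-free sketch in the spirit of the abstract is given below: cells occupy continuous positions and a proliferation attempt is aborted if the daughter cell would sit within one cell diameter of an existing cell, so the confluent density emerges rather than being imposed by a lattice. The rates, domain size, and update rules are hypothetical simplifications, not the paper's model.

```python
# Lattice-free proliferation sketch with a simple crowding rule: a daughter cell is
# only placed if it does not overlap any existing cell, so the confluent density is
# an emergent property. Rates, radius, and domain size are hypothetical.
import numpy as np

rng = np.random.default_rng(3)
L, radius, prolif_rate, n_steps = 20.0, 0.5, 0.05, 200

cells = rng.uniform(0, L, size=(10, 2))            # initial cell centres

for _ in range(n_steps):
    new_cells = []
    for parent in cells:
        if rng.random() < prolif_rate:
            # place the daughter one cell diameter away in a random direction
            angle = rng.uniform(0, 2 * np.pi)
            daughter = parent + 2 * radius * np.array([np.cos(angle), np.sin(angle)])
            daughter %= L                          # wrap the position into the domain
            # crowding check: reject if too close to any existing or newly placed cell
            others = np.vstack([cells] + new_cells) if new_cells else cells
            if np.min(np.linalg.norm(others - daughter, axis=1)) >= 2 * radius:
                new_cells.append(daughter[None, :])
    if new_cells:
        cells = np.vstack([cells] + new_cells)

print(f"final density: {len(cells) / L**2:.3f} cells per unit area")
```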
Abstract:
An information filtering (IF) system monitors an incoming document stream to find the documents that match the information needs specified by the user profiles. Learning to use the user profiles effectively is one of the most challenging tasks when developing an IF system. With the document selection criteria better defined based on the users’ needs, filtering large streams of information can be more efficient and effective. To learn the user profiles, term-based approaches have been widely used in the IF community because of their simplicity and directness. Term-based approaches are relatively well established. However, these approaches have problems when dealing with polysemy and synonymy, which often lead to an information overload problem. Recently, pattern-based approaches (or Pattern Taxonomy Models (PTM) [160]) have been proposed for IF by the data mining community. These approaches are better at capturing semantic information and have shown encouraging results for improving the effectiveness of IF systems. On the other hand, pattern discovery from large data streams is not computationally efficient, and these approaches have to deal with low-frequency pattern issues. The measures used by the data mining techniques (for example, “support” and “confidence”) to learn the profile have turned out to be unsuitable for filtering and can lead to a mismatch problem. This thesis uses rough set-based reasoning (term-based) and a pattern mining approach as a unified framework for information filtering to overcome the aforementioned problems. The system consists of two stages: a topic filtering stage and a pattern mining stage. The topic filtering stage is intended to minimize information overload by filtering out the most likely irrelevant information based on the user profiles. A novel user-profile learning method and a theoretical model of threshold setting have been developed using rough set decision theory. The second stage (pattern mining) aims to solve the problem of information mismatch. This stage is precision-oriented. A new document-ranking function has been derived by exploiting the patterns in the pattern taxonomy, so that the most likely relevant documents are assigned higher scores by the ranking function. Because a relatively small number of documents is left after the first stage, the computational cost is markedly reduced; at the same time, pattern discovery yields more accurate results. The overall performance of the system was improved significantly. The new two-stage information filtering model has been evaluated by extensive experiments. Tests were based on well-known IR benchmarking processes, using the latest version of the Reuters dataset, namely the Reuters Corpus Volume 1 (RCV1). The performance of the new two-stage model was compared with both term-based and data mining-based IF models. The results demonstrate that the proposed information filtering system significantly outperforms the other IF systems, such as the traditional Rocchio IF model, state-of-the-art term-based models including BM25 and Support Vector Machines (SVM), and the Pattern Taxonomy Model (PTM).
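A highly simplified stand-in for the two-stage idea is sketched below; it does not reproduce the thesis's rough-set or pattern-taxonomy algorithms. Stage one discards documents whose overlap with the profile terms falls below a threshold, and stage two re-ranks the survivors by matches against multi-term patterns. The profile terms, patterns, and threshold are illustrative.

```python
# Simplified two-stage filter (not the thesis's rough-set or PTM algorithms):
# stage 1 filters out documents with too little overlap with the profile terms,
# stage 2 re-ranks the survivors by the number of fully matched multi-term patterns.
profile_terms = {"stock", "market", "shares", "trading"}
patterns = [frozenset({"stock", "market"}), frozenset({"shares", "trading"})]
threshold = 2   # minimum number of profile terms for a document to pass stage 1

documents = {
    "d1": "stock market rallies as trading volumes rise",
    "d2": "local football team wins the championship",
    "d3": "shares slide in late trading on the stock market",
}

# Stage 1: topic filtering removes the most likely irrelevant documents
stage1 = {d: set(text.split()) for d, text in documents.items()
          if len(profile_terms & set(text.split())) >= threshold}

# Stage 2: precision-oriented ranking by the number of fully matched patterns
ranked = sorted(stage1, key=lambda d: sum(p <= stage1[d] for p in patterns), reverse=True)
print(ranked)   # documents most likely to be relevant come first
```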
Abstract:
Recently published studies have not only demonstrated that laser printers are often significant sources of ultrafine particles, but have also shed light on particle formation mechanisms. While the role of fuser roller temperature as a factor affecting particle formation rate has been postulated, its impact has never been quantified. To address this gap in knowledge, this study measured emissions from 30 laser printers in a chamber using a standardized printing sequence, while also monitoring fuser roller temperature. Based on a simplified mass balance equation, the average emission rates of particle number, PM2.5 and O3 were calculated. The results showed that: almost all printers were found to be high particle number emitters (i.e. > 1.01 × 10^10 particles/min); colour printing generated more PM2.5 than monochrome printing; and all printers generated significant amounts of O3. Particle number emissions varied significantly during printing and followed the cycle of fuser roller temperature variation, which points to temperature being the strongest factor controlling emissions. For two sub-groups of printers using the same technology (heating lamps), systematic positive correlations, in the form of a power law, were found between average particle number emission rate and average roller temperature. Other factors, such as fuser material and structure, are also thought to play a role, since no such correlation was found for the remaining two sub-groups of printers using heating lamps, or for the printers using heating strips. In addition, O3 and total PM2.5 were not found to be statistically correlated with fuser temperature.
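The two calculations mentioned in the abstract can be sketched as follows: an emission rate derived from a simplified, well-mixed chamber mass balance, and a power-law fit between average emission rate and roller temperature. The chamber volume, air-exchange rate, and example data are hypothetical, not the study's measurements.

```python
# Hedged sketch of (i) an emission rate from a simplified well-mixed chamber mass
# balance and (ii) a power-law fit between emission rate and roller temperature.
# Chamber volume, air-exchange rate, and all example data are hypothetical.
import numpy as np

def emission_rate(conc, dt_min, volume_cm3, aer_per_min):
    """E(t) ~ V * (dC/dt + a*C) for a well-mixed chamber with air-exchange rate a."""
    dcdt = np.gradient(conc, dt_min)
    return volume_cm3 * (dcdt + aer_per_min * conc)

conc = np.array([1e3, 5e4, 2e5, 4e5, 5e5], dtype=float)   # particles/cm^3, one value per minute
E = emission_rate(conc, dt_min=1.0, volume_cm3=1e6, aer_per_min=0.02)
print(f"peak emission rate ~ {E.max():.2e} particles/min")

# Power-law fit E = a * T^b via linear regression in log-log space
T = np.array([150.0, 160.0, 170.0, 180.0])                 # average roller temperature, deg C
E_avg = np.array([1e9, 3e9, 8e9, 2e10])                    # average emission rates (hypothetical)
b, log_a = np.polyfit(np.log(T), np.log(E_avg), 1)
print(f"fitted exponent b = {b:.1f}")
```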
Abstract:
In the exclusion-process literature, mean-field models are often derived by assuming that the occupancy status of lattice sites is independent. Although this assumption is questionable, it is the foundation of many mean-field models. In this work we develop methods to relax the independence assumption for a range of discrete exclusion process-based mechanisms motivated by applications from cell biology. Previous investigations that focussed on relaxing the independence assumption have been limited to studying initially uniform populations and have ignored spatial variations. By ignoring spatial variations, these previous studies were greatly simplified by the translational invariance of the lattice. These previous corrected mean-field models could not be applied to many important problems in cell biology, such as invasion waves of cells that are characterised by moving fronts. Here we propose generalised methods that relax the independence assumption for spatially inhomogeneous problems, leading to corrected mean-field descriptions of a range of exclusion process-based models that incorporate (i) unbiased motility, (ii) biased motility, and (iii) unbiased motility with agent birth and death processes. The corrected mean-field models derived here are applicable to spatially variable processes, including invasion wave type problems. We show that there can be large deviations between simulation data and traditional mean-field models based on invoking the independence assumption. Furthermore, we show that the corrected mean-field models give an improved match to the simulation data in all cases considered.
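The kind of comparison described above can be illustrated with a small sketch that contrasts a stochastic exclusion-process simulation with the traditional, independence-based mean-field update for similar rules: unbiased motility plus proliferation into empty neighbouring sites on a 1D lattice, started from an occupied block so the density profile is spatially variable. The rates and rules are hypothetical simplifications and the corrected mean-field models themselves are not reproduced.

```python
# Stochastic 1D exclusion process (unbiased motility + proliferation into empty
# neighbours) averaged over realisations, compared with the traditional mean-field
# update that treats neighbouring occupancies as independent. Rates are hypothetical.
import numpy as np

rng = np.random.default_rng(4)
L, steps, reps = 200, 100, 20
p_move, p_prolif = 0.5, 0.1

def simulate():
    occ = np.zeros(L, dtype=bool)
    occ[:20] = True                                # initially occupied block (spatially variable)
    for _ in range(steps):
        for i in rng.permutation(np.flatnonzero(occ)):
            target = (i + rng.choice([-1, 1])) % L
            if rng.random() < p_prolif and not occ[target]:
                occ[target] = True                 # proliferation into an empty neighbour
            elif rng.random() < p_move and not occ[target]:
                occ[i], occ[target] = False, True  # movement blocked by exclusion
    return occ

density_sim = np.mean([simulate() for _ in range(reps)], axis=0)

# Traditional mean-field: occupancy of neighbouring sites treated as independent
C = np.zeros(L); C[:20] = 1.0
for _ in range(steps):
    left, right = np.roll(C, 1), np.roll(C, -1)
    C = C + 0.5 * p_move * (left + right - 2 * C) \
          + 0.5 * p_prolif * (left + right) * (1 - C)
C = np.clip(C, 0, 1)

print(f"max |simulation - mean-field| = {np.abs(density_sim - C).max():.3f}")
```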
Abstract:
Orthopaedic fracture fixation implants are increasingly being designed using accurate 3D models of long bones based on computed tomography (CT). Unlike CT, magnetic resonance imaging (MRI) does not involve ionising radiation and is therefore a desirable alternative to CT. This study aims to quantify the accuracy of MRI-based 3D models compared to CT-based 3D models of long bones. The femora of five intact cadaver ovine limbs were scanned using a 1.5 T MRI scanner and a CT scanner. Image segmentation of the CT and MRI data was performed using a multi-threshold segmentation method. Reference models were generated by digitising the bone surfaces, free of soft tissue, with a mechanical contact scanner. The MRI- and CT-derived models were validated against the reference models. The results demonstrated that the CT-based models contained an average error of 0.15 mm while the MRI-based models contained an average error of 0.23 mm. Statistical validation showed no significant differences between 3D models based on CT and MRI data. These results indicate that the geometric accuracy of MRI-based 3D models is comparable to that of CT-based models and that MRI is therefore a potential alternative to CT for the generation of 3D models with high geometric accuracy.
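A sketch of the kind of comparison reported, the average surface distance between an image-derived 3D model and a reference model, is given below using nearest-neighbour distances between two point clouds computed with a k-d tree. The point clouds are synthetic stand-ins, not the ovine femur data, and the segmentation and registration steps are omitted.

```python
# Average surface error between two point clouds via nearest-neighbour distances.
# The point clouds below are synthetic stand-ins for a reference (contact-scanned)
# surface and an image-derived surface; real data would first require registration.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(5)

# Reference surface: points on a cylinder-like "bone shaft" (synthetic)
theta = rng.uniform(0, 2 * np.pi, 5000)
z = rng.uniform(0, 100, 5000)
reference = np.column_stack([15 * np.cos(theta), 15 * np.sin(theta), z])

# Image-derived surface: the same shape with ~0.2 mm segmentation noise
derived = reference + rng.normal(scale=0.2, size=reference.shape)

tree = cKDTree(reference)
dist, _ = tree.query(derived)     # distance from each derived point to the reference surface
print(f"average surface error: {dist.mean():.2f} mm")
```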
Abstract:
Language Modeling (LM) has been successfully applied to Information Retrieval (IR). However, most existing LM approaches rely only on term occurrences in documents, queries and document collections. In traditional unigram-based models, terms (or words) are usually considered to be independent. In some recent studies, dependence models have been proposed to incorporate term relationships into LM, so that links can be created between words in the same sentence, and term relationships (e.g. synonymy) can be used to expand the document model. In this study, we further extend this family of dependence models in the following two ways: (1) term relationships are used to expand the query model instead of the document model, so that the query expansion process can be implemented naturally; (2) we exploit more sophisticated inferential relationships extracted with Information Flow (IF). Information flow relationships are not simply pairwise term relationships as used in previous studies, but hold between a set of terms and another term. They allow for context-dependent query expansion. Our experiments conducted on TREC collections show that we can obtain large and significant improvements with our approach. This study shows that LM is an appropriate framework in which to implement effective query expansion.
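A toy sketch of query-model expansion in a language-modelling retrieval setting follows: documents are scored with a Dirichlet-smoothed unigram model and the query model is expanded with weighted related terms, a crude stand-in for the information-flow relationships described above. The collection, relationships, and weights are invented.

```python
# Toy query-model expansion in a language-modelling setting: Dirichlet-smoothed
# unigram scoring, with the query model expanded by weighted related terms. The
# two-document collection and the expansion weights are invented for illustration.
import math
from collections import Counter

docs = {
    "d1": "the bank approved the loan application".split(),
    "d2": "the river bank was flooded after the storm".split(),
}
collection = Counter(w for d in docs.values() for w in d)
coll_len = sum(collection.values())
mu = 10.0   # Dirichlet smoothing parameter

def score(query_model, doc):
    tf, dlen = Counter(doc), len(doc)
    s = 0.0
    for term, weight in query_model.items():
        p = (tf[term] + mu * collection[term] / coll_len) / (dlen + mu)
        if p > 0:
            s += weight * math.log(p)   # weighted query-model term contributes to the score
    return s

query = {"loan": 1.0}
expanded = {"loan": 1.0, "bank": 0.4, "credit": 0.3}   # expansion terms and weights (illustrative)

for q in (query, expanded):
    ranking = sorted(docs, key=lambda d: score(q, docs[d]), reverse=True)
    print(q, "->", ranking)
```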
Numerical and experimental studies of cold-formed steel floor systems under standard fire conditions
Abstract:
Light gauge cold-formed steel frame (LSF) structures are increasingly used in industrial, commercial and residential buildings because of their non-combustibility, dimensional stability, and ease of installation. Floor-ceiling systems are one example of their applications. LSF floor-ceiling systems must be designed to serve as fire compartment boundaries and provide adequate fire resistance. Fire rated floor-ceiling assemblies formed with new materials and construction methodologies have been increasingly used in buildings. However, limited research has been undertaken in the past and hence a thorough understanding of their fire resistance behaviour is not available. Recently a new composite panel, in which an external insulation layer is used between two plasterboards, has been developed at QUT to provide a higher fire rating to LSF floors under standard fire conditions. However, its increased fire rating could not be determined using the currently available design methods. Research on LSF floor systems under fire conditions is relatively recent and the behaviour of floor joists and other components in these systems is not fully understood. The present design methods thus require the use of expensive fire protection materials to protect them from excessive heat increase during a fire. This leads to uneconomical and conservative designs. Fire rating of these floor systems is provided simply by adding more plasterboard sheets to the steel joists, and such an approach is totally inefficient. Hence a detailed fire research study was undertaken into the structural and thermal performance of LSF floor systems, including those protected by the new composite panel system, using full scale fire tests and extensive numerical studies. The experimental study included both the conventional and the new steel floor-ceiling systems under structural and fire loads, using a gas furnace designed to deliver heat in accordance with the standard time-temperature curve in AS 1530.4 (SA, 2005). Fire tests captured the behavioural and deflection characteristics of LSF floor joists until failure, as well as related time-temperature measurements across the section and along the length of all the specimens. Full scale fire tests showed that the structural and thermal performance of the externally insulated LSF floor system was superior to that of traditional LSF floors with or without cavity insulation. This research therefore recommends the use of the new composite panel system for cold-formed LSF floor-ceiling systems. The numerical analyses of LSF floor joists were undertaken using the finite element program ABAQUS, based on the measured time-temperature profiles obtained from fire tests under both steady state and transient state conditions. Mechanical properties at elevated temperatures were considered based on the equations proposed by Dolamune Kankanamge and Mahendran (2011). Finite element models were calibrated using the full scale test results and used to provide a detailed understanding of the structural fire behaviour of the LSF floor-ceiling systems. The models also confirmed the superior performance of the new composite panel system. The validated model was then used in a detailed parametric study. Fire tests and the numerical studies showed that plasterboards provided sufficient lateral restraint to LSF floor joists until their failure. Hence only the section moment capacity of LSF floor joists subjected to local buckling effects was considered in this research.
To predict the section moment capacity at elevated temperatures, the effective section modulus of joists at ambient temperature is generally considered adequate. However, this research has shown that this leads to considerable over-estimation of the local buckling capacity of joists subjected to non-uniform temperature distributions under fire conditions. Therefore new simplified fire design rules were proposed for LSF floor joists to determine the section moment capacity at elevated temperatures, based on AS/NZS 4600 (SA, 2005), NAS (AISI, 2007) and Eurocode 3 Part 1.3 (ECS, 2006). The accuracy of the proposed fire design rules was verified against finite element analysis results. A spreadsheet-based design tool was also developed based on these design rules to predict the failure load ratio versus time, and the moment capacity versus time and temperature, for various LSF floor configurations. Idealised time-temperature profiles of LSF floor joists were developed based on fire test measurements. They were used in the detailed parametric study to fully understand the structural and fire behaviour of LSF floor panels. Simple design rules were also proposed to predict both the critical average joist temperatures and the failure times (fire rating) of LSF floor systems with various floor configurations and structural parameters under any given load ratio. Findings from this research have led to a comprehensive understanding of the structural and fire behaviour of LSF floor systems, including those protected by the new composite panel, and to simple design methods. These design rules were proposed within the guidelines of the Australian/New Zealand, American and European cold-formed steel structures codes of practice. They may also lead to further improvements in fire resistance through suitable modifications to the current composite panel system.
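As a purely illustrative sketch of the form such a design check takes, the section moment capacity of a joist at elevated temperature can be written as the ambient capacity scaled by a yield-strength reduction factor, M_s(T) = k_y(T) f_y Z_eff. The reduction-factor table below is invented for illustration; it is not taken from Dolamune Kankanamge and Mahendran (2011) or from any design standard, and real values must come from the relevant code or test data.

```python
# Hedged sketch of a section moment capacity check at elevated temperature:
# M_s(T) = k_y(T) * f_y * Z_eff. The reduction-factor table is purely illustrative
# and NOT taken from Dolamune Kankanamge and Mahendran (2011) or any design standard.
import numpy as np

# Illustrative yield-strength reduction factors k_y versus temperature (deg C)
T_table = np.array([20.0, 200.0, 400.0, 500.0, 600.0, 700.0])
k_y_table = np.array([1.00, 0.95, 0.70, 0.50, 0.30, 0.12])

def section_moment_capacity(temp_c, fy_mpa=450.0, z_eff_mm3=25_000.0):
    """M_s(T) in kN.m from linearly interpolated reduction factors (illustrative values)."""
    k_y = np.interp(temp_c, T_table, k_y_table)
    return k_y * fy_mpa * z_eff_mm3 * 1e-6   # MPa * mm^3 -> kN.m

for T in (20, 400, 600):
    print(f"T = {T:3d} C  ->  M_s ~ {section_moment_capacity(T):.2f} kN.m")
```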