123 results for dynamic factor models
Abstract:
Building information models have created a paradigm shift in how buildings are built and managed by providing a dynamic repository for building data that is useful in many new operational scenarios. This change has also created an opportunity to use building information models as an integral part of security operations, and especially as a tool to facilitate fine-grained access control to building spaces in smart buildings and critical infrastructure environments. In this paper, we identify the requirements for a security policy model for such an access control system and discuss why existing policy models are not suitable for this application. We propose a new policy language extension to XACML, with BIM-specific data types and functions based on the IFC specification, which we call BIM-XACML.
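As a rough illustration of the kind of spatial rule such a policy could encode (the structures, identifiers and role names below are hypothetical; the actual BIM-XACML data types and functions follow the IFC specification as described in the paper), a minimal Python sketch:

```python
# Hypothetical sketch of a BIM-aware access-control check; the classes and
# identifiers are illustrative stand-ins, not the paper's BIM-XACML syntax.
from dataclasses import dataclass

@dataclass(frozen=True)
class IfcSpace:           # simplified stand-in for an IFC spatial element
    global_id: str
    name: str
    security_zone: str    # e.g. "public", "restricted", "critical"

@dataclass(frozen=True)
class IfcDoor:
    global_id: str
    bounds: tuple         # GlobalIds of the spaces this door connects

MODEL_SPACES = {
    "SPACE-LOBBY-0001": IfcSpace("SPACE-LOBBY-0001", "Lobby", "public"),
    "SPACE-SRV-0042": IfcSpace("SPACE-SRV-0042", "Server room", "critical"),
}
ROLE_CLEARANCE = {"visitor": {"public"}, "engineer": {"public", "restricted", "critical"}}

def permit(role: str, door: IfcDoor) -> bool:
    """Permit only if every space the door opens onto is within the role's clearance."""
    zones = {MODEL_SPACES[s].security_zone for s in door.bounds}
    return zones <= ROLE_CLEARANCE.get(role, set())

server_door = IfcDoor("DOOR-SRV-0007", ("SPACE-LOBBY-0001", "SPACE-SRV-0042"))
print(permit("visitor", server_door), permit("engineer", server_door))
```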
Abstract:
In this paper, we present a dynamic model to identify influential users of micro-blogging services. Micro-blogging services, such as Twitter, allow their users (twitterers) to publish tweets and to follow other users in order to receive their tweets. Previous work on user influence on Twitter has concentrated on the following-link structure and on the content users publish, and has seldom emphasized the importance of interactions among users. We argue that, by focusing on user actions on the micro-blogging platform, user influence can be measured more accurately. Since micro-blogging is a powerful social media and communication platform, identifying influential users according to user interactions also has more practical meaning; for example, advertisers may care about how many actions (purchases, in this scenario) the influential users can initiate rather than how many advertisements they spread. Building on the idea of the PageRank algorithm, we propose a model over an action-based network that captures the ability of influential users as they interact with the micro-blogging platform. Taking the evolving nature of micro-blogging into consideration, we extend our action-based user influence model into a dynamic one that can distinguish influential users in different time periods. Simulation results demonstrate that our models support and give reasonable explanations for the scenarios we considered.
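A minimal sketch of ranking by actions rather than follow links, assuming PageRank-style propagation over an action graph whose edge weights count interactions (retweets, replies, mentions); the paper's exact transition weights and dynamic extension are not reproduced:

```python
# Minimal sketch of PageRank over an action-based network (assumption: edge
# weight actions[i, j] counts actions user i directed at user j).
import numpy as np

def action_pagerank(actions, damping=0.85, tol=1e-10, max_iter=1000):
    """Rank users by the influence they receive through actions.

    Influence flows from the acting user to the user acted upon.
    """
    n = actions.shape[0]
    # Row-normalise to a transition matrix; dangling users spread uniformly.
    row_sums = actions.sum(axis=1, keepdims=True)
    safe = np.where(row_sums == 0, 1.0, row_sums)
    transition = np.where(row_sums > 0, actions / safe, 1.0 / n)
    rank = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new_rank = (1 - damping) / n + damping * transition.T @ rank
        if np.abs(new_rank - rank).sum() < tol:
            break
        rank = new_rank
    return rank

# Toy example: user 2 receives the most actions and ends up ranked highest.
A = np.array([[0, 1, 3],
              [0, 0, 2],
              [1, 0, 0]], dtype=float)
print(action_pagerank(A))
```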
Abstract:
As a key element in their response to new media forcing transformations in mass media and media use, newspapers have deployed various strategies not only to establish online and mobile products and develop healthy business plans, but to set out to be dominant portals. Their response to change was the subject of an early investigation by one of the present authors (Keshvani 2000). That was part of a set of short studies inquiring into what impact new software applications and digital convergence might have on journalism practice (Tickle and Keshvani 2000), and also looking for demonstrations of the way that innovations, technologies and protocols then under development might produce a “wireless, streamlined electronic news production process” (Tickle and Keshvani 2001). The newspaper study compared the online products of The Age in Melbourne and the Straits Times in Singapore. It provided an audit of the Singapore and Australia Information and Communications Technology (ICT) climate, concentrating on the state of development of carrier networks as a determining factor in the potential strength of the two services within their respective markets. In the outcome, contrary to initial expectations, the early cable roll-out and extensive ‘wiring’ of the city in Singapore had not produced a level of uptake of Internet services as strong as that achieved in Melbourne by more ad hoc and varied strategies. By interpretation, while news websites and online content were at an early stage of development everywhere, and much the same as one another, no determining structural imbalance existed to separate these leading media participants in Australia and South-east Asia. The present research revisits that situation by again studying the online editions of the two large newspapers in the original study, and one other, The Courier Mail (recognising the diversification of types of product in this field by including it as a representative of Newscorp, now a major participant). The inquiry works through the principle of comparison. It is an exercise in qualitative, empirical research that establishes a comparison between the situation in 2000 as described in the earlier work, and the situation in 2014, after more than a decade of intense development in digital technology affecting the media industries. It is in that sense a follow-up study on the earlier work, although this time giving emphasis to the content and style of the actual products as experienced by their users. It compares the online and print editions of each of these three newspapers; then the three mastheads as print and online entities, among themselves; and finally it compares one against the other two, as representing a South-east Asian model and Australian models. This exercise is accompanied by a review of literature on the developments in ICT affecting media production and media organisations, to establish the changed context. The new study of the online editions is conducted as a systematic appraisal of the first level, or principal screens, of the three publications, over the course of six days (10-15.2.14 inclusive). For this, categories for analysis were made through a preliminary examination of the products over three days in the week before.
That process identified significant elements of media production, such as: variegated sourcing of materials; randomness in the presentation of items; differential production values among the media platforms considered, whether text, video or still images; the occasional repurposing and repackaging of top news stories of the day; and the presence of standard news values, once again drawn out of the trial ‘bundle’ of journalistic items. Reduced in this way, the online artefacts become comparable with the companion print editions from the same days. The categories devised and then used in the appraisal of the online products have been adapted to print, to give the closest match of sets of variables. This device, studying the two sets of publications on like standards (essentially production values and news values), has enabled the comparisons to be made. This comparing of the online and print editions of each of the three publications was set up as the first step in the investigation. In recognition of the nature of the artefacts, as ones that carry very diverse information by subject and level of depth and involve heavy creative investment in the formulation and presentation of the information, the assessment also includes an open section for interpreting and commenting on main points of comparison. This takes the form of a field for text, for the insertion of notes, in the table employed for summarising the features of each product, for each day. When the sets of comparisons outlined above are noted, the process then becomes interpretative, guided by the notion of change. In the context of changing media technology and publication processes, what substantive alterations have taken place in the overall effort of news organisations in the print and online fields since 2001, and in their print and online products separately? Have they diverged or continued along similar lines? The remaining task is to begin to make inferences from that. Will the examination of findings support the proposition that a review of the earlier study, and a forensic review of new models, does provide evidence of the character and content of change, especially change in journalistic products and practice? Will it permit an authoritative description of the essentials of such change in products and practice? Will it permit generalisation, and provide a reliable base for discussion of the implications of change, and future prospects? Preliminary observations suggest a more dynamic and diversified product has been developed in Singapore, well themed, and obviously sustained by public commitment and habituation to diversified online and mobile media services. The Australian products suggest a concentrated corporate and journalistic effort and deployment of resources, with a strong market focus, but less settled and ordered, and showing signs of limitations imposed by the delay in establishing a uniform, large broadband network. The scope of the study is limited. It is intended to test, and take advantage of, the original study as evidentiary material from the early days of newspaper companies’ experimentation with online formats. Both are small studies. The key opportunity for discovery lies in the ‘time capsule’ factor: the availability of well-gathered and processed information on major newspaper company production at the threshold of a transformational decade of change in their industry. The comparison stands to identify key changes. It should also be useful as a reference for further inquiries of the same kind, and for ongoing monitoring of newspaper portals online into the future.
Abstract:
Introduction: Risk factor analyses for nosocomial infections (NIs) are complex. First, due to competing events for NI, the association between risk factors and NI as measured using hazard rates may not coincide with the association measured using cumulative probability (risk). Second, patients from the same intensive care unit (ICU), who share the same environmental exposure, are likely to be more similar with regard to risk factors predisposing to NI than patients from different ICUs. We aimed to develop an analytical approach that accounts for both features and to use it to evaluate the associations of patient- and ICU-level characteristics with the rates of NI and competing risks and with the cumulative probability of infection. Methods: We considered a multicenter database of 159 intensive care units containing 109,216 admissions (813,739 admission-days) from the Spanish HELICS-ENVIN ICU network. We analyzed the data using two models: an etiologic model (rate based) and a predictive model (risk based). In both models, random effects (shared frailties) were introduced to assess heterogeneity. Death and discharge without NI were treated as competing events for NI. Results: There was large heterogeneity across ICUs in NI hazard rates, which remained after accounting for multilevel risk factors, meaning that unobserved ICU-specific factors influence NI occurrence. Heterogeneity across ICUs in terms of the cumulative probability of NI was even more pronounced. Several risk factors had markedly different associations in the rate-based and risk-based models. For some, the associations differed in magnitude; for example, high Acute Physiology and Chronic Health Evaluation II (APACHE II) scores were associated with modest increases in the rate of nosocomial bacteremia but large increases in the risk. Others differed in sign; for example, a respiratory versus cardiovascular diagnostic category was associated with a reduced rate of nosocomial bacteremia but an increased risk. Conclusions: A combination of competing risks and multilevel models is required to understand direct and indirect risk factors for NI and to distinguish patient-level from ICU-level factors.
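A hedged toy simulation (not the HELICS-ENVIN analysis itself) showing how an ICU-level shared frailty and a competing event shape the observed cumulative incidence of NI; all hazards are illustrative assumptions:

```python
# Illustrative discrete-time competing-risks simulation with an ICU-level
# shared frailty; hazard values are assumptions, not estimates from the paper.
import numpy as np

rng = np.random.default_rng(0)

def simulate_icu(n_patients, frailty_sd=0.5, base_ni=0.02, base_comp=0.10, horizon=60):
    """One ICU draws a single log-normal frailty that multiplies every
    patient's daily NI hazard; death/discharge without NI competes with NI.
    Returns the ICU's observed cumulative incidence of NI by `horizon` days."""
    frailty = np.exp(rng.normal(0.0, frailty_sd))      # shared within the ICU
    ni_events = 0
    for _ in range(n_patients):
        for _day in range(horizon):
            if rng.random() < base_ni * frailty:       # NI occurs first
                ni_events += 1
                break
            if rng.random() < base_comp:               # discharged/died without NI
                break
    return ni_events / n_patients

# Heterogeneity across ICUs: identical patient-level hazards, different frailties.
incidences = [simulate_icu(500) for _ in range(20)]
print("cumulative NI incidence across 20 simulated ICUs (10th/50th/90th pct):",
      np.round(np.quantile(incidences, [0.1, 0.5, 0.9]), 3))
```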
Abstract:
Extant models of decision making in social neurobiological systems have typically explained task dynamics as characterized by transitions between two attractors. In this paper, we model a three-attractor task exemplified in a team sport context. The model showed that an attacker–defender dyadic system can be described by the angle x between a vector connecting the participants and the try line. This variable was proposed as an order parameter of the system and could be dynamically expressed by integrating a potential function. Empirical evidence has revealed that this kind of system has three stable attractors, with a potential function of the form V(x) = −k₁x + k₂ax²/2 − bx⁴/4 + x⁶/6, where k₁ and k₂ are two control parameters. Random fluctuations were also observed in system behavior, modeled as white noise ε_t, leading to the motion equation dx/dt = −dV/dx + √Q ε_t, where Q is the noise variance. The model successfully mirrored the behavioral dynamics of agents in a social neurobiological system, exemplified by interactions of players in a team sport.
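A minimal numerical sketch of the stated dynamics, integrating dx/dt = −dV/dx + √Q ε_t with the Euler-Maruyama scheme; the parameter values are illustrative choices that produce three attractors, not the paper's fitted values:

```python
import numpy as np

def dV_dx(x, k1, k2, a, b):
    """Gradient of V(x) = -k1*x + k2*a*x**2/2 - b*x**4/4 + x**6/6."""
    return -k1 + k2 * a * x - b * x**3 + x**5

def simulate(k1=0.0, k2=1.0, a=1.0, b=4.0, Q=0.3, dt=0.01, steps=50_000, x0=0.0, seed=1):
    """Euler-Maruyama integration of dx/dt = -dV/dx + sqrt(Q) * white noise."""
    rng = np.random.default_rng(seed)
    x = np.empty(steps)
    x[0] = x0
    for t in range(1, steps):
        drift = -dV_dx(x[t - 1], k1, k2, a, b)
        x[t] = x[t - 1] + drift * dt + np.sqrt(Q * dt) * rng.standard_normal()
    return x

# With these illustrative values the potential has stable attractors at
# x = 0 and x ~ +/-1.93; the central well is shallow, so the noise soon
# pushes the state into one of the deeper outer wells.
trajectory = simulate()
print("final position:", round(float(trajectory[-1]), 2))
```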
Abstract:
Computational neuroscience aims to elucidate the mechanisms of neural information processing and population dynamics, through a methodology of incorporating biological data into complex mathematical models. Existing simulation environments model at a particular level of detail; none allow a multi-level approach to neural modelling. Moreover, most are not engineered to produce compute-efficient solutions, an important issue because sufficient processing power is a major impediment in the field. This project aims to apply modern software engineering techniques to create a flexible high performance neural modelling environment, which will allow rigorous exploration of model parameter effects, and modelling at multiple levels of abstraction.
Abstract:
The quality of environmental decisions should be gauged according to managers' objectives. Management objectives generally seek to maximize quantifiable measures of system benefit, for instance population growth rate. Reaching these goals often requires a certain degree of learning about the system. Learning can occur by using management actions in combination with a monitoring system. Furthermore, actions can be chosen strategically to obtain specific kinds of information. Formal decision-making tools can choose actions to favor such learning in two ways: implicitly, via the optimization algorithm that is used when there is a management objective (for instance, when using adaptive management), or explicitly, by quantifying knowledge and using it as the fundamental project objective, an approach new to conservation. This paper outlines three conservation project objectives: a pure management objective, a pure learning objective, and an objective that is a weighted mixture of the two. We use eight optimization algorithms to choose actions that meet project objectives and illustrate them in a simulated conservation project. The algorithms provide a taxonomy of decision-making tools in conservation management when there is uncertainty surrounding competing models of system function. The algorithms build upon each other such that their differences are highlighted and practitioners may see where their decision-making tools can be improved. © 2010 Elsevier Ltd.
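A hedged sketch of the weighted-objective idea (not the paper's eight algorithms): a one-step lookahead that scores each candidate action by a weighted sum of expected management benefit and expected information gain about which of two competing models is correct. All probabilities are illustrative assumptions.

```python
# Weighted management/learning objective: weight = 1 is pure management,
# weight = 0 is pure learning. Numbers are illustrative assumptions.
import numpy as np

# Probability that the managed population grows under each action, per model.
GROWTH_PROB = {          # action -> (model A, model B)
    "burn":    (0.70, 0.30),
    "no_burn": (0.50, 0.55),
}

def entropy(p):
    probs = np.clip(np.array([p, 1 - p]), 1e-12, 1.0)
    return float(-(probs * np.log(probs)).sum())

def score_action(action, belief_a, weight):
    pa, pb = GROWTH_PROB[action]
    p_grow = belief_a * pa + (1 - belief_a) * pb          # expected benefit
    # Expected posterior entropy after observing growth / no growth.
    post_grow = belief_a * pa / p_grow
    post_fail = belief_a * (1 - pa) / (1 - p_grow)
    exp_entropy = p_grow * entropy(post_grow) + (1 - p_grow) * entropy(post_fail)
    info_gain = entropy(belief_a) - exp_entropy
    return weight * p_grow + (1 - weight) * info_gain

belief = 0.5                                              # prior on model A
for w in (1.0, 0.5, 0.0):
    best = max(GROWTH_PROB, key=lambda a: score_action(a, belief, w))
    print(f"objective weight {w}: choose {best}")
```

With these numbers, a pure management objective picks the safer action, while any appreciable learning weight switches to the more informative one.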
Abstract:
Money is often a limiting factor in conservation, and attempting to conserve endangered species can be costly. Consequently, a framework for optimizing fiscally constrained conservation decisions for a single species is needed. In this paper we find the optimal budget allocation among isolated subpopulations of a threatened species to minimize local extinction probability. We solve the problem using stochastic dynamic programming, derive a useful and simple alternative guideline for allocating funds, and test its performance using forward simulation. The model considers subpopulations that persist in habitat patches of differing quality, which in our model is reflected in different relationships between money invested and extinction risk. We discover that, in most cases, subpopulations that are less efficient to manage should receive more money than those that are more efficient to manage, due to higher investment needed to reduce extinction risk. Our simple investment guideline performs almost as well as the exact optimal strategy. We illustrate our approach with a case study of the management of the Sumatran tiger, Panthera tigris sumatrae, in Kerinci Seblat National Park (KSNP), Indonesia. We find that different budgets should be allocated to the separate tiger subpopulations in KSNP. The subpopulation that is not at risk of extinction does not require any management investment. Based on the combination of risks of extinction and habitat quality, the optimal allocation for these particular tiger subpopulations is an unusual case: subpopulations that occur in higher-quality habitat (more efficient to manage) should receive more funds than the remaining subpopulation that is in lower-quality habitat. Because the yearly budget allocated to the KSNP for tiger conservation is small, to guarantee the persistence of all the subpopulations that are currently under threat we need to prioritize those that are easier to save. When allocating resources among subpopulations of a threatened species, the combined effects of differences in habitat quality, cost of action, and current subpopulation probability of extinction need to be integrated. We provide a useful guideline for allocating resources among isolated subpopulations of any threatened species. © 2010 by the Ecological Society of America.
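A hedged, static analogue of the allocation problem (the paper itself uses stochastic dynamic programming over time): a small dynamic-programming sweep that splits a discrete budget across subpopulations so that the probability that all of them persist is maximised. The extinction curves are illustrative assumptions.

```python
# Budget allocation across isolated subpopulations; extinction curves
# p_i(b) = p0_i * exp(-eff_i * b) are illustrative assumptions.
import numpy as np

SUBPOPS = [                       # (baseline extinction prob, funding efficiency)
    {"name": "core", "p0": 0.30, "eff": 0.50},
    {"name": "edge", "p0": 0.60, "eff": 0.15},   # less efficient to manage
]
BUDGET = 10                       # discrete funding units

def extinction(p0, eff, units):
    return p0 * np.exp(-eff * units)

def allocate(subpops, budget):
    """DP over subpopulations: best[b] = (max prob that all considered
    subpopulations persist using b units, the allocation achieving it)."""
    best = {b: (1.0, []) for b in range(budget + 1)}
    for sp in subpops:
        new = {}
        for b in range(budget + 1):
            options = []
            for give in range(b + 1):
                prev_p, prev_alloc = best[b - give]
                persist = 1.0 - extinction(sp["p0"], sp["eff"], give)
                options.append((prev_p * persist, prev_alloc + [give]))
            new[b] = max(options, key=lambda t: t[0])
        best = new
    return best[budget]

prob, alloc = allocate(SUBPOPS, BUDGET)
for sp, units in zip(SUBPOPS, alloc):
    print(f"{sp['name']}: {units} units")
print("P(all persist) =", round(prob, 3))
```

With these illustrative curves the less efficient subpopulation absorbs most of the budget, echoing the general pattern reported in the paper.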
Abstract:
This paper focuses on finite element (FE) response sensitivity and reliability analyses considering smooth constitutive material models. A reinforced concrete frame is modeled for FE sensitivity analysis using the direct differentiation method under both static and dynamic load cases. A reliability analysis is then performed to predict the seismic behavior of the frame. Displacement sensitivity discontinuities are observed along the pseudo-time axis when non-smooth concrete and reinforcing steel models are used under quasi-static loading, whereas the smooth materials show continuous response sensitivities at the elastic-to-plastic transition points. The normalized sensitivity results are also used to measure the relative importance of the material parameters for the structural responses. In the FE reliability analysis, the influence of the smoothness of the reinforcing steel model is examined: more efficient and reasonable reliability estimates can be achieved with the smooth material model compared with the bilinear material constitutive model.
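A hedged single-degree-of-freedom illustration of the direct differentiation method with a smooth material response (not the paper's reinforced concrete frame model); the tanh-type resistance and parameter values are assumptions:

```python
# Direct differentiation method (DDM) on one DOF with a smooth resisting force
# R(u) = Fy*tanh(k*u/Fy); all values are illustrative assumptions.
import numpy as np

def solve_and_sensitivity(F=80.0, k=100.0, Fy=100.0, tol=1e-10):
    """Return displacement u satisfying F = R(u) and its sensitivity du/dk."""
    u = F / k                                   # elastic initial guess
    for _ in range(100):
        R = Fy * np.tanh(k * u / Fy)            # smooth resisting force
        Kt = k / np.cosh(k * u / Fy) ** 2       # tangent stiffness dR/du
        du = (F - R) / Kt                       # Newton update
        u += du
        if abs(du) < tol:
            break
    # DDM: differentiate R(u(k), k) - F = 0 w.r.t. k  =>  Kt * du/dk = -dR/dk
    dR_dk = u / np.cosh(k * u / Fy) ** 2        # partial derivative at fixed u
    du_dk = -dR_dk / Kt
    return u, du_dk

u, du_dk = solve_and_sensitivity()
# Finite-difference check of the DDM sensitivity.
eps = 1e-4
u_pert, _ = solve_and_sensitivity(k=100.0 + eps)
print("u =", round(u, 6), " DDM du/dk =", round(du_dk, 6),
      " FD du/dk =", round((u_pert - u) / eps, 6))
```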
Optimum position of steel outrigger system for high rise composite buildings subjected to wind loads
Abstract:
The responses of composite buildings under wind loads clearly become more critical as the building becomes taller, less stiff and more lightweight. As a composite building increases in height, the stiffness of the structure becomes a more important factor, and a belt truss and outrigger system is often introduced to provide sufficient lateral stiffness to the structure. Most research work to date is limited to reinforced concrete buildings with concrete outrigger systems, simple building plan layouts, a single building height, wind in one direction and a single level of outrigger arrangement. There is a scarcity of research on the effective position of the outrigger level in composite buildings under lateral wind loading when the building plan layout, height and outrigger arrangement are varied. The aim of this paper is to determine the optimum location of steel belt and outrigger systems using different arrangements of single and double level outriggers for different sizes, shapes and heights of composite building. In this study a comprehensive finite element modelling of composite building prototypes is carried out, with three different layouts (rectangular, octagonal and L-shaped) and three different heights (28, 42 and 57 storeys). The models are analysed for dynamic cyclonic wind loads with various combinations of steel belt and outrigger bracings. It is concluded that the effectiveness of single and double level steel belt and outrigger bracing varies with their position for different sizes, shapes and heights of composite building.
Abstract:
There are currently 23,500 level crossings in Australia, broadly divided into two categories: active level crossings, which are fully automatic and have boom barriers, alarm bells, flashing lights and pedestrian gates; and passive level crossings, which are not automatic and aim to control road and pedestrian traffic solely with stop and give-way signs. Active level crossings are considered the gold standard for transport ergonomics when grade separation (i.e. constructing an over- or underpass) is not viable. In Australia, the current strategy is to upgrade passive level crossings with active controls each year, but active crossings are also associated with traffic congestion, largely as a result of extended closure times. The percentage of time level crossings are closed to road vehicles during peak periods increases with the frequency of train services. The popular perception appears to be that once a level crossing is upgraded, one is free to wipe one's hands and consider the job done. However, there may also be environments where active protection is not enough, but where the setting may not justify the capital costs of grade separation. Indeed, the associated congestion and traffic delay could compromise safety by contributing to risk-taking behaviour by motorists and pedestrians. In these environments it is important to understand what human factors issues are present and to ask whether a one-size-fits-all solution is indeed the most ergonomically sound solution for today's transport needs.
Abstract:
Ectopic calcification (EC), the pathological deposition of calcium and phosphate in extra-skeletal tissues, may be associated with hypercalcaemic and hyperphosphataemic disorders, or it may occur in the absence of metabolic abnormalities. In addition, EC may be inherited as part of several monogenic disorders, and studies of these have provided valuable insights into the metabolic pathways regulating mineral metabolism. For example, studies of tumoural calcinosis, a disorder characterised by hyperphosphataemia and progressive EC, have revealed mutations of fibroblast growth factor 23 (FGF23), polypeptide N-acetyl galactosaminyltransferase 3 (GALNT3) and klotho (KL), which are all part of a phosphate-regulating pathway. However, such studies in humans are limited by the lack of available large families with EC, and to facilitate such studies we assessed the progeny of mice treated with the chemical mutagen N-ethyl-N-nitrosourea (ENU) for EC. This identified two mutants with autosomal recessive forms of EC and reduced lifespan, designated Ecalc1 and Ecalc2. Genetic mapping localized the Ecalc1 and Ecalc2 loci to an 11.0 Mb region on chromosome 5 that contained the klotho gene (Kl), and DNA sequence analysis identified nonsense (Gln203Stop) and missense (Ile604Asn) Kl mutations in Ecalc1 and Ecalc2 mice, respectively. The Gln203Stop mutation, located in the KL1 domain, was severely hypomorphic and led to a 17-fold reduction of renal Kl expression. The Ile604Asn mutation, located in the KL2 domain, was predicted to impair klotho protein stability, and in vitro expression studies in COS-7 cells revealed endoplasmic reticulum retention of the Ile604Asn mutant. Further phenotype studies undertaken in Ecalc1 (kl203X/203X) mice demonstrated elevations in plasma concentrations of phosphate, FGF23 and 1,25-dihydroxyvitamin D. Thus, two allelic variants of Kl that cause EC and represent mouse models for tumoural calcinosis have been established. © 2015 Esapa et al.
Abstract:
This thesis presents a novel approach to building large-scale agent-based models of networked physical systems, using a compositional approach to provide extensibility and flexibility in building the models and simulations. A software framework (MODAM - MODular Agent-based Model) was implemented for this purpose and validated through simulations. These simulations allow assessment of the impact of technological change on the electricity distribution network by looking at the trajectories of electricity consumption at key locations over many years.
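A hedged sketch of the compositional idea (MODAM's actual API is not reproduced here): agents are assembled from pluggable behaviour modules and a feeder aggregates their load, so technological change is modelled by swapping modules rather than rewriting the model.

```python
# Compositional agent-based sketch; module names and load values are
# illustrative assumptions, not MODAM components.
from dataclasses import dataclass, field
from typing import Callable, List

Module = Callable[[int], float]          # hour of day -> kW contribution

def base_load(hour: int) -> float:
    return 0.4 if hour < 6 else 0.8

def air_conditioner(hour: int) -> float:
    return 2.0 if 14 <= hour <= 20 else 0.0

def rooftop_pv(hour: int) -> float:      # negative load: local generation
    return -1.5 if 9 <= hour <= 15 else 0.0

@dataclass
class HouseholdAgent:
    modules: List[Module] = field(default_factory=list)
    def demand(self, hour: int) -> float:
        return sum(m(hour) for m in self.modules)

@dataclass
class Feeder:
    households: List[HouseholdAgent]
    def profile(self) -> List[float]:
        return [sum(h.demand(hr) for h in self.households) for hr in range(24)]

# Technological change = swapping modules in and out, not rewriting the model.
today = Feeder([HouseholdAgent([base_load, air_conditioner]) for _ in range(50)])
future = Feeder([HouseholdAgent([base_load, air_conditioner, rooftop_pv]) for _ in range(50)])
print("evening peak today / future:", max(today.profile()), "/", max(future.profile()), "kW")
print("midday load today / future :", today.profile()[12], "/", future.profile()[12], "kW")
```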
Abstract:
Purpose – Business models to date have remained the creation of management; however, it is the belief of the authors that designers should be critically approaching, challenging and creating new business models as part of their practice. This belief portrays a new era in which business model constructs become the design brief of the future and fuel design and innovation to work together at the strategic level of an organisation. Design/methodology/approach – The purpose of this paper is to explore and investigate business model design. The research followed a deductive, structured qualitative content analysis approach utilizing a predetermined categorization matrix. The analysis of forty business cases uncovered commonalities in the key strategic drivers behind these innovative business models. Findings – Five business model typologies were derived from this content analysis, from which quick prototypes of new business models can be created. Research limitations/implications – Implications from this research suggest there is no “one right” model; rather, through experimentation, the generation of many unique and diverse concepts can result in greater possibilities for future innovation and sustained competitive advantage. Originality/value – This paper builds upon the emerging research and exploration into the importance and relevance of dynamic, design-driven approaches to the creation of innovative business models. These models aim to synthesize knowledge gained from real-world examples into a tangible, accessible and provoking framework that provides new prototyping templates to aid the process of business model experimentation.
Abstract:
With the rapid development of various technologies and applications in smart grid implementation, demand response has attracted growing research interest because of its potential to enhance power grid reliability with reduced system operation costs. This paper presents a new demand response model with elastic economic dispatch in a locational marginal pricing market. It models system economic dispatch as a feedback control process and introduces a flexible and adjustable load cost as a control signal to adjust demand response. Compared with the conventional “one-time-use” static load dispatch model, this dynamic feedback demand response model may adjust the load to a desired level in a finite number of time steps, and a proof of convergence is provided. In addition, Monte Carlo simulation and boundary calculation using interval mathematics are applied to describe the uncertainty of end-users' response to an independent system operator's expected dispatch. A numerical analysis based on the modified Pennsylvania-Jersey-Maryland power pool five-bus system is introduced for simulation, and the results verify the effectiveness of the proposed model. System operators may use the proposed model to obtain insights into demand response processes for their decision-making regarding system load levels and operation conditions.
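A hedged sketch of the feedback idea (not the paper's locational-marginal-pricing dispatch model): an adjustable load-cost signal is corrected each step until aggregate demand reaches a target, and Monte Carlo draws of an uncertain end-user elasticity bound the resulting load; all numbers are illustrative assumptions.

```python
# Feedback adjustment of a load-cost signal plus Monte Carlo on the uncertain
# end-user response; the demand curve and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(7)

def demand(price, elasticity, base=500.0, ref_price=50.0):
    """Simple linear response: demand falls as the cost signal rises."""
    return base - elasticity * (price - ref_price)

def feedback_dispatch(target=450.0, steps=30, gain=0.1, elasticity=4.0):
    """Raise/lower the cost signal each step until the load reaches the target."""
    price = 50.0
    for _ in range(steps):
        load = demand(price, elasticity)
        price += gain * (load - target)     # integral-style correction
    return price, demand(price, elasticity)

price, load = feedback_dispatch()
print(f"converged cost signal {price:.1f}, load {load:.1f} MW")

# Monte Carlo on the uncertain end-user elasticity at the converged signal.
samples = demand(price, rng.normal(4.0, 0.8, size=5000))
print("load interval (5th-95th pct):", np.round(np.percentile(samples, [5, 95]), 1))
```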