983 results for Fire design rules
Abstract:
P systems, or Membrane Computing, are a type of distributed, massively parallel and non-deterministic system based on biological membranes. They are inspired by the way cells process chemical compounds, energy and information. These systems perform a computation through transitions between consecutive configurations. As is well known in membrane computing, a configuration consists of an m-tuple of the multisets present at a given moment in the m regions the system has at that moment. Transitions between two configurations are performed by applying the evolution rules present in each region of the system in a non-deterministic, maximally parallel manner. This work is part of an exhaustive line of investigation whose final objective is to implement a hardware system that evolves in the same way a transition P system does. To achieve this objective, the generic system has been divided into several stages, each addressing specific concerns. This paper develops the stage that obtains the part of the system in charge of applying the active rules. Different algorithms exist for counting the number of times the active rules are applied. Here we present an algorithm with an improved property: the number of iterations needed to reach the final values is smaller than when each rule is applied step by step. Hence the whole process requires fewer steps and therefore finishes in a shorter time.
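The improvement described above, applying an active rule as many times as possible in a single arithmetic step rather than once per iteration, can be illustrated with a minimal Python sketch (the rule and the region's multiset below are hypothetical, not taken from the paper):

```python
from collections import Counter

def max_applications(rule_lhs, region):
    """Largest k such that k copies of the rule's left-hand side fit in the region."""
    return min(region[obj] // need for obj, need in rule_lhs.items())

# Hypothetical region and rule: each application consumes 2 a's and 1 b.
region = Counter({'a': 7, 'b': 4, 'c': 1})
rule_lhs = {'a': 2, 'b': 1}

k = max_applications(rule_lhs, region)   # k == 3, obtained in one step
for obj, need in rule_lhs.items():
    region[obj] -= k * need              # consume all k applications at once
print(k, region)                         # region now holds a:1, b:1, c:1
```

A step-by-step scheme would need k iterations to reach the same multiset; computing k directly is what reduces the iteration count.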
Abstract:
Transition P systems are computational models based on basic features of biological membranes and on the observation of biochemical processes. In these models, a membrane contains multisets of objects, which evolve according to given evolution rules. In the field of Transition P system implementation, the need has been identified to determine how long the application of active evolution rules in membranes is going to take. In addition, having time estimates for rule application makes it possible to take important decisions related to the design of hardware/software architectures. In this paper we propose a new evolution rules application algorithm oriented towards the implementation of Transition P systems. The developed algorithm is sequential and has linear-order complexity in the number of evolution rules. Moreover, it achieves the smallest execution times compared with the preceding algorithms. The algorithm is therefore very appropriate for the implementation of Transition P systems on sequential devices.
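A minimal sketch of a sequential scheme of this kind, linear in the number of rules (two passes), is shown below; it is a generic illustration of maximally parallel rule application, not the paper's exact algorithm, and the rules are hypothetical:

```python
import random
from collections import Counter

def applicability(lhs, region):
    """How many more times a rule's left-hand side fits in the region."""
    return min(region[o] // n for o, n in lhs.items())

def apply_rules_sequential(rules, region):
    """One maximally parallel step, visiting each rule a constant number of times."""
    counts = {name: 0 for name in rules}
    order = list(rules)
    random.shuffle(order)
    for name in order:                    # pass 1: random partial application
        k = random.randint(0, applicability(rules[name], region))
        for o, n in rules[name].items():
            region[o] -= k * n
        counts[name] += k
    for name in order:                    # pass 2: top each rule up to maximality
        k = applicability(rules[name], region)
        for o, n in rules[name].items():
            region[o] -= k * n
        counts[name] += k
    return counts

region = Counter({'a': 9, 'b': 5})
rules = {'r1': {'a': 2, 'b': 1}, 'r2': {'a': 1, 'b': 1}}
print(apply_rules_sequential(rules, region), region)
```

After the second pass no rule remains applicable, so the step is maximal, and both passes are linear in the number of rules.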
Abstract:
Workflows are sets of activities that implement and realise business goals. Modern business goals add extra requirements to workflow systems and their management. Workflows may cross many organisations and utilise services on a variety of devices and/or supported by different platforms. Current workflows are therefore inherently context-aware. Each context is governed and constrained by its own policies and rules, which prevent unauthorised participants from executing sensitive tasks and also prevent tasks from accessing unauthorised services and/or data. We present a sound, multi-layered design language for the design and analysis of secure and context-aware workflow systems.
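As a minimal illustration of the kind of rule such policies express, consider a deny-by-default authorisation check; the task names, contexts, and roles below are hypothetical, not taken from the paper:

```python
# Hypothetical policy: a task may run only if the participant's role is
# authorised for it in the current execution context.
POLICY = {
    ("approve_payment", "finance_dept"): {"manager", "auditor"},
    ("approve_payment", "mobile_device"): set(),   # never allowed from mobile
}

def may_execute(task, context, role):
    """Deny by default: execution requires an explicit (task, context) grant."""
    return role in POLICY.get((task, context), set())

assert may_execute("approve_payment", "finance_dept", "manager")
assert not may_execute("approve_payment", "mobile_device", "manager")
```

A layered design language would express such rules per context layer and allow them to be analysed for conflicts before deployment.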
Abstract:
Boyko Bl. Bantchev - We present the rationale for, and a description of, a programming language in compositional style intended for experimental and educational purposes. By "compositional" we mean a functional style of programming in which a computation is a hierarchy of compositions and applications of functions. One of the language's data types is that of geometric figures, which can be obtained through simple rules for relating figures and thus also form hierarchical compositions. The language is strongly influenced by GeomLab, but differs from it significantly in a number of properties. The article examines the main features of the language; its detailed description and its figure-construction capabilities will be presented in an accompanying publication.
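The compositional style referred to, computation as a hierarchy of compositions and applications of functions, can be sketched generically in Python; this is an illustration of the idea only, not the language described in the article:

```python
from functools import reduce

def compose(*fs):
    """Right-to-left composition: compose(f, g)(x) == f(g(x))."""
    return reduce(lambda f, g: lambda x: f(g(x)), fs)

# A figure-like hierarchy: each stage wraps the result of the previous one.
scale2 = lambda fig: f"scale(2, {fig})"
beside = lambda fig: f"beside({fig}, {fig})"

pipeline = compose(scale2, beside)
print(pipeline("square"))   # scale(2, beside(square, square))
```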
Abstract:
The span of control is the most discussed single concept in classical and modern management theory. In specifying conditions for organizational effectiveness, the span of control has generally been regarded as a critical factor. Existing research has focused mainly on qualitative methods to analyze this concept, for example heuristic rules based on experience and/or intuition. This research takes a quantitative approach to the problem and formulates it as a binary integer model, which is used as a tool to study the organizational design issue. The model considers a range of requirements affecting the management and supervision of a given set of jobs in a company. The decision variables include the allocation of jobs to workers, considering the complexity and compatibility of each job with respect to workers, and the management requirements for planning, execution, training, and control activities in a hierarchical organization. The objective of the model is to minimize operations cost, which is the sum of the supervision costs at each level of the hierarchy and the costs of the workers assigned to jobs. The model is intended for application in the make-to-order industries as a design tool. It could also be applied to make-to-stock companies as an evaluation tool, to assess the optimality of their current organizational structure. Extensive experiments were conducted to validate the model, to study its behavior, and to evaluate the impact of changing parameters with practical problems. This research proposes a meta-heuristic approach to solving large-size problems, based on the concept of greedy algorithms and the Meta-RaPS algorithm. The proposed heuristic was evaluated with two measures of performance: solution quality and computational speed. Quality is assessed by comparing the obtained objective function value to the one achieved by the optimal solution; computational efficiency is assessed by comparing the computer time used by the proposed heuristic to the time taken by a commercial software system. Test results show that the proposed heuristic generates good solutions in a time-efficient manner.
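The core Meta-RaPS idea, greedy construction with randomised priorities repeated many times while keeping the best solution, can be sketched for a toy job-to-worker assignment; capacities, hierarchy levels, and the model's actual constraints are omitted, and all names and parameter values are illustrative:

```python
import random

def metaraps_assign(jobs, workers, cost, p=0.7, r=0.15, iters=200):
    """Meta-RaPS-style construction: build many greedy-randomised assignments
    of jobs to workers and keep the cheapest. cost[j][w] is the (illustrative)
    cost of worker w doing job j."""
    best, best_cost = None, float("inf")
    for _ in range(iters):
        assign, total = {}, 0.0
        for j in jobs:
            ranked = sorted(workers, key=lambda w: cost[j][w])
            if random.random() < p:                  # pure greedy move
                w = ranked[0]
            else:                                    # restricted candidate list
                cut = cost[j][ranked[0]] * (1 + r)
                w = random.choice([w for w in ranked if cost[j][w] <= cut])
            assign[j] = w
            total += cost[j][w]
        if total < best_cost:
            best, best_cost = assign, total
    return best, best_cost

jobs, workers = ["j1", "j2", "j3"], ["w1", "w2"]
cost = {j: {w: random.uniform(1, 10) for w in workers} for j in jobs}
print(metaraps_assign(jobs, workers, cost))
```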
Abstract:
Fire is a globally distributed disturbance that impacts terrestrial ecosystems and has been proposed to be a global “herbivore.” Fire, like herbivory, is a top-down driver that converts organic materials into inorganic products, alters community structure, and acts as an evolutionary agent. Though grazing and fire may have some comparable effects in grasslands, they do not have similar impacts on species composition and community structure. However, the concept of fire as a global herbivore implies that fire and herbivory may have similar effects on plant functional traits. Using 22 years of data from a mesic, native tallgrass prairie with a long evolutionary history of fire and grazing, we tested if trait composition between grazed and burned grassland communities would converge, and if the degree of convergence depended on fire frequency. Additionally, we tested if eliminating fire from frequently burned grasslands would result in a state similar to unburned grasslands, and if adding fire into a previously unburned grassland would cause composition to become more similar to that of frequently burned grasslands. We found that grazing and burning once every four years showed the most convergence in traits, suggesting that these communities operate under similar deterministic assembly rules and that fire and herbivory are similar disturbances to grasslands at the trait-group level of organization. Three years after reversal of the fire treatment we found that fire reversal had different effects depending on treatment. The formerly unburned community that was then burned annually became more similar to the annually burned community in trait composition suggesting that function may be rapidly restored if fire is reintroduced. Conversely, after fire was removed from the annually burned community trait composition developed along a unique trajectory indicating hysteresis, or a time lag for structure and function to return following a change in this disturbance regime. We conclude that functional traits and species-based metrics should be considered when determining and evaluating goals for fire management in mesic grassland ecosystems.
Abstract:
In fire-dependent forests, managers are interested in predicting the consequences of prescribed burning on post-fire tree mortality. We examined the effects of prescribed fire on tree mortality in Florida Keys pine forests, using a factorial design with understory type, season, and year of burn as factors. We also used logistic regression to model the effects of burn season, fire severity, and tree dimensions on individual tree mortality. Despite limited statistical power due to problems in carrying out the full suite of planned experimental burns, associations with tree and fire variables were observed. Post-fire pine tree mortality was negatively correlated with tree size and positively correlated with char height and percent crown scorch. Unlike post-fire mortality, tree mortality associated with storm surge from Hurricane Wilma was greater in the large size classes. Because of their influence on population structure and fuel dynamics, the size-selective mortality patterns following fire and storm surge have practical importance for using fire as a management tool in Florida Keys pinelands in the future, particularly as the threats to their continued existence from tropical storms and sea-level rise are expected to increase.
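A logistic regression of the kind described can be sketched as follows; the records and column meanings are illustrative stand-ins for the study's variables (tree size, char height, percent crown scorch), not its data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical tree records: [dbh_cm, char_height_m, crown_scorch_pct].
X = np.array([[32, 0.5, 10], [28, 0.8, 20], [9, 2.5, 90], [12, 2.0, 70],
              [25, 1.0, 15], [7, 3.0, 95], [30, 0.4, 5],  [11, 2.2, 80],
              [14, 1.5, 60], [26, 1.2, 30]])
y = np.array([0, 0, 1, 1, 0, 1, 0, 1, 0, 1])   # 1 = tree died post-fire

fit = LogisticRegression().fit(X, y)
print(fit.coef_)   # coefficient signs give the direction of each effect
# Predicted mortality probability for a small, heavily scorched tree:
print(fit.predict_proba([[10, 2.0, 85]])[0, 1])
```

In the study's terms, a negative size coefficient and positive char/scorch coefficients would reproduce the reported correlations.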
Abstract:
In this research, I analyze the effects of candidate nomination rules and campaign financing rules on elite recruitment into the national legislatures of Germany and the United States. This dissertation is both theory-driven and exploratory. While the effects of electoral rules are frequently studied in political science, the emphasis there is on electoral rules that operate post-election. My focus, in contrast, is on electoral rules that have an effect prior to the election. Furthermore, my dissertation is comparative by design. The research question is twofold: do electoral rules have an effect on elite recruitment, and does it matter? To answer these questions, I create a large-N original data set in which I code the behavior and the recruitment paths and patterns of members of the American House of Representatives and the German Bundestag. Furthermore, I include interviews with members of these two national legislatures. Both the statistical analyses and the interviews provide affirmative evidence for my working hypothesis that differences in electoral rules lead to different types of elite recruitment. To that end, I use the active-politician concept, through which I dichotomously distinguish the economic behavior of politicians. Thanks to the exploratory nature of my research, I also discover the phenomenon of the differential valence of local and state political office for entrance into national office in comparative perspective. By statistically identifying this hitherto unknown paradox, as well as evidencing the effects of electoral rules, I show that, besides ideology and culture, institutional rules are key in shaping the ruling elite. The way institutional rules are set up, in particular electoral rules, affects not only how the electorate will vote and how seats will be distributed, but also what type of people will end up in elected office.
Abstract:
I explore and analyze the problem of finding the socially optimal capital requirements for financial institutions, considering two distinct channels of contagion: direct exposures among the institutions, as represented by a network, and fire sales externalities, which reflect the negative price impact of massive liquidation of assets. These two channels amplify shocks from individual financial institutions to the financial system as a whole and thus increase the risk of joint defaults amongst the interconnected financial institutions; this is often referred to as systemic risk. In the model, there is a trade-off between reducing systemic risk and raising the capital requirements of the financial institutions. The policymaker considers this trade-off and determines the optimal capital requirements for individual financial institutions. I provide a method for finding and analyzing the optimal capital requirements that can be applied to arbitrary network structures and arbitrary distributions of investment returns.
In particular, I first consider a network model consisting only of direct exposures and show that the optimal capital requirements can be found by solving a stochastic linear programming problem. I then extend the analysis to financial networks with default costs and show that the optimal capital requirements can be found by solving a stochastic mixed-integer programming problem. The computational complexity of this problem poses a challenge, and I develop an iterative algorithm that can be executed efficiently. I show that the iterative algorithm leads to solutions that are nearly optimal by comparing them with lower bounds based on a dual approach, and I show that the iterative algorithm converges to the optimal solution.
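The direct-exposures channel described above is closely related to the standard Eisenberg-Noe clearing model, and its iterative flavour can be illustrated with the usual fixed-point iteration for clearing payments; this is a generic sketch under that model's assumptions with illustrative numbers, not the dissertation's algorithm:

```python
import numpy as np

def clearing_payments(L, e, tol=1e-10, max_iter=1000):
    """Eisenberg-Noe style fixed-point iteration for interbank clearing.
    L[i, j] is the nominal liability of bank i to bank j; e[i] is bank i's
    outside assets."""
    p_bar = L.sum(axis=1)                        # total obligations per bank
    with np.errstate(divide="ignore", invalid="ignore"):
        Pi = np.where(p_bar[:, None] > 0, L / p_bar[:, None], 0.0)
    p = p_bar.copy()                             # start from full payment
    for _ in range(max_iter):
        cash = e + Pi.T @ p                      # outside + interbank inflows
        p_new = np.minimum(p_bar, cash)          # pay in full, or pay out all cash
        if np.max(np.abs(p_new - p)) < tol:
            break
        p = p_new
    return p

L = np.array([[0.0, 10.0, 5.0],
              [4.0,  0.0, 6.0],
              [3.0,  2.0, 0.0]])
e = np.array([2.0, 1.0, 8.0])
print(clearing_payments(L, e))                   # clearing payment vector
```

Capital requirements enter such a model by shifting each bank's outside assets, which is what creates the trade-off the policymaker optimizes over.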
Finally, I incorporate fire sales externalities into the model. In particular, I am able to extend the analysis of systemic risk and the optimal capital requirements with a single illiquid asset to a model with multiple illiquid assets. The model with multiple illiquid assets incorporates liquidation rules used by the banks. I provide an optimization formulation whose solution provides the equilibrium payments for a given liquidation rule.
I further show that the socially optimal capital problem using the "socially optimal liquidation" and prioritized liquidation rules can be formulated as a convex problem and a convex mixed-integer problem, respectively. Finally, I illustrate the results of the methodology on numerical examples and discuss some implications for capital regulation policy and stress testing.
Abstract:
This paper presents a numerical study of a linear compressor cascade to investigate effective end wall profiling rules for highly loaded axial compressors. The first step in the research applies a correlation analysis to the different flow field parameters, by data mining over 600 profiling samples, to quantify how variations of loss, secondary flow and passage vortex interact with each other under the influence of a profiled end wall. The result identifies the dominant role of corner separation in the control of total pressure loss, providing a principle that only in a flow field with serious corner separation does the profiled end wall change total pressure loss, secondary flow and passage vortex in the same direction. In the second step, a multi-objective optimization of a profiled end wall is performed to reduce loss at the design point and near the stall point. The development of effective end wall profiling rules is based on the manner of secondary flow control rather than on the geometric features of the end wall. Using the optimum end wall cases from the Pareto front, a quantitative tool for analyzing secondary flow control is employed. The driving forces induced by a profiled end wall on different regions of the end wall flow are subjected to a detailed analysis and identified by their positive or negative influence in relieving corner separation, from which the effective profiling rules are further confirmed. It is found that the profiling rules for a cascade differ distinctly between the design point and the near-stall point, so loss control at different operating points is generally independent.
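The correlation-analysis step can be sketched as follows; the synthetic samples and column names below are assumptions standing in for the study's 600+ profiling samples, meant only to show the mechanics of the analysis, not to reproduce its findings:

```python
import numpy as np
import pandas as pd

# Illustrative stand-in data: each row is one end wall design with its
# computed flow-field metrics (all names and values assumed).
rng = np.random.default_rng(0)
n = 600
corner = rng.random(n)                             # corner separation severity
samples = pd.DataFrame({
    "corner_separation":   corner,
    "total_pressure_loss": 0.8 * corner + 0.2 * rng.random(n),
    "secondary_flow":      0.6 * corner + 0.4 * rng.random(n),
    "passage_vortex":      0.5 * corner + 0.5 * rng.random(n),
})

# Pairwise correlations over all samples ...
print(samples.corr(method="pearson"))

# ... and split by severity of corner separation: per the paper's principle,
# only in the severe group would loss, secondary flow and passage vortex be
# expected to move in the same direction under profiling.
severe = samples["corner_separation"] > samples["corner_separation"].median()
print(samples.loc[severe].corr())
print(samples.loc[~severe].corr())
```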
Abstract:
Faced with a WTO in a state of paralysis, large developed trading nations have shifted their attentions to other fora to pursue their trade policy objectives. In particular, preferential trade agreements (PTAs) are now being used to promote the regulatory disciplines that were previously rejected by developing countries at the multilateral level. These so-called ‘deep’ or ‘21st century’ PTAs address a variety of issues, from technical norms, procurement, investment protection and intellectual property rights to social and environmental protection. Moreover, recently, developed countries have sought to negotiate PTAs which are large in scale, both in terms of economic size and geographical reach, including the so-called ‘mega-regional’ PTAs, such as the EU-US Transatlantic Trade and Investment Partnership, the EU-Japan PTA, the Transpacific Partnership, and the China-backed Regional Comprehensive Economic Partnership. These mega-regional PTAs are distinctive not just in terms of their sheer size and the breadth and depth of issues addressed, but also because some of their proponents readily admit that one of the central aims pursued by such agreements is to design global rules on new trade issues. In other words, these agreements are being conceived as alternatives to multilateral rule making at the WTO level. The proliferation of 21st century trade deals raises important questions concerning the continued relevance of the WTO as a global rule-making venue, and the impact that the regulatory disciplines promoted in such agreements will have on both developing and developed countries. This paper discusses the emerging features of an international trading system that is increasingly populated by large-scale PTAs and discusses some of the points of tension that arise from such practice. Firstly, it examines instances of horizontal tension resulting from the proliferation of PTAs, particularly the extent to which such PTAs represent a threat to multilateral trade governance. Secondly, it looks at an example of vertical tension by examining the manner in which the imposition of regulatory disciplines through trade agreements can undermine the ability of countries, especially developing countries, to pursue legitimate public interest objectives. Finally, the paper considers a number of steps that could be considered to address some of the adverse effects associated with the fragmentation of the international trading system, including the option of embracing variable geometry within the WTO framework and the need to develop mechanisms that provide flexibility for developing countries in the implementation of regulatory disciplines.
Abstract:
Although wildfire plays an important role in maintaining biodiversity in many ecosystems, fire management to protect human assets is often carried out by different agencies than those tasked for conserving biodiversity. In fact, fire risk reduction and biodiversity conservation are often viewed as competing objectives. Here we explored the role of management through private land conservation and asked whether we could identify private land acquisition strategies that fulfill the mutual objectives of biodiversity conservation and fire risk reduction, or whether the maximization of one objective comes at a detriment to the other. Using a fixed budget and number of homes slated for development, we simulated 20 years of housing growth under alternative conservation selection strategies, and then projected the mean risk of fires destroying structures and the area and configuration of important habitat types in San Diego County, California, USA. We found clear differences in both fire risk projections and biodiversity impacts based on the way conservation lands are prioritized for selection, but these differences were split between two distinct groupings. If no conservation lands were purchased, or if purchases were prioritized based on cost or likelihood of development, both the projected fire risk and biodiversity impacts were much higher than if conservation lands were purchased in areas with high fire hazard or high species richness. Thus, conserving land focused on either of the two objectives resulted in nearly equivalent mutual benefits for both. These benefits not only resulted from preventing development in sensitive areas, but they were also due to the different housing patterns and arrangements that occurred as development was displaced from those areas. Although biodiversity conflicts may still arise using other fire management strategies, this study shows that mutual objectives can be attained through land-use planning in this region. These results likely generalize to any place where high species richness overlaps with hazardous wildland vegetation.
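The comparison of acquisition strategies described above can be sketched as a toy simulation; the parcel attributes, budget, and strategy names are assumptions for illustration, and the real study simulated housing growth and fire risk in far more detail:

```python
import random

# Hypothetical parcels: each has a cost, a fire-hazard score, and a species
# richness score.
parcels = [{"cost": random.uniform(1, 10),
            "hazard": random.random(),
            "richness": random.random()} for _ in range(500)]

def buy(parcels, budget, key):
    """Greedily acquire the highest-priority parcels until the budget runs out."""
    bought = []
    for p in sorted(parcels, key=key, reverse=True):
        if p["cost"] <= budget:
            budget -= p["cost"]
            bought.append(p)
    return bought

strategies = {
    "cheapest_first": lambda p: -p["cost"],
    "high_hazard":    lambda p: p["hazard"],
    "high_richness":  lambda p: p["richness"],
}
for name, key in strategies.items():
    b = buy(parcels, budget=200, key=key)
    print(name, "parcels:", len(b),
          "hazard covered:", round(sum(p["hazard"] for p in b), 1),
          "richness covered:", round(sum(p["richness"] for p in b), 1))
```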
Abstract:
Once the preserve of university academics and research laboratories with high-powered and expensive computers, the power of sophisticated mathematical fire models has now arrived on the desktop of the fire safety engineer. It is a revolution made possible by parallel advances in PC technology and fire modelling software. But while the tools have proliferated, there has not been a corresponding transfer of knowledge and understanding of the discipline from expert to general user. It is a serious shortfall of which the lack of suitable engineering courses dealing with the subject is symptomatic, if not the cause. The computational vehicles to run the models and an understanding of fire dynamics are not enough to exploit these sophisticated tools. Too often, they become 'black boxes' producing magic answers in exciting three-dimensional colour graphics and client-satisfying 'virtual reality' imagery. As well as a fundamental understanding of the physics and chemistry of fire, the fire safety engineer must have at least a rudimentary understanding of the theoretical basis supporting fire models in order to appreciate their limitations and capabilities. The five-day short course, "Principles and Practice of Fire Modelling", run by the University of Greenwich, attempts to bridge the divide between the expert and the general user, providing them with the expertise they need to understand the results of mathematical fire modelling. The course and the associated textbook, "Mathematical Modelling of Fire Phenomena", are aimed at students and professionals with a wide and varied background, offering a friendly guide through the unfamiliar terrain of mathematical modelling. These concepts and techniques are introduced and demonstrated in seminars, and those attending also gain experience in using the methods during "hands-on" tutorial and workshop sessions. On completion of this short course, those participating should:
- be familiar with the concepts of zone and field modelling;
- be familiar with zone and field model assumptions;
- have an understanding of the capabilities and limitations of modelling software packages for zone and field modelling;
- be able to select and use the most appropriate mathematical software and demonstrate its use in compartment fire applications; and
- be able to interpret model predictions.
The result is that the fire safety engineer is empowered to realise the full value of mathematical models to help predict fire development and determine the consequences of fire under a variety of conditions. This in turn enables him or her to design and implement safety measures which can potentially control, or at the very least reduce, the impact of fire.
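To give a flavour of the zone-modelling concept the course introduces, here is a deliberately simplified single-zone energy balance; all values are assumed for illustration, and a real zone model tracks at least two layers plus vents and plume entrainment:

```python
# Toy single-zone compartment model: one well-mixed hot gas layer whose
# temperature follows an energy balance between the fire's heat release
# and convective losses to the boundaries.
Q_FIRE = 500e3        # heat release rate, W (assumed)
H_WALL = 15.0         # effective boundary heat-loss coefficient, W/(m^2 K)
A_WALL = 80.0         # boundary area, m^2
M_GAS  = 60.0         # gas mass in the zone, kg
CP     = 1005.0       # specific heat of air, J/(kg K)
T_AMB  = 293.0        # ambient temperature, K

T, dt = T_AMB, 0.1    # initial layer temperature and time step, s
for _ in range(int(120 / dt)):                 # simulate two minutes
    dTdt = (Q_FIRE - H_WALL * A_WALL * (T - T_AMB)) / (M_GAS * CP)
    T += dTdt * dt                             # explicit Euler update
print(f"layer temperature after 120 s: {T - 273.15:.0f} C")
```

A field (CFD) model, by contrast, solves the governing equations on a fine spatial grid rather than assuming well-mixed zones, which is the distinction the course draws.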
Abstract:
The FIREDASS (FIRE Detection And Suppression Simulation) project is concerned with the development of fine water mist systems as a possible replacement for the halon fire suppression system currently used in aircraft cargo holds. The project is funded by the European Commission under the BRITE EURAM programme. The FIREDASS consortium is made up of a combination of industrial, academic, research and regulatory partners. As part of this programme of work, a computational model has been developed to help engineers optimise the design of the water mist suppression system. This computational model is based on Computational Fluid Dynamics (CFD) and is composed of the following components: fire model, mist model, two-phase radiation model, suppression model and detector/activation model. The fire model - developed by the University of Greenwich - uses prescribed release rates for heat and gaseous combustion products to represent the fire load. Typical release rates have been determined through experimentation conducted by SINTEF. The mist model - developed by the University of Greenwich - is a Lagrangian particle tracking procedure that is fully coupled to both the gas phase and the radiation field. The radiation model - developed by the National Technical University of Athens - is described using a six-flux radiation model. The suppression model - developed by SINTEF and the University of Greenwich - is based on an extinguishment criterion that relies on oxygen concentration and temperature. The detector/activation model - developed by Cerberus - allows many different detector and mist configurations to be tested within the computational model. These sub-models have been integrated by the University of Greenwich into the FIREDASS software package. The model has been validated using data from the SINTEF/GEC test campaigns, and it has been found that the computational model gives good agreement with these experimental results. The best agreement is obtained at the ceiling, which is where the detectors and misting nozzles would be located in a real system. In this paper the model is briefly described and some results from the validation of the fire and mist models are presented.
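The Lagrangian particle tracking idea behind the mist model can be sketched in a few lines; the drag law, time constants, and the uniform gas field below are assumptions for illustration, whereas the real model couples each droplet to the CFD gas-phase and radiation solutions:

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])      # gravity, m/s^2
TAU = 0.05                           # droplet momentum response time, s (assumed)

def step(pos, vel, gas_vel, dt):
    """Advance one droplet by dt: Stokes-like drag toward the local gas
    velocity plus gravity, integrated with explicit Euler."""
    acc = (gas_vel - vel) / TAU + G
    vel = vel + acc * dt
    pos = pos + vel * dt
    return pos, vel

pos = np.array([0.0, 0.0, 2.0])      # released 2 m up, at a nozzle
vel = np.array([0.0, 0.0, -5.0])     # initial injection velocity, m/s
gas = np.array([0.5, 0.0, 0.0])      # uniform gas cross-flow stand-in, m/s
for _ in range(200):                 # track for 0.2 s
    pos, vel = step(pos, vel, gas, dt=0.001)
print(pos)
```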
Abstract:
Thesis (Master's)--University of Washington, 2016-06