960 results for "Complex system"

Relevance: 60.00%

Publisher:

Abstract:

This paper presents a novel framework for the modelling of passenger facilitation in a complex environment. The research is motivated by the challenges in the airport complex system, where there are multiple stakeholders, differing operational objectives and complex interactions and interdependencies between different parts of the airport system. Traditional methods for airport terminal modelling do not explicitly address the need for understanding causal relationships in a dynamic environment. Additionally, existing Bayesian Network (BN) models, which provide a means for capturing causal relationships, only present a static snapshot of a system. A method to integrate a BN complex systems model with stochastic queuing theory is developed based on the properties of the Poisson and exponential distributions. The resultant Hybrid Queue-based Bayesian Network (HQBN) framework enables the simulation of arbitrary factors, their relationships, and their effects on passenger flow and vice versa. A case study implementation of the framework is demonstrated on the inbound passenger facilitation process at Brisbane International Airport. The predicted outputs of the model, in terms of cumulative passenger flow at intermediary and end points in the inbound process, are found to have an R2 goodness of fit of 0.9994 and 0.9982 respectively over a 10 h test period. The utility of the framework is demonstrated on a number of usage scenarios including causal analysis and ‘what-if’ analysis. This framework provides the ability to analyse and simulate a dynamic complex system, and can be applied to other socio-technical systems such as hospitals.
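The integration of queueing theory with the Bayesian network rests on standard properties of the Poisson and exponential distributions. As a minimal illustrative sketch (not the authors' HQBN implementation; rates and horizon are invented), a single facilitation point can be simulated as an M/M/1 queue with Poisson passenger arrivals and exponentially distributed processing times:

```python
import random

def simulate_mm1(arrival_rate, service_rate, horizon, seed=0):
    """Event-driven M/M/1 queue: Poisson arrivals (i.i.d. exponential
    inter-arrival times) and exponential service times.
    Returns cumulative arrivals and departures over the horizon."""
    rng = random.Random(seed)
    t = 0.0
    next_arrival = rng.expovariate(arrival_rate)
    next_departure = float("inf")
    in_system = arrivals = departures = 0
    while min(next_arrival, next_departure) < horizon:
        if next_arrival <= next_departure:
            t = next_arrival
            arrivals += 1
            in_system += 1
            if in_system == 1:            # server was idle: start service
                next_departure = t + rng.expovariate(service_rate)
            next_arrival = t + rng.expovariate(arrival_rate)
        else:
            t = next_departure
            departures += 1
            in_system -= 1
            next_departure = (t + rng.expovariate(service_rate)
                              if in_system > 0 else float("inf"))
    return arrivals, departures

# e.g. 2 passengers/min arriving, 3/min processed, over a 600 min period
arrivals, departures = simulate_mm1(arrival_rate=2.0, service_rate=3.0,
                                    horizon=600.0)
```

In the HQBN framework such queue nodes would additionally exchange evidence with Bayesian network factors; here the queue alone illustrates the stochastic building block.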

Abstract:

Continual change affects all approaches to solution understanding in the project management field (especially in construction work), prompting the adoption of dynamic solution paths. This paper defines what is argued to be a better relational model for the project management constraints (time, cost, and scope). The new model will increase the success factors of any complex program or project. This is qualitative research adopting a new avenue of investigation: project activities are attributed with social phenomena, and the phenomena are supported by field observations rather than mathematical methods, with solutions emerging from the successful practices of humans and ant colonies. The results show the correct approach to relating the triple constraints, treating the relation as a multi-agent system with specified communication channels based on agent locations. Information is transferred between agents, and actions are taken based on the constraint agents' locations in the project structure, allowing immediate changes in order to overcome the impacts of over-budget, behind-schedule, and additional scope. This is a complex adaptive system with self-organizing techniques and cybernetic control. The resulting model can be used to improve existing project management methodologies.

Abstract:

PURPOSE: This paper describes dynamic agent composition, used to support the development of flexible and extensible large-scale agent-based models (ABMs). The approach was motivated by the need to extend and modify, with ease, an ABM with an underlying networked structure as more information becomes available. Flexibility was also sought, so that simulations can be set up with ease, without the need to program. METHODS: The dynamic agent composition approach consists of having agents, whose implementation has been broken into atomic units, come together at runtime to form the complex system representation on which simulations are run. These components capture information at a fine level of detail and provide a vast range of combinations and options for a modeller to create ABMs. RESULTS: A description of dynamic agent composition is given in this paper, as well as details of its implementation within MODAM (MODular Agent-based Model), a software framework applied to the planning of the electricity distribution network. Illustrations of the implementation of dynamic agent composition are given for that domain throughout the paper. It is, however, expected that this approach will be beneficial to other problem domains, especially those with a networked structure, such as water or gas networks. CONCLUSIONS: Dynamic agent composition has many advantages over the way agent-based models are traditionally built, for users, for developers, and for agent-based modelling as a scientific approach. Developers can extend the model without needing to access or modify previously written code; they can develop groups of entities independently and add them to those already defined to extend the model. Users can mix and match already implemented components to form large-scale ABMs, allowing them to quickly set up simulations and easily compare scenarios without the need to program.
Dynamic agent composition provides a natural simulation space over which ABMs of networked structures are represented, facilitating their implementation; verification and validation of models are also eased by the ability to quickly set up alternative simulations.
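The composition idea described above can be sketched in a few lines; this is an illustrative toy, not MODAM's actual API (component names and the day/night profile are invented): agents are assembled at runtime from independently developed atomic components.

```python
class Component:
    """Atomic unit of agent behaviour; each subclass adds one capability."""
    def step(self, agent, t):
        pass

class Consumption(Component):
    def __init__(self, kw):
        self.kw = kw
    def step(self, agent, t):
        agent.load += self.kw            # constant household demand

class SolarPanel(Component):
    def __init__(self, kw_peak):
        self.kw_peak = kw_peak
    def step(self, agent, t):
        # crude daylight-only generation profile
        agent.load -= self.kw_peak if 8 <= t % 24 <= 16 else 0.0

class Agent:
    """An agent is just a bag of components composed at runtime."""
    def __init__(self, *components):
        self.components = list(components)
        self.load = 0.0
    def step(self, t):
        self.load = 0.0
        for c in self.components:
            c.step(self, t)

# mix and match components without modifying existing code
house = Agent(Consumption(1.5), SolarPanel(3.0))
house.step(12)       # midday: consumption minus solar generation
print(house.load)    # -1.5
```

New entity groups are added by defining further `Component` subclasses, so previously written code never needs modification.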

Abstract:

This study focuses on trying to understand why the range of experience with respect to HIV infection is so diverse, especially with regard to the latency period. The challenge is to determine what assumptions can be made about the nature of the experience of antigenic invasion and diversity that can be modelled, tested and argued plausibly. To investigate this, an agent-based approach is used to extract high-level behaviour, which cannot be described analytically, from the set of interaction rules at the cellular level. A prototype model encompasses local variation in baseline properties contributing to the individual disease experience and is included in a network which mimics the chain of lymphatic nodes. Dealing with massively multi-agent systems requires major computational effort. However, parallelisation methods are a natural consequence and advantage of the multi-agent approach. These are implemented using the MPI library.

Abstract:

Provision of network infrastructure to meet rising network peak demand is increasing the cost of electricity. Addressing this demand is a major imperative for Australian electricity agencies. The network peak demand model reported in this paper provides a quantified decision support tool and a means of understanding the key influences and impacts on network peak demand. An investigation of the system factors impacting residential consumers' peak demand for electricity was undertaken in Queensland, Australia. Technical factors, such as the customers' location, housing construction and appliances, were combined with social factors, such as household demographics, culture, trust and knowledge, and Change Management Options (CMOs) such as tariffs, price, managed supply, etc., in a conceptual ‘map’ of the system. A Bayesian network was used to quantify the model and provide insights into the major influential factors and their interactions. The model was also used to examine the reduction in network peak demand under different market-based and government interventions in various customer locations of interest, and to investigate the relative importance of instituting programs that build trust and knowledge through well-designed customer–industry engagement activities. The Bayesian network was implemented via a spreadsheet with a tick-box interface. The model combined available data from industry-specific and public sources with relevant expert opinion. The results revealed that the most effective intervention strategies involve combining particular CMOs with associated education and engagement activities. The model demonstrated the importance of designing interventions that take into account the interactions of the various elements of the socio-technical system. The options that provided the greatest impact on peak demand were Off-Peak Tariffs, Managed Supply, and increases in the price of electricity.
The impact on peak demand reduction differed for each of the locations and highlighted that household numbers and demographics, as well as the different climates, were significant factors. The model presented possible network peak demand reductions which would delay any upgrade of networks, resulting in savings for Queensland utilities and ultimately for households. This systems approach, using Bayesian networks to assist the management of peak demand in different modelled locations in Queensland, provided insights about the most important elements in the system and the intervention strategies that could be tailored to the targeted customer segments.
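The kind of intervention query such a Bayesian network answers can be sketched by enumeration over a toy two-node network; all probabilities below are invented for illustration and are not taken from the study.

```python
# Toy Bayesian network query: does an off-peak tariff lower the
# probability of high network peak demand?
p_large_household = {True: 0.3, False: 0.7}   # P(large household)
p_high_peak = {                               # P(high peak | tariff, household)
    (True, True): 0.45, (True, False): 0.25,
    (False, True): 0.70, (False, False): 0.40,
}

def p_high_given_tariff(tariff):
    """Marginalise over household size for a fixed tariff choice."""
    return sum(p_large_household[h] * p_high_peak[(tariff, h)]
               for h in (True, False))

with_tariff = p_high_given_tariff(True)       # ≈ 0.31
without_tariff = p_high_given_tariff(False)   # ≈ 0.49
```

The full model marginalises over many more technical, social and CMO nodes, but the mechanics of a ‘what-if’ comparison are the same.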

Abstract:

Australia is the world's third largest exporter of raw sugar after Brazil and Thailand, with around $2.0 billion in export earnings. Transport systems play a vital role in the raw sugar production process by transporting the sugarcane crop between farms and mills. In 2013, 87 per cent of sugarcane was transported to mills by cane railway. The total cost of sugarcane transport operations is very high: over 35% of the total cost of sugarcane production in Australia is incurred in cane transport. A cane railway network mainly involves single track sections and multiple track sections used as passing loops or sidings. The cane railway system performs two main tasks: delivering empty bins from the mill to the sidings for filling by harvesters; and collecting the full bins of cane from the sidings and transporting them to the mill. A typical locomotive run involves an empty train (locomotive and empty bins) departing from the mill, traversing some track sections and delivering bins at specified sidings. The locomotive then returns to the mill, traversing the same track sections in reverse order, collecting full bins along the way. In practice, a single track section can be occupied by only one train at a time, while more than one train can use a passing loop (parallel sections) at a time. The sugarcane transport system is a complex system that includes a large number of variables and elements. These elements work together to achieve the main system objectives of satisfying both mill and harvester requirements and improving the efficiency of the system in terms of low overall costs. These costs include delay, congestion, operating and maintenance costs. An effective cane rail scheduler will assist the traffic officers at the mill to keep a continuous supply of empty bins to harvesters and full bins to the mill at a minimum cost.
This paper addresses the cane rail scheduling problem under rail siding capacity constraints, where limited and unlimited siding capacities were investigated with different numbers of trains and different train speeds. The total operating time, as a function of the number of trains, train shifts and a limited number of cane bins, has been calculated for the different siding capacity constraints. A mathematical programming approach has been used to develop a new scheduler for the cane rail transport system under limited and unlimited constraints. The new scheduler aims to reduce the total costs associated with the cane rail transport system, which are a function of the number of bins and total operating costs. Metaheuristic techniques have been used to find near-optimal solutions to the cane rail scheduling problem and to provide different possible solutions so as to avoid becoming stuck in local optima. A numerical investigation and sensitivity analysis study is presented to demonstrate that high quality solutions for large scale cane rail scheduling problems are obtainable in a reasonable time.

Keywords: Cane railway, mathematical programming, capacity, metaheuristics
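A metaheuristic of the kind mentioned can be sketched as simulated annealing over train orderings; the cost function below is a deliberately simple stand-in (each train's completion time on a shared single track, summed as a delay proxy), not the paper's model, and all run times are invented.

```python
import math, random

def total_cost(order, run_times):
    """Toy cost: trains traverse a single track in sequence, so each
    train finishes after all earlier ones; sum of completion times."""
    t, cost = 0.0, 0.0
    for train in order:
        t += run_times[train]
        cost += t
    return cost

def anneal(run_times, iters=5000, seed=1):
    """Simulated annealing over permutations using pairwise swaps."""
    rng = random.Random(seed)
    order = list(range(len(run_times)))
    cur = best = total_cost(order, run_times)
    best_order = order[:]
    temp = 10.0
    for _ in range(iters):
        a, b = rng.sample(range(len(order)), 2)
        order[a], order[b] = order[b], order[a]       # propose a swap
        new = total_cost(order, run_times)
        if new < cur or rng.random() < math.exp((cur - new) / temp):
            cur = new                                 # accept move
            if new < best:
                best, best_order = new, order[:]
        else:
            order[a], order[b] = order[b], order[a]   # undo swap
        temp *= 0.999                                 # cool down
    return best_order, best

schedule, cost = anneal([5.0, 1.0, 3.0, 2.0, 4.0])
```

For this toy objective the optimum is the shortest-run-first ordering; real cane rail schedules must additionally respect siding capacities and bin delivery constraints.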

Abstract:

Our research programme with elite athletes has investigated and implemented learning design from an ecological dynamics perspective, examining its effects on movement coordination and control and on the acquisition of expertise. Ecological dynamics is a systems-oriented theoretical rationale for understanding the emergent relations in a complex system formed by each performer and a performance environment. This approach has identified the individual–environment relationship as the relevant scale of analysis for modelling how processes of perception, cognition and action underpin expert performance in sport (Davids et al., 2014; Zelaznik, 2014). In this chapter we elucidate key concepts from ecological dynamics and exemplify how they have informed our understanding of relevant psychological processes, including movement coordination and its acquisition, learning and transfer, impacting on practice task design in high performance programmes.

Abstract:

Jürgen Habermas’s concept of the public sphere remains a major building block for our understanding of public communication and deliberation. Yet ‘the’ public sphere is a construct of its time, and the mass media-dominated environment which it describes has given way to a considerably more fragmented and complex system of distinct and diverse, yet interconnected and overlapping publics that represent different themes, topics, and approaches to mediated communication. This chapter argues that moving beyond the orthodox model of the public sphere to a more dynamic and complex conceptual framework provides the opportunity to more clearly recognise the varying forms that public communication can take, especially online. Unpacking the traditional public sphere into a series of public sphericules and micro-publics, none of which are mutually exclusive but which co-exist, intersecting and overlapping in multiple forms, is crucial for understanding the ongoing structural transformation of ‘the’ public sphere.

Abstract:

This thesis reports on investigations into the influence of heat treatment on the manufacturing of oat flakes. Sources of variation in oat flake quality are reviewed, covering the whole chain from the farm to the consumer. The most important quality parameters of oat flakes are the absence of lipid hydrolysing enzymes, specific weight, thickness, breakage (fines) and water absorption. Flavour, colour and pasting properties are also important, but were not included in the experimental part of this study. Of particular interest was the role of heat processing. The first possible heat treatment may already occur during grain drying, which in Finland generally happens at the farm. At the mill, oats are often kilned to stabilise the product by inactivating lipid hydrolysing enzymes. Almost invariably, steaming is used during flaking to soften the groats and reduce flake breakage. This thesis presents the use of a materials science approach to investigating a complex system, typical of food processes. A combination of fundamental and empirical rheological measurements was used together with a laboratory scale process to simulate industrial processing. The results were verified by means of industrial trials. Industrially produced flakes at three thickness levels (nominally 0.75, 0.85 and 0.90 mm) were produced from kilned and unkilned oat groats, and the flake strength was measured at different moisture contents. Kilning was not found to significantly affect the force required to puncture a flake with a 2 mm cylindrical probe, which was taken as a measure of flake strength. To further investigate how heat processing contributes to flake quality, dynamic mechanical analysis was used to characterise the effect of heat on the mechanical properties of oats. A marked stiffening of the groat, of up to about a 50% increase in storage modulus, was observed during first heating at around 36 to 57°C.
This was also observed in tablets prepared from ground groats and extracted oat starch. The stiffening was thus attributed to increased adhesion between starch granules. Groats were steamed in a laboratory steamer and were tempered in an oven at 80–110°C for 30–90 min. The maximum force required to compress the steamed groats to 50% strain increased from 50.7 N to 57.5 N as the tempering temperature was increased from 80 to 110°C. Tempering conditions also affected water absorption. A significantly higher moisture content was observed for kilned (18.9%) compared to unkilned (17.1%) groats, but kilning otherwise had no effect on groat height, maximum force or final force after a 5 s relaxation time. Flakes were produced from the tempered groats using a laboratory flaking machine with a roll gap of 0.4 mm. Apart from specific weight, flake properties were not influenced by kilning. Tempering conditions, however, had significant effects on the specific weight, thickness and water absorption of the flakes, as well as on the amount of fine material (<2 mm) produced during flaking. Flake strength correlated significantly with groat strength and flake thickness. Trial flaking at a commercial mill confirmed that groat temperature after tempering influenced water absorption. Variation in flake strength was observed, but at the groat temperatures required to inactivate lipase it was rather small. Cold flaking of groats resulted in soft, floury flakes. The results presented in this thesis suggest that heating increased the adhesion between starch granules. This resulted in an increase in the stiffness and brittleness of the groat. Brittle fracture, rather than plastic flow, during flaking could result in flaws and cracks in the flake. These would be expected to increase water absorption, and this was indeed observed as tempering temperature increased. Industrial trials, conducted with different groat temperatures, confirmed the main findings of the laboratory experiments.
The approach used in the present study allowed the systematic study of the effect of interacting process parameters on product quality. There have been few scientific studies of oat processing, and these results can be used to understand the complex effects of process variables on flake quality. They also offer an insight into what happens as the oat groat is deformed into a flake.

Abstract:

This work proposes a supermarket optimization simulation model, called Swarm-Moves, based on studies of self-organized complex systems, to identify parameters and their values that can influence customers to buy more on impulse in a given period of time. In the proposed model, customers are assumed to have trolleys equipped with technology such as RFID that can pass products' information directly from the store to them in real time, and vice versa. Customers can therefore obtain information about other customers' purchase patterns while constantly informing the store of their own shopping behavior. This can be easily achieved because the trolleys "know" what products they contain at any point. The Swarm-Moves simulation is a virtual supermarket providing the visual display to run and test the proposed model. The simulation is also flexible enough to incorporate any given model of customer behavior tailored to a particular supermarket, its settings, events or promotions. The results, although preliminary, are promising for the use of RFID technology in marketing products in supermarkets, and suggest several dimensions for influencing customers via feedback, real-time marketing, targeted advertisement and on-demand promotions. ©2009 IEEE.

Abstract:

Mesoscale weather phenomena, such as the sea breeze circulation or lake effect snow bands, are typically too large to be observed at one point, yet too small to be caught in a traditional network of weather stations. Hence, the weather radar is one of the best tools for observing, analyzing and understanding their behavior and development. A weather radar network is a complex system which has many structural and technical features to be tuned, from the location of each radar to the number of pulses averaged in the signal processing. These design parameters have no universal optimal values; their selection depends on the nature of the weather phenomena to be monitored as well as on the applications for which the data will be used. The priorities and critical values are different for forest fire forecasting, aviation weather service or the planning of snow ploughing, to name a few radar-based applications. The main objective of the work performed within this thesis has been to combine knowledge of the technical properties of radar systems with our understanding of weather conditions, in order to produce better applications able to efficiently support decision making in weather- and safety-related service duties for modern society in northern conditions. When a new application is developed, it must be tested against ‘ground truth’. Two new verification approaches for radar-based hail estimates are introduced in this thesis. For mesoscale applications, finding a representative reference can be challenging, since these phenomena are by definition difficult to catch with surface observations. Hence, almost any valuable information which can be distilled from unconventional data sources, such as newspapers and holiday snapshots, is welcome. However, as important as obtaining the data is obtaining estimates of data quality, and judging to what extent the two disparate information sources can be compared.
The presented new applications do not rely on radar data alone, but ingest information from auxiliary sources such as temperature fields. The author concludes that in the future the radar will continue to be a key source of data and information especially when used together in an effective way with other meteorological data.

Abstract:

The Earth's climate is a highly dynamic and complex system in which atmospheric aerosols have been increasingly recognized to play a key role. Aerosol particles affect the climate through a multitude of processes, directly by absorbing and reflecting radiation and indirectly by changing the properties of clouds. Because of this complexity, quantification of the effects of aerosols continues to be a highly uncertain science. Better understanding of the effects of aerosols requires more information on aerosol chemistry. Before the determination of aerosol chemical composition by the various available analytical techniques, aerosol particles must be reliably sampled and prepared. Indeed, sampling is one of the most challenging steps in aerosol studies, since all available sampling techniques harbor drawbacks. In this study, novel methodologies were developed for sampling and determination of the chemical composition of atmospheric aerosols. In the particle-into-liquid sampler (PILS), aerosol particles grow in saturated water vapor with further impaction and dissolution in liquid water. Once in water, the aerosol sample can be transported and analyzed by various off-line or on-line techniques. In this study, PILS was modified and the sampling procedure was optimized to obtain less altered aerosol samples with good time resolution. A combination of denuders with different coatings was tested to adsorb gas phase compounds before PILS. Mixtures of water with alcohols were introduced to increase the solubility of aerosols. The minimum sampling time required was determined by collecting samples off-line every hour and proceeding with liquid-liquid extraction (LLE) and analysis by gas chromatography-mass spectrometry (GC-MS). The laboriousness of LLE followed by GC-MS analysis next prompted an evaluation of solid-phase extraction (SPE) for the extraction of aldehydes and acids in aerosol samples. These two compound groups are thought to be key for aerosol growth.
Octadecylsilica, hydrophilic-lipophilic balance (HLB), and mixed phase anion exchange (MAX) were tested as extraction materials. MAX proved to be efficient for acids, but no tested material offered sufficient adsorption for aldehydes. Thus, PILS samples were extracted only with MAX to guarantee good results for organic acids determined by high-performance liquid chromatography-mass spectrometry (HPLC-MS). On-line coupling of SPE with HPLC-MS is relatively easy, and here on-line coupling of PILS with HPLC-MS through the SPE trap produced some interesting data on relevant acids in atmospheric aerosol samples. A completely different approach to aerosol sampling, namely differential mobility analyzer (DMA)-assisted filter sampling, was employed in this study to provide information about the size-dependent chemical composition of aerosols and understanding of the processes driving aerosol growth from nano-size clusters to climatically relevant particles (>40 nm). The DMA was set to sample particles with diameters of 50, 40, and 30 nm, and aerosols were collected on teflon or quartz fiber filters. To clarify the gas-phase contribution, zero gas-phase samples were collected by switching off the DMA every other 15 minutes. Gas-phase compounds were adsorbed equally well on both types of filter, and were found to contribute significantly to the total compound mass. Gas-phase adsorption is especially significant during the collection of nanometer-size aerosols and always needs to be taken into account. A further aim of this study was to determine the oxidation products of β-caryophyllene (the major sesquiterpene in boreal forest) in aerosol particles. Since reference compounds are needed for verification of the accuracy of analytical measurements, three oxidation products of β-caryophyllene were synthesized: β-caryophyllene aldehyde, β-nocaryophyllene aldehyde, and β-caryophyllinic acid.
All three were identified for the first time in ambient aerosol samples, at relatively high concentrations, and their contribution to the aerosol mass (and probably growth) was concluded to be significant. Methodological and instrumental developments presented in this work enable fuller understanding of the processes behind biogenic aerosol formation and provide new tools for more precise determination of biosphere-atmosphere interactions.

Abstract:

The study scrutinizes the dynamics of the Finnish higher education political system. Dynamics is understood as the regularity of interaction between actors; by actors is meant the central institutions in the system. The theoretical framework of the study draws on earlier research in political science and higher education political studies. The theoretical model for analysis is built on agenda-setting theories. The theoretical model separates two dimensions of dynamics, namely the political situation and political possibilities. A political situation can be either favourable or contradictory to change. If the institutional framework within the higher education system is not compatible with the external factors of the system, the political situation is contradictory to change. To change the situation into a favourable one, one needs either to change the institutional structure or wait for the external factors to change. The political possibilities, in turn, can be either settled or politicized. Politicization means that new possibilities for action are found; settled possibilities refer to routine actions performed according to old practices. The research tasks based on the theoretical model are: 1. to empirically analyse the political situation and the possibilities from the actors' point of view; 2. to theoretically construct and empirically test a model for the analysis of dynamics in Finnish higher education politics. The research material consists of 25 thematic interviews with key persons in the higher education political system in 2008. In addition, there are documents from different actors since the 1980s, and statistical data. The material is analysed in four phases. In the first phase the emphasis is on trying to understand the interviewees' and the actors' points of view. In the second phase the different types of research material are related to each other.
In the third phase the findings are related to the theoretical model, which is constructed over the course of the analysis. In the fourth phase the interpretation is tested. The research distinguishes three historical periods in the Finnish higher education system and focuses on the last one: the era of the complex system, beginning in the 1980s–1990s. Based on the interviews, four policy threads are identified and analysed in their historical context. Each of the policy threads represents one of the four possible dynamics identified in the theoretical model. The research policy thread functions according to a reform dynamics. A coalition of innovation politics is able to use the politicized possibilities due to the political situation created by the conception of the national innovation system. The regional policy thread is in a gridlock dynamics. The combination of a political system based on provincial representation, a regional higher education institutional framework and outside pressure to streamline the higher education structure created a contradictory political situation. Because of this situation, the politicized possibilities in the so-called "regional development plan" do not have much effect. In the international policy thread, a consensual change dynamics is found. Through changes in the institutional framework, the higher education political system is moulded into a favourable situation. However, the possibilities are settled: a pragmatic national gaze prevailed. A dynamics of friction is found in the governance policy thread. A political situation where political-strategic and budgetary decision-making are separated is not favourable for change. In addition, as governance policy functions according to settled possibilities, the situation seems unchangeable. There are five central findings. First, the dynamics are different depending on the policy thread under scrutiny.
Second, the settled possibilities in a policy thread seemed to influence other threads the most. Third, dynamics are much related to changes external to the higher education political system, the changing positions of the actors in different policy threads and the unexpected nature of the dynamics. Fourth, it is fruitful to analyse the dynamics with the theoretical model. Fifth, but only hypothetically and thus left for further research, it seems that the Finnish higher education politics is reactive and weak at politicization.

Abstract:

Today's feature-rich multimedia products require embedded system solutions with complex Systems-on-Chip (SoC) to meet market expectations of high performance at low cost and lower energy consumption. The memory architecture of an embedded system strongly influences critical system design objectives such as area, power and performance. Hence the embedded system designer performs a complete memory architecture exploration to custom-design a memory architecture for a given set of applications. Further, the designer is interested in multiple optimal design points to address various market segments. However, tight time-to-market constraints enforce a short design cycle time. In this paper we address the multi-level multi-objective memory architecture exploration problem through a combination of exhaustive-search based memory exploration at the outer level and a two-step integrated data layout for SPRAM-Cache based architectures at the inner level. In the two-step integrated approach for data layout on SPRAM-Cache based hybrid architectures, the first step is data partitioning, which partitions data between SPRAM and Cache, and the second step is the cache-conscious data layout. We formulate the cache-conscious data layout as a graph partitioning problem and show that our approach gives up to 34% improvement over an existing approach and also optimizes the off-chip memory address space. We evaluated our approach on 3 embedded multimedia applications; it explores several hundred memory configurations for each application, yielding several optimal design points in a few hours of computation on a standard desktop.
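The first of the two steps, data partitioning between SPRAM and cache, can be sketched with a deliberately simple greedy heuristic; this is illustrative only (not the paper's graph-partitioning formulation, and all object names, sizes and conflict weights are invented): data objects are nodes, edge weights count access conflicts, and the most conflict-prone objects are placed in the conflict-free scratchpad first.

```python
def greedy_partition(sizes, conflicts, spram_capacity):
    """Greedily place the most conflict-prone data objects into SPRAM
    until its capacity is exhausted; the rest go through the cache.
    sizes: {obj: bytes}; conflicts: {(a, b): conflict weight}."""
    # total conflict weight incident to each object
    score = {obj: 0 for obj in sizes}
    for (a, b), w in conflicts.items():
        score[a] += w
        score[b] += w
    spram, cache, free = [], [], spram_capacity
    for obj in sorted(sizes, key=lambda o: score[o], reverse=True):
        if sizes[obj] <= free:
            spram.append(obj)       # fits in scratchpad: no conflicts
            free -= sizes[obj]
        else:
            cache.append(obj)       # handled by the cache hierarchy
    return spram, cache

sizes = {"A": 4, "B": 8, "C": 4, "D": 8}
conflicts = {("A", "B"): 10, ("C", "D"): 2, ("A", "C"): 1}
spram, cache = greedy_partition(sizes, conflicts, spram_capacity=12)
```

The second step would then lay out the cached objects to minimise the remaining conflict-edge weight within each cache set, which is where the graph partitioning formulation enters.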