938 results for Simulation flow
Abstract:
Freeways are divided roadways designed to facilitate the uninterrupted movement of motor vehicles. However, many freeways now experience demand flows in excess of capacity, leading to recurrent congestion. The Highway Capacity Manual (TRB, 1994) uses empirical macroscopic relationships between speed, flow and density to quantify freeway operations and performance. Capacity may be predicted as the maximum uncongested flow achievable. Although they are effective tools for design and analysis, macroscopic models lack an understanding of the nature of the processes taking place in the system. Szwed and Smith (1972, 1974) and Makigami and Matsuo (1990) have shown that microscopic modelling is also applicable to freeway operations. Such models facilitate an understanding of the processes whilst providing for the assessment of performance through measures of capacity and delay. However, these models are limited to only a few circumstances. The aim of this study was to produce more comprehensive and practical microscopic models. These models were required to accurately portray the mechanisms of freeway operations at the specific locations under consideration, to be calibrated using data acquired at those locations, and to be validated against data acquired at the same sites, so that their outputs are truly descriptive of the performance of the facility. A theoretical basis, rather than the empiricism of the macroscopic models currently used, needed to underlie the form of these models. Finally, the models needed to be adaptable to variable operating conditions, so that they may be applied, where possible, to other similar systems and facilities. It was not possible in this single study to produce a stand-alone model applicable to all facilities and locations; however, the scene has been set for the application of the models to a much broader range of operating conditions. Opportunities for further development of the models were identified, and procedures provided for the calibration and validation of the models under a wide range of conditions. The models developed do, however, have limitations in their applicability. Only uncongested operations were studied and represented. Driver behaviour in Brisbane was applied to the models; different mechanisms are likely in other locations because of variability in road rules and driving cultures. Not all manoeuvres evident were modelled, and some unusual manoeuvres were considered unwarranted to model. However, the models developed contain the principal processes of freeway operations: merging and lane changing. Gap acceptance theory was applied to these critical operations to assess freeway performance. Gap acceptance theory was found to be applicable to merging; however, the major stream, the kerb lane traffic, exercises only a limited priority over the minor stream, the on-ramp traffic. Theory was established to account for this behaviour. Kerb lane drivers were also found to change to the median lane where possible, to assist coincident mergers. The net limited priority model accounts for this by predicting a reduced major stream flow rate, which excludes lane changers. Cowan's M3 model was calibrated for both streams, with on-ramp and total upstream flow required as input. Relationships between the proportion of headways greater than 1 s and flow differed between on-ramps fed by signalised intersections and those fed by unsignalised intersections.
Constant departure on-ramp metering was also modelled. Minimum follow-on times of 1 to 1.2 s were calibrated. Critical gaps were shown to lie between the minimum follow-on time and the sum of the minimum follow-on time and the 1 s minimum headway. Limited priority capacity and other boundary relationships were established by Troutbeck (1995). The minimum average minor stream delay and the corresponding proportion of drivers delayed were quantified theoretically in this study. A simulation model was constructed to predict intermediate minor and major stream delays across all minor and major stream flows. Pseudo-empirical relationships were established to predict average delays. Major stream average delays are limited to 0.5 s, insignificant compared with minor stream delays, which reach infinity at capacity. Minor stream delays were shown to be smaller when unsignalised intersections, rather than signalised intersections, are located upstream of on-ramps, and smaller still when ramp metering is installed. Smaller delays correspond to improved merge area performance. A more tangible performance measure, the distribution of distances required to merge, was established by including design speeds. This distribution can be measured to validate the model. Merging probabilities can be predicted for given taper lengths, a most useful performance measure. This model was also shown to be applicable to lane changing. Tolerable limits to merging probabilities require calibration; from these, practical capacities can be estimated. Further calibration of the traffic inputs, critical gap and minimum follow-on time is required for both merging and lane changing. A general relationship to predict the proportion of drivers delayed requires development. These models can then be used to complement existing macroscopic models to assess performance, and to provide further insight into the nature of operations.
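As a rough illustration of how Cowan's M3 headway model and gap acceptance combine to give a merge capacity, the following Python sketch simulates major-stream headways and counts the on-ramp vehicles an unlimited queue could absorb. The parameter values (flow, proportion of free vehicles, critical gap and follow-on time) are assumptions for demonstration rather than the calibrated values from this study, and absolute priority is assumed instead of the limited priority mechanism described above.

```python
import numpy as np

# Illustrative gap-acceptance sketch under Cowan's M3 headway model.
# All parameter values are assumptions chosen for demonstration, not the
# calibrated values reported above, and absolute priority is assumed here
# rather than the limited priority mechanism described in the abstract.

rng = np.random.default_rng(0)

q = 1200 / 3600.0    # major (kerb lane) flow, veh/s
delta = 1.0          # minimum headway, s
alpha = 0.6          # assumed proportion of free (unbunched) vehicles
t_c = 2.0            # assumed critical gap, s
t_f = 1.1            # assumed minimum follow-on time, s
n = 200_000          # number of major-stream headways to simulate

# Cowan M3: bunched vehicles follow at exactly delta; free vehicles have a
# shifted exponential headway with decay rate lam.
lam = alpha * q / (1.0 - q * delta)
free = rng.random(n) < alpha
headways = np.where(free, delta + rng.exponential(1.0 / lam, n), delta)

# With an unlimited on-ramp queue, the first driver needs a gap of at least
# t_c and each additional driver needs a further t_f.
extra = np.floor((headways - t_c) / t_f)
merged = np.sum(np.where(headways >= t_c, 1.0 + extra, 0.0))

capacity_vph = merged / headways.sum() * 3600.0
print(f"Estimated merge capacity: {capacity_vph:.0f} veh/h")
```

Replacing the absolute-priority rule with a limited-priority adjustment, and feeding in calibrated critical gaps and follow-on times, would bring such a sketch closer to the models described in the abstract.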
Abstract:
A trend in the design and implementation of modern industrial automation systems is to integrate computing, communication and control into a unified framework at different levels of machine/factory operations and information processing. These distributed control systems are referred to as networked control systems (NCSs). They are composed of sensors, actuators and controllers interconnected over communication networks. As most communication networks are not designed for NCS applications, the communication requirements of NCSs may not be satisfied. For example, traditional control systems require data to be accurate, timely and lossless; because of random transmission delays and packet losses, the control performance of a control system may be severely degraded, and the control system rendered unstable. The main challenge of NCS design is to maintain, and where possible improve, stable control performance of an NCS. To achieve this, suitable communication and control methodologies have to be designed. In recent decades, Ethernet and 802.11 networks have been introduced in control networks and have even replaced traditional fieldbus products in some real-time control applications, because of their high bandwidth and good interoperability. As Ethernet and 802.11 networks are not designed for distributed control applications, two aspects of NCS research need to be addressed to make these communication networks suitable for control systems in industrial environments. From the perspective of networking, communication protocols need to be designed to satisfy NCS communication requirements such as real-time communication and high-precision clock consistency. From the perspective of control, methods to compensate for network-induced delays and packet losses are important for NCS design. To make Ethernet-based and 802.11 networks suitable for distributed control applications, this thesis develops a high-precision relative clock synchronisation protocol and an analytical model for analysing the real-time performance of 802.11 networks, and designs a new predictive compensation method. Firstly, a hybrid NCS simulation environment based on the NS-2 simulator is designed and implemented. Secondly, a high-precision relative clock synchronisation protocol is designed and implemented. Thirdly, transmission delays in 802.11 networks for soft real-time control applications are modelled using a Markov chain model in which real-time Quality-of-Service parameters are analysed under a periodic traffic pattern; the Markov chain model accurately captures the tradeoff between real-time performance and throughput performance. Furthermore, a cross-layer optimisation scheme, featuring application-layer flow rate adaptation, is designed to achieve the tradeoff between certain real-time and throughput performance characteristics in a typical NCS scenario with a wireless local area network. Fourthly, as a co-design approach for both the network and the controller, a new predictive compensation method for variable delay and packet loss in NCSs is designed, in which simultaneous end-to-end delays and packet losses during packet transmissions from sensors to actuators are tackled. The effectiveness of the proposed predictive compensation approach is demonstrated using our hybrid NCS simulation environment.
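The predictive compensation method itself is not detailed in this abstract; the sketch below only illustrates the general idea behind model-based delay compensation: roll a delayed state measurement forward over the measured sensor-to-actuator delay using a plant model and the buffered past controls, then compute the control from the predicted state. The plant matrices, feedback gain and delay are invented example values.

```python
import numpy as np

# Minimal sketch of model-based delay compensation in an NCS, assuming a
# known discrete-time plant x[k+1] = A x[k] + B u[k] and a measured network
# delay of d samples. This is only an illustration of the general idea, not
# the specific predictive method developed in the thesis.

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])          # assumed plant model (double integrator)
B = np.array([[0.005],
              [0.1]])
K = np.array([[3.16, 2.64]])        # assumed stabilising state-feedback gain

def predict_state(x_delayed, u_history, d):
    """Roll the delayed measurement forward over d samples using the model
    and the controls that were applied while the sample was in transit."""
    x = x_delayed
    for u in u_history[-d:]:
        x = A @ x + B @ u
    return x

# Example: the controller receives a d-sample-old measurement.
d = 3
u_history = [np.array([[0.1]]), np.array([[0.05]]), np.array([[0.0]])]
x_delayed = np.array([[1.0], [0.0]])

x_pred = predict_state(x_delayed, u_history, d)
u_now = -K @ x_pred                 # control computed from the predicted state
print(u_now)
```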
Abstract:
A Geant4-based simulation tool has been developed to perform Monte Carlo modelling of a 6 MV Varian™ iX Clinac. The computer-aided design interface of Geant4 was used to accurately model the linac components, including the Millennium multi-leaf collimators (MLCs). The simulation tool was verified via simulation of standard commissioning dosimetry data acquired with an ionisation chamber in a water phantom. Verification of the MLC model was achieved by simulation of leaf leakage measurements performed using Gafchromic™ film in a solid water phantom. An absolute dose calibration capability was added by including a virtual monitor chamber in the simulation. Furthermore, a DICOM-RT interface was integrated with the application to allow the simulation of radiotherapy treatment plans. The ability of the simulation tool to accurately model leaf movements and doses at each control point was verified by simulation of a widely used intensity-modulated radiation therapy (IMRT) quality assurance (QA) technique, the chair test.
Abstract:
Automated visual surveillance of crowds is a rapidly growing area of research. In this paper we focus on motion representation for the purpose of abnormality detection in crowded scenes. We propose a novel visual representation called textures of optical flow. The proposed representation measures the uniformity of a flow field in order to detect anomalous objects such as bicycles, vehicles and skateboarders; and can be combined with spatial information to detect other forms of abnormality. We demonstrate that the proposed approach outperforms state-of-the-art anomaly detection algorithms on a large, publicly-available dataset.
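The paper's texture-of-optical-flow measure is not specified in this abstract; as an illustration of measuring flow-field uniformity, the sketch below computes dense optical flow with OpenCV on grayscale frames and scores each spatial cell by the circular spread of its flow directions. The cell size, motion threshold and the particular uniformity statistic are assumptions.

```python
import cv2
import numpy as np

# Illustrative sketch: score how uniform the optical flow is inside each
# spatial cell of a frame. The statistic below (1 - resultant length of the
# flow directions) is an assumed stand-in for the paper's measure, which is
# not detailed in the abstract.

def flow_uniformity(prev_gray, curr_gray, cell=32):
    # Dense Farneback optical flow between two grayscale frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    scores = np.zeros((h // cell, w // cell))
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            block = flow[i*cell:(i+1)*cell, j*cell:(j+1)*cell].reshape(-1, 2)
            mag = np.linalg.norm(block, axis=1)
            moving = block[mag > 0.5]              # ignore near-static pixels
            if len(moving) == 0:
                continue
            angles = np.arctan2(moving[:, 1], moving[:, 0])
            # Resultant length is 1 for perfectly aligned flow and near 0 for
            # disordered flow; 1 - resultant is a simple non-uniformity score.
            resultant = np.hypot(np.cos(angles).mean(), np.sin(angles).mean())
            scores[i, j] = 1.0 - resultant
    return scores  # high values flag cells with disordered motion
```

Thresholding such a per-cell score over time, possibly combined with spatial context, is one simple way the kind of abnormality detection described above could be driven.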
Abstract:
This paper presents a material model to simulate load-induced cracking in Reinforced Concrete (RC) elements in the ABAQUS finite element package. Two numerical material models are used and combined to simulate the complete stress-strain behaviour of concrete under compression and tension, including damage properties. Both numerical techniques used in the present material model are capable of developing the stress-strain curves, including the strain softening regimes, using only the ultimate compressive strength of concrete, which is easily and practically obtainable for many existing RC structures or those to be built. Therefore, the method proposed in this paper is valuable for assessing existing RC structures in the absence of more detailed test results. The numerical models are slightly modified from their original versions to be compatible with the damaged plasticity model used in ABAQUS. The model is validated using different experimental results for RC beam elements presented in the literature. The results indicate good agreement with the load vs. displacement curves and the observed crack patterns.
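The two numerical models combined in the paper are not identified in this abstract. As an illustration of deriving a compressive stress-strain curve from the ultimate compressive strength alone, the sketch below uses a Hognestad-type parabola with an assumed linear softening branch and an ACI-style modulus estimate, producing the kind of tabular input the ABAQUS concrete damaged plasticity model expects.

```python
import numpy as np

# Sketch: build a compressive stress-strain curve for concrete from the
# ultimate compressive strength alone, in a form usable as tabular input to
# the ABAQUS concrete damaged plasticity model. The Hognestad-type parabola
# and the assumed linear softening branch are illustrative choices, not
# necessarily the two numerical models combined in the paper.

fc = 30.0                      # ultimate compressive strength, MPa
Ec = 4700.0 * np.sqrt(fc)      # elastic modulus estimate (ACI 318), MPa
eps0 = 2.0 * fc / Ec           # strain at peak stress
eps_u = 0.0035                 # assumed ultimate strain

eps = np.linspace(0.0, eps_u, 50)
sigma = np.where(
    eps <= eps0,
    fc * (2.0 * eps / eps0 - (eps / eps0) ** 2),        # ascending parabola
    fc * (1.0 - 0.15 * (eps - eps0) / (eps_u - eps0)),  # assumed softening
)

# ABAQUS expects inelastic strain = total strain - elastic strain (sigma/Ec).
inelastic = np.clip(eps - sigma / Ec, 0.0, None)
for s, e in zip(sigma, inelastic):
    print(f"{s:8.3f} MPa  {e:.6f}")
```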
Abstract:
Different international plant protection organisations advocate different schemes for conducting pest risk assessments. Most of these schemes use a structured questionnaire in which experts are asked to score several items using an ordinal scale. The scores are then combined using a range of procedures, such as simple arithmetic means, weighted averages, multiplication of scores, and cumulative sums. The most useful schemes will correctly identify pests that are harmful and those that are not. As the quality of a pest risk assessment can depend on the characteristics of the scoring system used by the risk assessors (i.e., on the number of points of the scale and on the method used for combining the component scores), it is important to assess and compare the performance of different scoring systems. In this article, we proposed a new method for assessing scoring systems. Its principle is to simulate virtual data using a stochastic model and then to estimate sensitivity and specificity values from these data for different scoring systems. The value of our approach was illustrated in a case study where several scoring systems were compared. Data for this analysis were generated using a probabilistic model describing the pest introduction process. The generated data were then used to simulate the outcome of scoring systems and to assess the accuracy of the decisions for positive and negative introduction outcomes. The results showed that ordinal scales with at most 5 or 6 points were sufficient and that multiplication-based scoring systems performed better than their sum-based counterparts. The proposed method could be used in the future to assess a great diversity of scoring systems.
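The article's stochastic model and decision thresholds are not reproduced here; the toy sketch below only illustrates the general procedure of simulating virtual assessments with a known outcome, combining ordinal component scores by sum or by product, and estimating sensitivity and specificity for each rule. The distributions, scale and thresholds are assumptions.

```python
import numpy as np

# Toy illustration of comparing scoring systems by simulation: generate
# virtual pests with a known "introduced / not introduced" outcome, score
# three risk components on a 1-5 ordinal scale, combine the scores by sum or
# by product, and estimate sensitivity and specificity of each rule. The
# probability model and thresholds are assumptions, not the article's model.

rng = np.random.default_rng(1)
n = 20_000

risk = rng.beta(2, 5, n)                       # latent introduction risk
introduced = rng.random(n) < risk              # "true" outcome

def ordinal_score(latent, noise=0.15):
    # Noisy expert rating of one risk component on a 1-5 ordinal scale.
    return np.clip(np.round(1 + 4 * (latent + rng.normal(0, noise, n))), 1, 5)

scores = np.stack([ordinal_score(risk) for _ in range(3)])

sum_rule = scores.sum(axis=0) >= 9             # assumed decision thresholds
prod_rule = scores.prod(axis=0) >= 27

def sens_spec(decision, truth):
    sens = np.mean(decision[truth])            # harmful pests flagged
    spec = np.mean(~decision[~truth])          # harmless pests cleared
    return sens, spec

print("sum rule:     sens=%.2f spec=%.2f" % sens_spec(sum_rule, introduced))
print("product rule: sens=%.2f spec=%.2f" % sens_spec(prod_rule, introduced))
```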
Abstract:
This is an invited presentation made as a short preview of the virtual environment research work being undertaken at QUT in the Business Process Management (BPM) research group, known as BPMVE. Three projects are covered: spatial process visualisation, with applications to airport check-in processes; collaborative process modelling using a virtual world BPMN editing tool; and business process simulation in virtual worlds using Open Simulator and the YAWL workflow system. In addition, the relationship of this work to Organisational Psychology is briefly explored. Full Video/Audio is available at: http://www.youtube.com/user/BPMVE#p/u/1/rp506c3pPms
Abstract:
Climate change effects are expected to substantially raise the average sea level. It is widely assumed that this rise will have a severe adverse impact on saltwater intrusion processes in coastal aquifers. In this study we hypothesize that a natural mechanism, identified as the “lifting process”, has the potential to mitigate or, in some cases, completely reverse the adverse intrusion effects induced by sea-level rise. A detailed numerical study using the MODFLOW-family computer code SEAWAT was completed to test this hypothesis and to understand the effects of this lifting process in both confined and unconfined systems. Our conceptual simulation results show that if the ambient recharge remains constant, sea-level rise will have no long-term impact (i.e., it will not affect the steady-state salt wedge) on confined aquifers. Our transient confined flow simulations show a self-reversal mechanism whereby the wedge, which will initially intrude into the formation due to the sea-level rise, would be naturally driven back to its original position. In unconfined systems, the lifting process would have a lesser influence due to changes in the value of the effective transmissivity. A detailed sensitivity analysis was also completed to understand the sensitivity of this self-reversal effect to various aquifer parameters.
Abstract:
Free surface flow past a two-dimensional semi-infinite curved plate is considered, with emphasis given to solving for the shape of the resulting wave train that appears downstream on the surface of the fluid. This flow configuration can be interpreted as applying near the stern of a wide blunt ship. For steady flow in a fluid of finite depth, we apply the Wiener-Hopf technique to solve a linearised problem, valid for small perturbations of the uniform stream. Weakly nonlinear results found using a forced KdV equation are also presented, as are numerical solutions to the fully nonlinear problem, computed using a conformal mapping and a boundary integral technique. By considering different families of shapes for the semi-infinite plate, it is shown how the amplitude of the waves can be minimised. For plates that increase in height in the direction of flow, reach a local maximum, and then point slightly downwards at the point at which the free surface detaches, it appears that the downstream wave train can be eliminated entirely.
Abstract:
The structure and dynamics of a modern business environment are very hard to model using traditional methods. Such complexity raises challenges to effective business analysis and improvement. The importance of applying business process simulation to analyze and improve business activities has been widely recognized. However, one remaining challenge is the development of approaches to human resource behavior simulation. To address this problem, we describe a novel simulation approach in which intelligent agents are used to simulate human resources by performing work allocated to them by a workflow management system. The behavior of the intelligent agents is driven by a state transition mechanism called a Hierarchical Task Network (HTN). We demonstrate and validate our simulator via a medical treatment process case study. Analysis of the simulation results shows that the behavior driven by the HTN is consistent with the design of the workflow model. We believe these preliminary results support the development of more sophisticated agent-based human resource simulation systems.
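The paper's HTN formulation and its integration with the workflow engine are not described in this abstract; the sketch below illustrates the core mechanism of an HTN-driven agent, in which compound tasks are recursively decomposed into primitive tasks that the agent then performs as work items. The task network shown is an invented example, not the case-study model.

```python
# Minimal sketch of an HTN-driven resource agent. The task network below is
# an invented treatment-style example; it is not the case-study model used
# in the paper.

PRIMITIVE = {"register_patient", "take_history", "run_test", "write_report"}

# Each compound task maps to an ordered list of subtasks (a single method per
# task keeps the sketch short; a full HTN chooses among alternative methods).
METHODS = {
    "treat_patient": ["admit", "diagnose", "write_report"],
    "admit": ["register_patient", "take_history"],
    "diagnose": ["run_test"],
}

def decompose(task):
    """Recursively expand a task into an ordered list of primitive tasks."""
    if task in PRIMITIVE:
        return [task]
    plan = []
    for subtask in METHODS[task]:
        plan.extend(decompose(subtask))
    return plan

class ResourceAgent:
    def __init__(self, name):
        self.name = name

    def perform(self, work_item):
        # In a simulator this is where the agent would check the work item
        # out of the workflow engine, spend simulated time, and check it in.
        print(f"{self.name} performs: {work_item}")

agent = ResourceAgent("nurse_1")
for item in decompose("treat_patient"):
    agent.perform(item)
```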
Abstract:
The hydrodynamic behaviour of a novel flat plate photocatalytic reactor for water treatment is investigated using the CFD code FLUENT. The reactor consists of a reactive section that features negligible pressure drop and uniform illumination of the photocatalyst to ensure enhanced photocatalytic efficiency. The numerical simulations allowed the identification of several design issues in the original reactor, which include extensive boundary layer separation near the photocatalyst support and regions of flow recirculation that render a significant portion of the reactive area ineffective. The simulations reveal that these issues could be addressed by selecting appropriate inlet positions and configurations. This modification incurs minimal pressure drop across the reactive zone and achieves a significantly more uniform distribution of the tested pollutant over the photocatalyst surface. The influence of the type of roughness elements has also been studied, with a view to identifying their role in the distribution of pollutant concentration on the photocatalyst surface. The results presented here indicate that the flow and pollutant concentration fields depend strongly on the geometric parameters and flow conditions.
Abstract:
In order to cope with the growth in air traveler numbers at airports worldwide, it is important to simulate and understand passenger flows to predict future capacity constraints and levels of service. We discuss the ability of agent-based models to capture complicated pedestrian movement in built environments. In this paper we propose advanced passenger traits to enable more detailed modelling of behaviors in terminal buildings, particularly in the departure hall around the check-in facilities. To demonstrate the concepts, we perform a series of passenger agent simulations in a virtual airport terminal. In doing so, we generate a spatial distribution of passengers within the departure hall across ancillary facilities such as cafes, information kiosks and phone booths, as well as common check-in facilities, and observe the effects this has on passenger check-in times, departure hall dwell times, and facility utilization.
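The passenger traits and terminal layout used in the study are not given in this abstract; the sketch below shows the general shape of such an agent-based simulation, with each passenger carrying assumed propensities to visit ancillary facilities and the run accumulating dwell times and facility visit counts. All probabilities and service times are illustrative.

```python
import random

# Toy agent-based sketch of departure-hall behaviour: each passenger carries
# a few traits that decide whether they visit ancillary facilities before
# check-in. All probabilities and service times are assumptions for
# illustration, not values from the paper.

random.seed(42)

FACILITIES = {"cafe": 12.0, "kiosk": 3.0, "phone": 4.0}   # mean visit minutes

def simulate_passenger():
    dwell, visits = 0.0, []
    for name, mean_minutes in FACILITIES.items():
        if random.random() < 0.3:                  # assumed visit propensity
            dwell += random.expovariate(1.0 / mean_minutes)
            visits.append(name)
    checkin = "kiosk_checkin" if random.random() < 0.5 else "counter"
    dwell += random.expovariate(1.0 / (2.0 if checkin == "kiosk_checkin" else 5.0))
    return dwell, visits, checkin

dwells, utilisation = [], {name: 0 for name in FACILITIES}
for _ in range(10_000):
    dwell, visits, _ = simulate_passenger()
    dwells.append(dwell)
    for v in visits:
        utilisation[v] += 1

print(f"mean dwell time: {sum(dwells)/len(dwells):.1f} min")
print("facility visits:", utilisation)
```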
Abstract:
In most materials, short stress waves are generated during plastic deformation, phase transformation, crack formation and crack growth. These phenomena are exploited in acoustic emission (AE) across a wide spectrum of areas, ranging from nondestructive testing for the detection of material defects to the monitoring of microseismic activity. The AE technique is also used for defect source identification and for failure detection. AE waves consist of P waves (primary longitudinal waves), S waves (shear/transverse waves) and Rayleigh (surface) waves, as well as reflected and diffracted waves. The propagation of AE waves in various modes has made the determination of source location difficult. In order to use the acoustic emission technique for accurate identification of a source, an understanding of the wave propagation of AE signals at various locations in a plate structure is essential. Such an understanding can also assist in sensor placement for optimum detection of AE signals and of the characteristics of the source. In practice, AE signals radiating from a source propagate as stress waves, and unless the type of stress wave is known it is very difficult to locate the source using the classical propagation velocity equations. This paper describes the simulation of AE waves in a steel plate to identify the source location and its characteristics, as well as the wave modes. Finite element analysis (FEA) is used for the numerical simulation of wave propagation in a thin plate. By knowing the type of wave generated, it is possible to apply the appropriate wave equations to determine the location of the source. For a single plate structure, the results show that the simulation algorithm is effective in simulating different stress waves.
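Once the wave mode, and hence the propagation velocity, has been identified, the classical approach to source location uses arrival-time differences at several sensors. The sketch below solves those time-difference-of-arrival equations on a plate by least squares; the sensor layout, wave speed and source position are example values, not those of the study.

```python
import numpy as np
from scipy.optimize import least_squares

# Sketch of classical AE source location on a plate: with the wave mode
# identified, a single propagation velocity c applies, and arrival-time
# differences between sensors constrain the source position. The sensor
# layout, velocity and source below are example values only.

c = 5100.0                                    # assumed plate wave speed, m/s
sensors = np.array([[0.0, 0.0], [0.5, 0.0],
                    [0.5, 0.5], [0.0, 0.5]])  # sensor positions, m
true_source = np.array([0.35, 0.12])

# Synthetic arrival times, expressed relative to sensor 0.
tof = np.linalg.norm(sensors - true_source, axis=1) / c
dt = tof - tof[0]

def residuals(p):
    d = np.linalg.norm(sensors - p, axis=1) / c
    return (d - d[0]) - dt

guess = sensors.mean(axis=0)
sol = least_squares(residuals, guess)
print("estimated source:", sol.x)             # should recover true_source
```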
Abstract:
This paper presents a study into the behaviour of extruded polystyrene foam at low strain rates. The foam is being studied in order to assess its potential for use as part of an innovative new design of portable road safety barrier that aims to consume less water and reduce rates of serious injury. The foam was tested at a range of low strain rates, with the stress and strain behaviour of the foam specimens being recorded. The energy absorption capabilities of the foam were assessed, as well as the response of the foam to multiple loadings. The experimental data were then used to create a material model of the foam for use in the explicit finite element solver LS-DYNA. Simulations carried out using the material model showed excellent correlation between the numerical material model and the experimental data.
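As an illustration of the energy absorption assessment mentioned above, the sketch below computes absorbed energy density as the area under a compressive stress-strain curve; the curve used is synthetic, standing in for the recorded foam data.

```python
import numpy as np

# Sketch: energy absorbed per unit volume is the area under the compressive
# stress-strain curve. The curve below is synthetic, standing in for the
# recorded polystyrene foam data (plateau-type response typical of foams).

strain = np.linspace(0.0, 0.6, 200)
stress = np.where(strain < 0.05,
                  strain / 0.05 * 0.30,               # elastic rise to ~0.30 MPa
                  0.30 + 0.8 * (strain - 0.05) ** 2)  # plateau then densification

energy_density = np.trapz(stress, strain)             # MPa x strain = MJ/m^3
print(f"absorbed energy density: {energy_density:.3f} MJ/m^3")
```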