121 results for delays
Abstract:
The concept of moving block signalling (MBS) has been adopted in a few mass transit railway systems. When a dense queue of trains begins to move from a complete stop, the trains can re-start in very close succession under MBS. The nearby feeding substations are then likely to be overloaded, and the service will inevitably be disturbed unless substations of higher power rating are used. By introducing starting-time delays among the trains or limiting the trains' acceleration rate to a certain extent, the peak energy demand can be contained. However, delay is introduced and the quality of service is degraded. An expert system approach is presented to provide a supervisory tool for the operators. As the knowledge base is vital to the quality of the decisions to be made, the study focuses on its formulation, balancing delay against peak power demand.
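The trade-off the abstract describes, staggering start times so that fewer trains accelerate at once, can be sketched with a simple interval-overlap peak count. All names, units and the uniform-stagger rule below are hypothetical illustrations, not the paper's expert-system logic:

```python
def peak_demand(starts, accel_time, power_per_train):
    """Peak substation load when each train draws power_per_train
    over its acceleration window [start, start + accel_time)."""
    events = [(s, +1) for s in starts] + [(s + accel_time, -1) for s in starts]
    load = peak = 0
    for _, step in sorted(events):   # window ends sort before same-time starts
        load += step
        peak = max(peak, load)
    return peak * power_per_train

def staggered_starts(n_trains, accel_time, max_concurrent):
    """Hypothetical starting-time offsets that cap the number of
    simultaneously accelerating trains: delay traded for peak power."""
    gap = accel_time / max_concurrent
    return [i * gap for i in range(n_trains)]
```

With four trains restarting together, the peak is four units of power; staggering them so at most two accelerate at once halves the peak, at the cost of the later trains' starting delays.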
Abstract:
Traffic conflicts at railway junctions are very common, particularly on congested rail lines. While safe passage through the junction is well maintained by the signalling and interlocking systems, minimising the delays imposed on the trains by sensibly assigning the right-of-way sequence is a bonus to the quality of service. A deterministic method has been adopted to resolve the conflict, with the objective of minimising the total weighted delay. However, the computational demand remains significant. The applications of different heuristic methods to this problem are reviewed and explored, elaborating their feasibility in various aspects and comparing their relative merits for further studies. As most heuristic methods do not guarantee a global optimum, this study focuses on the trade-off between computation time and optimality of the resolution.
Abstract:
Conflict occurs when two or more trains approach the same junction within a specified time. Such conflicts result in delays. Current practices for assigning the right of way at junctions achieve orderly and safe passage of the trains, but do not attempt to reduce the delays. The traffic controller developed in this paper assigns the right of way so as to impose the minimum total weighted delay on the trains. The traffic flow model and the optimisation technique used in this controller are described, and simulation studies of the controller's performance are given.
Abstract:
Fuzzy logic has been applied to control traffic at road junctions. A simple controller with one fixed rule-set is inadequate to minimise delays when traffic flow rate is time-varying and likely to span a wide range. To achieve better control, fuzzy rules adapted to the current traffic conditions are used.
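As a rough illustration of the kind of fuzzy rule base the abstract refers to, the sketch below evaluates one invented rule pair with triangular memberships and weighted-average defuzzification. The memberships, rules and output values are made up for illustration; the paper's adaptive rule-sets are not reproduced here:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def green_extension(queue_len):
    """Hypothetical rule pair, defuzzified by weighted average:
       IF queue is long  THEN green extension is long  (10 s)
       IF queue is short THEN green extension is short (2 s)"""
    long_q = tri(queue_len, 5, 15, 25)
    short_q = tri(queue_len, -1, 0, 10)
    w = long_q + short_q
    return (long_q * 10.0 + short_q * 2.0) / w if w else 0.0
```

Adapting the rules to time-varying flow, as the abstract proposes, would amount to reshaping these membership functions or swapping rule-sets as traffic conditions change.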
Abstract:
Robustness of the track allocation problem is rarely addressed in the literature, and the track allocation schemes (TAS) obtained embody bottlenecks. An approach to detecting these bottlenecks is therefore needed to support local optimization. First, a TAS is transformed into an executable model using Petri nets. Then disturbance analysis is performed on the model: each train in turn is subjected to a disturbance, and the resulting total departure delays are collected to detect bottlenecks. Finally, tests based on a rail hub linking six lines and a TAS spanning about thirty minutes show that the minimum buffer time is 21 seconds, while the two detected bottlenecks have buffer times of 57 and 44 seconds respectively; this indicates that bottlenecks are not necessarily located where the buffer time is smallest. The proposed approach can further support the selection among multiple schemes and robustness optimization.
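The observation that bottlenecks need not coincide with the minimum buffer time can be illustrated with a simple delay-propagation sketch over a linear chain of trains, far simpler than the Petri net model used in the paper; the buffer times and disturbance size below are hypothetical:

```python
def propagated_delay(buffers, hit, disturbance):
    """Inject 'disturbance' minutes at train 'hit'; each following
    train absorbs its preceding buffer time before inheriting the
    remainder. Returns total departure delay summed over all trains."""
    delay = total = disturbance
    for b in buffers[hit + 1:]:
        delay = max(0.0, delay - b)
        total += delay
    return total

def find_bottleneck(buffers, disturbance=5.0):
    """The bottleneck is the train whose disturbance propagates worst,
    not necessarily the one with the smallest buffer."""
    return max(range(len(buffers)),
               key=lambda i: propagated_delay(buffers, i, disturbance))
```

In the test instance the smallest buffer (0 min) precedes train 0, yet the worst-propagating disturbance, i.e. the bottleneck, is at train 1, echoing the paper's finding.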
Abstract:
Purpose of study: Traffic conflicts occur when trains on different routes approach a converging junction in a railway network at the same time. To prevent collisions, a right-of-way assignment is needed to control the order in which the trains pass the junction. Such control action inevitably requires the braking and/or stopping of trains, which lengthens their travelling times and leads to delays. Train delays cause a loss of punctuality and hence directly affect the quality of service. It is therefore important to minimise the delays by devising a suitable right-of-way assignment. One of the major difficulties in attaining the optimal right-of-way assignment is that the number of feasible assignments increases dramatically with the number of trains; connected junctions further complicate the problem. Exhaustive search for the optimal solution is time-consuming and infeasible for area (multi-junction) control. Even with the more intelligent deterministic optimisation method presented in [1], the computation demand is still considerable, which hinders real-time control. In practice, as suggested in [2], optimality may be traded off for shorter computation time, and heuristic searches provide alternatives for this optimisation problem.
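A brute-force version of the weighted-delay objective makes the combinatorial growth concrete. The arrival times, weights and fixed junction headway below are hypothetical; the O(n!) enumeration is exactly what the heuristic searches under discussion try to avoid:

```python
from itertools import permutations

def total_weighted_delay(order, arrival, weight, headway=2.0):
    """Total weighted delay when trains pass the junction in 'order'.
    The junction is free again 'headway' minutes after each passage;
    a train arriving before its slot waits and accumulates delay."""
    t_free = 0.0
    cost = 0.0
    for i in order:
        pass_time = max(arrival[i], t_free)
        cost += weight[i] * (pass_time - arrival[i])
        t_free = pass_time + headway
    return cost

def best_assignment(arrival, weight):
    """Exhaustive search: optimal, but O(n!) in the number of trains,
    which is why it is infeasible for real-time area control."""
    return min(permutations(range(len(arrival))),
               key=lambda o: total_weighted_delay(o, arrival, weight))
```

With three trains arriving almost together and the middle one weighted three times as heavily (say, an express service), the optimal sequence lets the heavily weighted train through first even though it is not first to arrive.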
Abstract:
The detached housing scheme is a unique and exclusive segment of the residential property market in Malaysia. Generally, the product is expensive, and for many Malaysians who can afford one, owning a detached house is a once-in-a-lifetime opportunity. In spite of this, most owners fail to fully comprehend the specific needs of this type of housing scheme, increasing the risk of a problematic project. Unlike other types of pre-designed ‘mass housing’ schemes, a detached housing scheme may be built specifically to cater to the needs and demands of its owner. Maximum owner participation as the development progresses is therefore vital to guarantee the success of the project. In addition, due to its unique design, the house has to individually comply with the requirements and regulations of the relevant authorities. Failure of the owner to recognise this will result in delays, fines and penalties, disputes and ultimately cost overruns. These circumstances highlight the need for a model to guide the owner through the entire development process of a detached house. This research therefore aims to develop a model for successful detached housing development in Malaysia through maximising owner participation during its various development stages. To achieve this, questionnaire surveys and case studies shall be employed to acquire detached housing owners’ experiences in developing their detached houses in Malaysia. Relevant statistical tools shall be applied to analyse the responses. The results of this study shall be synthesised into a model of successful detached housing development for the reference of future detached housing owners in Malaysia.
Abstract:
This paper proposes a train movement model with fixed runtime that can be employed to find feasible control strategies for a single train along an inter-city railway line. The objective of the model is to minimize arrival delays at each station along the line. However, train movement is a typical nonlinear problem involving complex running environments and differing requirements. A heuristic algorithm is developed in this paper to solve the problem, and the simulation results show that the train can overcome the disturbance causing its delay and coordinate its operation strategies to ensure punctual arrival at the destination. The developed algorithm can also be used to evaluate the running reliability of trains in scheduled timetables.
Abstract:
The problem of delays in the construction industry is a global phenomenon, and the construction industry in Brunei Darussalam is no exception. The goal of all parties involved in construction projects, whether owners, contractors, engineers or consultants in the public or private sector, is to complete the project successfully on schedule, within the planned budget, with the highest quality and in the safest manner. Construction projects are frequently influenced either by success factors that help project parties reach their goal as planned, or by delay factors that stifle or postpone project completion. The purpose of this research is to identify success and delay factors which can help project parties reach their intended goals with greater efficiency. This research extracted seven of the most important success factors according to the literature and seven of the most important delay factors identified by project parties, and then examined correlations between them to determine which were the most influential in preventing project delays. A comprehensive literature review was used to design and conduct a survey investigating success and delay factors, followed by a Delphi study to obtain a consensus of expert opinion ranking the most needed critical success factors for Brunei construction projects. A specific survey was distributed to owners, contractors and engineers to examine the most critical delay factors, and a general survey was distributed to examine the correlation between the identified delay factors and the seven selected critical success factors.
Data were collected and evaluated using statistical methods to identify the most significant causes of delay, to measure the strength and direction of the relationship between critical success factors and delay factors, to examine project parties’ evaluation of projects’ critical success and delay factors, and to evaluate the influence of critical success factors on critical delay factors. A relative importance index was used to determine the relative importance of the various causes of delay. One- and two-way analyses of variance (ANOVA) were used to examine how each group evaluated the influence of the critical success factors in avoiding or preventing each of the delay factors, and which success factors were perceived as most influential in avoiding or preventing critical delay factors. Finally, the Delphi method, using consensus from an expert panel, was employed to identify the seven most critical success factors for avoiding the delay factors and thereby improving project performance.
Abstract:
Particulate pollution has been widely recognised as an important risk factor to human health. In addition to increases in respiratory and cardiovascular morbidity associated with exposure to particulate matter (PM), WHO estimates that urban PM causes 0.8 million premature deaths globally and that 1.5 million people die prematurely from exposure to indoor smoke generated from the combustion of solid fuels. Despite the availability of a huge body of research, the underlying toxicological mechanisms by which particles induce adverse health effects are not yet entirely understood. Oxidative stress caused by the generation of free radicals and related reactive oxygen species (ROS) at the sites of deposition has been proposed as a mechanism for many of the adverse health outcomes associated with exposure to PM. In addition to particle-induced generation of ROS in lung tissue cells, several recent studies have shown that particles may also contain ROS. As such, they present a direct cause of oxidative stress and related adverse health effects. Cellular responses to oxidative stress have been widely investigated using various cell exposure assays. However, for rapid screening of the oxidative potential of PM, less time-consuming and less expensive cell-free assays are needed. The main aim of this research project was to investigate the application of a novel profluorescent nitroxide probe, synthesised at QUT, as a rapid screening assay for assessing the oxidative potential of PM. Considering that this was the first time a profluorescent nitroxide probe had been applied to investigating the oxidative stress potential of PM, the proof of concept regarding the detection of PM-derived ROS by such probes needed to be demonstrated and a sampling methodology needed to be developed.
Sampling through an impinger containing a profluorescent nitroxide solution was chosen as the means of particle collection, as it allowed particles to react with the profluorescent nitroxide probe during sampling, thereby avoiding any chemical changes resulting from delays between the sampling and the analysis of the PM. Among the several profluorescent nitroxide probes available at QUT, bis(phenylethynyl)anthracene-nitroxide (BPEAnit) was found to be the most suitable, mainly due to its relatively long excitation and emission wavelengths (λex = 430 nm; λem = 485 and 513 nm). These wavelengths are long enough to avoid overlap with the background fluorescence of light-absorbing compounds which may be present in PM (e.g. polycyclic aromatic hydrocarbons and their derivatives). Given that combustion in general is one of the major sources of ambient PM, this project aimed at gaining an insight into the oxidative stress potential of combustion-generated PM, namely cigarette smoke, diesel exhaust and wood smoke PM. During the course of this research project, it was demonstrated that the BPEAnit probe based assay is sufficiently sensitive and robust to be applied as a rapid screening test for PM-derived ROS detection. Considering that the same assay was applied to all three aerosol sources (i.e. cigarette smoke, diesel exhaust and wood smoke), the results presented in this thesis allow direct comparison of the oxidative potential measured for all three sources of PM. In summary, there was a substantial difference between the amounts of ROS per unit of PM mass (ROS concentration) for particles emitted by different combustion sources. For example, particles from cigarette smoke were found to have up to 80 times less ROS per unit of mass than particles produced during logwood combustion.
For both diesel and wood combustion it has been demonstrated that the type of fuel significantly affects the oxidative potential of the particles emitted. Similarly, the operating conditions of the combustion source were also found to affect the oxidative potential of particulate emissions. Moreover, this project has demonstrated a strong link between semivolatile (i.e. organic) species and ROS and therefore, clearly highlights the importance of semivolatile species in particle-induced toxicity.
Abstract:
Real-time kinematic (RTK) GPS techniques have been extensively developed for applications including surveying, structural monitoring, and machine automation. The limitations of existing RTK techniques that hinder their application for geodynamics purposes are twofold: (1) the achievable RTK accuracy is at the level of a few centimeters, and the uncertainty of the vertical component is 1.5–2 times worse than that of the horizontal components; and (2) the RTK position uncertainty grows in proportion to the base-to-rover distance. The key limiting factor behind these problems is the significant effect of residual tropospheric errors on the positioning solutions, especially on the highly correlated height component. This paper develops a geometry-specified troposphere decorrelation strategy to achieve subcentimeter kinematic positioning accuracy in all three components. The key is to set up a relative zenith tropospheric delay (RZTD) parameter to absorb the residual tropospheric effects and to solve the resulting model as an ill-posed problem using the regularization method. In order to compute a reasonable regularization parameter and obtain an optimal regularized solution, the covariance matrix of the positional parameters estimated without the RZTD parameter, which is characterized by the observation geometry, is used to replace the quadratic matrix of their “true” values. As a result, the regularization parameter is computed adaptively as the observation geometry varies. The experimental results show that the new method can efficiently alleviate the model’s ill-conditioning and stabilize the solution from a single data epoch. Compared to the results from the conventional least squares method, the new method improves the long-range RTK solution precision from several centimeters to the subcentimeter level in all components. More significantly, the precision of the height component is even higher.
Several geosciences applications that require subcentimeter real‐time solutions can largely benefit from the proposed approach, such as monitoring of earthquakes and large dams in real‐time, high‐precision GPS leveling and refinement of the vertical datum. In addition, the high‐resolution RZTD solutions can contribute to effective recovery of tropospheric slant path delays in order to establish a 4‐D troposphere tomography.
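The core numerical device in the abstract above, solving an ill-posed model by regularization, can be sketched as a generic Tikhonov (ridge) solution with a fixed regularization parameter. The paper's geometry-adaptive choice of that parameter and the RZTD observation model itself are not reproduced here:

```python
import numpy as np

def regularized_solution(A, y, alpha):
    """Tikhonov (ridge) regularized least squares:
        x = (A^T A + alpha * I)^{-1} A^T y
    A positive alpha stabilises an ill-conditioned normal matrix at the
    cost of a small bias; alpha = 0 recovers ordinary least squares."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)
```

Increasing alpha shrinks the estimated parameters toward zero, which is how the regularization suppresses the wild solutions an ill-conditioned single-epoch model would otherwise produce.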
Abstract:
Provisional supervision (PS) is Hong Kong’s proposed new corporate rescue procedure. In essence, it is a procedure for the preparation by a professional, usually an accountant or a solicitor, of a proposal for a voluntary arrangement, supported by a moratorium. There should be little court involvement in the process, and it is anticipated that its costs and delays would be less than those of the alternative procedures currently available. This article retraces some of the key events and issues arising from the numerous policy and legislative debates about PS in Hong Kong. At present the Hong Kong government is in the midst of drafting a new Bill on corporate rescue procedure to be introduced to the HKSAR Legislative Council; this will be the third attempt. Setting aside the controversies and the content of this new effort by the Hong Kong administration, the Global Financial Crisis of 2008 signalled to the international policy and business community that free markets alone cannot be an effective regulatory mechanism. Legal safeguards and clear rules regulating the procedures and conduct of market participants are imperative to avoid future financial meltdowns.
Abstract:
Freeways are divided roadways designed to facilitate the uninterrupted movement of motor vehicles. However, many freeways now experience demand flows in excess of capacity, leading to recurrent congestion. The Highway Capacity Manual (TRB, 1994) uses empirical macroscopic relationships between speed, flow and density to quantify freeway operations and performance. Capacity may be predicted as the maximum uncongested flow achievable. Although they are effective tools for design and analysis, macroscopic models lack an understanding of the nature of the processes taking place in the system. Szwed and Smith (1972, 1974) and Makigami and Matsuo (1990) have shown that microscopic modelling is also applicable to freeway operations. Such models facilitate an understanding of the processes while providing for the assessment of performance through measures of capacity and delay. However, these models are limited to only a few circumstances. The aim of this study was to produce more comprehensive and practical microscopic models. These models were required to portray accurately the mechanisms of freeway operations at the specific locations under consideration; to be calibrated using data acquired at these locations; and to have outputs that could be validated with data acquired at these sites, so that the outputs would be truly descriptive of the performance of the facility. A theoretical basis needed to underlie the form of these models, rather than the empiricism of the macroscopic models currently used, and the models needed to be adaptable to variable operating conditions, so that they might be applied, where possible, to other similar systems and facilities. It was not possible to produce a stand-alone model applicable to all facilities and locations in this single study; however, the scene has been set for the application of the models to a much broader range of operating conditions.
Opportunities for further development of the models were identified, and procedures provided for their calibration and validation over a wide range of conditions. The models developed do, however, have limitations in their applicability. Only uncongested operations were studied and represented. Driver behaviour in Brisbane was applied to the models; different mechanisms are likely in other locations due to variability in road rules and driving cultures. Not all manoeuvres evident were modelled: some unusual manoeuvres were considered unwarranted to model. However, the models developed contain the principal processes of freeway operations, merging and lane changing. Gap acceptance theory was applied to these critical operations to assess freeway performance. Gap acceptance theory was found to be applicable to merging; however, the major stream, the kerb lane traffic, exercises only a limited priority over the minor stream, the on-ramp traffic. Theory was established to account for this activity. Kerb lane drivers were also found to change to the median lane where possible, to assist coincident mergers. The net limited priority model accounts for this by predicting a reduced major stream flow rate, which excludes lane changers. Cowan's M3 model was calibrated for both streams; on-ramp and total upstream flow are required as input. The relationships between flow and the proportion of headways greater than 1 s differed between on-ramps fed by signalised intersections and those fed by unsignalised intersections. Constant-departure on-ramp metering was also modelled. Minimum follow-on times of 1 to 1.2 s were calibrated. Critical gaps were shown to lie between the minimum follow-on time and the sum of the minimum follow-on time and the 1 s minimum headway. Limited priority capacity and other boundary relationships were established by Troutbeck (1995).
The minimum average minor stream delay and the corresponding proportion of drivers delayed were quantified theoretically in this study. A simulation model was constructed to predict intermediate minor and major stream delays across all minor and major stream flows, and pseudo-empirical relationships were established to predict average delays. Major stream average delays are limited to 0.5 s, insignificant compared with minor stream delays, which reach infinity at capacity. Minor stream delays were shown to be smaller when unsignalised rather than signalised intersections are located upstream of on-ramps, and smaller still when ramp metering is installed. Smaller delays correspond to improved merge area performance. A more tangible performance measure, the distribution of distances required to merge, was established by including design speeds. This distribution can be measured to validate the model. Merging probabilities can be predicted for given taper lengths, a most useful performance measure. This model was also shown to be applicable to lane changing. Tolerable limits to merging probabilities require calibration; from these, practical capacities can be estimated. Further calibration of the traffic inputs, critical gap and minimum follow-on time is required for both merging and lane changing, and a general relationship to predict the proportion of drivers delayed requires development. These models can then be used to complement existing macroscopic models in assessing performance, and to provide further insight into the nature of operations.
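For reference, the standard gap-acceptance capacity formula under Cowan's M3 headway distribution, which underlies the merging analysis above, can be written down directly. The parameter values in the example are illustrative, and the limited-priority correction developed in the thesis is omitted:

```python
import math

def m3_capacity(q, alpha, delta, t_c, t_f):
    """Minor-stream (on-ramp) absorption capacity, in veh/s, under
    Cowan's M3 headway model for the major stream:
        C = q * alpha * exp(-lam * (t_c - delta)) / (1 - exp(-lam * t_f))
    with lam = alpha * q / (1 - delta * q), where
      q:     major-stream flow (veh/s)
      alpha: proportion of free vehicles (headways > delta)
      delta: minimum headway (s), about 1 s as calibrated above
      t_c:   critical gap (s); t_f: follow-on time (s)"""
    lam = alpha * q / (1.0 - delta * q)
    return q * alpha * math.exp(-lam * (t_c - delta)) / (1.0 - math.exp(-lam * t_f))
```

As expected, capacity falls monotonically as the critical gap grows, and the calibrated bounds above (t_f of 1 to 1.2 s, t_c between t_f and t_f + 1 s) slot straight into this formula.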
Abstract:
A trend in the design and implementation of modern industrial automation systems is to integrate computing, communication and control into a unified framework at different levels of machine/factory operation and information processing. These distributed control systems are referred to as networked control systems (NCSs). They are composed of sensors, actuators, and controllers interconnected over communication networks. As most communication networks are not designed for NCS applications, the communication requirements of NCSs may not be satisfied. For example, traditional control systems require data to be accurate, timely and lossless; because of random transmission delays and packet losses, however, the performance of a control system may be badly deteriorated, and the control system may even be rendered unstable. The main challenge of NCS design is to maintain and improve stable control performance of an NCS. To achieve this, communication and control methodologies have to be designed together. In recent decades, Ethernet and 802.11 networks have been introduced into control networks and have even replaced traditional fieldbus products in some real-time control applications, because of their high bandwidth and good interoperability. As Ethernet and 802.11 networks are not designed for distributed control applications, two aspects of NCS research need to be addressed to make these communication networks suitable for control systems in industrial environments. From the perspective of networking, communication protocols need to be designed to satisfy the communication requirements of NCSs, such as real-time communication and high-precision clock consistency. From the perspective of control, methods to compensate for network-induced delays and packet losses are important for NCS design.
To make Ethernet-based and 802.11 networks suitable for distributed control applications, this thesis develops a high-precision relative clock synchronisation protocol and an analytical model for analysing the real-time performance of 802.11 networks, and designs a new predictive compensation method. Firstly, a hybrid NCS simulation environment based on the NS-2 simulator is designed and implemented. Secondly, a high-precision relative clock synchronisation protocol is designed and implemented. Thirdly, transmission delays in 802.11 networks for soft real-time control applications are modelled using a Markov chain in which real-time Quality-of-Service parameters are analysed under a periodic traffic pattern; the Markov chain model accurately captures the trade-off between real-time performance and throughput performance. Furthermore, a cross-layer optimisation scheme, featuring application-layer flow rate adaptation, is designed to achieve the trade-off between certain real-time and throughput performance characteristics in a typical NCS scenario with a wireless local area network. Fourthly, as a co-design approach for both the network and the controller, a new predictive compensation method for variable delay and packet loss in NCSs is designed, in which simultaneous end-to-end delays and packet losses during packet transmission from sensors to actuators are tackled. The effectiveness of the proposed predictive compensation approach is demonstrated using our hybrid NCS simulation environment.
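The predictive compensation idea, sending a whole sequence of predicted controls so the actuator can cover whatever delay a packet experienced, can be sketched for a scalar plant. The plant model, gain and horizon below are hypothetical; this is the generic networked-predictive-control scheme, not the thesis's specific co-design:

```python
def predict_sequence(x, a, b, k, horizon):
    """Controller side: from the latest state x, roll the scalar plant
    x' = a*x + b*u forward under the feedback law u = -k*x and pack the
    whole predicted control sequence into one packet."""
    seq = []
    for _ in range(horizon):
        u = -k * x
        seq.append(u)
        x = a * x + b * u
    return seq

def actuate(packet, delay):
    """Actuator side: the packet is 'delay' steps old, so apply the
    control that was predicted for the current step; if the delay
    exceeds the horizon (e.g. consecutive packet losses), hold the
    last entry as a fallback."""
    return packet[min(delay, len(packet) - 1)]
```

Because every packet carries the whole horizon, a two-step-old packet still yields the control intended for the current step, which is precisely how random delay and loss are compensated.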
Abstract:
New technologies have the potential both to expose children to, and to protect them from, television news footage likely to disturb or frighten. The advent of cheap, portable and widely available digital technology has vastly increased the possibility of violent news events being captured and potentially broadcast. This material has the potential to be particularly disturbing and harmful to young children. On the flip side, available digital technology could be used to build in protection for young viewers, especially when it comes to preserving scheduled television programming and guarding against violent content being broadcast during live crosses from known trouble spots. Based on interviews with news directors and parents, and a review of published material, two recommendations are put forward: 1. Digital television technology should be employed to prevent news events "overtaking" scheduled children's programming and to protect safe harbours placed in the classification zones to protect children. 2. Broadcasters should regain control of the images that go to air during "live" feeds from obviously volatile situations by building in short delays in G classification zones.