89 results for queuing
Abstract:
Current assured forwarding (AF) service in differentiated services (DiffServ) networks can provide stable throughput guarantees, but lacks efficient schemes for ensuring queuing delay and loss ratio. By analyzing the steady-state operating point of RIO, this paper proposes two active queue management algorithms with adaptive control policies, namely ARIO-D and ARIO-L. Simulation results show that, in addition to preserving RIO's throughput guarantee, these two algorithms provide stable and differentiated performance in queuing delay and loss ratio, respectively. By deploying ARIO-D and ARIO-L, AF service can provide quantitative guarantees for multimedia traffic across multiple QoS metrics.
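The abstract does not spell out the ARIO-D/ARIO-L adaptation rules; as background, a minimal sketch of the RIO (RED with In/Out) queue they build on is given below. The EWMA weight, thresholds, and maximum drop probabilities are illustrative assumptions, not values from the paper.

```python
import random

class RIOQueue:
    """Minimal RIO (RED with In/Out) drop logic: in-profile packets are
    dropped based on the average in-profile backlog, out-of-profile
    packets based on the average total backlog. ARIO-D and ARIO-L
    adaptively retune parameters like these around the steady-state
    operating point; the constants here are illustrative."""

    def __init__(self):
        self.w = 0.002                     # EWMA weight (assumed)
        self.avg_in = 0.0                  # EWMA of in-profile queue length
        self.avg_total = 0.0               # EWMA of total queue length
        self.in_params = (40, 70, 0.02)    # (min_th, max_th, max_p), assumed
        self.out_params = (10, 30, 0.10)

    def _drop_prob(self, avg, params):
        min_th, max_th, max_p = params
        if avg < min_th:
            return 0.0
        if avg >= max_th:
            return 1.0
        return max_p * (avg - min_th) / (max_th - min_th)

    def on_arrival(self, in_profile, qlen_in, qlen_total):
        """Update the moving averages and decide whether to drop."""
        self.avg_in = (1 - self.w) * self.avg_in + self.w * qlen_in
        self.avg_total = (1 - self.w) * self.avg_total + self.w * qlen_total
        params = self.in_params if in_profile else self.out_params
        avg = self.avg_in if in_profile else self.avg_total
        return random.random() < self._drop_prob(avg, params)
```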
Abstract:
2000 Mathematics Subject Classification: 60K25.
Abstract:
2000 Mathematics Subject Classification: 60J27.
Abstract:
The paper deals with a single-server finite queuing system where customers who failed to get service are temporarily blocked in an orbit of inactive customers. This model and its variants have many applications, especially for optimization of the corresponding models with retrials. We analyze the system in the non-stationary regime and, using the discrete transformations method, study the busy period length and the number of successful calls made during it. ACM Computing Classification System (1998): G.3, J.7.
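The discrete-transformation analysis is not reproduced in the abstract; a minimal discrete-event sketch of the model it studies, a single server whose blocked customers retry from an orbit, is shown below so the two quantities of interest (busy period length and successful calls) are concrete. The arrival, service, and retrial rates are illustrative assumptions.

```python
import heapq, random

def busy_period(lam=1.0, mu=2.0, theta=0.5, seed=0):
    """Simulate one busy period of a single-server system in which a
    customer finding the server busy joins the orbit and retries after
    an Exp(theta) delay. Returns (busy period length, successful calls),
    where a call is successful if it finds the server free."""
    rng = random.Random(seed)
    events = [(rng.expovariate(lam), 'arrival')]   # first primary call
    busy, served, start = False, 0, None
    while events:
        t, kind = heapq.heappop(events)
        if kind in ('arrival', 'retry'):
            if kind == 'arrival':                  # keep the primary stream going
                heapq.heappush(events, (t + rng.expovariate(lam), 'arrival'))
            if busy:                               # blocked: join orbit, retry later
                heapq.heappush(events, (t + rng.expovariate(theta), 'retry'))
            else:                                  # successful call
                busy, served = True, served + 1
                start = t if start is None else start
                heapq.heappush(events, (t + rng.expovariate(mu), 'departure'))
        else:                                      # service completion
            busy = False
            if not any(k == 'retry' for _, k in events):
                return t - start, served           # orbit empty: period ends

print(busy_period())
```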
Abstract:
IEEE 802.11 is the dominant technology for wireless local area networks (WLANs). In the last two decades, the distributed coordination function (DCF) of the IEEE 802.11 standard has become one of the most important media access control (MAC) protocols for mobile ad hoc networks (MANETs). The DCF protocol can also be combined with cognitive radio, giving rise to IEEE 802.11 cognitive radio ad hoc networks (CRAHNs). Several studies have focused on modeling IEEE 802.11 CRAHNs; however, there is still no thorough and scalable analytical model for IEEE 802.11 CRAHNs in which the cognitive node (i.e., secondary user, SU) performs spectrum sensing and possibly a channel silence process before the MAC contention process. This paper develops a unified analytical model for IEEE 802.11 CRAHNs for comprehensive MAC-layer queuing analysis. In the proposed model, the SUs are modeled by a hyper-generalized 2D Markov chain together with an M/G/1/K queue, while the primary users (PUs) are modeled by a generalized 2D Markov chain and an M/G/1/K queue. The performance evaluation results show that the quality of service (QoS) of both the PUs and SUs can be statistically guaranteed with suitable settings of the duration of the channel sensing and silence phases under light loading.
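The paper's hyper-generalized 2D Markov chain is not given in the abstract; below is a sketch of the classical Bianchi-style fixed point that DCF Markov models of this family extend, coupling the per-station transmission probability tau with the conditional collision probability p. The contention window W and backoff stage count m are illustrative defaults.

```python
def dcf_fixed_point(n, W=32, m=5, iters=10000, tol=1e-10):
    """Solve Bianchi's saturated-DCF equations for n stations:
       tau = 2(1-2p) / ((1-2p)(W+1) + p W (1-(2p)^m))
       p   = 1 - (1 - tau)^(n-1)
    by damped fixed-point iteration."""
    tau = 0.1
    for _ in range(iters):
        p = 1 - (1 - tau) ** (n - 1)
        denom = (1 - 2 * p) * (W + 1) + p * W * (1 - (2 * p) ** m)
        tau_new = 2 * (1 - 2 * p) / denom
        if abs(tau_new - tau) < tol:
            break
        tau = 0.5 * (tau + tau_new)    # damping aids convergence
    return tau, 1 - (1 - tau) ** (n - 1)

# Example: 20 saturated contending stations.
tau, p = dcf_fixed_point(20)
print(f"tau = {tau:.4f}, collision probability = {p:.4f}")
```

A CRAHN model such as the paper's adds states for spectrum sensing and channel silence in front of this contention process and feeds the result into an M/G/1/K queue for the per-node queuing analysis.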
Abstract:
Queuing is a key efficiency criterion in any service industry, including healthcare. Almost all queue management studies are dedicated to improving an existing appointment system. In developing countries such as Pakistan, there are no appointment systems for outpatients, resulting in excessive wait times. Additionally, excessive overloading, limited resources, and cumbersome procedures lead to overwhelming queues. Despite numerous healthcare applications, Data Envelopment Analysis (DEA) has not been applied to queue assessment. The current study aims to extend DEA modelling and demonstrate its usefulness by evaluating the queue system of a busy public hospital in a developing country, Pakistan, where all outpatients are walk-in, along with constructing a dynamic framework dedicated to implementing the model. The inadequate allocation of doctors/personnel was observed to be the most critical cause of long queues. Hence, the Queuing-DEA model has been developed to determine the 'required' number of doctors/personnel. The results indicated that extensive wait times, long queues, or both led to high target values for doctors/personnel. This crucial information allows administrators to ensure optimal staff utilization and control the queue pre-emptively, minimizing wait times. The dynamic framework specifically targets practical implementation of the Queuing-DEA model in resource-poor public hospitals of developing countries such as Pakistan, to continuously monitor a rapidly changing queue situation and display the latest required personnel. Consequently, the wait times of subsequent patients can be minimized, along with dynamic staff scheduling in the absence of appointments. The framework has been designed in Excel, requiring minimal training and work from users, with automatic update features and the complex technical aspects running in the background. The proposed model and dynamic framework have the potential to be applied in similar public hospitals, even in other developing countries, where appointment systems for outpatients are non-existent.
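The Queuing-DEA formulation itself is not reproduced in the abstract; as a baseline for what a 'required' number of doctors/personnel means in queuing terms, below is a minimal Erlang-C (M/M/c) staffing sketch that finds the smallest staff count keeping the mean wait under a target. The arrival rate, consultation rate, and wait-time target are illustrative assumptions.

```python
from math import factorial

def erlang_c_wait(lam, mu, c):
    """Mean queue wait Wq for an M/M/c queue (Erlang C formula)."""
    a = lam / mu                           # offered load in Erlangs
    if a >= c:
        return float('inf')                # unstable: queue grows without bound
    rho = a / c
    s = sum(a ** k / factorial(k) for k in range(c))
    p_wait = (a ** c / factorial(c)) / ((1 - rho) * s + a ** c / factorial(c))
    return p_wait / (c * mu - lam)

def required_staff(lam, mu, target_wait):
    """Smallest number of servers whose mean wait is below the target."""
    c = 1
    while erlang_c_wait(lam, mu, c) > target_wait:
        c += 1
    return c

# Example: 60 walk-in patients/hour, 12-minute consultations (mu = 5/hour),
# target mean wait of 15 minutes (0.25 hours).
print(required_staff(lam=60, mu=5, target_wait=0.25))
```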
Abstract:
The development of 3G (third-generation telecommunication) value-added services brings higher requirements for Quality of Service (QoS). Wideband Code Division Multiple Access (WCDMA) is one of the three 3G standards, and enhancing QoS for the WCDMA Core Network (CN) is becoming more and more important for users and carriers. The dissertation focuses on QoS enhancement for the WCDMA CN, with the purpose of realizing the DiffServ (Differentiated Services) QoS model for the WCDMA CN. Based on the parallelism of Network Processors (NPs), NP programming models are classified as Pool of Threads (POTs) and Hyper Task Chaining (HTC). In this study, an integrated programming model combining the two was designed. This model is highly efficient and flexible, and also solves the problems of sharing conflicts and packet ordering. We used it as the programming model to realize DiffServ QoS for the WCDMA CN. The realization of the DiffServ model mainly consists of buffer management, packet scheduling, and packet classification algorithms based on NPs. First, we proposed an adaptive buffer management algorithm called Packet Adaptive Fair Dropping (PAFD), which takes into consideration both fairness and throughput, and has smooth service curves. Then, an improved packet scheduling algorithm called Priority-based Weighted Fair Queuing (PWFQ) was introduced to ensure fairness of packet scheduling and reduce the queuing time of data packets, while keeping delay and jitter within a small range. Thirdly, a multi-dimensional packet classification algorithm called Classification Based on Network Processors (CBNPs) was designed; it effectively reduces memory accesses and storage space, and offers lower time and space complexity. Lastly, an integrated hardware and software system implementing the DiffServ QoS model for the WCDMA CN was proposed and implemented on the NP IXP2400. According to the experimental results, the proposed system significantly enhances QoS for the WCDMA CN: it consistently improves response time, display distortion, and sound-image synchronization, thereby increasing network efficiency and saving network resources.
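PWFQ's priority mechanism is not detailed in the abstract; below is a minimal sketch of the weighted fair queuing finish-time bookkeeping it builds on, with a priority field added to the service order as a stand-in for the 'priority-based' part. Flow weights, priorities, and the virtual-time handling are illustrative assumptions.

```python
import heapq

class WFQScheduler:
    """Simplified WFQ: each flow's packet gets a virtual finish time
    F = max(V, flow's last F) + size / weight, and packets are served
    in (priority, finish time) order, lower values first. True WFQ
    tracks the GPS virtual time V exactly; advancing V to the last
    served finish time is a common simplification."""

    def __init__(self):
        self.V = 0.0                 # virtual time
        self.last_finish = {}        # flow id -> last assigned finish time
        self.heap = []               # (priority, finish, seq, packet)
        self.seq = 0                 # tie-breaker for stable ordering

    def enqueue(self, flow, size, weight, priority=0, packet=None):
        start = max(self.V, self.last_finish.get(flow, 0.0))
        finish = start + size / weight
        self.last_finish[flow] = finish
        heapq.heappush(self.heap, (priority, finish, self.seq, packet))
        self.seq += 1

    def dequeue(self):
        priority, finish, _, packet = heapq.heappop(self.heap)
        self.V = finish              # advance the virtual clock
        return packet
```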
Abstract:
Modern data centers host hundreds of thousands of servers to achieve economies of scale. Such a huge number of servers creates challenges for the data center network (DCN) to provide proportionally large bandwidth. In addition, the deployment of virtual machines (VMs) in data centers raises the requirements for efficient resource allocation and fine-grained resource sharing. Further, the large number of servers and switches in the data center consumes significant amounts of energy. Even though servers become more energy efficient with various energy-saving techniques, the DCN still accounts for 20% to 50% of the energy consumed by the entire data center. The objective of this dissertation is to enhance DCN performance as well as its energy efficiency by conducting optimizations on both the host and network sides. First, as the DCN demands huge bisection bandwidth to interconnect all the servers, we propose a parallel packet switch (PPS) architecture that directly processes variable-length packets without segmentation-and-reassembly (SAR). The proposed PPS achieves large bandwidth by combining the switching capacities of multiple fabrics, and it further improves switch throughput by avoiding the padding bits of SAR. Second, since certain resource demands of a VM are bursty and stochastic in nature, to satisfy both deterministic and stochastic demands in VM placement we propose the Max-Min Multidimensional Stochastic Bin Packing (M3SBP) algorithm. M3SBP calculates an equivalent deterministic value for the stochastic demands and maximizes the minimum resource utilization ratio of each server. Third, to provide the necessary traffic isolation for VMs that share the same physical network adapter, we propose the Flow-level Bandwidth Provisioning (FBP) algorithm. By reducing the flow scheduling problem to multiple stages of packet queuing problems, FBP guarantees the provisioned bandwidth and delay performance for each flow. Finally, while DCNs are typically provisioned with full bisection bandwidth, DCN traffic demonstrates fluctuating patterns; we therefore propose a joint host-network optimization scheme to enhance the energy efficiency of DCNs during off-peak traffic hours. The proposed scheme utilizes a unified representation method that converts the VM placement problem into a routing problem, and employs depth-first and best-fit search to find efficient paths for flows.
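The exact equivalence rule of M3SBP is not stated in the abstract; a minimal sketch in its spirit is shown below: each stochastic demand (mean, std) is folded into an equivalent deterministic demand mean + z*std, and the VM goes to the feasible server that maximizes the minimum post-placement headroom across resource dimensions. The z quantile and the scoring rule are illustrative assumptions.

```python
def equivalent_demand(mean, std, z=1.64):
    """Equivalent deterministic value for a stochastic demand; z = 1.64
    covers the demand ~95% of the time under a normal assumption."""
    return mean + z * std

def place_vm(servers, vm):
    """servers: per-server list of free capacity per dimension (e.g. [cpu, mem]).
    vm: list of (mean, std) demands per dimension. Returns the chosen
    server index (or None) and updates its free capacity in place."""
    demand = [equivalent_demand(m, s) for m, s in vm]
    best, best_score = None, -1.0
    for i, free in enumerate(servers):
        if all(f >= d for f, d in zip(free, demand)):        # feasibility
            score = min(f - d for f, d in zip(free, demand))  # max-min headroom
            if score > best_score:
                best, best_score = i, score
    if best is not None:
        servers[best] = [f - d for f, d in zip(servers[best], demand)]
    return best

# Example: two servers with [cpu, mem] headroom; one bursty VM.
servers = [[8.0, 16.0], [4.0, 32.0]]
print(place_vm(servers, [(2.0, 1.0), (4.0, 2.0)]))
```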
Abstract:
The lack of analytical models that can accurately describe large-scale networked systems makes empirical experimentation indispensable for understanding complex behaviors. Research on network testbeds for testing network protocols and distributed services, including physical, emulated, and federated testbeds, has made steady progress. Although the success of these testbeds is undeniable, they fail to provide: 1) scalability, for handling large-scale networks with hundreds or thousands of hosts and routers organized in different scenarios; 2) flexibility, for testing new protocols or applications in diverse settings; and 3) inter-operability, for combining simulated and real network entities in experiments. This dissertation tackles these issues along three different dimensions. First, we present SVEET, a system that enables inter-operability between real and simulated hosts. In order to increase the scalability of the networks under study, SVEET enables time-dilated synchronization between real hosts and the discrete-event simulator. Realistic TCP congestion control algorithms are implemented in the simulator to allow seamless interactions between real and simulated hosts. SVEET is validated via extensive experiments, and its capabilities are assessed through case studies involving real applications. Second, we present PrimoGENI, a system that allows a distributed discrete-event simulator, running in real time, to interact with real network entities in a federated environment. PrimoGENI greatly enhances the flexibility of network experiments, through which a great variety of network conditions can be reproduced to examine what-if questions. Furthermore, PrimoGENI performs resource management functions, on behalf of the user, for instantiating network experiments on shared infrastructures. Finally, to further increase the scalability of network testbeds to handle large-scale high-capacity networks, we present a novel symbiotic simulation approach. We present SymbioSim, a testbed for large-scale network experimentation where a high-performance simulation system closely cooperates with an emulation system in a mutually beneficial way. On the one hand, the simulation system benefits from incorporating the traffic metadata of real applications in the emulation system to reproduce realistic traffic conditions. On the other hand, the emulation system benefits from receiving continuous updates from the simulation system to calibrate the traffic between real applications. Specific techniques that support the symbiotic approach include: 1) a model downscaling scheme that can significantly reduce the complexity of the large-scale simulation model, resulting in an efficient emulation system for modulating the high-capacity network traffic between real applications; 2) a queuing network model for the downscaled emulation system that accurately represents the network effects of the simulated traffic; and 3) techniques for reducing the synchronization overhead between the simulation and emulation systems.
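The downscaled queuing network model is not specified in the abstract; the sketch below shows the kind of elementary calculation such a model rests on, treating each hop as an independent M/M/1 queue (a Jackson-style approximation) and summing per-hop delays to get the network effect on a path. The rates and the example path are illustrative assumptions.

```python
def mm1_delay(lam, mu):
    """Mean sojourn time of an M/M/1 queue; infinite if overloaded."""
    return float('inf') if lam >= mu else 1.0 / (mu - lam)

def path_delay(links):
    """links: (arrival_rate, service_rate) per hop, in packets/s.
    Mean end-to-end delay across a tandem of independent M/M/1 queues."""
    return sum(mm1_delay(lam, mu) for lam, mu in links)

# Example: three hops carrying 800, 600, and 900 pkt/s on 1000 pkt/s links.
print(path_delay([(800, 1000), (600, 1000), (900, 1000)]))
```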
Abstract:
Toll plazas offer several toll payment types, such as manual, automatic coin machine, electronic, and mixed lanes. In places with high traffic flow, a toll plaza causes a lot of congestion and creates a bottleneck for the traffic flow unless the correct mix of payment types is in operation. The objective of this research is to determine the optimal lane configuration for the mix of payment methods so that the waiting time in the queue at the toll plaza is minimized. A queuing model representing the toll plaza system and a nonlinear integer program have been developed to determine the optimal mix. The numerical results show that the waiting time at the toll plaza can be decreased by changing the lane configuration. For the case study developed, an improvement in waiting time as high as 96.37 percent was observed during the morning peak hour.
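The nonlinear integer program is not given in the abstract; below is a minimal brute-force sketch of the underlying decision: split a fixed number of lanes among payment types, model each lane as an M/M/1 queue with the type's demand spread evenly over its lanes, and keep the allocation with the lowest demand-weighted average wait. The demand shares and per-lane service rates are illustrative assumptions.

```python
from itertools import product

def mm1_wait(lam, mu):
    """Mean time in system for an M/M/1 queue; infinite if overloaded."""
    return float('inf') if lam >= mu else 1.0 / (mu - lam)

def best_configuration(total_lanes, demand, service):
    """demand[t]: vehicles/hour choosing payment type t;
    service[t]: vehicles/hour one lane of type t can process."""
    types = list(demand)
    best, best_wait = None, float('inf')
    for alloc in product(range(1, total_lanes + 1), repeat=len(types)):
        if sum(alloc) != total_lanes:
            continue
        wait = sum(demand[t] * mm1_wait(demand[t] / n, service[t])
                   for t, n in zip(types, alloc)) / sum(demand.values())
        if wait < best_wait:
            best, best_wait = dict(zip(types, alloc)), wait
    return best, best_wait

# Example: 8 lanes shared by three payment types (illustrative rates).
demand = {'manual': 900, 'coin': 600, 'electronic': 1500}
service = {'manual': 240, 'coin': 360, 'electronic': 1200}
print(best_configuration(8, demand, service))
```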
Abstract:
Parallel processing is prevalent in many manufacturing and service systems. Many manufactured products are built and assembled from several components fabricated in parallel lines. An example of this manufacturing system configuration is observed at a manufacturing facility equipped to assemble and test web servers. Characteristics of a typical web server assembly line are multiple products, job circulation, and parallel processing. The primary objective of this research was to develop analytical approximations to predict performance measures of manufacturing systems with job failures and parallel processing. The analytical formulations extend previous queueing models used in assembly manufacturing systems in that they can handle serial and various parallel-processing configurations with multiple product classes, and job circulation due to random part failures. In addition, appropriate correction terms, obtained via regression analysis, were added to the approximations in order to minimize the error between the analytical approximation and the simulation models. Markovian and general-type manufacturing systems with multiple product classes, job circulation due to failures, and fork-and-join systems to model parallel processing were studied. In the Markovian and general cases, the approximations without correction terms performed quite well for one- and two-product problem instances. However, the flow time error was observed to increase as the number of products and the net traffic intensity increased. Therefore, correction terms for single and fork-join stations were developed via regression analysis to handle more than two products. The numerical comparisons showed that the approximations perform remarkably well when the correction factors are used: on average, the flow time error was reduced from 38.19% to 5.59% in the Markovian case, and from 26.39% to 7.23% in the general case. All the equations in the analytical formulations were implemented as a set of Matlab scripts. Using this set, operations managers of web server assembly lines, manufacturing systems, or other service systems with similar characteristics can estimate different system performance measures and make judicious decisions, especially for setting delivery due dates, capacity planning, and bottleneck mitigation, among others.
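The dissertation's approximations and regression corrections are not reproduced in the abstract; the sketch below shows the classical two-moment building block such flow-time approximations typically start from: Kingman's GI/G/1 waiting-time formula, chained over serial stations. Station parameters are illustrative assumptions, and the sketch deliberately omits the propagation of departure variability between stations and the fork-join corrections that the dissertation adds.

```python
def kingman_wait(lam, mu, ca2, cs2):
    """Kingman's GI/G/1 approximation for mean queue wait:
    Wq ~= (rho / (1 - rho)) * ((ca2 + cs2) / 2) * (1 / mu),
    where ca2 and cs2 are the squared coefficients of variation of
    interarrival and service times."""
    rho = lam / mu
    if rho >= 1:
        return float('inf')
    return (rho / (1 - rho)) * ((ca2 + cs2) / 2) * (1 / mu)

def serial_flow_time(lam, stations):
    """Approximate flow time through serial stations, each given as a
    (mu, ca2, cs2) tuple: queue wait plus service time at every stage."""
    return sum(kingman_wait(lam, mu, ca2, cs2) + 1 / mu
               for mu, ca2, cs2 in stations)

# Example: 10 jobs/hour through three stations (illustrative parameters).
print(serial_flow_time(10, [(12, 1.0, 0.5), (15, 1.0, 1.5), (11, 1.0, 1.0)]))
```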
Abstract:
This study compares two simulation techniques: Discrete Event Simulation (DES) and Agent Based Simulation (ABS). DES is one of the best-known simulation techniques in operational research. Recently, another technique has emerged, namely ABS. One of the qualities of ABS is that it helps to gain a better understanding of complex systems that involve the interaction of people with their environment, as it allows the modelling of concepts like autonomy and pro-activeness, which are important attributes to consider. Although there is a lot of literature relating to DES and ABS, we have found none that focuses on exploring the capability of both in tackling the human behaviour issues that relate to queuing time and customer satisfaction in the retail sector. Therefore, the objective of this study is to identify empirically the differences between these simulation techniques by simulating the potential economic benefits of introducing new policies in a department store. To apply the new strategy, the behaviour of consumers in a retail store will be modelled using the DES and ABS approaches, and the results will be compared. We aim to understand which simulation technique is better suited to human behaviour modelling by investigating the capability of both techniques in predicting the best solution for an organisation using management practices. Our main concern is to maximise customer satisfaction, for example by minimising waiting times for the different services provided.
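Neither of the two models is specified in the abstract; below is a minimal sketch of the DES side, a single checkout whose mean customer wait could feed a satisfaction measure. Arrival and service rates are illustrative assumptions; an ABS counterpart would instead give each customer agent its own decision rules.

```python
import random

def checkout_des(lam=0.8, mu=1.0, n_customers=100000, seed=1):
    """Single-server FIFO checkout via the Lindley recursion:
    each customer starts service at max(arrival, server free).
    Returns the mean wait before service begins."""
    rng = random.Random(seed)
    t, server_free_at, total_wait = 0.0, 0.0, 0.0
    for _ in range(n_customers):
        t += rng.expovariate(lam)            # next customer arrives
        start = max(t, server_free_at)       # waits if server is busy
        total_wait += start - t
        server_free_at = start + rng.expovariate(mu)
    return total_wait / n_customers

# Sanity check against M/M/1 theory: Wq = rho / (mu - lam) = 4.0
print(checkout_des())
```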