825 results for network traffic analysis
Abstract:
Infrastructure management agencies face multiple challenges, including aging infrastructure, reduced capacity of existing infrastructure, and limited available funds. Decision makers are therefore required to think innovatively and develop inventive ways of using the funds they have. Maintenance investment decisions are generally made on the basis of physical condition alone. It is important to understand that spending money on public infrastructure is synonymous with spending money on people themselves. This requires considering decision parameters beyond physical condition, such as strategic importance, socioeconomic contribution, and infrastructure utilization. Considering multiple decision parameters for infrastructure maintenance investments can be beneficial when funding is limited. Given this motivation, this dissertation presents a prototype decision support framework to evaluate trade-offs among competing infrastructures that are candidates for maintenance, repair, and rehabilitation investments. The performance of each decision parameter, measured through various factors, is combined using Multi-Attribute Utility Theory (MAUT) to determine the integrated state of an infrastructure. The integrated state, together with cost and benefit estimates of probable maintenance actions, is used alongside expert opinion to develop transition probability and reward matrices for each probable maintenance action for a particular candidate infrastructure. These matrices are then used as input to a Markov Decision Process (MDP), solved as a finite-stage dynamic programming model, to perform project (candidate)-level analysis and determine optimized maintenance strategies based on reward maximization. The outcomes of the project (candidate)-level analysis are then used in a network-level analysis that takes a portfolio management approach to determine a suitable portfolio under budgetary constraints. The major decision support outcomes of the prototype framework include performance trend curves, decision logic maps, and a network-level maintenance investment plan for the coming years. The framework has been implemented on a set of bridges treated as a network, with the assistance of the Pima County DOT, AZ. It is expected that the concept of this prototype framework can help infrastructure management agencies better manage their available maintenance funds.
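The project (candidate)-level optimization described above amounts to backward induction over a finite-horizon MDP. A minimal sketch in Python, with illustrative transition and reward matrices; the dissertation's matrices are elicited from expert opinion and MAUT-based integrated states and are not reproduced here:

```python
import numpy as np

def finite_horizon_mdp(P, R, T):
    """Backward induction (finite-stage dynamic programming) over an MDP.
    P[a][s, s'] : transition probability under maintenance action a;
    R[a][s]     : expected reward of action a in integrated state s."""
    n_states = P[0].shape[0]
    V = np.zeros(n_states)                 # terminal value
    policy = []
    for _ in range(T):                     # stages, working backwards
        Q = np.stack([R[a] + P[a] @ V for a in range(len(P))])
        policy.append(Q.argmax(axis=0))    # reward-maximizing action per state
        V = Q.max(axis=0)
    return V, policy[::-1]

# Two illustrative actions (do-nothing vs. rehabilitate), three states.
P = [np.array([[.7, .3, 0.], [0., .6, .4], [0., 0., 1.]]),
     np.array([[1., 0., 0.], [.8, .2, 0.], [.6, .3, .1]])]
R = [np.array([5., 3., 0.]), np.array([2., 1., -1.])]
V, pi = finite_horizon_mdp(P, R, T=10)
print(pi[0])   # first-stage optimal action for each integrated state
```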
Abstract:
With the rapid growth of the Internet, computer attacks are increasing at a fast pace and can easily cause millions of dollars in damage to an organization. Detecting these attacks is an important problem in computer security. Attacks fall into four main categories: Denial of Service (DoS), Probe, User to Root (U2R), and Remote to Local (R2L). Of these, DoS and Probe attacks show up with high frequency within a short period of time when they strike a system. They differ from normal traffic and can easily be separated from normal activities. In contrast, U2R and R2L attacks are embedded in the data portions of packets and normally involve only a single connection, which makes it difficult to achieve satisfactory detection accuracy for these two attack types. We therefore focus on the ambiguity problem between normal activities and U2R/R2L attacks, with the goal of building a detection system that can detect these two attacks accurately and quickly. In this dissertation, we design a two-phase intrusion detection approach. In the first phase, a correlation-based feature selection algorithm is proposed to improve detection speed. Features with poor ability to predict attack signatures, and features inter-correlated with one or more other features, are considered redundant; such features are removed, and only indispensable information about the original feature space remains. In the second phase, we develop an ensemble intrusion detection system to achieve accurate detection performance. The proposed method includes multiple feature-selecting intrusion detectors and a data mining intrusion detector. The former consist of a set of detectors, each of which uses a fuzzy clustering technique and belief theory to resolve the ambiguity problem. The latter applies data mining techniques to automatically extract computer users' normal behavior from training network traffic data. The final decision combines the outputs of the feature-selecting and data mining detectors. The experimental results indicate that our ensemble approach not only significantly reduces detection time but also effectively detects U2R and R2L attacks that contain degrees of ambiguous information.
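The first phase is a correlation-based filter. A minimal sketch of that idea in Python, with illustrative thresholds rather than the dissertation's actual criteria:

```python
import numpy as np

def select_features(X, y, target_thresh=0.1, inter_thresh=0.9):
    """Correlation-based filter in the spirit of the first phase:
    drop features weakly correlated with the class label and features
    highly inter-correlated with an already-kept feature."""
    n_features = X.shape[1]
    corr_y = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_features)]
    keep = []
    for j in sorted(range(n_features), key=lambda j: -corr_y[j]):
        if corr_y[j] < target_thresh:
            continue                                    # poor predictor
        if any(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) > inter_thresh
               for k in keep):
            continue                                    # redundant feature
        keep.append(j)
    return keep

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=500)  # near-duplicate of feature 0
y = (X[:, 0] + X[:, 2] > 0).astype(float)
# Keeps feature 2 and one of the twins {0, 1}; drops the weak predictors.
print(select_features(X, y))
```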
Abstract:
Networked learning happens naturally within the social systems of which we are all part. However, in certain circumstances individuals may want to take the initiative to interact with others with whom they are not yet regularly in exchange. This may be the case when external influences and societal changes require innovation of existing practices. This paper proposes a framework whose dimensions provide insight into the characteristics of designed as well as 'fostered or grown' networked learning initiatives. Networked learning initiatives are characterized as "goal-directed, interest-, or needs-based activities of a group of (at least three) individuals that initiate interaction across the boundaries of their regular social systems". The proposed framework builds on two existing research traditions, 'networked learning' and 'learning networks', comparing, integrating, and building upon knowledge from both perspectives. We uncover some interesting differences between their definitions, but also similarities in the way they describe what 'networked' means and how learning is conceptualized. We think it is productive to combine both research perspectives, since both study the process of learning in networks extensively, albeit from different points of view, and their combination can provide valuable insights into networked learning initiatives. We uncover important features of networked learning initiatives, characterize the actors and connections of which they are composed, and identify conditions that facilitate and support them. The resulting framework can be used both for analytic purposes and (partly) as a design framework. The framework acknowledges that not all successful networks share the same characteristics: there is no standard 'constellation' of people, roles, rules, tools, and artefacts, although there are indications that some network structures work better than others. Interactions of individuals can be designed and fostered only to a certain degree: the type of network and its 'growth' (e.g. in terms of the number of people involved, or the quality and relevance of co-created concepts, ideas, artefacts, and solutions to its 'inhabitants') is in the hands of the people involved. Therefore, the framework consists of dimensions on a sliding scale. It introduces a structured and analytic way to look at the precipitation of networked learning initiatives: learning networks. Further research on the application of this framework, and feedback from the networked learning community, is needed to validate its usability and value to both research and practice.
Abstract:
As part of its single technology appraisal (STA) process, the National Institute for Health and Care Excellence (NICE) invited the company that manufactures cabazitaxel (Jevtana®, Sanofi, UK) to submit evidence for the clinical and cost effectiveness of cabazitaxel for treatment of patients with metastatic hormone-relapsed prostate cancer (mHRPC) previously treated with a docetaxel-containing regimen. The School of Health and Related Research Technology Appraisal Group at the University of Sheffield was commissioned to act as the independent Evidence Review Group (ERG). The ERG produced a critical review of the evidence for the clinical and cost effectiveness of the technology based upon the company's submission to NICE. Clinical evidence for cabazitaxel was derived from a multinational randomised open-label phase III trial (TROPIC) of cabazitaxel plus prednisone or prednisolone compared with mitoxantrone plus prednisone or prednisolone, which was assumed to represent best supportive care. The NICE final scope identified a further three comparators: abiraterone in combination with prednisone or prednisolone; enzalutamide; and radium-223 dichloride for the subgroup of people with bone metastasis only (no visceral metastasis). The company did not consider radium-223 dichloride to be a relevant comparator. Neither abiraterone nor enzalutamide has been directly compared in a trial with cabazitaxel. Instead, clinical evidence was synthesised within a network meta-analysis (NMA). Results from TROPIC showed that cabazitaxel was associated with a statistically significant improvement in both overall survival and progression-free survival compared with mitoxantrone. Results from a random-effects NMA, as conducted by the company and updated by the ERG, indicated that there was no statistically significant difference between the three active treatments for both overall survival and progression-free survival. Utility data were not collected as part of the TROPIC trial, and were instead taken from the company's UK early access programme. Evidence on resource use came from the TROPIC trial, supplemented by both expert clinical opinion and a UK clinical audit. List prices were used for mitoxantrone, abiraterone and enzalutamide as directed by NICE, although commercial in-confidence patient-access schemes (PASs) are in place for abiraterone and enzalutamide. The confidential PAS was used for cabazitaxel. Sequential use of the advanced hormonal therapies (abiraterone and enzalutamide) does not usually occur in clinical practice in the UK. Hence, cabazitaxel could be used within two pathways of care: either when an advanced hormonal therapy was used pre-docetaxel, or when one was used post-docetaxel. The company believed that the former pathway was more likely to represent standard National Health Service (NHS) practice, and so its main comparison was between cabazitaxel and mitoxantrone, with effectiveness data from the TROPIC trial. Results of the company's updated cost-effectiveness analysis estimated a probabilistic incremental cost-effectiveness ratio (ICER) of £45,982 per quality-adjusted life-year (QALY) gained, which the committee considered to be the most plausible value for this comparison. Cabazitaxel was estimated to be both cheaper and more effective than abiraterone. Cabazitaxel was estimated to be cheaper but less effective than enzalutamide, resulting in an ICER of £212,038 per QALY gained for enzalutamide compared with cabazitaxel.
The ERG noted that radium-223 is a valid comparator (for the indicated sub-group), and that it may be used in either of the two care pathways. Hence, its exclusion leads to uncertainty in the cost-effectiveness results. In addition, the company assumed that there would be no drug wastage when cabazitaxel was used, with cost-effectiveness results being sensitive to this assumption: modelling drug wastage increased the ICER comparing cabazitaxel with mitoxantrone to over £55,000 per QALY gained. The ERG updated the company's NMA and used a random effects model to perform a fully incremental analysis between cabazitaxel, abiraterone, enzalutamide and best supportive care using PASs for abiraterone and enzalutamide. Results showed that both cabazitaxel and abiraterone were extendedly dominated by the combination of best supportive care and enzalutamide. Preliminary guidance from the committee, which included wastage of cabazitaxel, did not recommend its use. In response, the company provided both a further discount to the confidential PAS for cabazitaxel and confirmation from NHS England that it is appropriate to supply and purchase cabazitaxel in pre-prepared intravenous-infusion bags, which would remove the cost of drug wastage. As a result, the committee recommended use of cabazitaxel as a treatment option in people with an Eastern Cooperative Oncology Group performance status of 0 or 1 whose disease had progressed during or after treatment with at least 225 mg/m² of docetaxel, as long as it was provided at the discount agreed in the PAS and purchased in either pre-prepared intravenous-infusion bags or in vials at a reduced price to reflect the average per-patient drug wastage.
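The cost-effectiveness figures above follow from the standard ICER definition: incremental cost divided by incremental QALYs gained. A small sketch of the arithmetic, using hypothetical totals; the confidential model inputs behind the £45,982 per QALY figure are not public:

```python
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

# Hypothetical totals, chosen only to illustrate the calculation.
print(f"ICER = £{icer(60_000, 1.8, 28_000, 1.0):,.0f} per QALY gained")
# -> ICER = £40,000 per QALY gained
```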
Abstract:
In this research work, a new routing protocol for Opportunistic Networks is presented. The proposed protocol is called PSONET (PSO for Opportunistic Networks), since it uses a hybrid system built around a Particle Swarm Optimization (PSO) algorithm. The main motivation for using PSO is to take advantage of its individual-based search and learning adaptation. PSONET uses the Particle Swarm Optimization technique to drive network traffic through a good subset of message forwarders. PSONET analyzes network communication conditions, detecting whether each node has sparse or dense connections, and thus makes better decisions about routing messages. The PSONET protocol is compared with the Epidemic and PROPHET protocols in three different mobility scenarios: an activity-based mobility model, which simulates people's everyday lives in their work, leisure, and rest activities; a community-based mobility model, which simulates groups of people in their communities who occasionally contact other people, who may or may not be part of their community, to exchange information; and a random mobility pattern, which simulates a scenario divided into communities where people choose a destination at random and, based on the restriction map, move to this destination along the shortest path. The simulation results, obtained with the ONE simulator, show that in the community-based and random mobility scenarios the PSONET protocol achieves a higher message delivery rate and lower message replication than the Epidemic and PROPHET protocols.
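For reference, the canonical PSO update that PSONET builds on moves each particle toward its personal best and the swarm's global best. A minimal sketch, with our own parameter choices; how PSONET maps particles to forwarder selection is the dissertation's contribution and is not reproduced here:

```python
import random

def pso_step(particles, velocities, pbest, gbest, fitness,
             w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO iteration (maximization), updated in place."""
    for i, x in enumerate(particles):
        r1, r2 = random.random(), random.random()
        velocities[i] = [w * v + c1 * r1 * (pb - xi) + c2 * r2 * (gb - xi)
                         for v, xi, pb, gb in zip(velocities[i], x, pbest[i], gbest)]
        particles[i] = [xi + v for xi, v in zip(x, velocities[i])]
        if fitness(particles[i]) > fitness(pbest[i]):
            pbest[i] = particles[i][:]     # new personal best
    gbest[:] = max(pbest, key=fitness)     # new global best

# Example: maximize -(x^2 + y^2); the swarm converges toward the origin.
fit = lambda p: -(p[0] ** 2 + p[1] ** 2)
pts = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(10)]
vel = [[0.0, 0.0] for _ in range(10)]
pb = [p[:] for p in pts]
gb = max(pb, key=fit)[:]
for _ in range(50):
    pso_step(pts, vel, pb, gb, fit)
print([round(v, 3) for v in gb])           # near [0, 0]
```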
Abstract:
In this dissertation, I study three problems in market design: the allocation of resources to schools using deferred acceptance algorithms, demand reduction by employers in centralized labor markets, and the alleviation of traffic congestion. I show how institutional and behavioral considerations specific to each problem can alleviate several practical limitations faced by current solutions. For the case of traffic congestion, I show experimentally that the proposed solution is effective. In Chapter 1, I investigate how school districts could assign resources to schools when it is desirable to provide stable assignments. An assignment is stable if there is no student currently assigned to a school who would prefer to be assigned to a different school that would admit him if it had the resources. Current assignment algorithms assume resources are fixed. I show how simple modifications to these algorithms produce stable allocations of resources and students to schools. In Chapter 2, I show how the negotiation of salaries within centralized labor markets using deferred acceptance algorithms eliminates the incentives of the hiring firms to strategically reduce their demand. It is well known that it is impossible to eliminate these incentives for the hiring firms in markets without negotiation of salaries. Chapter 3 investigates how to achieve an efficient distribution of traffic congestion on a road network. Traffic congestion is the product of an externality: drivers do not consider the cost they impose on other drivers by entering a road. In theory, Pigouvian prices would solve the problem. In practice, however, these prices face two important limitations: i) the information required to calculate them is unavailable to policy makers, and ii) they would effectively be new taxes that transfer resources from the public to the government. I show how to construct congestion prices that retrieve the required information from the drivers and do not transfer resources to the government. I circumvent the limitations of Pigouvian prices by assuming that individuals make some mistakes when selecting routes and have a tendency towards truth-telling. Both assumptions are very robust observations in experimental economics.
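The textbook student-proposing deferred acceptance (Gale-Shapley) algorithm that these chapters build on can be sketched briefly; the dissertation's modifications for reallocating school resources are not reproduced here, and complete preference lists are assumed:

```python
from collections import deque

def deferred_acceptance(student_prefs, school_prefs, capacity):
    """Student-proposing deferred acceptance with school capacities."""
    free = deque(student_prefs)                  # students yet to be placed
    next_choice = {s: 0 for s in student_prefs}  # next school to propose to
    held = {c: [] for c in school_prefs}         # tentatively admitted
    rank = {c: {s: i for i, s in enumerate(p)} for c, p in school_prefs.items()}
    while free:
        s = free.popleft()
        if next_choice[s] >= len(student_prefs[s]):
            continue                             # s exhausted their list
        c = student_prefs[s][next_choice[s]]
        next_choice[s] += 1
        held[c].append(s)
        held[c].sort(key=lambda x: rank[c][x])   # school keeps its favorites
        if len(held[c]) > capacity[c]:
            free.append(held[c].pop())           # reject the worst held student
    return held

students = {"s1": ["A", "B"], "s2": ["A", "B"], "s3": ["B", "A"]}
schools = {"A": ["s1", "s3", "s2"], "B": ["s2", "s1", "s3"]}
print(deferred_acceptance(students, schools, capacity={"A": 1, "B": 2}))
# -> {'A': ['s1'], 'B': ['s2', 's3']}, a stable matching
```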
Abstract:
The proliferation of new mobile communication devices, such as smartphones and tablets, has led to exponential growth in network traffic. The demand for supporting fast-growing consumer data rates urges wireless service providers and researchers to seek a new, efficient radio access technology beyond what current 4G LTE can provide: the so-called 5G technology. At the same time, ubiquitous RFID tags, sensors, actuators, mobile phones, and the like cut across many areas of modern-day living and offer the ability to measure, infer, and understand environmental indicators. The proliferation of these devices has created the term Internet of Things (IoT). For researchers and engineers in the field of wireless communication, exploring new, effective techniques to support 5G communication and the IoT has become an urgent task, one that not only leads to fruitful research but also enhances the quality of our everyday life. Massive MIMO, which has shown great potential for improving the achievable rate with a very large number of antennas, has become a popular candidate. However, deploying a large number of antennas at the base station may not be feasible in indoor scenarios. Is there a good alternative that can achieve system performance similar to massive MIMO in indoor environments? In this dissertation, we address this question by proposing the time-reversal (TR) technique as a counterpart of massive MIMO in indoor scenarios with a massive multipath effect. It is well known that radio signals experience many multipaths due to reflection from various scatterers, especially in indoor environments. The traditional TR waveform is able to create a focusing effect at the intended receiver with very low transmitter complexity in a severe multipath channel. TR's focusing effect is in essence a spatial-temporal resonance effect that brings all the multipaths to arrive at a particular location at a specific moment. We show that by using time-reversal signal processing with a sufficiently large bandwidth, one can harvest the massive multipaths naturally existing in a rich-scattering environment to form a large number of virtual antennas and achieve the desired massive multipath effect with a single antenna. Further, we explore the optimal bandwidth for a TR system to achieve maximal spectral efficiency. By evaluating the spectral efficiency, the optimal bandwidth for a TR system is found to be determined by the system parameters, e.g., the number of users and the backoff factor, rather than by the waveform type. Moreover, we investigate the tradeoff between complexity and performance by establishing a generalized relationship between system performance and waveform quantization in a practical communication system. It is shown that 4-bit quantized waveforms can achieve a bit-error rate similar to that of a TR system with perfect-precision waveforms. Besides 5G technology, the Internet of Things (IoT) is another area that has recently attracted increasing attention from both academia and industry. In the second part of this dissertation, the heterogeneity issue within the IoT is explored. Given the massive number of devices in the IoT, one significant form of heterogeneity is device heterogeneity, i.e., heterogeneous bandwidths and associated radio-frequency (RF) components.
Traditional middleware techniques result in fragmentation of the whole network, hampering object interoperability and slowing the development of a unified reference model for the IoT. We propose a novel TR-based heterogeneous system that can address the bandwidth heterogeneity while maintaining the benefits of TR. The added complexity of the proposed system lies in the digital processing at the access point (AP), rather than at the devices' ends, where it can easily be handled by a more powerful digital signal processor (DSP). Meanwhile, the complexity of the terminal devices stays low and therefore satisfies the low-complexity and scalability requirements of the IoT. Since there is no middleware in the proposed scheme and the additional physical-layer complexity is concentrated on the AP side, the proposed heterogeneous TR system better satisfies the low-complexity and energy-efficiency requirements for the terminal devices (TDs) than the middleware approach.
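The TR focusing effect is easy to reproduce numerically: transmitting the time-reversed, conjugated channel impulse response back through the same channel concentrates the energy at a single tap. A toy single-user sketch; the exponentially decaying channel model and its parameters are our assumptions, not the dissertation's:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 64                                   # number of multipath taps (assumed)
h = (rng.normal(size=L) + 1j * rng.normal(size=L)) * np.exp(-np.arange(L) / 20)

# TR waveform: time-reversed, conjugated channel response, normalized.
g = np.conj(h[::-1]) / np.linalg.norm(h)

# Propagating the TR waveform through the same channel focuses the energy
# at the centre tap (matched-filter peak, by the Cauchy-Schwarz inequality).
y = np.convolve(g, h)
peak = int(np.abs(y).argmax())
assert peak == L - 1                     # energy arrives at one moment in time
print(f"focusing peak {np.abs(y[peak]):.2f} equals ||h|| = {np.linalg.norm(h):.2f}")
```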
Abstract:
This dissertation focuses on research into the urban configuration of the case study, the Vale do Neiva Territorial Unit in Viana do Castelo. It is a territory with locational potential capable of internally promoting industrial settlement and tertiary activities, and of retaining and attracting population. It also exhibits economic dynamics and strengths in the municipal and regional context. The configurational analysis undertaken in this investigation, using Space Syntax techniques and methods, identifies morphological characteristics of the territory, allowing a better understanding of how it functions and of the relationship between urban form and the social relations surrounding it. The economic-spatial dimension was addressed, namely its components and interdependencies, in order to understand how spatial appropriation occurs within the urban fabric. Correlating this approach with the Space Syntax methodology made it possible to increase knowledge about the accessibility levels of the activities present in the territory. It additionally yields more structured analyses that can support technically more robust decisions. Finally, the impact of certain actions foreseen in the Plano Director Municipal (Municipal Master Plan) of Viana do Castelo was assessed, namely those affecting the road network of the Vale do Neiva Territorial Unit. The analysis rested on simulating and predicting the effects of these transformations on the urban configuration, providing a well-founded basis for the planned urban planning and management strategies.
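As a rough illustration of the kind of configurational measure Space Syntax provides, the integration of an axial line is closely related to closeness centrality on the axial graph. A toy sketch with a made-up axial map, not the Vale do Neiva street network:

```python
import networkx as nx

# Toy axial map: nodes are axial lines (streets); edges connect lines that
# intersect. Integration is a normalized inverse of mean topological depth,
# which closeness centrality approximates on this graph.
axial = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("B", "E"), ("E", "F")])
integration = nx.closeness_centrality(axial)
for line, value in sorted(integration.items(), key=lambda kv: -kv[1]):
    print(f"axial line {line}: integration ≈ {value:.3f}")
```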
Abstract:
The lack of analytical models that can accurately describe large-scale networked systems makes empirical experimentation indispensable for understanding complex behaviors. Research on network testbeds for testing network protocols and distributed services, including physical, emulated, and federated testbeds, has made steady progress. Although the success of these testbeds is undeniable, they fail to provide: 1) scalability, for handling large-scale networks with hundreds or thousands of hosts and routers organized in different scenarios, 2) flexibility, for testing new protocols or applications in diverse settings, and 3) inter-operability, for combining simulated and real network entities in experiments. This dissertation tackles these issues in three different dimensions. First, we present SVEET, a system that enables inter-operability between real and simulated hosts. In order to increase the scalability of networks under study, SVEET enables time-dilated synchronization between real hosts and the discrete-event simulator. Realistic TCP congestion control algorithms are implemented in the simulator to allow seamless interactions between real and simulated hosts. SVEET is validated via extensive experiments and its capabilities are assessed through case studies involving real applications. Second, we present PrimoGENI, a system that allows a distributed discrete-event simulator, running in real-time, to interact with real network entities in a federated environment. PrimoGENI greatly enhances the flexibility of network experiments, through which a great variety of network conditions can be reproduced to examine what-if questions. Furthermore, PrimoGENI performs resource management functions, on behalf of the user, for instantiating network experiments on shared infrastructures. Finally, to further increase the scalability of network testbeds to handle large-scale high-capacity networks, we present a novel symbiotic simulation approach. We present SymbioSim, a testbed for large-scale network experimentation where a high-performance simulation system closely cooperates with an emulation system in a mutually beneficial way. On the one hand, the simulation system benefits from incorporating the traffic metadata from real applications in the emulation system to reproduce the realistic traffic conditions. On the other hand, the emulation system benefits from receiving the continuous updates from the simulation system to calibrate the traffic between real applications. Specific techniques that support the symbiotic approach include: 1) a model downscaling scheme that can significantly reduce the complexity of the large-scale simulation model, resulting in an efficient emulation system for modulating the high-capacity network traffic between real applications; 2) a queuing network model for the downscaled emulation system to accurately represent the network effects of the simulated traffic; and 3) techniques for reducing the synchronization overhead between the simulation and emulation systems.
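The time-dilation idea in SVEET can be illustrated with a toy clock that maps wall-clock time to experiment time, letting a slower-than-real-time simulator stay synchronized with real hosts. The class and factor name below are our own simplification, not SVEET's API:

```python
import time

class DilatedClock:
    """Maps wall-clock time to experiment (virtual) time: one virtual
    second elapses for every `tdf` real seconds, so a simulator that runs
    `tdf` times slower than real time still keeps pace with real hosts."""
    def __init__(self, tdf: float):
        self.tdf = tdf                   # time dilation factor, e.g. 10x
        self.start = time.monotonic()

    def now(self) -> float:
        return (time.monotonic() - self.start) / self.tdf

clock = DilatedClock(tdf=10.0)
time.sleep(0.5)
print(f"virtual time elapsed: {clock.now():.3f} s")   # ≈ 0.05 s
```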
Abstract:
INTRODUCTION Given that rheumatoid arthritis is the most frequent inflammatory arthropathy in the world, being highly disabling and causing a large, high-cost impact, the aim is to offer patients therapeutic options and quality of life through timely and effective treatment, bearing in mind the predictors of response before instituting a given therapy. Few studies establish the factors associated with an adequate response when initiating biological therapy with abatacept, so this study seeks to determine those possible factors. METHODOLOGY An analytical cross-sectional study of 94 patients diagnosed with RA, evaluated to determine the possible variables that influence the response to biological therapy with abatacept. Of the 94 patients, 67 were included in the logistic regression model: those in whom it was possible to measure treatment response (EULAR response) through the DAS 28 and thus discriminate two comparison groups (response and no response). DISCUSSION OF RESULTS High disease activity at the start of biological therapy increases the probability of treatment response relative to the group with low/moderate disease activity: OR 4.19, 95% CI (1.18–14.9), p = 0.027. The absence of bone erosions increases the probability of an adequate response to biological therapy relative to patients with erosions: OR 3.1, 95% CI (1.01–9.55), p = 0.048. ESR levels and the presence of extra-articular manifestations are other findings of interest from the bivariate analysis. Regarding the variables or characteristics that predict response to abatacept, published studies corroborate this study's findings on a high DAS 28 score at the start of therapy (9, 12). CONCLUSIONS Different variables determine the response to the different biologics used to manage RA. It is essential to evaluate these factors individually in order to achieve effective control of the disease and thereby improve the individual's quality of life (personalized medicine). Variables such as high disease activity and the absence of erosions act as predictors of response to abatacept therapy.
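In a logistic model the reported odds ratios are exp(beta) of the fitted coefficients, and a Wald confidence interval is symmetric on the log scale, so a published OR and CI can be checked for internal consistency. A small sketch using the reported OR 4.19, 95% CI (1.18–14.9); the back-calculated standard error is ours, not a value from the study:

```python
import math

or_, lo, hi = 4.19, 1.18, 14.9
beta = math.log(or_)
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # back out the Wald SE
ci = (math.exp(beta - 1.96 * se), math.exp(beta + 1.96 * se))
print(f"beta = {beta:.3f}, SE ≈ {se:.3f}, "
      f"reconstructed CI = ({ci[0]:.2f}, {ci[1]:.2f})")   # ≈ (1.18, 14.9)
```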
Abstract:
Data replication synchronizes and integrates data between distributed databases over a computer network. It is an important tool in several situations, such as creating backup systems, balancing load between nodes, distributing information across locations, and integrating heterogeneous systems. Replication also reduces network traffic, because data remains available locally, and it keeps data accessible even in the event of a temporary network failure. This dissertation is based on work carried out to develop a generic application for database replication, to be made available as open source software. The application allows data integration between various systems and can be adapted to a variety of situations, with particular focus on the integration of heterogeneous data, the fragmentation of data, replication in cascade, data format changes between replicas, and master/slave and multi-master synchronization.
Abstract:
The research team recognized the value of network-level Falling Weight Deflectometer (FWD) testing for evaluating the structural condition trends of flexible pavements. However, practical limitations, including the cost of testing, traffic control and safety concerns, and the difficulty of testing a large network, may discourage some agencies from conducting network-level FWD testing. For this reason, the surrogate measure of the Structural Condition Index (SCI) is suggested for use. The main purpose of the research presented in this paper is to investigate data mining strategies and to develop a method of predicting structural condition trends for network-level applications that does not require FWD testing. The research team first evaluated the existing and historical pavement condition, distress, ride, traffic, and other data attributes in the Texas Department of Transportation (TxDOT) Pavement Maintenance Information System (PMIS), applied data mining strategies to the data, discovered useful patterns and knowledge for SCI value prediction, and finally provided a reasonable measure of pavement structural condition that is correlated with the SCI. To evaluate the performance of the developed prediction approach, a case study was conducted using SCI data calculated from FWD data collected on flexible pavements over a 5-year period (2005–09) from 354 PMIS sections representing 37 pavement sections on the Texas highway system. The preliminary results showed that the proposed approach can be used as a supportive pavement structural index when FWD deflection data are not available, and can help pavement managers identify the timing and appropriate treatment level of preventive maintenance activities.
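The data mining pipeline amounts to learning a mapping from PMIS attributes to SCI. A minimal sketch with synthetic data; the feature names, model choice, and data-generating relationship are ours, not TxDOT's or the paper's:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical PMIS-style attributes per section: ride score, distress
# score, traffic level, pavement age. Target: SCI. All values synthetic.
rng = np.random.default_rng(42)
X = rng.uniform(size=(354, 4))
y = (0.4 * X[:, 0] + 0.3 * X[:, 1] - 0.2 * X[:, 2] - 0.1 * X[:, 3]
     + rng.normal(0, 0.05, 354))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out sections: {model.score(X_te, y_te):.2f}")
```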
Abstract:
Wireless access is expected to play a crucial role in the future of the Internet. The demands of the wireless environment are not always compatible with the assumptions that were made in the era of wired links. At the same time, new services that take advantage of advances in many areas of technology are being invented, including the delivery of mass media such as television and radio, Internet phone calls, and video conferencing. The network must be able to deliver these services to the end user with acceptable performance and quality. This thesis presents an experimental study measuring the performance of bulk-data TCP transfers, streaming audio flows, and HTTP transfers that compete for the limited bandwidth of a GPRS/UMTS-like wireless link. The wireless link characteristics are modeled with a wireless network emulator. We analyze how different competing workload types behave with regular TCP, and how active queue management, Differentiated Services (DiffServ), and a combination of TCP enhancements affect performance and quality of service. We test four link types, including an error-free link and links with different Automatic Repeat reQuest (ARQ) persistency. The analysis compares the resulting performance across configurations using defined metrics. We observed that DiffServ and Random Early Detection (RED) with Explicit Congestion Notification (ECN) are useful, and in some conditions necessary, for quality of service and fairness, because long queuing delays and congestion-related packet losses cause problems without them. However, we also observed situations where there is still room for significant improvement if the link level is aware of the quality of service. Only a very error-prone link diminishes the benefits to nil. The combination of TCP enhancements improves performance; these include an initial window of four, Control Block Interdependence (CBI), and Forward RTO recovery (F-RTO). An initial window of four helps a later-starting TCP flow start faster, but generates congestion under some conditions. CBI prevents slow-start overshoot and balances slow start in the presence of error drops, and F-RTO successfully reduces unnecessary retransmissions.
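For reference, the core RED decision that the thesis relies on (with ECN, a positive result marks the packet rather than dropping it) can be sketched in a few lines. The thresholds are illustrative, and RED's count-based probability correction is omitted for brevity:

```python
import random

def red_mark(avg_queue, min_th=5, max_th=15, max_p=0.1):
    """Simplified RED decision on the (smoothed) average queue length.
    Returns True when the packet should be marked (ECN) or dropped."""
    if avg_queue < min_th:
        return False                      # queue short: always accept
    if avg_queue >= max_th:
        return True                       # queue long: always mark/drop
    # Marking probability rises linearly between the two thresholds,
    # signalling congestion to senders before the queue overflows.
    p = max_p * (avg_queue - min_th) / (max_th - min_th)
    return random.random() < p
```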
Abstract:
We study the performance of cognitive (secondary) users in a cognitive radio network, in which secondary users occupy a channel whenever the primary users are not using it. The usage of the channel by the primary users is modelled by an ON-OFF renewal process. The cognitive users may be transmitting data over TCP connections as well as voice traffic, and the voice traffic is given priority over the data traffic. We theoretically compute the mean delay of TCP and voice packets, as well as the mean throughput of the different TCP connections, and compare the theoretical results with simulations.
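A minimal sketch of the primary-user occupancy model, assuming exponential ON and OFF periods; the paper's renewal process need not be exponential, and the means below are illustrative:

```python
import random

def secondary_fraction(mean_on=1.0, mean_off=2.0, horizon=10_000.0):
    """Simulate primary-user channel occupancy as an ON-OFF renewal
    process and return the fraction of time the channel is free for
    secondary (cognitive) users."""
    t, free = 0.0, 0.0
    while t < horizon:
        on = random.expovariate(1.0 / mean_on)    # primary occupies channel
        off = random.expovariate(1.0 / mean_off)  # channel free for secondaries
        start_off = min(t + on, horizon)          # clip the cycle at the horizon
        end_off = min(t + on + off, horizon)
        free += end_off - start_off
        t += on + off
    return free / horizon

print(f"fraction available to secondary users ≈ {secondary_fraction():.3f}")
# Renewal-reward theory predicts mean_off / (mean_on + mean_off) = 2/3 here.
```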