825 results for network traffic analysis
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
The web services (WS) technology provides a comprehensive solution for representing, discovering, and invoking services in a wide variety of environments, including Service Oriented Architectures (SOA) and grid computing systems. At the core of WS technology lie a number of XML-based standards, such as the Simple Object Access Protocol (SOAP), that have successfully ensured WS extensibility, transparency, and interoperability. Nonetheless, there is an increasing demand to enhance WS performance, which is severely impaired by XML's verbosity. SOAP communications produce considerable network traffic, making them unfit for distributed, loosely coupled, and heterogeneous computing environments such as the open Internet. They also introduce higher latency and processing delays than other technologies, like Java RMI and CORBA. WS research has recently focused on SOAP performance enhancement. Many approaches build on the observation that SOAP message exchange usually involves highly similar messages: those created by the same implementation usually have the same structure, and those sent from a server to multiple clients tend to show similarities in structure and content. Similarity evaluation and differential encoding have thus emerged as SOAP performance enhancement techniques. The main idea is to identify the common parts of SOAP messages so that they are processed only once, avoiding a large amount of overhead. Other approaches investigate nontraditional processor architectures, including micro- and macro-level parallel processing solutions, to further increase the processing rates of SOAP/XML software toolkits. This survey paper provides a concise yet comprehensive review of the research efforts aimed at SOAP performance enhancement. A unified view of the problem is provided, covering almost every phase of SOAP processing: message parsing, serialization, deserialization, compression, multicasting, security evaluation, and data/instruction-level processing.
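To make the differential-encoding idea concrete, here is a minimal sketch (not from the survey; the message contents and the use of Python's difflib are illustrative assumptions) of transmitting only the delta between a cached template and a new, structurally similar SOAP message:

```python
import difflib

# Cached "template" SOAP message; sender and receiver are assumed to
# share the same reference copy for differential encoding to work.
template = """<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body><getQuote><symbol>IBM</symbol></getQuote></soap:Body>
</soap:Envelope>"""

# A new message from the same implementation: same structure, new payload.
message = template.replace("IBM", "ACME")

# Transmit only the difference instead of the full envelope; the common
# parts of the two messages are processed (and sent) only once.
delta = list(difflib.unified_diff(template.splitlines(),
                                  message.splitlines(), lineterm=""))
print("\n".join(delta))  # a few diff lines instead of the whole document
```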
Abstract:
In stream processing platforms it is often necessary to process input streams in differentiated ways. The goal of this thesis is to build a scheduler capable of assigning different execution priorities to the operators responsible for processing the streams.
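As an illustration of the kind of component the thesis describes, the following toy sketch (hypothetical names, not the thesis implementation) dispatches stream operators according to assigned priorities via a priority queue:

```python
import heapq

class Scheduler:
    """Toy priority scheduler: operators with a lower priority
    number are executed first."""

    def __init__(self):
        self._queue = []
        self._counter = 0  # tie-breaker keeps insertion order stable

    def submit(self, priority, operator):
        heapq.heappush(self._queue, (priority, self._counter, operator))
        self._counter += 1

    def run(self):
        while self._queue:
            _, _, op = heapq.heappop(self._queue)
            op()

sched = Scheduler()
sched.submit(2, lambda: print("low-priority operator"))
sched.submit(0, lambda: print("high-priority operator"))
sched.run()  # prints the high-priority operator's output first
```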
Abstract:
Background: Recently, Cipriani and colleagues examined the relative efficacy of 12 new-generation antidepressants for major depression using network meta-analytic methods. They found that some of these medications outperformed others in patient response to treatment. However, several methodological criticisms have been raised about network meta-analysis in general, and Cipriani's analysis in particular, raising the concern that the stated superiority of some antidepressants relative to others may be unwarranted. Materials and Methods: A Monte Carlo simulation was conducted that replicated Cipriani's network meta-analysis under the null hypothesis (i.e., no true differences between antidepressants). The simulation strategy was as follows: (1) 1000 simulations were generated under the null hypothesis (i.e., under the assumption that there were no differences among the 12 antidepressants), (2) each of the 1000 simulations was network meta-analyzed, and (3) the total number of false positive results from the network meta-analyses was calculated. Findings: More than 7 times out of 10, the network meta-analysis produced one or more comparisons indicating the superiority of at least one antidepressant when no such true differences existed. Interpretation: Based on our simulation study, under conditions identical to those of the 117 RCTs with 236 treatment arms contained in Cipriani et al.'s meta-analysis, one or more false claims about the relative efficacy of antidepressants will be made over 70% of the time. As others have shown as well, there is little evidence in these trials that any antidepressant is more effective than another. The tendency of network meta-analyses to generate false positive results should be considered when conducting multiple comparison analyses.
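The core of the simulation strategy can be sketched as follows. This toy version (invented sample sizes, and independent pairwise z-tests in place of a full network meta-analysis) only illustrates how false positives accumulate under the null:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n_sims, n_drugs, n_per_arm = 1000, 12, 100
false_positives = 0

for _ in range(n_sims):
    # Under the null hypothesis every drug has the same true effect (0),
    # so any "significant" pairwise difference is a false positive.
    means = rng.normal(0.0, 1.0, (n_drugs, n_per_arm)).mean(axis=1)
    se_diff = np.sqrt(2.0 / n_per_arm)  # SE of a difference in means
    if any(abs(a - b) / se_diff > 1.96 for a, b in combinations(means, 2)):
        false_positives += 1

print(f"simulations with >=1 false claim: {false_positives / n_sims:.0%}")
```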
Abstract:
Patient self-management (PSM) of oral anticoagulation is under discussion because evidence from real-life settings is missing. Using data from a nationwide, prospective cohort study in Switzerland, we assessed the overall long-term efficacy and safety of PSM and examined subgroups. Data from 1140 patients (5818.9 patient-years) were analysed, and no patients were lost to follow-up. Median follow-up was 4.3 years (range 0.2-12.8 years). Median age at the time of training was 54.2 years (range 18.2-85.2) and 34.6% were women. All-cause mortality was 1.4 per 100 patient-years (95% CI 1.1-1.7), with higher rates in patients with atrial fibrillation (2.5; 1.6-3.7; p<0.001), patients >50 years of age (2.0; 1.6-2.6; p<0.001), and men (1.6; 1.2-2.1; p = 0.036). The rate of thromboembolic events was 0.4 (0.2-0.6) and independent of indication, sex, and age. Major bleeding was observed at a rate of 1.1 (0.9-1.5) per 100 patient-years. In a network meta-analysis, efficacy was comparable to standard care and new oral anticoagulants. PSM by properly trained patients is effective and safe in a long-term real-life setting and robust across clinical subgroups. Adoption in various clinical settings, including rural areas and settings with limited access to medical care, is warranted.
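For readers unfamiliar with the unit, a rate "per 100 patient-years" is simply events divided by total follow-up time, scaled by 100. The event count below is back-calculated from the reported rate and is approximate, not a figure from the study:

```python
patient_years = 5818.9  # total follow-up reported in the study
deaths = 81             # assumed: ~1.4/100 * 5818.9, rounded
rate = deaths / patient_years * 100
print(f"all-cause mortality: {rate:.1f} per 100 patient-years")  # ~1.4
```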
Abstract:
We read with great interest the large-scale network meta-analysis by Kowalewski et al. comparing clinical outcomes of patients undergoing coronary artery bypass grafting (CABG) operated on using minimal invasive extracorporeal circulation (MiECC) or off-pump (OPCAB) with those undergoing surgery on conventional cardiopulmonary bypass (CPB) [1]. The authors integrated two recently published meta-analyses, comparing MiECC and OPCAB with conventional CPB, respectively [2, 3], into a single study. According to its results, MiECC and OPCAB are both strongly associated with improved perioperative outcomes following CABG when compared with CABG performed on conventional CPB. The authors conclude that MiECC may represent an attractive compromise between OPCAB and conventional CPB. After carefully reading the whole manuscript, it becomes evident that the role of MiECC is clearly undervalued. Detailed statistical analysis using the surface under the cumulative ranking probabilities indicated that MiECC was the safer and more effective intervention regarding all-cause mortality and protection from myocardial infarction, cerebral stroke, postoperative atrial fibrillation, and renal dysfunction when compared with OPCAB. Even though no statistically significant differences were demonstrated between MiECC and OPCAB, the superiority of MiECC is apparent from the hierarchy of treatments in the probability analysis, which ranked MiECC first, followed by OPCAB and conventional CPB. Thus, MiECC does not represent a compromise between OPCAB and conventional CPB, but an attractive dominant technique in CABG surgery. These results are consistent with the largest published meta-analysis, by Anastasiadis et al., comparing MiECC versus conventional CPB in a total of 2770 patients. A significant decrease in mortality was observed when MiECC was used, which was also associated with reduced risk of postoperative myocardial infarction and neurological events [4]. Similarly, another recent meta-analysis, by Benedetto et al., compared MiECC versus OPCAB and found comparable outcomes between these two surgical techniques [5]. As stated in the text, the superiority of MiECC over OPCAB observed in the current network meta-analysis could be attributed to the fact that MiECC offers the potential for complete revascularization, whereas OPCAB poses a challenge for inexperienced surgeons, especially when distal marginal branches on the lateral and/or posterior wall of the heart need revascularization. This is reflected in the significantly lower number of distal anastomoses performed in OPCAB compared with conventional CPB. Therefore, taking into consideration the literature published to date, including the results of the current article, we advocate that MiECC should be integrated into the clinical practice guidelines as a state-of-the-art technique and become standard practice for perfusion in coronary revascularization surgery.
Abstract:
BACKGROUND The aim of this study was to identify clinical variables that may predict the need for adjuvant radiotherapy after neoadjuvant chemotherapy (NACT) and radical surgery in locally advanced cervical cancer patients. METHODS A retrospective series of cervical cancer patients with International Federation of Gynecology and Obstetrics (FIGO) stages IB2-IIB treated with NACT followed by radical surgery was analyzed. Clinical predictors of the persistence of intermediate- and/or high-risk factors at final pathological analysis were investigated. Statistical analysis was performed using univariate and multivariate analysis and a model based on artificial intelligence known as artificial neural network (ANN) analysis. RESULTS Overall, 101 patients were available for the analyses. Fifty-two (51%) patients were considered at high risk secondary to parametrial, resection margin, and/or lymph node involvement. When disease was confined to the cervix, four (4%) patients were considered at intermediate risk. At univariate analysis, FIGO grade 3, stage IIB disease at diagnosis, and the presence of enlarged nodes before NACT predicted the presence of intermediate- and/or high-risk factors at final pathological analysis. At multivariate analysis, only FIGO grade 3 and tumor diameter maintained statistical significance. The specificity of the ANN models in evaluating predictive variables was slightly superior to that of conventional multivariable models. CONCLUSIONS FIGO grade, stage, tumor diameter, and histology are associated with the persistence of pathological intermediate- and/or high-risk factors after NACT and radical surgery. This information is useful in counseling patients at the time of treatment planning with regard to the probability of undergoing pelvic radiotherapy after completion of the initially planned treatment.
Abstract:
Information-centric networking (ICN) is a new communication paradigm that has been proposed to cope with drawbacks of host-based communication protocols, namely scalability and security. In this thesis, we base our work on Named Data Networking (NDN), a popular ICN architecture, and investigate NDN in the context of wireless and mobile ad hoc networks. In the first part, we focus on NDN efficiency (and potential improvements) in wireless environments by investigating NDN in wireless one-hop communication, i.e., without any routing protocols. A basic requirement for initiating information-centric communication is knowledge of existing and available content names. Therefore, we develop three opportunistic content discovery algorithms and evaluate them in diverse scenarios with different node densities and content distributions. After content names are known, requesters can retrieve content opportunistically from any neighbor node that provides it. However, in case of short contact times to content sources, content retrieval may be disrupted. Therefore, we develop a requester application that keeps meta information about disrupted content retrievals and enables resume operations when a new content source has been found. Besides message efficiency, we also evaluate the power consumption of information-centric broadcast and unicast communication. Based on our findings, we develop two mechanisms to increase the efficiency of information-centric wireless one-hop communication. The first approach, called Dynamic Unicast (DU), avoids broadcast communication whenever possible since, compared to unicast, broadcast transmissions result in more duplicate Data transmissions, lower data rates, and higher energy consumption on mobile nodes that are not interested in overheard Data. Hence, DU uses broadcast communication only until a content source has been found and then retrieves content directly via unicast from the same source. The second approach, called RC-NDN, targets the efficiency of wireless broadcast communication by reducing the number of duplicate Data transmissions. In particular, RC-NDN is a Data encoding scheme for content sources that increases diversity in wireless broadcast transmissions such that multiple concurrent requesters can profit from each other's (overheard) message transmissions. If requesters and content sources are not within one-hop distance of each other, requests need to be forwarded via multi-hop routing. Therefore, in the second part of this thesis, we investigate information-centric wireless multi-hop communication. First, we consider multi-hop broadcast communication in the context of rather static community networks. We introduce the concept of preferred forwarders, which relay Interest messages slightly faster than non-preferred forwarders to reduce redundant duplicate message transmissions. While this approach works well in static networks, performance may degrade in mobile networks where preferred forwarders may regularly move away. Thus, to enable routing in mobile ad hoc networks, we extend DU to multi-hop communication. Compared to one-hop communication, multi-hop DU requires efficient path update mechanisms (since multi-hop paths may expire quickly) and new forwarding strategies to maintain NDN benefits (request aggregation and caching) such that only a few messages need to be transmitted over the entire end-to-end path even in case of multiple concurrent requesters.
To enable quick retransmission in case of collisions or other transmission errors, we implement and evaluate retransmission timers from related work and compare them to CCNTimer, a new algorithm that enables shorter content retrieval times in information-centric wireless multi-hop communication. Yet, in case of intermittent connectivity between requesters and content sources, multi-hop routing protocols may not work because they require continuous end-to-end paths. Therefore, we present agent-based content retrieval (ACR) for delay-tolerant networks. In ACR, requester nodes can delegate content retrieval to mobile agent nodes, which move closer to content sources, retrieve the content, and return it to the requesters. Thus, ACR exploits the mobility of agent nodes to retrieve content from remote locations. To enable delay-tolerant communication via agents, retrieved content needs to be stored persistently such that requesters can verify its authenticity via the original publisher signatures. To achieve this, we develop a persistent caching concept that maintains received popular content in repositories and deletes unpopular content if free space is required. Since our persistent caching concept can complement regular short-term caching in the content store, it can also be used for network caching to store popular delay-tolerant content at edge routers (to reduce network traffic and improve network performance) while real-time traffic can still be maintained and served from the content store.
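A minimal sketch of the persistent-caching policy described above (assumed semantics, not the thesis code): popular content stays in the repository, and the least-requested entry is evicted when free space is required.

```python
from collections import defaultdict

class PersistentCache:
    """Popularity-based persistent cache: keeps frequently requested
    content and evicts the least-requested entry when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}               # content name -> content bytes
        self.hits = defaultdict(int)  # content name -> request count

    def insert(self, name, content):
        if name not in self.store and len(self.store) >= self.capacity:
            # Free space is required: delete the least popular entry.
            victim = min(self.store, key=lambda n: self.hits[n])
            del self.store[victim]
        self.store[name] = content

    def request(self, name):
        self.hits[name] += 1          # requests drive popularity
        return self.store.get(name)

cache = PersistentCache(capacity=2)
cache.insert("/videos/a", b"...")
cache.insert("/videos/b", b"...")
cache.request("/videos/a")            # /videos/a gains popularity
cache.insert("/videos/c", b"...")     # evicts /videos/b (least requested)
```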
Abstract:
Mechanisms that allow pathogens to colonize the host are not the product of isolated genes, but instead emerge from the concerted operation of regulatory networks. Therefore, identifying the components and systemic behavior of these networks is necessary for a better understanding of gene regulation and pathogenesis. To this end, I have developed systems biology approaches to study transcriptional and post-transcriptional gene regulation in bacteria, with an emphasis on the human pathogen Mycobacterium tuberculosis (Mtb). First, I developed a network response method to identify parts of the Mtb global transcriptional regulatory network utilized by the pathogen to counteract phagosomal stresses and survive within resting macrophages. The method unveiled transcriptional regulators and associated regulons utilized by Mtb to establish a successful infection of macrophages throughout the first 14 days of infection. Additionally, this network-based analysis identified the production of Fe-S proteins coupled to lipid metabolism through the alkane hydroxylase complex as a possible strategy employed by Mtb to survive in the host. Second, I developed a network inference method to infer the small non-coding RNA (sRNA) regulatory network in Mtb. The method identifies sRNA-mRNA interactions by integrating a priori knowledge of possible binding sites with structure-driven identification of binding sites. The reconstructed network was useful for predicting functional roles for the multitude of sRNAs recently discovered in the pathogen, with several sRNAs postulated to be involved in virulence-related processes. Finally, I applied a combined experimental and computational approach to study post-transcriptional repression mediated by small non-coding RNAs in bacteria. Specifically, a probabilistic ranking methodology termed rank-conciliation was developed to infer sRNA-mRNA interactions based on multiple types of data. The method was shown to improve target prediction in Escherichia coli and is therefore useful for prioritizing candidate targets for experimental validation.
Abstract:
To estimate the kinematics of the SIRGAS reference frame, the Deutsches Geodätisches Forschungsinstitut (DGFI), as the IGS Regional Network Associate Analysis Centre for SIRGAS (IGS RNAAC SIR), yearly computes a cumulative (multi-year) solution containing all available weekly solutions delivered by the SIRGAS analysis centres. These cumulative solutions include the models, standards, and strategies widely applied at the time they were computed and cover different time spans depending on the availability of the weekly solutions. This data set corresponds to the multi-year solution SIR11P01. It is based on the combination of the weekly normal equations covering the time span from 2000-01-02 (GPS week 1043) to 2011-04-16 (GPS week 1631), when the IGS08 reference frame was introduced. It refers to ITRF2008, epoch 2005.0, and contains 230 stations with 269 occupations. Its precision was estimated to be ±1.0 mm (horizontal) and ±2.4 mm (vertical) for the station positions, and ±0.7 mm/a (horizontal) and ±1.1 mm/a (vertical) for the constant velocities. The computation strategy and results are described in detail in Sánchez and Seitz (2011). The IGS RNAAC SIR computation of the SIRGAS reference frame is possible thanks to the active participation of many Latin American and Caribbean colleagues, who not only make the measurements of the stations available, but also operate SIRGAS analysis centres processing the observational data on a routine basis (more details at http://www.sirgas.org). The achievements of SIRGAS are a consequence of successful international geodetic cooperation that not only follows and meets concrete objectives, but has also become a permanent and self-sustaining geodetic community guaranteeing the quality, reliability, and long-term stability of the SIRGAS reference frame. The SIRGAS activities are strongly supported by the International Association of Geodesy (IAG) and the Pan-American Institute for Geography and History (PAIGH). The IGS RNAAC SIR highly appreciates all this support.
Abstract:
Habitat connectivity is important for the survival of species that occupy habitat patches too small to sustain an isolated population. A prominent example of such a species is the European bison (Bison bonasus), which occurs only in small, isolated herds and whose survival will depend on establishing larger, well-connected populations. Our goal here was to assess habitat connectivity of European bison in the Carpathians. We used an existing bison habitat suitability map and data on dispersal barriers to derive cost surfaces, representing the ability of bison to move across the landscape, and to delineate potential connections (as least-cost paths) between currently occupied and potential habitat patches. Graph theory tools were then employed to evaluate the connectivity of all potential habitat patches and their relative importance in the network. Our analysis showed that the existing bison herds in Ukraine are isolated. However, we identified several groups of well-connected habitat patches in the Carpathians that could host a large population of European bison. Our analysis also located important dispersal corridors connecting existing herds, and several promising locations for future reintroductions (especially in the Eastern Carpathians) that should have a high priority for conservation efforts. In general, our approach identifies the most important elements within a landscape mosaic for providing and maintaining the overall connectivity of different habitat networks and thus offers a robust and powerful tool for conservation planning.
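The patch-network analysis can be illustrated with a small sketch (hypothetical patch names and edge costs, using the networkx library; the actual study derived costs from a habitat suitability map and barrier data):

```python
import networkx as nx

# Habitat patches as nodes, candidate movement routes as weighted
# edges (weight = cost of moving along that route).
G = nx.Graph()
G.add_weighted_edges_from([
    ("herd_A", "patch_1", 4.0),
    ("patch_1", "patch_2", 2.5),
    ("patch_2", "herd_B", 3.0),
    ("herd_A", "herd_B", 12.0),  # direct route crosses a barrier: high cost
])

# Least-cost path between two occupied patches.
print(nx.shortest_path(G, "herd_A", "herd_B", weight="weight"))
# ['herd_A', 'patch_1', 'patch_2', 'herd_B']

# Graph-theoretic importance of each patch for overall connectivity.
print(nx.betweenness_centrality(G, weight="weight"))
```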
Abstract:
The problem of fairly distributing the capacity of a network among a set of sessions has been widely studied. In this problem, each session connects a source and a destination via a single path, and its goal is to maximize its assigned transmission rate (i.e., its throughput). Since the links of the network have limited bandwidths, some criterion has to be defined to fairly distribute their capacity among the sessions. A popular criterion is max-min fairness, which, in short, guarantees that each session i gets a rate λi such that no session s can increase λs without causing another session s' to end up with a rate λs' < λs. Many max-min fair algorithms have been proposed, both centralized and distributed. However, to our knowledge, all proposed distributed algorithms require control data to be continuously transmitted in order to recompute the max-min fair rates when needed (because none of them has mechanisms to detect convergence to the max-min fair rates). In this paper we propose B-Neck, a distributed max-min fair algorithm that is also quiescent. This means that, in the absence of changes (i.e., session arrivals or departures), once the max-min rates have been computed, B-Neck stops generating network traffic. Quiescence is a key design concept of B-Neck, because B-Neck routers are capable of detecting and notifying changes in the convergence conditions of the max-min fair rates. As far as we know, B-Neck is the first distributed max-min fair algorithm that does not require a continuous injection of control traffic to compute the rates. The correctness of B-Neck is formally proved, and extensive simulations are conducted. They show that B-Neck converges relatively fast and behaves well in the presence of sessions arriving and departing.
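The fairness criterion itself can be illustrated with a centralized progressive-filling sketch on a single link (a toy illustration of max-min fairness, not B-Neck, which is distributed and quiescent):

```python
def max_min_fair(demands, capacity):
    """Progressive filling on one link: repeatedly offer every
    unsatisfied session an equal share of the remaining capacity;
    sessions that need less keep only what they need."""
    alloc = dict.fromkeys(demands, 0.0)
    active = list(demands)
    remaining = capacity
    while active:
        share = remaining / len(active)
        # Sessions whose residual demand fits within the equal share
        # are satisfied exactly and leave the active set.
        done = [s for s in active if demands[s] - alloc[s] <= share]
        if not done:
            for s in active:          # the rest split what is left
                alloc[s] += share
            break
        for s in done:
            remaining -= demands[s] - alloc[s]
            alloc[s] = demands[s]
        active = [s for s in active if s not in done]
    return alloc

# s1 and s2 are limited by their own demands; s3 cannot increase its
# rate without pushing another session below s3's own rate.
print(max_min_fair({"s1": 1, "s2": 4, "s3": 10}, capacity=9))
# {'s1': 1, 's2': 4, 's3': 4.0}
```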
Abstract:
The demand for video content has increased rapidly in recent years as a result of the wide deployment of IPTV and the variety of services offered by network operators. One of the services that has become especially attractive to customers is real-time Video on Demand (VoD), because it offers immediate streaming of a large variety of video content. The price that operators pay for this convenience is increased traffic in their networks, which are becoming more congested due to the higher demand for VoD content and the increased quality of the videos. Therefore, one of the main objectives of this thesis is to find solutions that reduce the traffic in the core of the network while keeping the quality of service at a satisfactory level and reducing the traffic cost. The thesis proposes a hierarchical system of streaming servers that runs an algorithm for optimal placement of content according to user behavior and the state of the network. Since any algorithm for optimal content distribution reaches a limit beyond which no further improvements can be made, including the service customers themselves (the peers) in the streaming process can further reduce the network traffic. This is achieved by taking advantage of the control that the operator has, in privately managed networks, over the Set-Top Boxes placed at the clients' premises. The operator reserves certain storage and streaming capacity on the peers to store video content and stream it to other clients in order to relieve the streaming servers. Because the peers cannot completely substitute for the streaming servers, the thesis proposes a system for peer-assisted streaming. Important questions addressed in the thesis include how the system parameters and the various distributions of video content on the peers impact overall system performance. To answer these questions, the thesis proposes a precise and flexible stochastic model that takes into consideration parameters such as the uplink and storage capacity of the peers, the number of peers, the size of the video content library, the size of the content, and the content distribution scheme to estimate the benefits of peer-assisted streaming. The work also proposes an extended version of the mathematical model that includes the failure probability of the peers and their recovery time in the set of parameters. These models are used as tools for conducting thorough analyses of the peer-assisted VoD streaming system over the wide range of parameters defined in the models.
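As a back-of-the-envelope illustration of why reserved peer uplink capacity relieves the servers, consider the following sketch (all parameter values invented, and far simpler than the thesis' stochastic model: no peer failures and no content-placement scheme):

```python
n_peers = 10_000
uplink_mbps = 1.0        # uplink reserved per Set-Top Box
stream_rate_mbps = 4.0   # bitrate of one video stream
concurrent_viewers = 3_000
hit_ratio = 0.6          # fraction of requests servable from peer storage

demand = concurrent_viewers * stream_rate_mbps
# Peers can serve at most their aggregate uplink, and only the
# requests whose content they actually hold.
peer_supply = min(n_peers * uplink_mbps, demand * hit_ratio)
server_load = demand - peer_supply
print(f"server load: {server_load:.0f} Mbit/s of {demand:.0f} Mbit/s total")
```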