769 results for TCP


Relevance:

10.00%

Publisher:

Abstract:

A cell culture model of the gastric epithelial cell surface would prove useful for biopharmaceutical screening of new chemical entities and dosage forms. A successful model should exhibit tight junction formation and maintenance of differentiation and polarity. Conditions for primary culture of guinea-pig gastric mucous epithelial cell monolayers on Tissue Culture Plastic (TCP) and membrane inserts (Transwells) were established. Tight junction formation for cells grown on Transwells for three days was assessed by measurement of transepithelial resistance (TEER) and permeability of mannitol and fluorescein. Coating the polycarbonate filter with collagen IV, rather than with collagen I, enhanced tight junction formation. TEER for cells grown on Transwells coated with collagen IV was close to that obtained with intact guinea-pig gastric epithelium in vitro. Differentiation was assessed by incorporation of [3H] glucosamine into glycoprotein and by activity of NADPH oxidase, which produces superoxide. Both of these measures were greater for cells grown on filters coated with collagen I than for cells grown on TCP, but no major difference was found between cells grown on collagens I and IV. However, monolayers grown on membranes coated with collagen IV exhibited apically polarized secretion of mucin and superoxide. The proportion of cells that stained positively for mucin with periodic acid-Schiff reagent was greater than 95% for all culture conditions. Gastric epithelial monolayers grown on Transwells coated with collagen IV were able to withstand transient (30 min) apical acidification to pH 3, which was associated with a decrease in [3H] mannitol flux and an increase in TEER relative to pH 7.4. The model was used to provide the first direct demonstration that an NSAID (indomethacin) accumulated in gastric epithelial cells exposed to low apical pH. In conclusion, guinea-pig epithelial cells cultured on collagen IV represent a promising model of the gastric surface epithelium suitable for screening procedures.

Relevance:

10.00%

Publisher:

Abstract:

The contributions in this research are split into three distinct but related areas. The focus of the work is on improving the efficiency of video content distribution in networks that are liable to packet loss, such as the Internet. Initially, the benefits and limitations of content distribution using Forward Error Correction (FEC) in conjunction with the Transmission Control Protocol (TCP) are presented. Since added FEC can be used to reduce the number of retransmissions, the requirement for TCP to deal with any losses is greatly reduced. For real-time applications, delay must be kept to a minimum and retransmissions are undesirable; a balance between additional bandwidth and delays due to retransmissions must therefore be struck. This is followed by the proposal of a hybrid transport, specifically for H.264 encoded video, as a compromise between the delay-prone TCP and the loss-prone UDP. It is argued that the playback quality at the receiver often need not be 100% perfect, provided a certain level is assured. Reliable TCP is used to transmit and guarantee delivery of the most important packets. The delay associated with the proposal is measured, and the potential for use as an alternative to the conventional methods of transporting video by either TCP or UDP alone is demonstrated. Finally, a new objective measurement is investigated for assessing the playback quality of video transported using TCP. A new metric is defined to characterise the quality of playback in terms of its continuity. Using packet traces generated from real TCP connections in a lossy environment, the playback of a video can be simulated while buffer behaviour is monitored to calculate pause intensity values. Subjective tests are conducted to verify the effectiveness of the introduced metric and show that the objective and subjective scores are closely correlated.
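The abstract does not spell out how the pause intensity values are computed; purely as an illustration, a trace-driven sketch in Python that replays a simple playout buffer and combines stall time with stall count (the parameter names and the final combination of duration and frequency are assumptions, not the thesis' definition) might look like this:

# Illustrative sketch only: derives a pause-intensity-style score from a
# packet-arrival trace by replaying a simple playout buffer. The exact
# definition used in the work above may differ.
def pause_intensity(arrival_times, bytes_per_packet, playout_rate, startup_buffer):
    """arrival_times: seconds at which each packet arrives (sorted);
    playout_rate: bytes consumed per second once playback is running;
    startup_buffer: bytes required before playback (re)starts."""
    buffered, playing = 0.0, False
    clock, pause_time, pauses = 0.0, 0.0, 0
    for t in arrival_times:
        if playing:
            drained = (t - clock) * playout_rate
            if drained >= buffered:                    # buffer ran dry: a stall
                pause_time += (t - clock) - buffered / playout_rate
                pauses += 1
                buffered, playing = 0.0, False
            else:
                buffered -= drained
        else:
            pause_time += t - clock                    # waiting to (re)start
        buffered += bytes_per_packet
        if not playing and buffered >= startup_buffer:
            playing = True
        clock = t
    total = clock if clock > 0 else 1.0
    # One possible way to fold stall duration and frequency into a single score.
    return (pause_time / total) * pauses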

Relevance:

10.00%

Publisher:

Abstract:

This paper describes work conducted as a joint collaboration between the Virtual Design Team (VDT) research group at Stanford University (USA), the Systems Engineering Group (SEG) at De Montfort University (UK) and Elipsis Ltd. We describe a new docking methodology in which we combine the use of two radically different types of organizational simulation tool. The VDT simulation tool operates on a standalone computer, and employs computational agents during simulated execution of a pre-defined process model (Kunz, 1998). The other software tool, DREAMS, operates over a standard TCP/IP network, and employs human agents (real people) during a simulated execution of a pre-defined process model (Clegg, 2000).

Relevance:

10.00%

Publisher:

Abstract:

The development of a distributed information measurement and control system for optical spectral research on particle beams and plasma objects, and for running laboratory work in the Physics and Engineering Department of Petrozavodsk State University, is described. At the hardware level the system is represented by a complex of automated workplaces joined into a computer network. The key element of the system is the communication server, which supports multi-user mode, distributes resources among clients, monitors the system and provides secure access. The other system components are equipment servers (CAMAC and GPIB servers, a server for access to MCS-196 microcontrollers, and others) and the client programs that carry out data acquisition, accumulation and processing, as well as management of the course of the experiment. The network interface designed by the authors is also discussed. This interface connects measuring and executive devices to the distributed information measurement and control system via Ethernet, allowing experimental parameters to be controlled through digital devices and monitored by polling analog and digital sensors. The device firmware is written in assembly language and includes libraries for forming Ethernet, IP, TCP and UDP packets.
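The system itself runs as dedicated CAMAC/GPIB servers and microcontroller firmware; as a rough, hypothetical illustration of the equipment-server idea only, a minimal multi-client TCP server that answers text queries with sensor readings could be sketched in Python (the port number, command names and readings are all assumed):

# Hypothetical sketch of an "equipment server": accepts TCP clients and answers
# simple text commands with made-up sensor readings. Not the actual firmware or
# CAMAC/GPIB servers described above.
import socketserver

SENSORS = {"temp": lambda: 23.5, "pressure": lambda: 101.3}   # stand-in values

class EquipmentHandler(socketserver.StreamRequestHandler):
    def handle(self):
        for line in self.rfile:                        # one command per line
            name = line.decode().strip().lower()
            reader = SENSORS.get(name)
            reply = f"{name}={reader()}\n" if reader else f"ERR unknown {name}\n"
            self.wfile.write(reply.encode())

if __name__ == "__main__":
    # The threading server gives the multi-user behaviour mentioned above.
    with socketserver.ThreadingTCPServer(("0.0.0.0", 5025), EquipmentHandler) as srv:
        srv.serve_forever()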

Relevance:

10.00%

Publisher:

Abstract:

Video streaming via Transmission Control Protocol (TCP) networks has become a popular and highly demanded service, but its quality assessment in both objective and subjective terms has not been properly addressed. In this paper, a full analytic model of a no-reference objective metric, namely pause intensity (PI), for video quality assessment is presented, based on statistical analysis. The model characterizes the video playout buffer behavior in connection with the network performance (throughput) and the video playout rate. This allows for instant quality measurement and control without requiring a reference video. PI specifically addresses the need to assess quality in terms of the continuity of playout of TCP streaming videos, which cannot be properly measured by other objective metrics such as peak signal-to-noise ratio, structural similarity, and buffer underrun or pause frequency. The performance of the analytical model is rigorously verified by simulation results and subjective tests using a range of video clips. It is demonstrated that PI is closely correlated with viewers' opinion scores regardless of the vastly different compositions of its individual elements, such as pause duration and pause frequency, which jointly constitute this new quality metric. It is also shown that the correlation performance of PI is consistent and content independent. © 2013 IEEE.
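The analytic model itself is not reproduced in the abstract; the buffer behaviour it captures (filling at the network throughput, draining at the playout rate) can be illustrated with a toy numerical sketch in Python, using assumed parameter values rather than anything from the paper:

# Toy fluid model of a playout buffer: it fills at the network throughput and
# drains at the playout rate once playback has started. Parameters are
# illustrative only.
def simulate_buffer(fill_rate, drain_rate, startup_level, duration, dt=0.01):
    """All rates in the same units (e.g. kB/s); returns (paused fraction, number of pauses)."""
    buffered, playing = 0.0, False
    paused, pauses, t = 0.0, 0, 0.0
    while t < duration:
        buffered += fill_rate * dt
        if playing:
            buffered -= drain_rate * dt
            if buffered <= 0.0:                # buffer underrun: playback pauses
                buffered, playing = 0.0, False
                pauses += 1
        else:
            paused += dt
            if buffered >= startup_level:
                playing = True
        t += dt
    return paused / duration, pauses

# Example: throughput slightly below the playout rate causes repeated stalls.
print(simulate_buffer(fill_rate=90.0, drain_rate=100.0, startup_level=200.0, duration=600.0))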

Relevance:

10.00%

Publisher:

Abstract:

This work looks into video quality assessment applied to the field of telecare and proposes an alternative to the more traditionally used PSNR metric, based on the requirements of such an application. We show that the Pause Intensity metric introduced in [1] is also relevant and applicable to heterogeneous networks with a wireless last hop connected to a wired TCP backbone. We demonstrate through our emulation testbed that the impairments experienced in such a network architecture are dominated by continuity-based impairments rather than artifacts such as motion drift or blockiness. We also look into the implications of using Pause Intensity as a metric in terms of overall video latency, which is potentially problematic should the video be sent and acted upon in real time. We conclude that Pause Intensity may be used alongside the video characteristics that have been suggested as measures of overall video quality. © 2012 IEEE.

Relevance:

10.00%

Publisher:

Abstract:

In this work we deal with video streams over TCP networks and propose an alternative measurement to the widely used and accepted peak signal-to-noise ratio (PSNR), owing to the limitations of this metric in the presence of temporal errors. A test-bed was created to simulate buffer under-run in scalable video streams, and the resulting pauses were inserted into the video before it was used as the subject of subjective testing. The pause intensity metric proposed in [1] was compared with the subjective results, and it was shown that, in spite of reductions in frame rate and resolution, a correlation with pause intensity still exists. Based on these conclusions, the metric may be employed for layer selection in scalable video streams. © 2011 IEEE.
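The selection rule is not given in the abstract; one simple, hypothetical way to use the metric for layer selection (the layer names and the acceptability threshold are assumptions) is:

# Hypothetical use of the metric: pick the highest scalable-video layer whose
# measured pause intensity stays within an acceptable threshold.
def select_layer(pi_per_layer, max_acceptable_pi):
    """pi_per_layer: (layer name, measured pause intensity) pairs, ordered
    from lowest to highest quality."""
    best = None
    for name, pi in pi_per_layer:
        if pi <= max_acceptable_pi:
            best = name                    # keep upgrading while playback stays smooth
    return best

print(select_layer([("base", 0.01), ("base+EL1", 0.04), ("base+EL2", 0.20)],
                   max_acceptable_pi=0.05))            # -> "base+EL1"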

Relevance:

10.00%

Publisher:

Abstract:

Hurricanes, earthquakes, floods, and other serious natural hazards have been associated with changes in regional economic growth, income, employment, and wealth. Natural disasters are said to cause: (1) an acceleration of existing economic trends; (2) an expansion of employment and income, due to recovery operations (the so-called silver lining); and (3) an alteration in the structure of regional economic activity due to changes in intra- and inter-regional trading patterns, and technological change. Theoretical and stylized disaster simulations (Cochrane 1975; Haas, Cochrane, and Kates 1977; Petak et al. 1982; Ellson et al. 1983, 1984; Boisvert 1992; Brookshire and McKee 1992) point towards a wide scope of possible negative and long-lasting impacts upon economic activity and structure. This work examines the consequences of Hurricane Andrew on Dade County's economy. Following the work of Ellson et al. (1984), Guimaraes et al. (1993), and West and Lenze (1993; 1994), a regional econometric forecasting model (DCEFM) using a framework of "with" and "without" the hurricane is constructed and used to assess Hurricane Andrew's impact on the structure and level of economic activity in Dade County, Florida. The results of the simulation exercises show that the direct economic impact of Hurricane Andrew on Dade County is of short duration and isolated sectoral scope, with impacts generally limited to the construction, TCP (transportation, communications, and public utilities), and agricultural sectors. Regional growth and changes in income and employment reacted directly to, and within the range and direction set by, national economic activity. The simulations also lead to the conclusion that areal extent, infrastructure, and sector-specific damages or impacts, as opposed to monetary losses, are the primary determinants of a disaster's effects upon employment, income, growth, and economic structure.

Relevance:

10.00%

Publisher:

Abstract:

In recent years, the Internet has grown exponentially and become more complex. This increased complexity potentially introduces more network-level instability, yet for any end-to-end Internet connection, maintaining throughput and reliability at a certain level is very important because they directly affect the connection's normal operation. A challenging research task is therefore to improve a connection's performance by optimizing its throughput and reliability. This dissertation proposed an efficient and reliable transport-layer protocol, concurrent TCP (cTCP), an extension of the current TCP, to optimize end-to-end connection throughput and enhance end-to-end connection fault tolerance. The proposed cTCP protocol aggregates the bandwidth of multiple paths by supporting concurrent data transfer (CDT) on a single connection, where concurrent data transfer is defined as the concurrent transfer of data from local hosts to foreign hosts via two or more end-to-end paths. An RTT-based CDT mechanism, which uses a path's RTT (Round-Trip Time) to optimize CDT performance, was developed for the proposed cTCP protocol. This mechanism primarily included an RTT-based load distribution and path management scheme, used to optimize connection throughput and reliability; a congestion control and retransmission policy based on RTT was also provided. Experimental results showed that, under different network conditions, the RTT-based CDT mechanism achieved good CDT performance. Finally, a CWND-based CDT mechanism, based on a path's CWND (congestion window), was introduced to optimize CDT performance. This mechanism primarily included: a CWND-based load allocation scheme, which assigned data to paths based on their CWND to achieve bandwidth aggregation; a CWND-based path management scheme, used to optimize connection fault tolerance; and a congestion control and retransmission management policy, similar to regular TCP, applied to each path separately. Corresponding experimental results showed that this mechanism achieved near-optimal CDT performance under different network conditions.
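The dissertation's load distribution scheme is not detailed in the abstract; as an illustration of the general idea only, a Python sketch that splits a block of data across paths in inverse proportion to their RTTs (a plausible simplification, not the cTCP algorithm itself) could be:

# Illustrative only: apportion data across paths in inverse proportion to
# their measured RTTs (shorter RTT -> larger share). A simplification, not
# the cTCP load distribution scheme.
def rtt_based_shares(rtts_ms, total_bytes):
    weights = [1.0 / rtt for rtt in rtts_ms]
    scale = total_bytes / sum(weights)
    shares = [int(w * scale) for w in weights]
    shares[0] += total_bytes - sum(shares)     # give the rounding remainder to path 0
    return shares

# Example: two paths with RTTs of 20 ms and 80 ms splitting 1 MB of data.
print(rtt_based_shares([20.0, 80.0], 1_000_000))       # roughly 800 kB / 200 kB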

Relevance:

10.00%

Publisher:

Abstract:

Today, the development of domain-specific communication applications is both time-consuming and error-prone because the low-level communication services provided by existing systems and networks are primitive and often heterogeneous. Multimedia communication applications are typically built on top of low-level network abstractions such as TCP/UDP sockets, SIP (Session Initiation Protocol) and RTP (Real-time Transport Protocol) APIs. The User-centric Communication Middleware (UCM) is proposed to encapsulate the networking complexity and heterogeneity of basic multimedia and multi-party communication for upper-layer communication applications. UCM provides a unified user-centric communication service to diverse communication applications, ranging from a simple phone call and video conferencing to specialized communication applications such as disaster management and telemedicine, making the development of domain-specific communication applications easier. The UCM abstraction and API are proposed to achieve these goals. The dissertation also integrates formal methods into the UCM development process. A formal model of UCM is created using the SAM methodology; some design errors were found during model creation because the formal method forces a precise description of UCM. Using the SAM tool, the formal UCM model is translated into a Promela model. Some system properties are defined as temporal logic formulas, which are manually translated into Promela, individually integrated with the Promela model of UCM, and verified using the SPIN tool. This formal analysis helps verify system properties (for example, of the multiparty multimedia protocol) and uncover system bugs.
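The concrete properties that were verified are not listed in the abstract; a typical liveness property for multiparty call setup, written in linear temporal logic before translation into SPIN's syntax (where G and F become [] and <>), might look like the following hypothetical example:

% Hypothetical LTL property, not taken from the dissertation: every sent
% invitation is eventually followed by an established session.
\[
  \mathbf{G}\bigl(\mathit{invite\_sent} \rightarrow \mathbf{F}\,\mathit{session\_established}\bigr)
\]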

Relevance:

10.00%

Publisher:

Abstract:

The lack of analytical models that can accurately describe large-scale networked systems makes empirical experimentation indispensable for understanding complex behaviors. Research on network testbeds for testing network protocols and distributed services, including physical, emulated, and federated testbeds, has made steady progress. Although the success of these testbeds is undeniable, they fail to provide: 1) scalability, for handling large-scale networks with hundreds or thousands of hosts and routers organized in different scenarios; 2) flexibility, for testing new protocols or applications in diverse settings; and 3) inter-operability, for combining simulated and real network entities in experiments. This dissertation tackles these issues in three different dimensions. First, we present SVEET, a system that enables inter-operability between real and simulated hosts. In order to increase the scalability of the networks under study, SVEET enables time-dilated synchronization between real hosts and the discrete-event simulator. Realistic TCP congestion control algorithms are implemented in the simulator to allow seamless interactions between real and simulated hosts. SVEET is validated via extensive experiments and its capabilities are assessed through case studies involving real applications. Second, we present PrimoGENI, a system that allows a distributed discrete-event simulator, running in real time, to interact with real network entities in a federated environment. PrimoGENI greatly enhances the flexibility of network experiments, allowing a great variety of network conditions to be reproduced to examine what-if questions. Furthermore, PrimoGENI performs resource management functions, on behalf of the user, for instantiating network experiments on shared infrastructures. Finally, to further increase the scalability of network testbeds to handle large-scale high-capacity networks, we present a novel symbiotic simulation approach. We present SymbioSim, a testbed for large-scale network experimentation in which a high-performance simulation system closely cooperates with an emulation system in a mutually beneficial way. On the one hand, the simulation system benefits from incorporating traffic metadata from real applications in the emulation system to reproduce realistic traffic conditions. On the other hand, the emulation system benefits from receiving continuous updates from the simulation system to calibrate the traffic between real applications. Specific techniques that support the symbiotic approach include: 1) a model downscaling scheme that can significantly reduce the complexity of the large-scale simulation model, resulting in an efficient emulation system for modulating the high-capacity network traffic between real applications; 2) a queuing network model for the downscaled emulation system to accurately represent the network effects of the simulated traffic; and 3) techniques for reducing the synchronization overhead between the simulation and emulation systems.
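SVEET's time-dilated synchronization is described only at a high level here; the core idea of mapping wall-clock time onto a slowed-down virtual timeline can be sketched in Python as follows (the dilation factor and the class itself are assumptions for illustration, not SVEET's implementation):

# Illustrative sketch of time dilation: real elapsed time is divided by a
# dilation factor so that real hosts and a slower-than-real-time simulator can
# agree on a common virtual clock. Not SVEET's actual implementation.
import time

class DilatedClock:
    def __init__(self, dilation_factor):
        self.tdf = dilation_factor              # e.g. 10 => virtual time runs 10x slower
        self.origin = time.time()

    def virtual_now(self):
        return (time.time() - self.origin) / self.tdf

    def sleep_virtual(self, virtual_seconds):
        time.sleep(virtual_seconds * self.tdf)  # wait long enough in real time

clock = DilatedClock(dilation_factor=10.0)
clock.sleep_virtual(0.5)                        # blocks for 5 real seconds
print(round(clock.virtual_now(), 2))            # roughly 0.5 virtual seconds have elapsed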

Relevance:

10.00%

Publisher:

Abstract:

Network simulation is an indispensable tool for studying Internet-scale networks due to their heterogeneous structure, immense size and changing properties. It is crucial for network simulators to generate representative traffic, which is necessary for effectively evaluating next-generation network protocols and applications. With network simulation, we can make a distinction between foreground traffic, which is generated by the target applications the researchers intend to study and therefore must be simulated with high fidelity, and background traffic, which represents the network traffic generated by other applications and does not require significant accuracy. The background traffic has a significant impact on the foreground traffic, since it competes with the foreground traffic for network resources and can therefore drastically affect the behavior of the applications that produce the foreground traffic. This dissertation aims to provide a solution for meaningfully generating background traffic in three respects. The first is realism: realistic traffic characterization plays an important role in determining the correct outcome of simulation studies. This work starts by enhancing an existing fluid background traffic model by removing two of its unrealistic assumptions. The improved model can correctly reflect the network conditions in the reverse direction of the data traffic and can reproduce the traffic burstiness observed in measurements. The second is scalability: the trade-off between accuracy and scalability is a constant theme in background traffic modeling. This work presents a fast rate-based TCP (RTCP) traffic model, which uses analytical models to represent TCP congestion control behavior. This model outperforms other existing traffic models in that it can correctly capture the overall TCP behavior while achieving a speedup of more than two orders of magnitude over the corresponding packet-oriented simulation. The third is network-wide traffic generation: regardless of how detailed or scalable the models are, they mainly focus on how to generate traffic on a single link, an approach that cannot be extended easily to studies of more complicated network scenarios. This work presents a cluster-based spatio-temporal background traffic generation model that considers spatial and temporal traffic characteristics as well as their correlations. The resulting model can be used effectively for evaluation work in network studies.
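The RTCP model itself is not reproduced here; rate-based TCP models generally build on closed-form throughput estimates, such as the well-known Mathis et al. approximation sketched below in Python (the parameter values are assumed for the example and are unrelated to the dissertation's experiments):

# Rate-based view of TCP: estimate steady-state throughput from RTT and loss
# rate using the classic Mathis et al. approximation. A standard formula used
# for illustration, not the RTCP model described above.
from math import sqrt

def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Approximate steady-state TCP throughput in bits per second."""
    return (mss_bytes * 8 / rtt_s) * sqrt(3.0 / 2.0) / sqrt(loss_rate)

# Example: 1460-byte segments, 50 ms RTT, 0.1% loss -> roughly 9 Mbit/s.
print(f"{tcp_throughput_bps(1460, 0.05, 0.001) / 1e6:.1f} Mbit/s")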

Relevance:

10.00%

Publisher:

Abstract:

Supervisory Control & Data Acquisition (SCADA) systems are used by many industries because of their ability to manage sensors and control external hardware. The problem with commercially available systems is that they are restricted to a local network of users running proprietary software. There was no Internet development guide to give remote users outside the network control of, and access to, SCADA data and external hardware through simple user interfaces. To solve this problem, a server/client paradigm was implemented to make SCADA systems available via the Internet. Two methods were applied and studied: polling a text file as a low-end technology solution, and implementing a Transmission Control Protocol/Internet Protocol (TCP/IP) socket connection. Users were allowed to log in to a website and remotely control a network of pumps and valves interfaced to a SCADA, enabling them to sample the water quality of different reservoir wells. The results were based on the real-time performance, stability and ease of use of the remote interface and its programming, and indicated that the most feasible server to implement is the TCP/IP socket connection. For the user interface, Java applets and ActiveX controls provide the same real-time access.
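As an illustration of the socket-based approach only (the host name, port and command vocabulary here are hypothetical, not those of the actual system), a remote client could send control commands over a TCP connection like this:

# Hypothetical client for the socket-based approach: sends a text command to a
# gateway that relays it to the SCADA, then reads the reply.
import socket

def send_command(host, port, command, timeout=5.0):
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall((command + "\n").encode())
        return sock.recv(1024).decode().strip()

# Hypothetical usage: open valve 3 so that the corresponding well can be sampled.
# print(send_command("scada-gateway.example.org", 5050, "OPEN VALVE 3"))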

Relevance:

10.00%

Publisher:

Abstract:

The ability of a previously PCB-enriched microbial culture from Venice Lagoon marine sediments to dechlorinate pentachlorophenol (PCP) and 2,3,5-trichlorophenol (2,3,5-TCP) was confirmed under anaerobic conditions in microcosms consisting of site water and sediment. Dechlorination activity against the Aroclor 1254 PCB mixture was also confirmed as a control. Pentachlorophenol was degraded to 2,4,6-TCP (75.92±0.85 mol%), 3,5-DCP (6.40±0.75 mol%), and phenol (15.40±0.87 mol%). From the distribution of the different dechlorination products accumulated in the PCP-spiked cultures over time, two dechlorination pathways for PCP were proposed: (i) PCP to 2,3,4,6-TeCP, then to 2,4,6-TCP, through the removal of both meta double-flanked chlorine substituents (main pathway); (ii) alternatively, PCP to 2,3,5,6-TeCP, 2,3,5-TCP, 3,5-DCP, then phenol, through the removal of the para double-flanked chlorine, followed by the ortho single-flanked chlorines, and finally the meta unflanked chlorines (minor pathway). Removal of meta double-flanked chlorines is thus preferred over all other substituents. 2,3,5-TCP, which completely lacks double-flanked chlorines, was degraded to 3,5-DCP through removal of the ortho single-flanked chlorine, with a 99.6% reduction in the initial concentration of 2,3,5-TCP by week 14. 16S rRNA PCR-DGGE using Chloroflexi-specific primers revealed different roles for the two microorganisms VLD-1 and VLD-2, previously identified as dechlorinators in the Aroclor 1254 PCB-enriched community, in the dehalogenation of chlorophenols. VLD-1 was observed in both PCP- and TCP-dechlorinating communities, whereas VLD-2 was observed only in TCP-dechlorinating communities. This indicates that VLD-1 and VLD-2 may both dechlorinate ortho single-flanked chlorines, but only VLD-1 is able to remove double-flanked meta or para chlorines.

Relevance:

10.00%

Publisher:

Abstract:

Cloud computing enables independent end users and applications to share data and pooled resources, possibly located in geographically distributed data centers, in a fully transparent way. This need is particularly felt by scientific applications, which must exploit distributed resources in an efficient and scalable way to process large amounts of data. This paper proposes an open solution to deploy a Platform as a Service (PaaS) over a set of multi-site data centers by applying open-source virtualization tools to facilitate operation among virtual machines while optimizing the usage of distributed resources. An experimental testbed is set up in an OpenStack environment to evaluate different types of TCP sample connections, demonstrating the functionality of the proposed solution and providing throughput measurements in relation to relevant design parameters.
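The testbed's measurement tooling is not described in the abstract; a minimal way to time a bulk TCP transfer between two virtual machines, with placeholder host, port and payload size, could be sketched in Python as:

# Minimal throughput probe: stream a fixed number of bytes to a sink service on
# a remote VM and time the transfer. Host, port and payload size are
# placeholders, not values from the experimental testbed.
import socket, time

def measure_throughput_mbps(host, port, total_bytes=10_000_000, chunk=64 * 1024):
    payload = b"\x00" * chunk
    sent = 0
    start = time.perf_counter()
    with socket.create_connection((host, port)) as sock:
        while sent < total_bytes:
            sock.sendall(payload)
            sent += chunk
    elapsed = time.perf_counter() - start
    return sent * 8 / elapsed / 1e6            # Mbit/s

# print(measure_throughput_mbps("vm-in-remote-datacenter.example.org", 9000))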