781 results for watermarking protocol
Abstract:
Feature-based image watermarking schemes, which aim to survive various geometric distortions, have attracted great attention in recent years. Existing schemes have shown robustness against rotation, scaling, and translation, but few are resistant to cropping, nonisotropic scaling, random bending attacks (RBAs), and affine transformations. Seo and Yoo presented a geometrically invariant image watermarking scheme based on affine covariant regions (ACRs), which provides a certain degree of robustness. To further enhance the robustness, we propose a new image watermarking scheme, built on Seo's work, that is insensitive to geometric distortions as well as common image processing operations. Our scheme comprises three components: 1) a feature selection procedure based on a graph-theoretical clustering algorithm is applied to obtain a set of stable, nonoverlapping ACRs; 2) for each chosen ACR, local normalization and orientation alignment are performed to generate a geometrically invariant region, which markedly improves the robustness of the proposed watermarking scheme; and 3) to prevent the degradation in image quality caused by normalization and inverse normalization, indirect inverse normalization is adopted to achieve a good compromise between imperceptibility and robustness. Experiments are carried out on a set of 100 images collected from the Internet, and the preliminary results demonstrate that the developed method outperforms several representative image watermarking approaches in terms of robustness.
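The region-selection step lends itself to a short illustration. The following Python sketch selects stable, non-overlapping regions by a greedy pass in decreasing order of stability; the Region fields and the greedy rule are hypothetical stand-ins, not the paper's actual graph-theoretical clustering algorithm.

```python
from dataclasses import dataclass

@dataclass
class Region:
    x: float          # centre x (hypothetical representation of an ACR)
    y: float          # centre y
    radius: float     # region extent
    stability: float  # detector stability score

def overlaps(a: Region, b: Region) -> bool:
    """Two circular regions overlap if their centres are closer than the sum of radii."""
    return (a.x - b.x) ** 2 + (a.y - b.y) ** 2 < (a.radius + b.radius) ** 2

def select_regions(candidates: list[Region]) -> list[Region]:
    """Greedy stand-in for the clustering step: keep the most stable regions
    that do not overlap any already-kept region."""
    chosen: list[Region] = []
    for r in sorted(candidates, key=lambda r: r.stability, reverse=True):
        if all(not overlaps(r, c) for c in chosen):
            chosen.append(r)
    return chosen
```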
Abstract:
Label-free electrochemiluminescence (ECL) DNA detection based on the catalytic oxidation of guanine and adenine bases at a tris(2,2'-bipyridyl)ruthenium(II) [Ru(bpy)₃²⁺]-modified glassy carbon (GC) electrode is demonstrated in this work. The modified GC electrode was prepared by casting a carbon nanotube (CNT)/Nafion/Ru(bpy)₃²⁺ composite film on the electrode surface. ECL signals of double-stranded DNA and its thermally denatured counterpart can be distinctly discriminated using cyclic voltammetry (CV) at low concentration (3.04 × 10⁻⁸ mol/L for salmon testes DNA). Most importantly, sensitive single-base mismatch detection of a p53 gene sequence segment was realized at 3.93 × 10⁻¹⁰ mol/L under CV stimulation (the ECL signal of C/A-mismatched DNA oligonucleotides was 1.5-fold higher than that of fully base-paired DNA oligonucleotides). Label-free operation, high sensitivity, and simplicity in single-base mismatch discrimination are the main advantages of the present ECL technique for DNA detection over traditional DNA sensors.
Abstract:
A new set-up was constructed for capillary isoelectric focusing (CIEF) involving a sampling capillary fixed as a bypass to the separation capillary. Sample solutions were introduced from the sampling capillary into a previously established pH gradient. Besides performing conventional CIEF, the system was used to separate ampholytic compounds with isoelectric points (pIs) beyond the pH gradient. This method was termed pH-gradient-driven electrophoresis (PGDE), and basic mathematical expressions were derived to describe its dynamics. Proteins such as lysozyme, cytochrome c, and pepsin, with pIs above 10 or below 3, were separated in a pH gradient provided by Pharmalyte (pH 3-10). Finally, the protocol convincingly exhibited its potential in the separation of a chicken egg white solution.
Abstract:
Low-power and lossy networks (LLNs) are usually composed of static nodes, but the increasing demand for mobility in mobile robotics and dynamic environments raises the question of how a routing protocol for such networks, such as RPL, would perform if a mobile sink were deployed. In this paper, we investigate and evaluate the behaviour of the RPL protocol in fixed- and mobile-sink environments with respect to network metrics such as latency, packet delivery ratio (PDR), and energy consumption. Extensive simulations using the Instant Contiki simulator show significant performance differences between fixed- and mobile-sink environments. Fixed-sink LLNs performed better in terms of average power consumption, latency, and packet delivery ratio. The results also demonstrate that RPL is sensitive to mobility, which increases the number of isolated nodes.
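As a hedged illustration of how such metrics are typically derived from simulation traces, the sketch below computes PDR and average latency from send/receive timestamp records; the record format is an assumption for illustration, not Cooja's actual log format.

```python
def evaluate(sent: dict[int, float], received: dict[int, float]) -> tuple[float, float]:
    """sent/received map packet sequence numbers to timestamps (seconds).
    Returns (packet delivery ratio, average latency in seconds)."""
    delivered = [seq for seq in sent if seq in received]
    pdr = len(delivered) / len(sent) if sent else 0.0
    avg_latency = (sum(received[s] - sent[s] for s in delivered) / len(delivered)
                   if delivered else float("nan"))
    return pdr, avg_latency

# Example: 3 packets sent, 2 delivered -> PDR = 2/3, average latency = 0.55 s
print(evaluate({1: 0.0, 2: 1.0, 3: 2.0}, {1: 0.4, 3: 2.7}))
```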
Abstract:
Thatcher, Rhys, and Alan Batterham, 'Development and validation of a sport-specific exercise protocol for elite youth soccer players', Journal of Sports Medicine and Physical Fitness, 44(1) (2004), pp. 15-22.
Abstract:
Formal correctness of complex multi-party network protocols can be difficult to verify. While models of specific fixed compositions of agents can be checked against design constraints, protocols that admit arbitrarily many compositions of agents, such as the chaining of proxies or the peering of routers, are more difficult to verify because they represent potentially infinite state spaces and may exhibit emergent behaviors that do not materialize under particular fixed compositions. We address this challenge by developing an algebraic approach that reduces arbitrary compositions of network agents to a compact, canonical representation that is behaviorally equivalent with respect to a given correctness property and amenable to mechanical verification. Our approach consists of an algebra and a set of property-preserving rewrite rules for the Canonical Homomorphic Abstraction of Infinite Network protocol compositions (CHAIN). Using CHAIN, an expression over our algebra (i.e., a set of configurations of network protocol agents) can be reduced to another behaviorally equivalent expression (i.e., a smaller set of configurations). Repeated application of such rewrite rules produces a canonical expression that can be checked mechanically. We demonstrate our approach by characterizing deadlock-prone configurations of HTTP agents, as well as by establishing useful properties of an overlay protocol for scheduling MPEG frames and of a protocol for Web intra-cache consistency.
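The reduction idea can be sketched as term rewriting over sequences of agents. The rule below (two adjacent proxies behave like one with respect to some property) is an illustrative placeholder, not CHAIN's actual property-preserving rule set.

```python
def reduce_composition(chain: tuple[str, ...],
                       rules: dict[tuple[str, ...], tuple[str, ...]]) -> tuple[str, ...]:
    """Apply rewrite rules left-to-right until a fixpoint (the canonical form)."""
    changed = True
    while changed:
        changed = False
        for pattern, replacement in rules.items():
            for i in range(len(chain) - len(pattern) + 1):
                if chain[i:i + len(pattern)] == pattern:
                    chain = chain[:i] + replacement + chain[i + len(pattern):]
                    changed = True
                    break
            if changed:
                break
    return chain

# Hypothetical rule: proxy . proxy is behaviorally equivalent to proxy
rules = {("proxy", "proxy"): ("proxy",)}
print(reduce_composition(("client", "proxy", "proxy", "proxy", "server"), rules))
# -> ('client', 'proxy', 'server')
```

Each rule application shortens the expression, so the rewriting terminates, and the resulting canonical form is small enough to model-check.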
Abstract:
We study the impact of heterogeneity of nodes, in terms of their energy, in wireless sensor networks that are hierarchically clustered. In these networks, some of the nodes become cluster heads, aggregate the data of their cluster members, and transmit it to the sink. We assume that a percentage of the population of sensor nodes is equipped with additional energy resources; this is a source of heterogeneity which may result from the initial setting or arise as the operation of the network evolves. We also assume that the sensors are randomly (uniformly) distributed and not mobile, and that the coordinates of the sink and the dimensions of the sensor field are known. We show that the behavior of such sensor networks becomes very unstable once the first node dies, especially in the presence of node heterogeneity. Classical clustering protocols assume that all nodes are equipped with the same amount of energy and, as a result, cannot take full advantage of node heterogeneity. We propose SEP, a heterogeneity-aware protocol that prolongs the time interval before the death of the first node (which we refer to as the stability period), a property crucial for many applications in which feedback from the sensor network must be reliable. SEP is based on weighted election probabilities for each node to become cluster head according to its remaining energy. We show by simulation that SEP always prolongs the stability period, and achieves higher average throughput, compared with current clustering protocols. We conclude by studying the sensitivity of SEP to heterogeneity parameters capturing energy imbalance in the network, and find that SEP yields a longer stability period for higher values of the extra energy brought by the more powerful nodes.
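SEP's weighted election can be made concrete. With a fraction m of "advanced" nodes carrying a times more energy than normal nodes, the election probabilities are scaled so that the population average stays at the optimal probability p_opt; the sketch below uses this SEP weighting together with a LEACH-style rotation threshold.

```python
import random

def election_probs(p_opt: float, m: float, a: float) -> tuple[float, float]:
    """Weighted election probabilities for normal and advanced nodes:
    the weighted average over the population remains p_opt."""
    p_nrm = p_opt / (1 + a * m)
    p_adv = p_opt * (1 + a) / (1 + a * m)
    return p_nrm, p_adv

def threshold(p: float, r: int) -> float:
    """Rotation threshold for round r, applied to nodes not yet elected
    in the current epoch of 1/p rounds."""
    return p / (1 - p * (r % round(1 / p)))

# 20% advanced nodes with 3x extra energy; p_nrm = 0.0625, p_adv = 0.25
p_nrm, p_adv = election_probs(p_opt=0.1, m=0.2, a=3.0)
becomes_head = random.random() < threshold(p_adv, r=4)  # advanced node, round 4
```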
Abstract:
Wireless sensor networks have recently emerged as enablers of important applications such as environmental, chemical, and nuclear sensing systems. Such applications have sophisticated spatial-temporal semantics that set them apart from traditional wireless networks. For example, the computation of temperature averaged over the sensor field must take into account local densities; this is crucial because otherwise the estimated average temperature can be biased by over-sampling areas where many more sensors exist. Thus, we envision that a fundamental service a wireless sensor network should provide is the estimation of local densities. In this paper, we propose a lightweight probabilistic density inference protocol, which we call DIP, that allows each sensor node to implicitly estimate its neighborhood size without the explicit exchange of node identifiers required by existing density discovery schemes. The theoretical basis of DIP is a probabilistic analysis relating the number of sensor nodes contending in the neighborhood of a node to the level of contention measured by that node. Extensive simulations confirm the premise of DIP: it can provide statistically reliable and accurate estimates of local density at very low energy cost and in constant running time. We demonstrate how applications could be built on top of our DIP-based service by computing density-unbiased statistics from the estimated local densities.
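One way to see the premise: under a slotted contention model where each of n neighbours transmits in a slot independently with probability p, the idle-slot fraction estimates (1 - p)^n, which can be inverted for n. This is an illustrative estimator in the spirit of the abstract, not DIP's exact analysis.

```python
import math

def estimate_neighbors(idle_slots: int, total_slots: int, p: float) -> float:
    """Invert P(slot idle) = (1 - p)^n to estimate neighbourhood size n;
    idle_slots/total_slots is the locally measured idle fraction."""
    q = idle_slots / total_slots
    return math.log(q) / math.log(1 - p)

# 10 neighbours at p = 0.05 yield an idle fraction near 0.599, so:
print(round(estimate_neighbors(idle_slots=599, total_slots=1000, p=0.05), 1))  # ~10.0
```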
Abstract:
We leverage the buffering capabilities of end-systems to achieve scalable, asynchronous delivery of streams in a peer-to-peer environment. Unlike existing cache-and-relay schemes, we propose a distributed prefetching protocol in which peers prefetch and store portions of the streaming media ahead of their playout time, not only turning themselves into possible sources for other peers but also allowing their prefetched data to carry them through the departure of their source-peer. This stands in sharp contrast to existing cache-and-relay schemes, where the departure of the source-peer forces its peer children to go to the original server, disrupting their service and increasing server and network load. Through mathematical analysis and simulations, we show the effectiveness of maintaining such asynchronous multicasts from several source-peers to other child peers, and the efficacy of prefetching in the face of peer departures. We confirm the scalability of our dPAM protocol, which is shown to significantly reduce server load.
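The prefetching idea reduces to keeping a buffer window ahead of the playout point, so a peer can keep playing while it locates a replacement source. The sketch below is a hypothetical buffer model for intuition only, not dPAM's actual protocol logic.

```python
class PrefetchBuffer:
    """Holds stream content in [playout, playout + window) seconds ahead."""

    def __init__(self, window: float):
        self.window = window
        self.playout = 0.0          # current playback position (seconds)
        self.buffered_until = 0.0   # contiguous prefix already fetched

    def tick(self, dt: float, source_alive: bool, fetch_rate: float) -> bool:
        """Advance playback by dt; prefetch while the source is alive.
        Returns False if playback stalls (buffer exhausted)."""
        if source_alive:
            target = self.playout + self.window
            self.buffered_until = min(target, self.buffered_until + fetch_rate * dt)
        if self.buffered_until < self.playout + dt:
            return False  # source departed and the prefetched window ran out
        self.playout += dt
        return True
```

A larger window thus trades buffer space for tolerance to longer source-peer outages.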
Abstract:
The popularity of TCP/IP, coupled with the promise of high-speed communication using Asynchronous Transfer Mode (ATM) technology, has prompted the network research community to propose a number of techniques to adapt TCP/IP to ATM network environments. ATM offers Available Bit Rate (ABR) and Unspecified Bit Rate (UBR) services for best-effort traffic, such as conventional file transfer. However, recent studies have shown that TCP/IP, when implemented over ABR or UBR, suffers serious performance degradation, especially when the utilization of network resources (such as switch buffers) is high. Proposed techniques that attempt to patch up TCP/IP over ATM (switch-level enhancements, for example) have had limited success in alleviating this problem. The major reason for TCP/IP's poor performance over ATM has been consistently attributed to packet fragmentation, a consequence of ATM's 53-byte cell-oriented switching architecture. In this paper, we present a new transport protocol, TCP Boston, that turns ATM's 53-byte cell-oriented switching architecture into an advantage for TCP/IP. At the core of TCP Boston is the Adaptive Information Dispersal Algorithm (AIDA), an efficient encoding technique that allows for dynamic redundancy control. AIDA makes TCP/IP's performance less sensitive to cell losses, ensuring graceful degradation when faced with congested resources. In this paper, we introduce AIDA and overview the main features of TCP Boston. We present detailed simulation results showing the superiority of our protocol compared with other adaptations of TCP/IP over ATM. In particular, we show that TCP Boston improves TCP/IP's performance over ATM for both network-centric metrics (e.g., effective throughput) and application-centric metrics (e.g., response time).
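The core of dynamic redundancy control can be illustrated with a binomial calculation: given an observed cell-loss probability, pick the smallest number of dispersed fragments n >= m such that at least m of them arrive with the desired probability. This is a sketch of the dispersal-sizing decision only, not AIDA's encoding itself.

```python
from math import comb

def p_reconstruct(n: int, m: int, loss: float) -> float:
    """Probability that at least m of n fragments survive independent loss."""
    return sum(comb(n, k) * (1 - loss) ** k * loss ** (n - k) for k in range(m, n + 1))

def fragments_needed(m: int, loss: float, target: float = 0.999) -> int:
    """Smallest n >= m whose reconstruction probability meets the target."""
    n = m
    while p_reconstruct(n, m, loss) < target:
        n += 1
    return n

# With 5% cell loss, how many fragments so that any 8 suffice w.p. >= 99.9%?
print(fragments_needed(m=8, loss=0.05))  # -> 12
```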
Abstract:
Formal tools like finite-state model checkers have proven useful in verifying the correctness of systems of bounded size and in hardening single system components against arbitrary inputs. However, conventional applications of these techniques are not well suited to characterizing the emergent behaviors of large compositions of processes. In this paper, we present a methodology by which arbitrarily large compositions of components can, if sufficient conditions are proven concerning properties of small compositions, be modeled and completely verified by performing formal verifications upon only a finite set of compositions. The sufficient conditions take the form of reductions: claims that particular sequences of components are causally indistinguishable from shorter sequences of components. We show how this methodology can be applied to a variety of network protocol applications, including two features of the HTTP protocol, a simple active-networking applet, and a proposed web cache consistency algorithm. We also discuss its applicability to framing protocol design goals and to representing systems that employ non-model-checking verification methodologies. Finally, we briefly discuss how we hope to broaden this methodology to more general topological compositions of network applications.
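Concretely, once a reduction such as "a run of adjacent proxies is causally indistinguishable from a single proxy" is proven, arbitrarily long compositions collapse to a small set of canonical ones, and only those need to be model-checked. The reduction below is hypothetical, chosen for illustration.

```python
def reduce_to_canonical(seq: tuple[str, ...]) -> tuple[str, ...]:
    """Collapse any run of adjacent 'proxy' agents to a single 'proxy'
    (a hypothetical reduction assumed proven for some property)."""
    out: list[str] = []
    for agent in seq:
        if not (agent == "proxy" and out and out[-1] == "proxy"):
            out.append(agent)
    return tuple(out)

# Arbitrarily long proxy chains all map to one canonical composition, so
# model-checking ('client', 'proxy', 'server') covers every chain length.
print(reduce_to_canonical(("client",) + ("proxy",) * 7 + ("server",)))
# -> ('client', 'proxy', 'server')
```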
Abstract:
We present a transport protocol whose goal is to reduce power consumption without compromising the delivery requirements of applications. To meet this goal of energy efficiency, our transport protocol (1) contains mechanisms to balance end-to-end versus local retransmissions; (2) minimizes acknowledgment traffic using receiver-regulated, rate-based flow control combined with selective acknowledgments and in-network caching of packets; and (3) aggressively seeks to avoid any congestion-induced packet loss. Within a recently developed ultra-low-power multi-hop wireless network system, extensive simulations and experimental results demonstrate that our transport protocol meets its goal of preserving the energy efficiency of the underlying network.
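The balance between end-to-end and local retransmission can be illustrated with a standard expected-cost calculation over an h-hop path with per-hop loss probability p; this is textbook analysis, not the paper's specific mechanism.

```python
def hop_by_hop_cost(h: int, p: float) -> float:
    """Expected transmissions when each hop retransmits locally until success:
    each hop is a geometric trial with success probability 1 - p."""
    return h / (1 - p)

def end_to_end_cost(h: int, p: float) -> float:
    """Expected transmissions when only the source retransmits: each attempt
    costs h sends and succeeds with probability (1 - p)**h."""
    return h / (1 - p) ** h

# Over 6 hops at 10% per-hop loss, local recovery is markedly cheaper:
print(hop_by_hop_cost(6, 0.1))   # ~6.67 transmissions
print(end_to_end_cost(6, 0.1))   # ~11.29 transmissions
```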
Abstract:
Within a recently developed low-power ad hoc network system, we present a transport protocol (JTP) whose goal is to reduce power consumption without trading off the delivery requirements of applications. JTP has the following features: it is lightweight, with end-nodes controlling in-network actions by encoding delivery requirements in packet headers; it enables applications to specify a range of reliability requirements, thereby allocating the right energy budget to packets; it minimizes feedback control traffic from the destination by varying its frequency based on delivery requirements and the stability of the network; it minimizes energy consumption by implementing in-network caching and increasing the chances that data retransmission requests from destinations "hit" these caches, thus avoiding costly source retransmissions; and it fairly allocates bandwidth among flows by backing off the sending rate of a source to account for in-network retransmissions on its behalf. Analysis and extensive simulations demonstrate the energy gains of JTP over one-size-fits-all transport protocols.
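Encoding delivery requirements in packet headers can be sketched with a fixed binary layout; the fields and widths below are hypothetical illustrations, not JTP's actual header format.

```python
import struct

# Hypothetical 4-byte header: 1-byte reliability class (0 = best effort,
# 255 = fully reliable), 1-byte max in-network retransmissions, 2-byte sequence.
HEADER = struct.Struct("!BBH")

def pack_header(reliability: int, max_retx: int, seq: int) -> bytes:
    return HEADER.pack(reliability, max_retx, seq)

def unpack_header(data: bytes) -> tuple[int, int, int]:
    return HEADER.unpack(data[:HEADER.size])

hdr = pack_header(reliability=128, max_retx=3, seq=42)
print(unpack_header(hdr))  # (128, 3, 42)
```

In-network nodes can then read the reliability class directly from the header to decide whether a packet is worth caching or locally retransmitting.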
Abstract:
Background: The eliciting dose (ED) for a peanut allergic reaction in 5% of the peanut-allergic population, the ED05, is 1.5 mg of peanut protein. This ED05 was derived from oral food challenges (OFCs) that use graded, incremental doses administered at fixed time intervals. Individual patients' threshold doses were used to generate population dose-distribution curves using probability distributions, from which the ED05 was then determined. It is important to clinically validate that this dose is predictive of the allergenic response in a further unselected group of peanut-allergic individuals. Methods/Aims: This is a multi-centre study involving three national-level referral and teaching centres (Cork University Hospital, Ireland; Royal Children's Hospital, Melbourne, Australia; and Massachusetts General Hospital, Boston, USA). The study is now in progress and will continue until each centre has recruited 125 participants, for a total of 375 participants aged 1-18 years, recruited during routine allergy appointments. The aim is to assess the precision of the predicted ED05 using a single dose (6 mg of peanut = 1.5 mg of peanut protein) in the form of a cookie. Validated Food Allergy Quality of Life Questionnaires (FAQLQ) will be self-administered prior to the OFC and 1 month after the challenge to assess the impact of a single-dose OFC on quality of life. Serological and cell-based in vitro studies will be performed. Conclusion: Validation of the ED05 threshold for allergic reactions in peanut-allergic subjects has potential value for public health measures. The single-dose OFC, based upon statistical dose-distribution analysis of past challenge trials, promises an efficient approach to identifying the most highly sensitive patients within any given food-allergic population.
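The dose-distribution step can be illustrated: if individual threshold doses are modelled as log-normally distributed, the ED05 is simply the 5th percentile of the fitted distribution. The parameters below are illustrative assumptions, not the fitted values behind the study's 1.5 mg figure.

```python
import math
from statistics import NormalDist

def ed_percentile(mu: float, sigma: float, q: float = 0.05) -> float:
    """q-th percentile of a log-normal threshold distribution of individual
    eliciting doses: exp(mu + sigma * Phi^{-1}(q)), in mg of peanut protein."""
    return math.exp(mu + sigma * NormalDist().inv_cdf(q))

# Illustrative parameters only: a median threshold of 100 mg with sigma = 2.55
# on the log scale gives an ED05 of about 1.5 mg.
print(round(ed_percentile(mu=math.log(100), sigma=2.55), 2))  # ~1.51
```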