838 results for End-to-side neurorrhaphy


Relevance:

100.00%

Publisher:

Abstract:

Policy-based management is considered an effective approach to addressing the challenges of resource management in large, complex networks. Within the IU-ATC QoS Frameworks project, a policy-based network management framework, the Converged Networks QoS Framework (CNQF), is being developed to provide context-aware, end-to-end QoS control and resource management in converged next-generation networks. CNQF is designed to provide homogeneous, transparent QoS control over heterogeneous access technologies by means of distributed functional entities that coordinate the resources of the transport network through policy-driven decisions. In this paper, we present a measurement-based evaluation of policy-driven QoS management based on the CNQF architecture, using real traffic flows on an experimental testbed. A Java-based implementation of the CNQF Resource Management Subsystem is deployed on the testbed, and the experimental results validate the framework's operation for policy-based QoS management of real traffic flows.
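As a rough illustration of the policy-driven decision model described above (a minimal sketch only; the abstract does not specify CNQF's internals, and every name and threshold below is a hypothetical stand-in), a condition-action policy rule set for QoS control might look like this:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class FlowContext:
    """Context attributes a policy decision point might evaluate."""
    traffic_class: str   # e.g. "video", "voice", "best-effort"
    link_load: float     # fraction of transport-link capacity in use

@dataclass
class Policy:
    condition: Callable[[FlowContext], bool]
    action: str          # queue/marking decision to enforce

# Hypothetical ordered rule set: keep video in the premium queue only
# while the transport link is lightly loaded, otherwise downgrade it.
policies = [
    Policy(lambda c: c.traffic_class == "video" and c.link_load < 0.8,
           "assign-premium-queue"),
    Policy(lambda c: c.traffic_class == "video", "assign-assured-queue"),
    Policy(lambda c: True, "assign-best-effort-queue"),
]

def decide(ctx: FlowContext) -> str:
    """Return the action of the first matching policy."""
    for p in policies:
        if p.condition(ctx):
            return p.action
    return "assign-best-effort-queue"

print(decide(FlowContext("video", 0.5)))  # -> assign-premium-queue
print(decide(FlowContext("video", 0.9)))  # -> assign-assured-queue
```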

Relevance:

100.00%

Publisher:

Abstract:

This paper presents and investigates a dynamic buffer management scheme for QoS control of multimedia services in a 3.5G wireless system, i.e., High Speed Downlink Packet Access (HSDPA). HSDPA was introduced to enhance UMTS for high-speed packet-switched services. With HSDPA, packet scheduling and HARQ mechanisms in the base station require data buffering at the air interface, thus introducing a potential bottleneck to end-to-end communication. Hence, for multimedia services with multiplexed parallel diverse flows, such as video and data in the same end-user session, buffer management schemes in the base station are essential to support end-to-end QoS provision. In this paper, we propose a dynamic buffer management scheme for HSDPA multimedia sessions with aggregated real-time and non-real-time flows. The end-to-end performance impact of the scheme is evaluated, via extensive HSDPA simulations, with an example multimedia session comprising a real-time streaming flow concurrent with a TCP-based non-real-time flow. Results demonstrate that the scheme can guarantee the end-to-end QoS of the real-time streaming flow whilst simultaneously protecting the non-real-time flow from starvation, resulting in improved end-to-end throughput performance.
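The abstract does not detail the scheme's internals, so the following is only a plausible sketch of dynamically partitioning a shared base-station buffer between a real-time (RT) and a non-real-time (NRT) flow; the capacity, reserve threshold, and class names are illustrative assumptions:

```python
class DynamicBuffer:
    """Shared buffer split between real-time (RT) and non-real-time (NRT)
    packets by a movable partition, so neither flow is starved."""

    def __init__(self, capacity=100, nrt_reserve=10):
        self.capacity = capacity        # total buffer slots
        self.nrt_reserve = nrt_reserve  # slots always kept for NRT traffic
        self.rt, self.nrt = [], []

    def rt_limit(self):
        # RT may grow into unused space but never into the NRT reserve.
        return self.capacity - max(self.nrt_reserve, len(self.nrt))

    def enqueue(self, packet, is_rt):
        queue, limit = (
            (self.rt, self.rt_limit()) if is_rt
            else (self.nrt, self.capacity - len(self.rt))
        )
        if len(queue) < limit:
            queue.append(packet)
            return True
        return False  # packet dropped: its partition is full
```

The design choice mirrored here is the one the abstract emphasizes: the RT flow gets priority access to buffer space, but a guaranteed floor (`nrt_reserve`) prevents TCP-based NRT traffic from being starved.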

Relevance:

100.00%

Publisher:

Abstract:

We examine the impact of transmit antenna selection with receive generalized selection combining (TAS/GSC) for cognitive decode-and-forward (DF) relaying in Nakagami-m fading channels. We select the single transmit antenna at the secondary transmitter that maximizes the receive signal-to-noise ratio (SNR), and combine the subset of receive antennas with the largest SNRs at the secondary receiver. To assess the performance, we first derive the probability density function and cumulative distribution function of the end-to-end SNR using the moment generating function. We then derive a new exact closed-form expression for the ergodic capacity. More importantly, by deriving an asymptotic expression for the high-SNR approximation of the ergodic capacity, we gain insight into the high-SNR slope and the power offset. Our results show that the high-SNR slope is 1/2 under the proportional interference power constraint, whereas under the fixed interference power constraint the high-SNR slope is zero.
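For context, the high-SNR slope and power offset referenced above are the standard parameters of the affine high-SNR expansion of the ergodic capacity (a textbook definition, not a result specific to this paper); with transmit SNR \rho:

\[
C(\rho) \approx S_\infty \big( \log_2 \rho - L_\infty \big), \qquad
S_\infty = \lim_{\rho \to \infty} \frac{C(\rho)}{\log_2 \rho}, \qquad
L_\infty = \lim_{\rho \to \infty} \left( \log_2 \rho - \frac{C(\rho)}{S_\infty} \right).
\]

A slope of S_\infty = 1/2 thus means the capacity grows by half a bit/s/Hz for every doubling of the SNR, while a zero slope means the capacity saturates at high SNR.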

Relevance:

100.00%

Publisher:

Abstract:

Physical transceivers have hardware impairments that create distortions which degrade the performance of communication systems. The vast majority of technical contributions in the area of relaying neglect hardware impairments and, thus, assume ideal hardware. Such approximations make sense in low-rate systems, but can lead to very misleading results when analyzing future high-rate systems. This paper quantifies the impact of hardware impairments on dual-hop relaying, for both amplify-and-forward and decode-and-forward protocols. The outage probability (OP) in these practical scenarios is a function of the effective end-to-end signal-to-noise-and-distortion ratio (SNDR). This paper derives new closed-form expressions for the exact and asymptotic OPs, accounting for hardware impairments at the source, relay, and destination. A similar analysis for the ergodic capacity is also pursued, resulting in new upper bounds. We assume that both hops are subject to independent but non-identically distributed Nakagami-m fading. This paper validates that the performance loss is small at low rates, but otherwise can be very substantial. In particular, it is proved that for high signal-to-noise ratio (SNR), the end-to-end SNDR converges to a deterministic constant, coined the SNDR ceiling, which is inversely proportional to the level of impairments. This stands in contrast to the ideal hardware case in which the end-to-end SNDR grows without bound in the high-SNR regime. Finally, we provide fundamental design guidelines for selecting hardware that satisfies the requirements of a practical relaying system.
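To see intuitively why a ceiling arises (an illustrative single-link case under a common impairment model, not the paper's full dual-hop analysis), let \rho be the transmit SNR, |h|^2 the channel gain, and \kappa the aggregate impairment level of the transceiver pair. Because the distortion power scales with the signal power, the SNDR saturates:

\[
\mathrm{SNDR} = \frac{\rho \, |h|^2}{\kappa^2 \rho \, |h|^2 + 1} \;\longrightarrow\; \frac{1}{\kappa^2} \quad \text{as } \rho \to \infty,
\]

which is consistent with the abstract's statement that the ceiling is inversely proportional to the level of impairments, and with the unbounded SNR growth recovered in the ideal-hardware case \kappa = 0.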

Relevance:

100.00%

Publisher:

Abstract:

This work investigates the end-to-end performance of randomized distributed space-time codes with complex Gaussian distribution, when employed in a wireless relay network. The relaying nodes are assumed to adopt a decode-and-forward strategy and transmissions are affected by small and large scale fading phenomena. Extremely tight, analytical approximations of the end-to-end symbol error probability and of the end-to-end outage probability are derived and successfully validated through Monte-Carlo simulation. For the high signal-to-noise ratio regime, a simple, closed-form expression for the symbol error probability is further provided.
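As a generic illustration of the Monte Carlo validation methodology the abstract mentions (the channel model and all parameters below are arbitrary stand-ins, not the paper's system), the outage probability of a link subject to small-scale Rayleigh fading and large-scale log-normal shadowing can be estimated as follows:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 1_000_000
snr_db, threshold_db, shadow_sigma_db = 10.0, 3.0, 4.0

# Small-scale Rayleigh fading: power gain is exponential with unit mean.
fading = rng.exponential(1.0, n_trials)
# Large-scale log-normal shadowing, expressed in dB.
shadowing_db = rng.normal(0.0, shadow_sigma_db, n_trials)

snr = 10 ** ((snr_db + shadowing_db) / 10) * fading
p_out = np.mean(10 * np.log10(snr) < threshold_db)
print(f"Estimated outage probability: {p_out:.4f}")
```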

Relevance:

100.00%

Publisher:

Abstract:

The exponential growth in user and application data entails new means for providing fault tolerance and protection against data loss. High Performance Computing (HPC) storage systems, which are at the forefront of handling the data deluge, typically employ hardware RAID at the backend. However, such solutions are costly, do not ensure end-to-end data integrity, and can become a bottleneck during data reconstruction. In this paper, we design an innovative solution to achieve a flexible, fault-tolerant, and high-performance RAID-6 solution for a parallel file system (PFS). Our system utilizes low-cost, strategically placed GPUs — both on the client and server sides — to accelerate parity computation. In contrast to hardware-based approaches, we provide full control over the size, length and location of a RAID array on a per-file basis, end-to-end data integrity checking, and parallelization of RAID array reconstruction. We have deployed our system in conjunction with the widely-used Lustre PFS, and show that our approach is feasible and imposes acceptable overhead.
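For reference, and independent of the paper's GPU kernels (which the abstract does not detail), RAID-6 tolerates two block failures per stripe by keeping two parity blocks: P is the bytewise XOR of the data blocks, and Q is a Reed-Solomon weighted sum with coefficients 2^i over GF(2^8). A minimal CPU-side sketch:

```python
def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) with the RAID-6 polynomial 0x11d."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
    return r

def raid6_parity(data_blocks):
    """Compute P (XOR) and Q (Reed-Solomon) parity over equal-size blocks."""
    size = len(data_blocks[0])
    p, q = bytearray(size), bytearray(size)
    for i, block in enumerate(data_blocks):
        g_i = 1
        for _ in range(i):          # g_i = 2**i in GF(2^8)
            g_i = gf_mul(g_i, 2)
        for j, byte in enumerate(block):
            p[j] ^= byte
            q[j] ^= gf_mul(g_i, byte)
    return bytes(p), bytes(q)

p, q = raid6_parity([b"\x11\x22", b"\x33\x44", b"\x55\x66"])
```

Because P and Q are independent per-byte computations over large stripes, the workload is embarrassingly parallel, which is what makes GPU offload of parity computation attractive in the first place.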

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we investigate the end-to-end performance of dual-hop proactive decode-and-forward relaying networks with Nth-best relay selection in the presence of two practical deleterious effects: i) hardware impairments and ii) co-channel interference (CCI). In particular, we derive new exact and asymptotic closed-form expressions for the outage probability and average channel capacity of the Nth-best partial and opportunistic relay selection schemes over Rayleigh fading channels, and provide insightful discussions. It is shown that, when the system cannot select the best relay for cooperation, the partial relay selection scheme outperforms the opportunistic method under the same CCI. In addition, without CCI but under the effect of hardware impairments, both selection strategies are shown to have the same asymptotic channel capacity. Monte Carlo simulations are presented to corroborate our analysis.
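To make the two selection rules concrete (a generic sketch without the paper's impairment and interference model; all parameters are arbitrary): partial selection ranks relays on the first-hop SNR alone, while opportunistic selection ranks them on the end-to-end bottleneck, here the minimum of the two hop SNRs under decode-and-forward.

```python
import numpy as np

rng = np.random.default_rng(1)
K, N, trials = 5, 2, 200_000               # K relays, select the Nth best
snr1 = rng.exponential(10.0, (trials, K))  # first-hop SNRs (Rayleigh fading)
snr2 = rng.exponential(10.0, (trials, K))  # second-hop SNRs

# Partial: rank by first hop only; opportunistic: rank by min of both hops.
idx_p = np.argsort(snr1, axis=1)[:, -N]
idx_o = np.argsort(np.minimum(snr1, snr2), axis=1)[:, -N]

rows = np.arange(trials)
e2e_p = np.minimum(snr1[rows, idx_p], snr2[rows, idx_p])
e2e_o = np.minimum(snr1[rows, idx_o], snr2[rows, idx_o])

gamma_th = 3.0  # outage threshold
print("partial outage:      ", np.mean(e2e_p < gamma_th))
print("opportunistic outage:", np.mean(e2e_o < gamma_th))
```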

Relevance:

100.00%

Publisher:

Abstract:

To cope with the rapid growth of multimedia applications that require dynamic levels of quality of service (QoS), cross-layer (CL) design, where multiple protocol layers are jointly combined, has been considered as a way to provide diverse QoS provisions for mobile multimedia networks. However, there is a lack of a general mathematical framework for modelling such CL schemes in wireless networks with different types of multimedia classes. In this paper, to overcome this shortcoming, we propose a novel CL design for integrated real-time/non-real-time traffic with strict preemptive priority, via a finite-state Markov chain. The main strategy of the CL scheme is to design a Markov model that explicitly includes adaptive modulation and coding at the physical layer, queuing at the data link layer, and the bursty nature of multimedia traffic classes at the application layer. Utilizing this Markov model, several important performance metrics in terms of packet loss rate, delay, and throughput are examined. In addition, our proposed framework is applied to various multimedia applications, for example end-to-end real-time video streaming and CL optimization, which require priority-based QoS adaptation. More importantly, the CL framework reveals important guidelines for optimizing network performance.
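As a toy illustration of the finite-state Markov approach (deliberately far simpler than the paper's joint AMC/queueing/traffic model; every number below is made up), one can build a transition matrix over (channel state, queue length) pairs and read a loss metric off the stationary distribution:

```python
import numpy as np

# Toy joint state: 2 AMC channel states x 3 queue lengths = 6 states.
# The queue fills in the "bad" channel state (index 1, low rate) and
# drains in the "good" state (index 0, high rate).
P_ch = np.array([[0.9, 0.1],
                 [0.2, 0.8]])  # channel transition probabilities

n_q = 3
P = np.zeros((2 * n_q, 2 * n_q))
for c in range(2):
    for q in range(n_q):
        q_next = min(q + 1, n_q - 1) if c == 1 else max(q - 1, 0)
        for c_next in range(2):
            P[c * n_q + q, c_next * n_q + q_next] = P_ch[c, c_next]

# Stationary distribution via power iteration.
pi = np.full(2 * n_q, 1 / (2 * n_q))
for _ in range(5_000):
    pi = pi @ P

# Packet-loss proxy: probability the queue is full (in either channel state).
p_full = pi[n_q - 1] + pi[2 * n_q - 1]
print(f"P(queue full) = {p_full:.4f}")
```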

Relevance:

100.00%

Publisher:

Abstract:

Background: This study evaluated the effect of statins in primary biliary cirrhosis (PBC) on endothelial function, antioxidant status and vascular compliance. Methods: PBC patients with hypercholesterolaemia were randomized to receive 20 mg simvastatin or placebo in a single-blind, randomized controlled trial. Body mass index, blood pressure, glucose, liver function, lipid profile, immunoglobulin levels, serological markers of endothelial function and antioxidant status were measured, as well as vascular compliance calculated from pulse wave analysis and velocity, at recruitment and again at 3, 6, 9 and 12 months. Results: Twenty-one PBC patients (F = 20, mean age 55) were randomized to simvastatin 20 mg (n = 11) or matched placebo (n = 10). At completion of the trial, serum cholesterol levels in the simvastatin group were significantly lower than in the placebo group (4.91 mmol/L vs. 6.15 mmol/L, P = 0.01). Low-density lipoprotein (LDL) levels after 12 months were also significantly lower in the simvastatin group (2.33 mmol/L vs. 3.53 mmol/L, P = 0.01). After 12 months of treatment, lipid hydroperoxides were lower (0.49 µmol/L vs. 0.59 µmol/L, P = 0.10) while vitamin C levels were higher (80.54 µmol/L vs. 77.40 µmol/L, P = 0.95) in the simvastatin group. Pulse wave velocity remained similar between treatment groups at 12 months (8.45 m/s vs. 8.80 m/s, P = 0.66). Only one patient discontinued medication owing to side effects, and no deterioration in liver transaminases was noted in the simvastatin group. Conclusions: Statin therapy in patients with PBC appears safe and effective in reducing total cholesterol and LDL levels. Our initial study suggests that simvastatin may also confer advantageous effects on endothelial function and antioxidant status.

Relevance:

100.00%

Publisher:

Abstract:

This special issue provides the latest research and development on wireless mobile wearable communications. According to a report by Juniper Research, the market value of connected wearable devices is expected to reach $1.5 billion by 2014, and the shipment of wearable devices may reach 70 million by 2017. Good examples of wearable devices are the prominent Google Glass and Microsoft HoloLens. As wearable technology is rapidly penetrating our daily life, mobile wearable communication is becoming a new communication paradigm. Mobile wearable device communications create new challenges compared to ordinary sensor networks and short-range communication. In mobile wearable communications, devices communicate with each other in a peer-to-peer fashion or client-server fashion and also communicate with aggregation points (e.g., smartphones, tablets, and gateway nodes). Wearable devices are expected to integrate multiple radio technologies for various applications' needs with small power consumption and low transmission delays. These devices can hence collect, interpret, transmit, and exchange data among supporting components, other wearable devices, and the Internet. Such data are not limited to people's personal biomedical information but also include human-centric social and contextual data. The success of mobile wearable technology depends on communication and networking architectures that support efficient and secure end-to-end information flows. A key design consideration of future wearable devices is the ability to ubiquitously connect to smartphones or the Internet with very low energy consumption. Radio propagation and, accordingly, channel models are also different from those in other existing wireless technologies. A huge number of connected wearable devices require novel big data processing algorithms, efficient storage solutions, cloud-assisted infrastructures, and spectrum-efficient communications technologies.

Relevance:

100.00%

Publisher:

Abstract:

Masked implementations of cryptographic algorithms are often used in commercial embedded cryptographic devices to increase their resistance to side-channel attacks. In this work, we show how neural networks can be used both to identify the mask value and subsequently to recover the secret key, with high probability, from a single attack trace. We propose a pre-processing step based on principal component analysis (PCA) that significantly increases the success of the attack. We have developed a classifier that can correctly identify the mask for each trace, thereby removing the security provided by the mask and reducing the attack to the equivalent of an attack against an unprotected implementation. The attack is performed on the freely available differential power analysis (DPA) contest data set so that our work is easily reproducible. We show that neural networks allow for robust and efficient classification in the context of side-channel attacks.
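The two-stage pipeline described (PCA for dimensionality reduction, then a neural network classifying the mask) can be sketched with standard scikit-learn components; this is a structural illustration only, with random arrays standing in for real power traces, so the reported accuracy here will sit at chance level:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Stand-ins for profiled data: power traces (n_traces x n_samples), each
# labelled with the mask value used in that masked encryption.
rng = np.random.default_rng(0)
traces = rng.normal(size=(2000, 500))
mask_labels = rng.integers(0, 16, size=2000)

# PCA compresses each trace to its principal components before the
# neural network classifies the mask, mirroring the two-stage attack.
attack = make_pipeline(
    PCA(n_components=20),
    MLPClassifier(hidden_layer_sizes=(50,), max_iter=500),
)
attack.fit(traces[:1500], mask_labels[:1500])
print("held-out accuracy:", attack.score(traces[1500:], mask_labels[1500:]))
```

On real traces the leakage structure is what PCA concentrates into a few components; once the mask is predicted per trace, a standard single-trace key-recovery attack proceeds as against an unmasked device.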

Relevance:

100.00%

Publisher:

Abstract:

With the development and deployment of IEC 61850-based smart substations, cybersecurity vulnerabilities of supervisory control and data acquisition (SCADA) systems are increasingly emerging. In response, a test-bed is indispensable for cybersecurity experimentation. In this paper, a comprehensive and realistic cyber-physical test-bed has been built to investigate potential cybersecurity vulnerabilities and the impact of cyber-attacks on IEC 61850-based smart substations. The test-bed is close to a real production-type environment and has the ability to carry out end-to-end testing of cyber-attacks and their physical consequences. A fuzz-testing approach for detecting vulnerabilities in IEC 61850-based intelligent electronic devices (IEDs) is proposed and validated in the test-bed.
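The abstract does not describe the fuzzing strategy, so the following is only a generic mutation-fuzzing sketch, not the paper's method; the seed bytes and the `send_to_ied` transport hook are hypothetical placeholders:

```python
import random

def mutate(frame: bytes, n_flips: int = 3) -> bytes:
    """Randomly corrupt a few bytes of a captured protocol frame."""
    buf = bytearray(frame)
    for _ in range(n_flips):
        pos = random.randrange(len(buf))
        buf[pos] = random.randrange(256)
    return bytes(buf)

# A captured IEC 61850 frame would be replayed to the device under test
# with small random mutations, while watching for crashes, resets, or
# protocol violations.
seed_frame = bytes.fromhex("0180c200000101")  # placeholder bytes only
for _ in range(1000):
    fuzzed = mutate(seed_frame)
    # send_to_ied(fuzzed)  # hypothetical transport; depends on the test-bed
```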

Relevance:

100.00%

Publisher:

Abstract:

A multiuser dual-hop relaying system over mixed radio frequency/free-space optical (RF/FSO) links is investigated. Specifically, the system consists of m single-antenna sources, a relay node equipped with n ≥ m receive antennas and a single photo-aperture transmitter, and one destination equipped with a single photo-detector. RF links are used for simultaneous data transmission from the multiple sources to the relay. The relay operates under the decode-and-forward protocol and utilizes the popular V-BLAST technique, successively decoding each user's transmitted stream. Two common norm-based orderings are adopted, i.e., the streams are decoded in either ascending or descending order. After V-BLAST detection, the relay retransmits the decoded information to the destination via a point-to-point FSO link in m consecutive timeslots. Analytical expressions for the end-to-end outage probability and average symbol error probability of each user are derived, and closed-form asymptotic expressions are also presented. Capitalizing on the derived results, some engineering insights are drawn, such as the coding and diversity gain of each user, the impact of the pointing-error displacement on the FSO link, and the effectiveness of the V-BLAST ordering at the relay.
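A norm-ordered successive interference cancellation receiver of the kind described can be sketched as follows (a toy zero-forcing variant with noiseless BPSK for clarity; the abstract does not specify the paper's exact detector, so this is an assumption about one common V-BLAST realization):

```python
import numpy as np

def vblast_zf_sic(H, y, descending=True):
    """Zero-forcing SIC with norm-based ordering: decode streams in order
    of their channel column norms, cancelling each decoded stream."""
    H = np.array(H, dtype=complex)
    y = np.array(y, dtype=complex)
    m = H.shape[1]
    order = np.argsort(np.linalg.norm(H, axis=0))
    if descending:
        order = order[::-1]
    x_hat = np.zeros(m, dtype=complex)
    remaining = list(range(m))
    for k in order:
        W = np.linalg.pinv(H[:, remaining])     # ZF filter on active columns
        j = remaining.index(k)
        x_hat[k] = np.sign((W[j] @ y).real) + 0j  # hard BPSK decision (toy)
        y -= H[:, k] * x_hat[k]                 # cancel its contribution
        remaining.remove(k)
    return x_hat

# Noiseless check: m = 3 streams, n = 4 receive antennas, exact recovery.
rng = np.random.default_rng(2)
H = (rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))) / np.sqrt(2)
x = np.sign(rng.standard_normal(3)) + 0j
print(vblast_zf_sic(H, H @ x, descending=True))
```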

Relevance:

100.00%

Publisher:

Abstract:

Difficult-to-treat asthma affects up to 20% of patients with asthma and is associated with significant healthcare costs. It is an umbrella term for a heterogeneous clinical problem that includes incorrect diagnosis, comorbid conditions and treatment non-adherence; when these are effectively addressed, good symptom control is frequently achieved. However, in 3–5% of adults with difficult-to-treat asthma, the problem is severe disease that is unresponsive to currently available treatments. Current treatment guidelines advise the ‘stepwise’ increase of corticosteroids, but it is now recognised that many aspects of asthma are not corticosteroid-responsive and that this ‘one size fits all’ approach does not deliver clinical benefit in many patients and can also lead to side effects. The future management of severe asthma will involve optimisation of currently available treatments, particularly corticosteroids, including addressing non-adherence and defining an ‘optimised’ corticosteroid dose, allied with the use of ‘add-on’ target-specific novel treatments. This review examines the current status of novel treatments and research efforts to identify novel targets in the era of stratified medicine in severe asthma.

Relevance:

100.00%

Publisher:

Abstract:

This study introduces an inexact, but ultra-low-power, computing architecture devoted to the embedded analysis of bio-signals. The platform operates at extremely low voltage supply levels to minimise energy consumption. In this scenario, the reliability of static RAM (SRAM) memories cannot be guaranteed when using conventional six-transistor implementations. While error correction codes and dedicated SRAM implementations can ensure correct operation in this near-threshold regime, they incur significant area and energy overheads and should therefore be employed judiciously. Herein, the authors propose a novel scheme for designing inexact computing architectures that selectively protects memory regions based on their significance, i.e. their impact on the end-to-end quality of service, as dictated by the bio-signal application characteristics. The authors illustrate their scheme on an industrial benchmark application performing the power spectrum analysis of electrocardiograms. Experimental evidence showcases that a significance-based memory protection approach leads to a small degradation in output quality with respect to an exact implementation, while resulting in substantial energy gains, both in the memory and the processing subsystem.
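A toy model of the selective-protection idea (purely illustrative; the bit widths, flip probability, and the notion of "significance" as the high byte of a sample are all assumptions, not the paper's scheme): protect only the most significant bits of each stored sample, leave the rest exposed to near-threshold bit flips, and compare the resulting output error.

```python
import random

def store_sample(value: int, protect_high_byte: bool, flip_prob: float) -> int:
    """Model a 16-bit sample in unreliable near-threshold SRAM: each
    unprotected bit may flip; protected bits are assumed corrected."""
    out = 0
    for bit in range(16):
        b = (value >> bit) & 1
        protected = protect_high_byte and bit >= 8
        if not protected and random.random() < flip_prob:
            b ^= 1
        out |= b << bit
    return out

random.seed(0)
samples = [random.randrange(65536) for _ in range(10_000)]
for protect in (False, True):
    err = sum(abs(store_sample(s, protect, 1e-3) - s) for s in samples) / len(samples)
    print(f"protect high byte={protect}: mean abs error={err:.1f}")
```

The point the toy makes is the one the abstract argues: shielding only the significance-critical bits bounds the worst-case output error while leaving most of the memory unprotected, which is where the energy savings come from.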