Abstract:
What-if simulations have been identified as one solution for business performance related decision support. Such support is especially useful when it can be generated automatically out of Business Process Management (BPM) environments from the existing business process models and from performance parameters monitored on the executed business process instances. Currently, some of the available BPM environments offer basic-level performance prediction capabilities. However, these functionalities are normally too limited to be generally useful for performance related decision support at the business process level. In this paper, an approach is presented which allows the non-intrusive integration of sophisticated tooling for what-if simulations, analytic performance prediction tools, process optimizations, or a combination of such solutions into already existing BPM environments. The approach abstracts from concrete process modelling techniques, which enables automatic decision support for processes spanning numerous BPM environments. For instance, this enables end-to-end decision support for composite processes modelled with the Business Process Modelling Notation (BPMN) on top of existing Enterprise Resource Planning (ERP) processes modelled with proprietary languages.
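As a purely illustrative sketch (none of these class or function names come from the paper), such non-intrusive integration can be pictured as a thin adapter layer that exports a technique-agnostic process model plus monitored parameters, against which external what-if tooling runs its predictions:

```python
# Illustrative sketch only; all names are assumptions, not the paper's API.
# External what-if tooling sees every BPM environment through one thin,
# technique-agnostic adapter, so it can be integrated non-intrusively.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class ProcessModel:
    """Modelling-technique-agnostic view of a process (BPMN, ERP, ...)."""
    name: str
    activities: list    # ordered activity identifiers
    durations: dict     # activity -> mean service time (s), from monitoring

class BpmAdapter(ABC):
    """Implemented once per BPM environment (BPMN suite, ERP system, ...)."""
    @abstractmethod
    def export_model(self) -> ProcessModel: ...

def what_if_cycle_time(model: ProcessModel, speedup: dict) -> float:
    """Toy what-if prediction: sequential cycle time when selected
    activities are accelerated, e.g. speedup={'approve': 2.0}."""
    return sum(model.durations[a] / speedup.get(a, 1.0)
               for a in model.activities)
```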
Abstract:
A new configurable architecture is presented that offers multiple levels of video playback by accommodating variable levels of network utilization and bandwidth. By utilizing scalable MPEG-4 encoding at the network edge and using specific video delivery protocols, media streaming components are merged to fully optimize video playback for IPv6 networks, thus improving QoS. This is achieved by introducing “programmable network functionality” (PNF), which splits layered video transmission and distributes it evenly over the available bandwidth, reducing the packet loss and delay caused by out-of-profile DiffServ classes. An FPGA design is presented which improves performance, e.g. link utilization and end-to-end delay, and which, during congestion, improves on-time delivery of video frames by up to 80% compared with current “static” DiffServ.
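The layer-splitting idea can be illustrated with a toy allocator; the class names, rates and greedy policy below are assumptions for illustration, not the paper's PNF or its FPGA design:

```python
# Illustrative sketch only: split scalable-video layers across DiffServ
# classes so no class exceeds its spare bandwidth, dropping enhancement
# layers first under congestion. Class names and rates are assumptions.
def assign_layers(layers, classes):
    """layers: list of (name, kbps), base layer first.
    classes: dict class_name -> spare kbps, best class first."""
    assignment, spare = {}, dict(classes)
    for name, rate in layers:              # base layer gets first pick
        for cls in spare:
            if spare[cls] >= rate:         # layer fits in this class
                assignment[name] = cls
                spare[cls] -= rate
                break
        else:
            assignment[name] = None        # dropped: out of profile
    return assignment

print(assign_layers([("base", 500), ("enh1", 800), ("enh2", 1500)],
                    {"EF": 1000, "AF11": 2000}))
# {'base': 'EF', 'enh1': 'AF11', 'enh2': None}
```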
Abstract:
An architecture to effect simultaneous amplitude and phase control from a reflectarray element using an impedance transformation unit is demonstrated. It is shown that a wide range of control is possible from a single element, removing the conventional necessity for variable-sized elements across an array in order to form a desired reflectarray far-field pattern. Parallel-plate waveguide measurements for a 2.2 GHz prototype element validate the phase and amplitude variation available from the element. It is demonstrated that there is sufficient control of the element's reflection response to allow Dolph-Tschebyscheff weighting coefficients for major-lobe to side-lobe ratios of up to 36 dB to be implemented.
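For reference, Dolph-Tschebyscheff weighting coefficients for a given major-lobe to side-lobe ratio are a standard computation; the sketch below uses SciPy's window routine with the 36 dB figure quoted above (the 8-element array size is an arbitrary illustrative choice, not from the paper):

```python
# Dolph-Tschebyscheff (Chebyshev) amplitude weights for a uniform linear
# array; 36 dB matches the ratio quoted above, 8 elements is illustrative.
import numpy as np
from scipy.signal.windows import chebwin

weights = chebwin(8, at=36)   # 'at' = desired side-lobe attenuation in dB
print(np.round(weights, 3))   # symmetric amplitude taper, peak normalized to 1
```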
Abstract:
Cooperative MIMO (Multiple Input–Multiple Output) allows multiple nodes to share their antennas to emulate antenna arrays and to transmit or receive cooperatively. It has the ability to increase the capacity of future wireless communication systems and is particularly suited to ad hoc networks. In this study, based on the transmission procedure of a typical cooperative MIMO system, we first analyze the capacity of single-hop cooperative MIMO systems, and then derive the optimal resource allocation strategy to maximize the end-to-end capacity in multi-hop cooperative MIMO systems. The study shows three implications. First, cooperative MIMO yields a capacity increment only when the intra-cluster channel is better than the inter-cluster channel. Second, for a given scenario there is an optimal number of cooperative nodes; for instance, in our study an optimal deployment of three cooperative nodes achieves a capacity increment of 2 bps/Hz compared with direct transmission. Third, an optimal resource allocation strategy plays a significant role in maximizing end-to-end capacity in multi-hop cooperative MIMO systems. Numerical results show that when optimal resource allocation is applied we achieve more than a 20% end-to-end capacity increment on average compared with an equal resource allocation strategy.
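The kind of gain being compared can be illustrated with the generic MIMO ergodic capacity formula, C = log2 det(I + (ρ/Nt) H Hᴴ); the Monte Carlo sketch below assumes i.i.d. Rayleigh fading and illustrative parameters, not the paper's cooperative transmission model:

```python
# Generic MIMO ergodic capacity (Telatar's formula), illustrative only.
import numpy as np

def ergodic_capacity(nt, nr, snr_db, trials=2000, seed=0):
    """Monte Carlo average of C = log2 det(I + (rho/nt) H H^H)
    over i.i.d. Rayleigh channel realizations."""
    rng = np.random.default_rng(seed)
    rho = 10 ** (snr_db / 10)
    caps = []
    for _ in range(trials):
        h = (rng.standard_normal((nr, nt))
             + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        caps.append(np.log2(np.linalg.det(
            np.eye(nr) + (rho / nt) * h @ h.conj().T).real))
    return float(np.mean(caps))

# e.g. a 3-antenna virtual array versus direct single-antenna transmission
print(f"3x3: {ergodic_capacity(3, 3, 10):.2f} bps/Hz, "
      f"1x1: {ergodic_capacity(1, 1, 10):.2f} bps/Hz")
```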
Acoustic solitary waves in dusty and/or multi-ion plasmas with cold, adiabatic, and hot constituents
Abstract:
Large nonlinear acoustic waves are discussed in a four-component plasma, made up of two superhot isothermal species and two species with lower thermal velocities, respectively adiabatic and cold. First a model is considered in which the isothermal species are electrons and ions, while the cooler species are positive and/or negative dust. Using a Sagdeev pseudopotential formalism, large dust-acoustic structures are studied in a systematic way, to delimit the compositional parameter space in which they can be found, without restrictions on the charges, masses, or charge signs of the dust species. Solitary waves can only occur for nonlinear structure velocities smaller than the adiabatic dust thermal velocity, leading to a novel dust-acoustic-like mode based on the interplay between the two dust species. If the cold and adiabatic dust are oppositely charged, only solitary waves exist, having the polarity of the cold dust, their parameter range being limited by infinite compression of the cold dust. However, when the charges of the cold and adiabatic species have the same sign, solitary structures are limited, for increasing Mach numbers, successively by infinite cold dust compression, by encountering the adiabatic dust sonic point, and by the occurrence of double layers. The latter have, for smaller Mach numbers, the same polarity as the charged dust, but switch at the high Mach number end to the opposite polarity. Typical Sagdeev pseudopotentials and solitary wave profiles are presented. Finally, the analysis nowhere uses the assumption that the dust is much more massive than the ions; hence one or both dust species can easily be replaced by positive and/or negative ions, and the conclusions apply equally well to that plasma model. This covers a number of different scenarios, such as very hot electrons and ions together with a mix of adiabatic ions and dust (of either polarity), or a very hot electron-positron mix together with a two-ion mix or with adiabatic ions and cold dust (both of either charge sign), to name but some of the possible plasma compositions.
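For orientation, the Sagdeev formalism referred to above casts the fully nonlinear problem as a classical energy integral; the generic form is sketched below (the specific pseudopotential Ψ(φ, M) for this four-component model is derived in the paper):

```latex
% Generic Sagdeev energy integral; \varphi is the electrostatic potential,
% \xi the co-moving coordinate and M the Mach number of the structure.
\begin{equation}
  \tfrac{1}{2}\left(\frac{d\varphi}{d\xi}\right)^{2} + \Psi(\varphi, M) = 0,
  \qquad \Psi(0, M) = \Psi'(0, M) = 0 .
\end{equation}
% Solitary waves require \Psi''(0,M) < 0 and a root \varphi_m \neq 0 with
% \Psi(\varphi_m, M) = 0 and \Psi < 0 between 0 and \varphi_m; double
% layers additionally satisfy \Psi'(\varphi_m, M) = 0.
```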
Abstract:
Background: This is an update of a previous review (McGuinness 2006). Hypertension and cognitive impairment are prevalent in older people. Hypertension is a direct risk factor for vascular dementia (VaD), and recent studies have suggested that hypertension affects the prevalence of Alzheimer's disease (AD). The question therefore arises: does treatment of hypertension prevent cognitive decline?
Objectives: To assess the effects of blood pressure lowering treatments for the prevention of dementia and cognitive decline in patients with hypertension but no history of cerebrovascular disease.
Search strategy: The Specialized Register of the Cochrane Dementia and Cognitive Improvement Group, The Cochrane Library, MEDLINE, EMBASE, PsycINFO, CINAHL, LILACS, as well as many trials databases and grey literature sources, were searched on 13 February 2008 using the terms: hypertens$ OR anti-hypertens$.
Selection criteria: Randomized, double-blind, placebo-controlled trials in which pharmacological or non-pharmacological interventions to lower blood pressure were given for at least six months.
Data collection and analysis: Two independent reviewers assessed trial quality and extracted data. The following outcomes were assessed: incidence of dementia, cognitive change from baseline, blood pressure level, incidence and severity of side effects and quality of life.
Main results: Four trials including 15,936 hypertensive subjects were identified. Average age was 75.4 years. Mean blood pressure at entry across the studies was 171/86 mmHg. The combined result of the four trials reporting incidence of dementia indicated no significant difference between treatment and placebo (236/7767 versus 259/7660, Odds Ratio (OR) = 0.89, 95% CI 0.74, 1.07), and there was considerable heterogeneity between the trials. The combined results from the three trials reporting change in Mini Mental State Examination (MMSE) did not indicate a benefit from treatment (Weighted Mean Difference (WMD) = 0.42, 95% CI 0.30, 0.53). Both systolic and diastolic blood pressure levels were reduced significantly in the three trials assessing this outcome (WMD = -10.22, 95% CI -10.78, -9.66 for systolic blood pressure; WMD = -4.28, 95% CI -4.58, -3.98 for diastolic blood pressure). Three trials reported adverse effects requiring discontinuation of treatment, and the combined results indicated no significant difference (OR = 1.01, 95% CI 0.92, 1.11). When analysed separately, however, patients on placebo in Syst Eur 1997 were more likely to discontinue treatment due to side effects; the converse was true in SHEP 1991. Quality of life data could not be analysed in the four studies. Analysis of the included studies in this review was problematic because many of the control subjects received antihypertensive treatment when their blood pressures exceeded pre-set values, so that in most cases the study became a comparison between the study drug and a usual antihypertensive regimen.
Authors' conclusions: There is no convincing evidence from the trials identified that blood pressure lowering in late life prevents the development of dementia or cognitive impairment in hypertensive patients with no apparent prior cerebrovascular disease. There were, however, significant problems in analysing the data due to the number of patients lost to follow-up and the number of placebo patients who received active treatment, which introduced bias. More robust results may be obtained by conducting a meta-analysis using individual patient data.
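As a quick arithmetic check of the pooled dementia result quoted above, the sketch below treats the pooled counts as a single 2×2 table; a real meta-analysis pools per trial (e.g. Mantel-Haenszel, giving the reported OR 0.89), so this back-of-envelope version only approximates the published interval:

```python
# Back-of-envelope odds ratio from the pooled event counts in the review.
import math

a, n1 = 236, 7767                      # dementia events / total, treatment
b, n2 = 259, 7660                      # dementia events / total, placebo
c, d = n1 - a, n2 - b                  # non-events in each arm
or_ = (a * d) / (c * b)                # odds ratio from the 2x2 table
se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of ln(OR)
lo, hi = (math.exp(math.log(or_) + z * se) for z in (-1.96, 1.96))
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
# -> OR = 0.90, 95% CI (0.75, 1.07): close to the reported 0.89 (0.74, 1.07),
#    which comes from pooling the four trials individually.
```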
Abstract:
To optimize the performance of wireless networks, one needs to consider the impact of key factors such as interference from hidden nodes, the capture effect, network density, and network conditions (saturated versus non-saturated). In this research, our goal is to quantify the impact of these factors and to propose effective mechanisms and algorithms for throughput guarantees in multi-hop wireless networks. For this purpose, we have developed a model that takes all these key factors into account, based on which an admission control algorithm and an end-to-end available bandwidth estimation algorithm are proposed. Given the necessary network information and traffic demands as inputs, these algorithms provide predictive control via an iterative approach. Evaluations comparing the analytical model against simulations, as well as against existing research, show that the proposed model and algorithms are accurate and effective.
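A minimal sketch of the admission-control idea, assuming a simple conflict model in which each link must defer to the traffic it senses from neighbouring (e.g. hidden-node) links; all names and the fixed-step search are illustrative, not the authors' algorithm:

```python
# Illustrative sketch, not the authors' model: admit a flow only if every
# link still senses less aggregate traffic than its capacity after the
# flow's rate is tentatively placed on each hop of its path.
def admit(path, rate, capacity, load, sensed):
    """sensed[l] lists the other links whose transmissions link l also
    senses (carrier-sense / hidden-node coupling)."""
    tentative = {l: load[l] + (rate if l in path else 0.0) for l in load}
    return all(
        tentative[l] + sum(tentative[j] for j in sensed.get(l, ())) <= capacity[l]
        for l in tentative
    )

def e2e_available_bw(path, capacity, load, sensed, step=0.1, cap=1000.0):
    """End-to-end available bandwidth: largest admissible rate, found by
    simple fixed-step iteration (bisection would also do)."""
    rate = 0.0
    while rate + step <= cap and admit(path, rate + step, capacity, load, sensed):
        rate += step
    return rate

caps = {"l1": 10.0, "l2": 10.0}
load = {"l1": 2.0, "l2": 3.0}
sensed = {"l1": ("l2",), "l2": ("l1",)}   # each hop defers to the other
print(e2e_available_bw(["l1", "l2"], caps, load, sensed))  # ~2.4-2.5 Mbps
```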
Abstract:
Individuals who have been subtly reminded of death display heightened in-group favouritism, or “worldview defense.” Terror management theory argues (i) that death cues engender worldview defense via psychological mechanisms specifically evolved to suppress death anxiety, and (ii) that the core function of religiosity is to suppress death anxiety. Thus, terror management theory predicts that extremely religious individuals will not evince worldview defense. Here, two studies are presented in support of an alternative perspective. According to the unconscious vigilance hypothesis, subtly processed threats (which need not pertain to death) heighten sensitivity to affectively valenced stimuli (which need not pertain to cultural attitudes). From this perspective, religiosity mitigates the influence of mortality-salience only insofar as afterlife doctrines reduce the perceived threat posed by death. Tibetan Buddhism portrays death as a perilous gateway to rebirth rather than an end to suffering; faith in this doctrine should therefore not be expected to nullify mortality-salience effects. In Study 1, devout Tibetan Buddhists who were subtly reminded of death produced exaggerated aesthetic ratings unrelated to cultural worldviews. In Study 2, devout Tibetan Buddhists produced worldview defense following subliminal exposure to non-death cues of threat. The results demonstrate both the domain-generality of the process underlying worldview defense and the importance of religious doctrinal content in moderating mortality-salience effects.
Abstract:
Fixed and wireless networks are increasingly converging towards common connectivity with IP-based core networks. Providing effective end-to-end resource and QoS management in such complex heterogeneous converged network scenarios requires unified, adaptive and scalable solutions to integrate and co-ordinate diverse QoS mechanisms of different access technologies with IP-based QoS. Policy-Based Network Management (PBNM) is one approach that could be employed to address this challenge. Hence, a policy-based framework for end-to-end QoS management in converged networks, CNQF (Converged Networks QoS Management Framework), has been proposed within our project. In this paper, the CNQF architecture, a Java implementation of its prototype, and experimental validation of key elements are discussed. We then present a fuzzy-based CNQF resource management approach and study the performance of our implementation with real traffic flows on an experimental testbed. The results demonstrate the efficacy of our resource-adaptive approach for practical PBNM systems.
Abstract:
Policy-based network management (PBNM) paradigms provide an effective tool for end-to-end resource management in converged next generation networks by enabling unified, adaptive and scalable solutions that integrate and co-ordinate diverse resource management mechanisms associated with heterogeneous access technologies. In our project, a PBNM framework for end-to-end QoS management in converged networks is being developed. The framework consists of distributed functional entities managed within a policy-based infrastructure to provide QoS and resource management in converged networks. Within any QoS control framework, an effective admission control scheme is essential for maintaining the QoS of flows present in the network. Measurement-based admission control (MBAC) and parameter-based admission control (PBAC) are two commonly used approaches. This paper presents the implementation and analysis of various measurement-based admission control schemes developed within a Java-based prototype of our policy-based framework. The evaluation is made with real traffic flows on a Linux-based experimental testbed where the current prototype is deployed. Our results show that, unlike classic MBAC-only or PBAC-only schemes, a hybrid approach that combines both methods can simultaneously improve admission control and network utilization efficiency.
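A toy version of the hybrid check, with all class names, thresholds and the EWMA smoothing being assumptions rather than details of the CNQF prototype:

```python
# Illustrative hybrid MBAC/PBAC sketch: PBAC admits against declared peak
# rates, MBAC against a smoothed measurement of actual load; the hybrid
# admits a flow only if both tests pass.
class HybridAdmissionControl:
    def __init__(self, capacity_mbps, utilization_target=0.9, ewma_alpha=0.3):
        self.capacity = capacity_mbps
        self.target = utilization_target
        self.alpha = ewma_alpha
        self.declared = 0.0    # sum of declared peak rates (PBAC state)
        self.measured = 0.0    # EWMA of measured aggregate rate (MBAC state)

    def observe(self, sample_mbps):
        """Feed a periodic traffic measurement (e.g. from a link monitor)."""
        self.measured = self.alpha * sample_mbps + (1 - self.alpha) * self.measured

    def admit(self, peak_rate_mbps):
        pbac_ok = self.declared + peak_rate_mbps <= self.capacity
        mbac_ok = self.measured + peak_rate_mbps <= self.target * self.capacity
        if pbac_ok and mbac_ok:
            self.declared += peak_rate_mbps
            return True
        return False

ac = HybridAdmissionControl(capacity_mbps=100.0)
ac.observe(40.0)
print(ac.admit(10.0))   # True: passes both the PBAC and MBAC tests
```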
Abstract:
This paper investigates a dynamic buffer management scheme for QoS control of multimedia services in beyond-3G wireless systems. The scheme is studied in the context of the state-of-the-art 3.5G system, i.e. the High Speed Downlink Packet Access (HSDPA), which enhances 3G UMTS to support high-speed packet switched services. Unlike earlier systems, UMTS-evolved systems from HSDPA onwards incorporate mechanisms such as packet scheduling and HARQ in the base station, necessitating data buffering at the air interface. This introduces a potential bottleneck to end-to-end communication. Hence, buffer management at the air interface is crucial for end-to-end QoS support of multimedia services with multiplexed parallel diverse flows, such as video and data in the same end-user session. The dynamic buffer management scheme for HSDPA multimedia sessions with aggregated real-time and non real-time flows is investigated via extensive HSDPA simulations. The impact of the scheme on end-to-end traffic performance is evaluated with an example multimedia session comprising a real-time streaming flow concurrent with a TCP-based non real-time flow. Results demonstrate that the scheme can guarantee the end-to-end QoS of the real-time streaming flow, whilst simultaneously protecting the non real-time flow from starvation, resulting in improved end-to-end throughput performance.
Abstract:
Policy-based management is considered an effective approach to address the challenges of resource management in large complex networks. Within the IU-ATC QoS Frameworks project, a policy-based network management framework, CNQF (Converged Networks QoS Framework), is being developed, aimed at providing context-aware, end-to-end QoS control and resource management in converged next generation networks. CNQF is designed to provide homogeneous, transparent QoS control over heterogeneous access technologies by means of distributed functional entities that co-ordinate the resources of the transport network through policy-driven decisions. In this paper, we present a measurement-based evaluation of policy-driven QoS management based on the CNQF architecture, with real traffic flows on an experimental testbed. A Java-based implementation of the CNQF Resource Management Subsystem is deployed on the testbed, and results of the experiments validate the framework operation for policy-based QoS management of real traffic flows.
Abstract:
This paper presents and investigates a dynamic buffer management scheme for QoS control of multimedia services in a 3.5G wireless system, i.e. the High Speed Downlink Packet Access (HSDPA). HSDPA was introduced to enhance UMTS for high-speed packet switched services. With HSDPA, packet scheduling and HARQ mechanisms in the base station require data buffering at the air interface, thus introducing a potential bottleneck to end-to-end communication. Hence, for multimedia services with multiplexed parallel diverse flows, such as video and data in the same end-user session, buffer management schemes in the base station are essential to support end-to-end QoS provision. In this paper, we propose a dynamic buffer management scheme for HSDPA multimedia sessions with aggregated real-time and non real-time flows. The end-to-end performance impact of the scheme is evaluated via extensive HSDPA simulations, with an example multimedia session comprising a real-time streaming flow concurrent with a TCP-based non real-time flow. Results demonstrate that the scheme can guarantee the end-to-end QoS of the real-time streaming flow, whilst simultaneously protecting the non real-time flow from starvation, resulting in improved end-to-end throughput performance.
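A minimal sketch of the general idea of dynamic buffer partitioning between a real-time (RT) and a non-real-time (NRT) flow sharing one air-interface buffer; the thresholds, step sizes and adaptation rule are assumptions for illustration, not the scheme proposed in the paper:

```python
# Illustrative sketch only: the RT partition grows when RT occupancy nears
# its limit (protecting streaming QoS) but never squeezes NRT below a
# floor that would starve the TCP-based flow.
class DynamicBuffer:
    def __init__(self, total_pdus=200, rt_share=0.5, nrt_floor=0.2):
        self.total, self.rt_share, self.nrt_floor = total_pdus, rt_share, nrt_floor
        self.rt, self.nrt = [], []           # queued PDUs per flow

    def _rt_limit(self):
        return int(self.rt_share * self.total)

    def adapt(self):
        occ = len(self.rt) / max(1, self._rt_limit())
        if occ > 0.9:                        # RT nearly full: grow its share
            self.rt_share = min(1 - self.nrt_floor, self.rt_share + 0.05)
        elif occ < 0.5:                      # RT under-used: return space to NRT
            self.rt_share = max(0.2, self.rt_share - 0.05)

    def enqueue(self, pdu, real_time):
        self.adapt()
        q, limit = ((self.rt, self._rt_limit()) if real_time
                    else (self.nrt, self.total - self._rt_limit()))
        if len(q) < limit:
            q.append(pdu)
            return True
        return False                         # dropped: partition full
```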
Abstract:
We examine the impact of transmit antenna selection with receive generalized selection combining (TAS/GSC) for cognitive decode-and-forward (DF) relaying in Nakagami-m fading channels. We select the single transmit antenna at the secondary transmitter which maximizes the receive signal-to-noise ratio (SNR), and combine the subset of receive antennas with the largest SNRs at the secondary receiver. In an effort to assess the performance, we first derive the probability density function and cumulative distribution function of the end-to-end SNR using the moment generating function. We then derive a new exact closed-form expression for the ergodic capacity. More importantly, by deriving the asymptotic expression for the high-SNR approximation of the ergodic capacity, we gain deep insights into the high-SNR slope and the power offset. Our results show that the high-SNR slope is 1/2 under the proportional interference power constraint, whereas under the fixed interference power constraint the high-SNR slope is zero.
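The slope and power offset referred to above are the standard parameters of the high-SNR affine expansion of capacity; for reference (the paper's specific values follow from its TAS/GSC analysis):

```latex
% Standard high-SNR affine expansion of the ergodic capacity.
\begin{equation}
  C(\rho) = S_{\infty}\,\log_{2}\rho \;-\; \mathcal{L}_{\infty} + o(1),
  \qquad
  S_{\infty} = \lim_{\rho \to \infty} \frac{C(\rho)}{\log_{2}\rho},
  \qquad
  \mathcal{L}_{\infty} = \lim_{\rho \to \infty}
      \Bigl( \log_{2}\rho - \frac{C(\rho)}{S_{\infty}} \Bigr).
\end{equation}
```

A slope of S∞ = 1/2 thus means the capacity grows by half a bit/s/Hz per doubling of the SNR, while a zero slope means the capacity saturates at a constant in the high-SNR regime.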
Abstract:
Physical transceivers have hardware impairments that create distortions which degrade the performance of communication systems. The vast majority of technical contributions in the area of relaying neglect hardware impairments and, thus, assume ideal hardware. Such approximations make sense in low-rate systems, but can lead to very misleading results when analyzing future high-rate systems. This paper quantifies the impact of hardware impairments on dual-hop relaying, for both amplify-and-forward and decode-and-forward protocols. The outage probability (OP) in these practical scenarios is a function of the effective end-to-end signal-to-noise-and-distortion ratio (SNDR). This paper derives new closed-form expressions for the exact and asymptotic OPs, accounting for hardware impairments at the source, relay, and destination. A similar analysis for the ergodic capacity is also pursued, resulting in new upper bounds. We assume that both hops are subject to independent but non-identically distributed Nakagami-m fading. This paper validates that the performance loss is small at low rates, but otherwise can be very substantial. In particular, it is proved that for high signal-to-noise ratio (SNR), the end-to-end SNDR converges to a deterministic constant, coined the SNDR ceiling, which is inversely proportional to the level of impairments. This stands in contrast to the ideal hardware case in which the end-to-end SNDR grows without bound in the high-SNR regime. Finally, we provide fundamental design guidelines for selecting hardware that satisfies the requirements of a practical relaying system.
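The ceiling effect can be seen already in a single link under the standard multiplicative-distortion impairment model (the paper derives the dual-hop AF/DF analogues): with channel h, SNR ρ, and aggregate impairment level κ,

```latex
% Single-link SNDR under transceiver distortion of relative magnitude \kappa.
\begin{equation}
  \mathrm{SNDR}(\rho)
    = \frac{\rho\,\lvert h\rvert^{2}}{\rho\,\lvert h\rvert^{2}\kappa^{2} + 1}
  \;\xrightarrow{\;\rho\to\infty\;}\; \frac{1}{\kappa^{2}} .
\end{equation}
```

so, unlike the ideal-hardware case (κ = 0) where the SNDR grows without bound, it saturates at a constant inversely proportional to the squared impairment level.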