843 results for palveluiden laatuvaatimukset (QoS)


Relevance: 10.00%

Abstract:

Multicasting is an efficient mechanism for one-to-many data dissemination. Unfortunately, IP multicast is not widely available to end users today, but Application Layer Multicast (ALM), for example built on a Content Addressable Network (CAN), helps to overcome this limitation. Our OM-QoS framework offers Quality of Service support for ALMs. We evaluate OM-QoS applied to CAN and show that we can guarantee that all multicast paths support certain QoS requirements.
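As a rough illustration of the kind of path guarantee meant here (a minimal sketch, not the OM-QoS implementation; the tree, the QoS classes, and the non-increasing-class rule are illustrative assumptions):

```python
# Toy model: a multicast tree satisfies the QoS guarantee if, on every
# root-to-leaf path, the nodes' QoS classes never increase, so each hop
# can serve the requirements of everything below it. Values are invented.

def paths_satisfy_qos(tree, root, qos_class):
    """Check that QoS classes are non-increasing on all paths from root."""
    for child in tree.get(root, []):
        if qos_class[child] > qos_class[root]:
            return False
        if not paths_satisfy_qos(tree, child, qos_class):
            return False
    return True

tree = {"r": ["a", "b"], "a": ["c"], "b": []}   # adjacency list
qos_class = {"r": 3, "a": 2, "b": 3, "c": 2}    # higher = stricter class
print(paths_satisfy_qos(tree, "r", qos_class))  # True
```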

Relevance: 10.00%

Abstract:

OBJECTIVES To report a 10-year single-center experience with Amplatzer devices for left atrial appendage (LAA) occlusion. BACKGROUND Intermediate-term outcome data following LAA occlusion are scarce. METHODS Short- and intermediate-term outcomes of patients who underwent LAA occlusion were assessed. All procedures were performed under local anesthesia without transesophageal echocardiography. Patients were discharged on acetylsalicylic acid and clopidogrel for 1-6 months. RESULTS LAA occlusion was attempted in 152 patients (105 males, age 72 ± 10 years, CHA2DS2-VASc score 3.4 ± 1.7, HAS-BLED score 2.4 ± 1.2). Nondedicated devices were used in 32 patients (21%, ND group) and dedicated Amplatzer Cardiac Plugs in 120 patients (79%, ACP group). A patent foramen ovale or atrial septal defect was used for left atrial access and closed at the end of LAA occlusion in 40 patients. The short-term safety endpoints (procedural complications, bleeds) occurred in 15 patients (9.8%) and the efficacy endpoints (death, stroke, systemic embolization) in none. Device embolization occurred more frequently in the ND group than in the ACP group (5 patients or 12% vs. 2 patients or 2%). Mean intermediate-term follow-up of the study population was 32 months (range 1-120). Late deaths occurred in 15 patients (5 cardiovascular, 7 noncardiac, 3 unexplained). Neurologic events occurred in 2 patients, peripheral embolism in 1, and major bleeding in 4. The composite efficacy and safety endpoints occurred in 7% and 12% of patients, respectively. CONCLUSION LAA closure may be a good alternative to oral anticoagulation. This hypothesis needs to be tested in a randomized clinical trial to ensure that all potential biases of this observational study are accounted for.

Relevance: 10.00%

Abstract:

OBJECTIVES: To assess the feasibility and outcomes of left atrial appendage (LAA) closure when using a patent foramen ovale (PFO) for left atrial access. BACKGROUND: Because of the fear of entering the left atrium too high, using a PFO for left atrial access during LAA occlusion (LAAO) is generally discouraged. We report our single-center experience using a concomitant PFO for LAAO, thereby avoiding transseptal puncture. METHODS: LAAO was performed with local anesthesia and fluoroscopic guidance only (no echocardiography). The Amplatzer Cardiac Plug (ACP) was used in all patients. After LAAO, the PFO was closed at the same sitting, using an Amplatzer occluder delivered through the ACP delivery sheath. Patients were discharged the same or the following day on dual antiplatelet therapy for 1-6 months, at which time a follow-up transesophageal echocardiogram (TEE) was performed. RESULTS: In 49 (96%) of 51 patients (35 males, age 70.9 ± 11.9 years), LAAO was successful using the PFO for left atrial access. In one patient, a long-tunnel PFO precluded LAAO, which was performed via a more caudal transseptal puncture. In a second patient, a previously inserted ASD occluder precluded LAAO, which was abandoned because of pericardial bleeding. PFO closure was successful in all patients. Follow-up TEE was performed in 43 patients 138 ± 34 days after the procedure and showed proper seating of both devices in all patients. CONCLUSIONS: Using a PFO for LAAO had a high success rate and could be the default access in all patients with a PFO, potentially reducing the procedural complications arising from transseptal puncture.

Relevance: 10.00%

Abstract:

This paper evaluates the performance of the most popular power-saving mechanisms defined in the IEEE 802.11 standard, namely the Power Save Mode (Legacy-PSM) and Unscheduled Automatic Power Save Delivery (U-APSD). The assessment comprises a detailed study of energy efficiency and of the capability to guarantee the Quality of Service (QoS) required by a given application. The results, obtained in the OMNeT++ simulator, show that U-APSD is more energy efficient than Legacy-PSM without compromising end-to-end delay. Both U-APSD and Legacy-PSM proved capable of guaranteeing the application QoS requirements in all the studied scenarios. However, unlike with U-APSD, when Legacy-PSM is used in the presence of QoS-demanding applications, all the stations connected to the network through the same access point consume noticeably more energy.
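A back-of-the-envelope energy model hints at why a trigger-based scheme can beat beacon-bound wakeups (a sketch with invented power figures and timings, not the paper's OMNeT++ model):

```python
# One second of VoIP-like traffic. Power draws (watts) and awake times
# are invented for illustration only.
P_SLEEP, P_RX = 0.01, 1.0

def legacy_psm_energy(beacons_per_s=10, awake_per_beacon=0.02):
    """Station wakes at every beacon and stays up to poll buffered frames."""
    awake = beacons_per_s * awake_per_beacon        # seconds awake per second
    return awake * P_RX + (1 - awake) * P_SLEEP

def uapsd_energy(triggers_per_s=50, awake_per_trigger=0.002):
    """Station wakes only around its own uplink trigger frames."""
    awake = triggers_per_s * awake_per_trigger
    return awake * P_RX + (1 - awake) * P_SLEEP

print(f"Legacy-PSM: {legacy_psm_energy():.3f} J/s")
print(f"U-APSD:     {uapsd_energy():.3f} J/s")
```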

Relevance: 10.00%

Abstract:

Opportunistic routing (OR) takes advantage of the broadcast nature and spatial diversity of wireless transmission to improve the performance of wireless ad-hoc networks. Instead of using a predetermined path to send packets, OR postpones the choice of the next hop to the receiver side and lets the multiple receivers of a packet coordinate to decide which one will be the forwarder. Existing OR protocols choose the next-hop forwarder from a predefined candidate list calculated using a single network metric. In this paper, we propose TLG, a Topology- and Link-quality-aware Geographical opportunistic routing protocol. TLG uses multiple network metrics, such as network topology, link quality, and geographic location, to implement the coordination mechanism of OR. We compare TLG with well-known existing solutions, and simulation results show that TLG outperforms them in terms of both QoS and QoE metrics.
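A toy version of multi-metric forwarder selection could look as follows (a sketch only; the weights, field names, and normalizations are invented, not TLG's actual coordination mechanism):

```python
import math

# Score next-hop candidates on the three metric families the paper names:
# geographic progress, link quality, and topology. All values invented.

def score(node, dst, w=(0.5, 0.3, 0.2)):
    progress = 1.0 / (1.0 + math.dist(node["pos"], dst))  # closer is better
    link     = node["pdr"]                                # packet delivery ratio
    topology = min(node["degree"], 10) / 10.0             # connectivity proxy
    return w[0] * progress + w[1] * link + w[2] * topology

candidates = [
    {"id": "n1", "pos": (10, 5), "pdr": 0.9, "degree": 4},
    {"id": "n2", "pos": (6, 2),  "pdr": 0.6, "degree": 8},
]
dst = (0, 0)
forwarder = max(candidates, key=lambda n: score(n, dst))
print(forwarder["id"])
```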

Relevance: 10.00%

Abstract:

We solve two inverse spectral problems for star graphs of Stieltjes strings with Dirichlet and Neumann boundary conditions, respectively, at a selected vertex called the root. The root is either the central vertex or, in the more challenging problem, a pendant vertex of the star graph. At all other pendant vertices Dirichlet conditions are imposed; at the central vertex, at which a mass may be placed, continuity and Kirchhoff conditions are assumed. We derive conditions for two sets of real numbers to be the spectra of the above Dirichlet and Neumann problems. Our solution of the inverse problems is constructive: we establish algorithms to recover the mass distribution on the star graph (i.e., the point masses and the lengths of the subintervals between them) from these two spectra and from the lengths of the separate strings. If the root is a pendant vertex, the two spectra uniquely determine the parameters on the main string (i.e., the string incident to the root) provided the length of the main string is known. The mass distribution on the other edges need not be unique; the reason is the non-uniqueness caused by the non-strict interlacing of the given data in the case when the root is the central vertex. Finally, we relate our results to tree-patterned matrix inverse problems.
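For orientation, a Stieltjes string is governed by the standard three-term recurrence below (common notation, not necessarily the paper's):

```latex
% Point masses m_k joined by massless threads of lengths l_k; u_k is the
% transverse amplitude at the k-th mass, \lambda the spectral parameter.
\frac{u_{k+1} - u_k}{l_k} - \frac{u_k - u_{k-1}}{l_{k-1}} + \lambda\, m_k u_k = 0,
\qquad k = 1, \dots, n,
% with u_0 = 0 at a Dirichlet endpoint and u_1 = u_0 (free end) at a
% Neumann endpoint.
```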

Relevance: 10.00%

Abstract:

BACKGROUND Rapid pulmonary vein (PV) activity has been shown to maintain paroxysmal atrial fibrillation (AF). In persistent AF, we evaluated the cycle length (CL) gradient between the PVs and the left atrium (LA) in an attempt to identify the subset of patients in whom the PVs play an important role. METHODS AND RESULTS Ninety-seven consecutive patients undergoing a first ablation for persistent AF were studied. For each PV, the CL of the fastest activation over 1 minute (PVfast) was assessed using Lasso recordings. The PV-to-LA CL gradient was quantified by the ratio of the PVfast CL to the LA appendage (LAA) AF CL. Stepwise ablation terminated AF in 73 patients (75%). In the AF termination group, the PVfast CL was much shorter than the LAA CL, resulting in lower PVfast/LAA ratios than in the nontermination group (71±10% versus 92±7%; P<0.001). Within the termination group, PVfast/LAA ratios were notably lower if AF terminated after PV isolation or limited adjunctive substrate ablation than in patients who required moderate or extensive ablation (63±6% versus 75±8%; P<0.001). A PVfast/LAA ratio <69% predicted AF termination after PV isolation or limited substrate ablation with a 74% positive predictive value and a 95% negative predictive value. After a mean follow-up of 29±17 months, freedom from arrhythmia recurrence off antiarrhythmic drugs was achieved in most patients with PVfast/LAA ratios <69%, as opposed to the remaining population (80% versus 43%; P<0.001). CONCLUSIONS The PV-to-LA CL gradient may identify the subset of patients in whom persistent AF is likely to terminate after PV isolation or limited substrate ablation and in whom better long-term outcomes are achieved.
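The predictor reduces to a simple ratio; a minimal illustration with invented cycle lengths:

```python
# Illustrative calculation of the predictor (numbers invented).
pv_fast_cl = 135.0   # fastest PV activation cycle length, ms
laa_cl     = 210.0   # LA appendage AF cycle length, ms
ratio = pv_fast_cl / laa_cl
print(f"PVfast/LAA = {ratio:.0%}")   # 64%
print("predicts termination after PV isolation / limited ablation:", ratio < 0.69)
```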

Relevance: 10.00%

Abstract:

A 76-year-old male patient was admitted for percutaneous left atrial appendage (LAA) closure because of chronic atrial fibrillation and a history of gastrointestinal bleeding under oral anticoagulation. The procedure was complicated by perforation of the LAA, with the lobe of the closure device ending up in the pericardial space. Keeping access to the pericardial space with the delivery sheath, the LAA closure device was replaced by an atrial septal defect closure device to seal the perforation. The initial LAA closure device was then reimplanted in a correct position. Needle pericardiocentesis was required, but the subsequent course was uneventful.

Relevance: 10.00%

Abstract:

Location prediction has attracted a significant amount of research effort. Being able to predict users' movement benefits a wide range of communication systems, including location-based services/applications, mobile access control, mobile QoS provisioning, and resource management for mobile computation and storage. In this demo, we present MOBaaS, a cloudified Mobility and Bandwidth prediction service that can be instantiated, deployed, and disposed of on demand. The mobility prediction component of MOBaaS provides location predictions for a single user equipment (UE) or a group of UEs at a future moment. This information can be used for self-adaptation procedures and optimal network function configuration during run-time operation. We demonstrate an example of a real-time mobility prediction service deployment running on the OpenStack platform, and the potential benefits it brings to other invoking services.
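A hypothetical client call to such a prediction service might look like this (the endpoint URL, payload fields, and response shape are all invented for illustration; they are not the MOBaaS API):

```python
import requests  # third-party: pip install requests

# Ask a MOBaaS-style service where a UE is likely to be 5 minutes from now.
resp = requests.post(
    "http://mobaas.example.net/predict/mobility",   # hypothetical endpoint
    json={"ue_ids": ["ue-17"], "horizon_s": 300},   # hypothetical payload
    timeout=5,
)
resp.raise_for_status()
for pred in resp.json()["predictions"]:             # hypothetical schema
    print(pred["ue_id"], pred["cell_id"], pred["probability"])
```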

Relevance: 10.00%

Abstract:

Very few studies have described MUP-1 concentrations and measured the prevalence of Laboratory Animal Allergy (LAA) at as diverse an institution as the private medical school (MS) that is the focus of this study. Air sampling was performed in three dissimilar animal research facilities at MS and quantified using a commercially available ELISA. Descriptive data were obtained from an anonymous laboratory animal allergy survey given to both animal facility employees and the researchers who use these facilities. Logistic regression analysis was then used to investigate specific factors that may be predictive of developing LAA, as well as factors influencing the reporting of LAA symptoms to the occupational health program. Concentrations of MUP-1 ranged from below detectable levels (BDL) to a peak of 22.64 ng/m³. Overall, 68 employees with symptoms reported that they improved while away from work, and only 25 employees reported their symptoms to occupational health. Being Vietnamese, being a smoker, not wearing a mask, and working in any facility longer than one year were all significant predictors of having LAA symptoms. This study suggests that an LAA monitoring system that relies on self-reporting can be inadequate for estimating LAA problems. In addition, efforts need to be made to target training and educational materials to non-native-English-speaking employees to overcome language and cultural barriers and address their specific needs.
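The analysis described is a standard logistic regression; a sketch of how such a model might be set up (the data file and column names are placeholders, not the study's dataset):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey export with one row per respondent.
df = pd.read_csv("laa_survey.csv")

# Binary outcome: reported LAA symptoms; predictors like those the study names.
model = smf.logit(
    "laa_symptoms ~ smoker + wears_mask + tenure_gt_1yr + C(facility)",
    data=df,
).fit()
print(model.summary())     # odds ratios via exp(model.params)
```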

Relevance: 10.00%

Abstract:

The aim of this work is to answer a question raised for average sampling in shift-invariant spaces by using the well-known matrix pencil theory. In many common situations in sampling theory, the available data are samples of some convolution operator acting on the function itself: this leads to the problem of average sampling, also known as generalized sampling. In this paper we deal with the existence of a sampling formula involving these samples and having reconstruction functions with compact support. Thus, low computational complexity is involved and truncation errors are avoided. In practice, this is accomplished by means of an FIR filter bank. An answer is given in the light of generalized sampling theory by using the oversampling technique: more samples than strictly necessary are used. The original problem reduces to finding a polynomial left inverse of a polynomial matrix intimately related to the sampling problem, which, for a suitable choice of the sampling period, becomes a matrix pencil. This matrix pencil approach allows us to obtain a practical method for computing the compactly supported reconstruction functions in the important case where the oversampling rate is minimal. Moreover, the optimality of the obtained solution is established.
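In outline, the sought expansion has the generic shape below (standard notation for generalized sampling; the paper's exact conventions may differ):

```latex
% s average-sampling channels L_j, sampling period r (oversampling when
% r < s), and compactly supported reconstruction functions S_j.
f(t) = \sum_{j=1}^{s} \sum_{n \in \mathbb{Z}} (L_j f)(rn)\, S_j(t - rn),
\qquad f \in V_\varphi^2 .
```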

Relevance: 10.00%

Abstract:

Triple-Play (3P) and Quadruple-Play (4P) services are being widely offered by telecommunication service providers. Such services must be able to offer quality levels equal to or higher than those obtained with traditional systems, especially for the most demanding services such as broadcast IPTV. This paper presents a matrix-based model, defined in terms of service components, user perceptions, agent capabilities, performance indicators, and evaluation functions, which makes it possible to estimate the overall quality of a set of convergent services, as perceived by the users, from a set of performance and/or Quality of Service (QoS) parameters of the convergent IP transport network.
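A toy numeric version of such a matrix-based mapping (the weights, indicator values, and aggregation function are invented for illustration, not the paper's model):

```python
import numpy as np

# Normalized QoS indicators measured on the transport network.
indicators = np.array([0.9, 0.7, 0.95])   # e.g. loss, delay, jitter scores
W = np.array([                             # rows: user perceptions,
    [0.6, 0.3, 0.1],                       #   picture quality
    [0.1, 0.5, 0.4],                       #   interactivity
])                                         # cols: indicators
perceptions = W @ indicators               # per-perception scores
overall = perceptions.mean()               # simple evaluation function
print(perceptions, f"overall={overall:.2f}")
```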

Relevance: 10.00%

Abstract:

The number of online real-time streaming services deployed over network topologies such as P2P or centralized ones has increased remarkably in recent years. This has revealed a lack of networks well prepared to handle this kind of traffic. A hybrid distribution network can be an efficient solution for real-time streaming services. This paper contains the experimental results of streaming distribution over a hybrid architecture that consists of mixed connections among P2P and Cloud nodes that can interoperate. We represent the P2P nodes as PlanetLab machines around the world and the cloud nodes using a Cloud provider's network. First we present an experimental validation of the Cloud infrastructure's ability to distribute streaming sessions with respect to some key streaming QoS parameters: jitter, throughput, and packet loss. Next we show the results obtained from different test scenarios in which a hybrid distribution network is used. The scenarios measure the improvement of the multimedia QoS parameters as nodes in the streaming distribution network (located on different continents) are gradually moved into the Cloud provider's infrastructure. The overall conclusion is that the QoS of a streaming service can be improved efficiently, compared with traditional P2P systems and CDNs, by deploying a hybrid streaming architecture. This enhancement can be obtained by strategically placing certain distribution network nodes in the Cloud provider's infrastructure, taking advantage of the reduced packet loss and low latency among its datacenters.
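For reference, one common way to estimate the jitter parameter mentioned above is the RTP interarrival-jitter estimator of RFC 3550, section 6.4.1 (a sketch with invented timestamps; the paper does not say which estimator it used):

```python
# RFC 3550 interarrival jitter: J += (|D| - J) / 16, where D is the
# difference in relative transit time between consecutive packets.

def jitter(send_ts, recv_ts):
    j = 0.0
    for i in range(1, len(send_ts)):
        d = (recv_ts[i] - recv_ts[i - 1]) - (send_ts[i] - send_ts[i - 1])
        j += (abs(d) - j) / 16.0
    return j

send = [0.00, 0.02, 0.04, 0.06]         # seconds, invented
recv = [0.10, 0.125, 0.141, 0.168]
print(f"jitter ≈ {jitter(send, recv) * 1000:.2f} ms")
```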

Relevance: 10.00%

Abstract:

The proliferation of wireless sensor networks and the variety of envisioned applications associated with them have motivated the development of distributed algorithms for collaborative processing over networked systems. One application that has attracted the attention of researchers is target localization, where the nodes of the network try to estimate the position of an unknown target that lies within their coverage area. Particularly challenging is the problem of estimating the target's position from the received signal strength indicator (RSSI), due to the nonlinear relationship between the measured signal and the true position of the target. Many of the existing approaches suffer either from high computational complexity (e.g., particle filters) or from a lack of accuracy. Further, many of the proposed solutions are centralized, which makes their application to a sensor network questionable. Depending on the application at hand, and from a practical perspective, it can be convenient to strike a balance between localization accuracy and complexity. In this direction, we approach the maximum likelihood location estimation problem by solving a suboptimal (and more tractable) problem. One of the main advantages of the proposed scheme is that it allows for a decentralized implementation using distributed processing tools (e.g., consensus and convex optimization) and is therefore very suitable for implementation in real sensor networks. If further accuracy is needed, an additional refinement step can be performed around the found solution. Under the assumption of independent noise among the nodes, such a local search can be done in a fully distributed way using a distributed version of the Gauss-Newton method based on consensus. Regardless of the underlying application or function of the sensor network, it is always necessary to have a mechanism for data reporting. While some approaches use a special kind of node (called a sink node) for data harvesting and forwarding to the outside world, there are some scenarios where such an approach is impractical or even impossible to deploy. Further, such sink nodes become a bottleneck in terms of traffic flow and power consumption. To overcome these issues, instead of using sink nodes for data reporting, one can use collaborative beamforming techniques to forward the generated data directly to a base station or gateway to the outside world. In a distributed environment like a sensor network, nodes cooperate to form a virtual antenna array that can exploit the benefits of multi-antenna communications. In collaborative beamforming, nodes synchronize their phases so that their transmissions add constructively at the receiver. One of the inconveniences of collaborative beamforming techniques is that there is no control over the radiation pattern, since it is treated as a random quantity. This may cause interference to other coexisting systems and fast battery depletion at the nodes. Since energy efficiency is a major design issue, we consider the development of a distributed collaborative beamforming scheme that maximizes the network lifetime while meeting a quality of service (QoS) requirement at the receiver side. Using local information about battery status and channel conditions, we find distributed algorithms that converge to the optimal centralized beamformer.
While in the first part we consider only battery depletion due to communications beamforming, we then extend the model to account for more realistic scenarios by introducing an additional random energy consumption. It is shown how the new problem generalizes the original one and under which conditions it is easily solvable. By formulating the problem from an energy-efficiency perspective, the network's lifetime is significantly improved.
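A compact, centralized sketch of the RSSI model and Gauss-Newton refinement described above (the thesis's version is distributed and consensus-based; the path-loss parameters, anchor positions, and measurements here are invented):

```python
import numpy as np

# Log-distance path-loss model: rssi_i = P0 - 10*alpha*log10(||x - a_i||).
# Gauss-Newton iterates a linearized least-squares fit around the current
# estimate; in the thesis this local search is done via consensus.
P0, ALPHA = -40.0, 3.0
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
rssi = np.array([-64.0, -59.5, -70.0])          # consistent with target (6, 2)

x = np.array([4.0, 4.0])                        # initial guess, e.g. from the
for _ in range(20):                             # suboptimal closed-form step
    d = np.linalg.norm(anchors - x, axis=1)
    r = rssi - (P0 - 10 * ALPHA * np.log10(d))  # residuals
    # Jacobian of the model wrt x: -10*alpha/ln(10) * (x - a_i) / d_i^2
    J = (-10 * ALPHA / np.log(10)) * (x - anchors) / d[:, None] ** 2
    x = x + np.linalg.lstsq(J, r, rcond=None)[0]  # GN update
print("estimated position:", x)                   # ~ [6, 2]
```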

Relevance: 10.00%

Abstract:

Several activities in service-oriented computing, such as automatic composition, monitoring, and adaptation, can benefit from knowing properties of a given service composition before executing it. Among these properties we focus on those related to execution cost and resource usage, in a broad sense, as they can be linked to QoS characteristics. In order to attain more accuracy, we formulate execution cost and resource usage as functions of the input data (or appropriate abstractions thereof) and show how these functions can be used to make better, more informed decisions when performing composition, adaptation, and proactive monitoring. We present an approach to, on the one hand, synthesize these functions automatically from the definitions of the different orchestrations taking part in a system and, on the other hand, use them effectively to reduce the overall costs of non-trivial service-based systems that are sensitive to input data and subject to failure. We validate our approach by means of simulations of scenarios requiring runtime selection of services and adaptation due to service failure. A number of rebinding strategies, including the use of cost functions, are compared.
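A minimal sketch of data-dependent cost functions driving a binding decision (the service names, cost models, and failure probabilities are invented; the paper synthesizes such functions automatically rather than writing them by hand):

```python
# Per-service cost as a function of an abstraction of the input data
# (here: number of items), combined with a failure probability.
services = {
    "svcA": {"cost": lambda n: 5.0 + 0.20 * n, "p_fail": 0.02},
    "svcB": {"cost": lambda n: 12.0 + 0.05 * n, "p_fail": 0.10},
}

def expected_cost(svc, n, retry_penalty=1.0):
    """Expected cost if a failure forces one retry (simplified model)."""
    c = svc["cost"](n)
    return c + svc["p_fail"] * (c + retry_penalty)

n_items = 200   # abstraction of the actual input
best = min(services, key=lambda s: expected_cost(services[s], n_items))
print(best, {s: round(expected_cost(services[s], n_items), 2) for s in services})
```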