965 results for mobility, wireless, QoS, VoIP, heterogeneous networks


Relevance:

20.00%

Publisher:

Abstract:

A large-area, multi-finger power SiGe HBT device (with an emitter area of about 880 μm²) was fabricated with 2 μm double-mesa technology. The maximum DC current gain β is 214. BV_CEO reaches 10 V and BV_CBO reaches 16 V with a collector doping concentration of 1×10¹⁷ cm⁻³ and a collector thickness of 400 nm. The device exhibits a maximum oscillation frequency f_max of 19.3 GHz and a cut-off frequency f_T of 18.0 GHz at a DC bias point of I_C = 30 mA and V_CE = 3 V. MSG (maximum stable gain) is 24.5 dB and U (Mason's unilateral gain) is 26.6 dB at 1 GHz. Due to the novel distribution layout, no notable current-gain fall-off or thermal effects are observed in the I-V characteristics at high collector current.

Relevance:

20.00%

Publisher:

Abstract:

A multi-finger power SiGe HBT device (with an emitter area of about 166 μm²) was fabricated with a very simple 2 μm double-mesa technology. The DC current gain β is 144.25. The B-C junction breakdown voltage reaches 9 V with a collector doping concentration of 1×10¹⁷ cm⁻³ and a collector thickness of 400 nm. Though our data are influenced by large additional RF probe pads, the device exhibits a maximum oscillation frequency f_max of 10.1 GHz and a cut-off frequency f_T of 1.8 GHz at a DC bias point of I_C = 10 mA and V_CE = 2.5 V.

Relevance:

20.00%

Publisher:

Abstract:

As the principal realization of the service-oriented computing paradigm, Web service technology has effectively improved the development efficiency of distributed applications in heterogeneous environments and reduced their development cost. Service discovery and selection, key techniques in the Web services stack, raise the degree of software reuse and thus further promote inter-enterprise application integration and large-scale resource sharing. On the one hand, as service computing technology develops, the number of Web services keeps growing; on the other hand, enterprises demand ever greater business agility. How to select suitable services for a user from a large pool of candidates, guaranteeing effective reuse and compatible composition, has therefore become a major challenge in service computing. Current Web service selection techniques lack effective support for non-functional properties, so selection precision is poor. To address this problem, this thesis adopts a QoS-aware service selection mechanism based on active monitoring feedback, and designs a QoS-aware selection algorithm on top of trusted monitoring and accurate prediction of service quality state, effectively improving the precision of service selection.

The thesis first studies the modeling of Web service quality (QoS) and designs a lightweight service metadata model for describing the QoS of Web services. On this basis it designs and implements a QoS constraint matching algorithm based on ordered data structures, and a service ranking algorithm that jointly considers the degree of functional matching, QoS assurance capability, past reputation and user preferences. While improving the precision and recall of service retrieval, this reduces the user's burden during selection and helps automate the selection process.

The thesis also designs a client-side monitoring feedback scheme that supplies more accurate QoS data for service selection through trusted monitoring and effective prediction of historical service state. It uses an AOP-based trusted monitoring scheme together with two prediction algorithms, one based on a low-pass filter and one based on adaptive least squares, to guarantee real-time, trusted, non-intrusive monitoring and accurate QoS prediction. This further improves selection precision while reducing user involvement and increasing automation.

Finally, the thesis discusses the design and implementation of the OnceSC service selection system, into which the above results are incorporated. Experimental evaluation of the system's functional and non-functional properties shows that OnceSC is QoS-aware, achieves high precision and recall in service selection, and requires no user involvement.
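
Since the abstract only names its two predictors, here is a minimal sketch of what the low-pass-filter style of QoS prediction could look like: exponential smoothing over client-monitored response times. The class and parameter names (LowPassQosPredictor, alpha) are illustrative assumptions, not taken from OnceSC.

```python
# Hypothetical sketch of a low-pass-filter QoS predictor, in the spirit of
# the monitoring-feedback scheme described above. Names and the smoothing
# constant are illustrative; they are not from the OnceSC implementation.

class LowPassQosPredictor:
    """Predicts the next value of a QoS metric (e.g. response time in ms)
    by exponentially smoothing client-side observations."""

    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha      # smoothing constant: small alpha = heavier filtering
        self.estimate = None    # current filtered estimate

    def observe(self, sample: float) -> float:
        """Feed one monitored sample; return the updated prediction."""
        if self.estimate is None:
            self.estimate = sample
        else:
            self.estimate = self.alpha * sample + (1 - self.alpha) * self.estimate
        return self.estimate

# Example: smooth a noisy series of monitored response times (one spike).
predictor = LowPassQosPredictor(alpha=0.2)
for rt in [120.0, 135.0, 512.0, 128.0, 122.0]:
    predicted = predictor.observe(rt)
print(f"predicted next response time: {predicted:.1f} ms")
```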

Relevance:

20.00%

Publisher:

Abstract:

IEEE Computer Society; IEEE Technical Committee on Simulation (TCSIM)

Relevance:

20.00%

Publisher:

Abstract:

To address the problems in Internet-based robot teleoperation, and drawing on the latest developments in Internet networking technology and the features and advantages of IP QoS techniques, this paper studies and designs a networked robot teleoperation system based on the Integrated Services (IntServ) architecture. An analysis of the feasibility and soundness of combining IP QoS and robot teleoperation techniques shows that the system can overcome the problems of current Internet-based teleoperation and can play a role in a future Internet that supports IP QoS. The paper presents a design prototype of the system and a method for implementing it.

Relevance:

20.00%

Publisher:

Abstract:

Internet-based robot teleoperation systems provide a good platform and opportunity for research at the intersection of networked control systems (NCS), multimedia communication and related disciplines. The concept of quality of service (QoS) originated in the multimedia and telecommunication fields, and the literature shows that research and applications combining NCS and QoS are still rare. This thesis analyzes the characteristics of Internet-based robot teleoperation systems as a cross-disciplinary area spanning NCS, Internet multimedia applications and real-time systems, and investigates in depth the synchronization and coordination problems of such systems and the architecture of QoS-aware networked robot teleoperation systems.

To address the shortcomings of existing Internet-based teleoperation systems, such as the lack of adaptation to available network bandwidth and the lack of coordination among multiple data streams, the thesis proposes AeQTA, an end-to-end QoS-adaptive architecture for networked robot teleoperation. Since Internet-wide QoS engineering has not yet been fully deployed, the goal of AeQTA is to move QoS methods and policies to the end systems as far as possible: providing QoS configuration interfaces on the end systems and implementing QoS-driven control and management policies, so as to reconcile maximum network efficiency, the best achievable application performance and a fair allocation of resources among traffic flows. The coordination and synchronization problems of Internet-based robot teleoperation are analyzed in terms of clock synchronization, rate control, congestion control, multi-sensor information synchronization and end-to-end scheduling. To quantify the synchronization tolerance of NCS, the thesis proposes the concept and definition of synchronization distance for multi-sensor feedback in NCS. Then, following the basic idea of formula-based, TCP-friendly rate control and combining it with a master-media-stream-based presentation synchronization method regulated by both application-level QoS and network QoS, it unifies multi-sensor information synchronization and rate control into a rate control method called TTFRC, which improves the real-time performance and TCP-friendliness of the system.

To provide a realistic experimental environment for research on Internet-based robot teleoperation and a validation platform for the related policies and algorithms, we built an open, flexible, portable, tailorable and low-cost MOMR (multi-operator multi-robot) prototype system. This prototype has already contributed substantially to in-depth theoretical research and the accumulation of practical experience with Internet-based teleoperation. On this basis, the Shenyang Institute of Automation of the Chinese Academy of Sciences and the Chinese University of Hong Kong jointly carried out, in January 2002, a three-site MOMR remote collaboration over the Internet among Shenyang, Hong Kong and Michigan. The integration of control engineering, computer network engineering and other disciplines is the guiding direction of this thesis.
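
TTFRC builds on formula-based, TCP-friendly rate control. For reference, here is a sketch of the standard TCP-friendly rate equation (as specified for TFRC in RFC 5348) that such schemes use to cap a flow's sending rate; TTFRC's own multi-sensor extensions are not reproduced, and the function name is illustrative.

```python
import math

def tcp_friendly_rate(s, rtt, p, t_rto=None, b=1):
    """Upper bound on sending rate (bytes/s) for a TCP-friendly flow.

    s     -- packet size in bytes
    rtt   -- round-trip time in seconds
    p     -- loss event rate (0 < p <= 1)
    t_rto -- retransmission timeout, conventionally 4 * rtt
    b     -- packets acknowledged per ACK

    This is the TFRC throughput equation; schemes like the TTFRC method
    described above layer stream coordination on top of such a rate cap.
    """
    if t_rto is None:
        t_rto = 4 * rtt
    denom = (rtt * math.sqrt(2 * b * p / 3)
             + t_rto * 3 * math.sqrt(3 * b * p / 8) * p * (1 + 32 * p ** 2))
    return s / denom

# Example: 1000-byte packets, 100 ms RTT, 1% loss -> roughly 112 KB/s.
print(f"{tcp_friendly_rate(1000, 0.1, 0.01):.0f} bytes/s")
```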

Relevance:

20.00%

Publisher:

Abstract:

The identification of subject-specific traits extracted from patterns of brain activity still represents an important challenge. The need to detect distinctive brain features, which is relevant for biometric and brain-computer interface systems, has also been emphasized in monitoring the effect of clinical treatments and in evaluating the progression of brain disorders. Graph theory and network science tools have revealed fundamental mechanisms of functional brain organization in resting-state M/EEG analysis. Nevertheless, it is still not clearly understood how several methodological aspects may bias the topology of the reconstructed functional networks. In this context, the literature shows inconsistency in the chosen length of the selected epochs, impeding a meaningful comparison between results from different studies. In this study we propose an approach which aims to investigate the existence of a distinctive functional core (sub-network) using an unbiased reconstruction of network topology. Brain signals from a public and freely available EEG dataset were analyzed using a phase-synchronization-based measure, the minimum spanning tree and k-core decomposition. The analysis was performed for each classical brain rhythm separately. Furthermore, we aim to provide a network approach insensitive to the effects that epoch length has on functional connectivity (FC) and network reconstruction. Two different measures, the phase lag index (PLI) and the amplitude envelope correlation (AEC), were applied to EEG resting-state recordings for a group of eighteen healthy volunteers. The weighted clustering coefficient (CCw), weighted characteristic path length (Lw) and minimum spanning tree (MST) parameters were computed to evaluate the network topology. The analysis was performed on both scalp and source-space data. Results for the distinctive functional core show the highest classification rates from k-core decomposition in the gamma (EER=0.130, AUC=0.943) and high beta (EER=0.172, AUC=0.905) frequency bands. Results from the scalp analysis concerning the influence of epoch length show a decrease in both mean PLI and AEC values with an increase in epoch length, with a tendency to stabilize at a length of 12 seconds for PLI and 6 seconds for AEC. Moreover, CCw and Lw show very similar behaviour, with the metrics based on AEC more reliable in terms of stability. In general, MST parameters stabilize at short epoch lengths, particularly for MSTs based on PLI (1-6 seconds versus 4-8 seconds for AEC). At the source level the results were even more reliable, with stability already at a duration of 1 second for PLI-based MSTs. Our results confirm that EEG analysis may represent an effective tool to identify subject-specific characteristics that may be of great impact for several bioengineering applications. Regarding epoch length, the present work suggests that both PLI and AEC depend on epoch length, and that this has an impact on the reconstructed network topology, particularly at the scalp level. Source-level MST topology is less sensitive to differences in epoch length, therefore enabling the comparison of brain network topology between different studies.
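
For reference, here is a minimal sketch of the phase lag index computation named above, following its standard definition; the signal data and variable names are illustrative, not taken from the study's dataset.

```python
import numpy as np
from scipy.signal import hilbert

def phase_lag_index(x: np.ndarray, y: np.ndarray) -> float:
    """Phase Lag Index between two equal-length signals.

    PLI = | mean( sign( sin(phase_x - phase_y) ) ) |
    It discounts zero-lag (volume-conduction) coupling, since phase
    differences of exactly 0 or pi contribute sign 0 on average.
    """
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return float(np.abs(np.mean(np.sign(np.sin(phase_x - phase_y)))))

# Example: two noisy 10 Hz signals with a consistent, non-zero phase lag.
t = np.linspace(0, 2, 512)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 10 * t - np.pi / 4) + 0.1 * np.random.randn(t.size)
print(f"PLI = {phase_lag_index(x, y):.2f}")   # close to 1 for a stable lag
```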

Relevance:

20.00%

Publisher:

Abstract:

Current research on Internet-based distributed systems emphasizes the scalability of overlay topologies for efficient search and retrieval of data items, as well as routing amongst peers. However, most existing approaches fail to address the transport of data across these logical networks in accordance with quality of service (QoS) constraints. Consequently, this paper investigates the use of scalable overlay topologies for routing real-time media streams between publishers and potentially many thousands of subscribers. Specifically, we analyze the costs of using k-ary n-cubes for QoS-constrained routing. Given a number of nodes in a distributed system, we calculate the optimal k-ary n-cube structure for minimizing the average distance between any pair of nodes. Using this structure, we describe a greedy algorithm that selects paths between nodes in accordance with the real-time delays along physical links. We show this method improves the routing latencies by as much as 67%, compared to approaches that do not consider physical link costs. We are in the process of developing a method for adaptive node placement in the overlay topology, based upon the locations of publishers, subscribers, physical link costs and per-subscriber QoS constraints. One such method for repositioning nodes in logical space is discussed, to improve the likelihood of meeting service requirements on data routed between publishers and subscribers. Future work will evaluate the benefits of such techniques more thoroughly.
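
As a rough illustration of the structural optimization described above, the sketch below brute-forces the (k, n) pair minimizing average inter-node distance, assuming the common approximation that a symmetric k-ary n-cube torus has an average distance of about n·k/4 hops; the paper's exact cost model may differ, and the function name is an assumption.

```python
def best_kary_ncube(num_nodes: int, k_max: int = 64):
    """Find the k-ary n-cube (k >= 2) able to host `num_nodes` nodes
    (k**n >= num_nodes) with the smallest average inter-node hop
    distance, approximated as n * k / 4 for a torus with wraparound
    links. Returns (k, n, avg_distance)."""
    best = None
    for k in range(2, k_max + 1):
        # smallest n with k**n >= num_nodes; integer arithmetic avoids
        # floating-point log pitfalls
        n, capacity = 1, k
        while capacity < num_nodes:
            n += 1
            capacity *= k
        avg = n * k / 4.0   # ~k/4 average hops per dimension
        if best is None or avg < best[2]:
            best = (k, n, avg)
    return best

# Example: an overlay of ~4096 subscribers.
k, n, d = best_kary_ncube(4096)
print(f"{k}-ary {n}-cube, average distance ~ {d:.1f} hops")
```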

Relevance:

20.00%

Publisher:

Abstract:

(This Technical Report revises TR-BUCS-2003-011.) The Transmission Control Protocol (TCP) has been the protocol of choice for many Internet applications requiring reliable connections. The design of TCP has been challenged by the extension of connections over wireless links. In this paper, we investigate a Bayesian approach to infer at the source host the reason for a packet loss, whether congestion or wireless transmission error. Our approach is "mostly" end-to-end since it requires only one long-term average quantity (namely, the long-term average packet loss probability over the wireless segment) that may be best obtained with help from the network (e.g. the wireless access agent). Specifically, we use Maximum Likelihood Ratio tests to evaluate TCP as a classifier of the type of packet loss. We study the effectiveness of short-term classification of packet errors (congestion vs. wireless), given stationary prior error probabilities and distributions of packet delays conditioned on the type of packet loss (measured over a larger time scale). Using our Bayesian-based approach and extensive simulations, we demonstrate that congestion-induced losses and losses due to wireless transmission errors produce sufficiently different statistics upon which an efficient online error classifier can be built. We introduce a simple queueing model to illustrate the conditional delay distributions arising from different kinds of packet losses over a heterogeneous wired/wireless path. We show how Hidden Markov Models (HMMs) can be used by a TCP connection to infer conditional delay distributions efficiently. We demonstrate how estimation accuracy is influenced by different proportions of congestion versus wireless losses and by penalties on incorrect classification.
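
A toy sketch of the likelihood-ratio classification step described above, assuming the conditional delay distributions and prior loss probabilities have already been estimated (in the paper this is done with HMMs); the Gaussian delay models and all numbers below are purely illustrative.

```python
import math

def classify_loss(delay: float,
                  mu_c: float, sigma_c: float,    # delay stats given congestion loss
                  mu_w: float, sigma_w: float,    # delay stats given wireless loss
                  prior_congestion: float) -> str:
    """Bayes / likelihood-ratio test on the delay observed around a loss:
    decide 'congestion' if
        P(delay | congestion) * P(congestion) >
        P(delay | wireless)   * P(wireless).
    """
    def gaussian_pdf(x, mu, sigma):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    ratio = (gaussian_pdf(delay, mu_c, sigma_c) * prior_congestion) / \
            (gaussian_pdf(delay, mu_w, sigma_w) * (1.0 - prior_congestion))
    return "congestion" if ratio > 1.0 else "wireless"

# Example: congestion losses tend to follow inflated round-trip delays.
print(classify_loss(delay=0.240, mu_c=0.250, sigma_c=0.030,
                    mu_w=0.120, sigma_w=0.025, prior_congestion=0.3))
```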

Relevance:

20.00%

Publisher:

Abstract:

We study the impact of heterogeneity of nodes, in terms of their energy, in wireless sensor networks that are hierarchically clustered. In these networks some of the nodes become cluster heads, aggregate the data of their cluster members and transmit it to the sink. We assume that a percentage of the population of sensor nodes is equipped with additional energy resources; this is a source of heterogeneity which may result from the initial setting or arise as the operation of the network evolves. We also assume that the sensors are randomly (uniformly) distributed and are not mobile, and that the coordinates of the sink and the dimensions of the sensor field are known. We show that the behavior of such sensor networks becomes very unstable once the first node dies, especially in the presence of node heterogeneity. Classical clustering protocols assume that all the nodes are equipped with the same amount of energy and, as a result, they cannot take full advantage of the presence of node heterogeneity. We propose SEP, a heterogeneity-aware protocol that prolongs the time interval before the death of the first node (which we refer to as the stability period), which is crucial for many applications where the feedback from the sensor network must be reliable. SEP is based on weighted election probabilities of each node to become cluster head according to the remaining energy in each node. We show by simulation that SEP always prolongs the stability period compared to current clustering protocols, and that it also yields higher average throughput. We conclude by studying the sensitivity of our SEP protocol to heterogeneity parameters capturing energy imbalance in the network. We find that SEP yields a longer stability period for higher values of extra energy brought by the more powerful nodes.
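
A sketch of weighted cluster-head election in the spirit of SEP, where a fraction m of "advanced" nodes carries a times extra energy and is therefore elected proportionally more often; the rotating-epoch threshold is the LEACH-style rule such protocols build on, and the exact weighting shown here is an assumption, not a verbatim transcription of the protocol.

```python
import random

def election_probabilities(p_opt: float, m: float, a: float):
    """Weighted election probabilities: advanced nodes (fraction m, a times
    extra energy) elect themselves more often, while the expected number of
    cluster heads per round stays p_opt * N."""
    p_normal = p_opt / (1 + a * m)
    p_advanced = p_opt * (1 + a) / (1 + a * m)
    return p_normal, p_advanced

def becomes_cluster_head(p: float, round_index: int, eligible: bool) -> bool:
    """LEACH-style rotating threshold: each node may serve once per epoch of
    ~1/p rounds; the threshold rises as the epoch progresses."""
    if not eligible:
        return False
    epoch = int(round(1 / p))
    threshold = p / (1 - p * (round_index % epoch))
    return random.random() < threshold

# Example: 10% desired heads, 20% advanced nodes with 3x extra energy.
p_nrm, p_adv = election_probabilities(p_opt=0.1, m=0.2, a=3.0)
print(f"normal: {p_nrm:.3f}, advanced: {p_adv:.3f}")
# Weighted average 0.8*0.0625 + 0.2*0.25 = 0.1, preserving p_opt.
```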

Relevance:

20.00%

Publisher:

Abstract:

Wireless sensor networks have recently emerged as enablers of important applications such as environmental, chemical and nuclear sensing systems. Such applications have sophisticated spatial-temporal semantics that set them apart from traditional wireless networks. For example, the computation of temperature averaged over the sensor field must take into account local densities. This is crucial since otherwise the estimated average temperature can be biased by over-sampling areas where many more sensors exist. Thus, we envision that a fundamental service a wireless sensor network should provide is that of estimating local densities. In this paper, we propose a lightweight probabilistic density inference protocol, which we call DIP, that allows each sensor node to implicitly estimate its neighborhood size without the explicit exchange of node identifiers required by existing density discovery schemes. The theoretical basis of DIP is a probabilistic analysis which gives the relationship between the number of sensor nodes contending in the neighborhood of a node and the level of contention measured by that node. Extensive simulations confirm the premise of DIP: it can provide statistically reliable and accurate estimates of local density at a very low energy cost and constant running time. We demonstrate how applications could be built on top of our DIP-based service by computing density-unbiased statistics from the estimated local densities.
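
A sketch of the kind of probabilistic inversion such an analysis suggests: if each of n neighbors transmits in a slot independently with probability p, a listener hears the slot busy with probability 1 − (1 − p)^n, which can be inverted to estimate n from the observed busy fraction. The slotted model and names below are illustrative assumptions, not DIP's exact analysis.

```python
import math

def estimate_neighborhood_size(busy_fraction: float, p_tx: float) -> float:
    """Invert Pr[slot busy] = 1 - (1 - p_tx)**n to estimate the number of
    contending neighbors n from the fraction of slots a node heard busy."""
    if not 0.0 < busy_fraction < 1.0:
        raise ValueError("busy fraction must be strictly between 0 and 1")
    return math.log(1.0 - busy_fraction) / math.log(1.0 - p_tx)

# Example: 40% of slots heard busy, each neighbor transmitting with p = 0.05
# -> roughly 10 contending neighbors.
n_hat = estimate_neighborhood_size(busy_fraction=0.40, p_tx=0.05)
print(f"estimated neighbors: {n_hat:.1f}")
```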