22 results for HPCC


Relevance:

20.00%

Publisher:

Abstract:

Two HPCC benchmark runs were carried out on an IBM JS21 Blade Center. This paper presents methods for analysing HPCC results and applies the layered model AHPCC to analyse the HPCC test results. The goal is to demonstrate, by running the HPCC benchmark on a high-performance cluster, HPCC's ability to evaluate and diagnose cluster systems. The experiments show that, in a situation where earlier HPL results had remained unsatisfactory and the underlying problems could not be identified or resolved, the HPCC benchmark was able to evaluate the system and diagnose its problems effectively. The layered-model evaluation yields additional performance parameters of the target system and reveals possible performance bottlenecks, accumulating valuable experience for system design and construction.
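The abstract does not describe the layered model's structure or weights. The sketch below is only a minimal illustration, using hypothetical measured results, reference values, layer groupings and weights, of how HPCC sub-benchmark results might be normalised and rolled up into a single layered score that points at a bottleneck layer.

```python
# Minimal sketch: rolling HPCC sub-benchmark results up into one layered score.
# The values, reference peaks, two-layer grouping and weights below are
# hypothetical illustrations, not the AHPCC model described in the abstract.

measured = {              # results from an HPCC run (hypothetical values)
    "HPL_Tflops": 0.35, "STREAM_Triad_GBs": 4.2, "PTRANS_GBs": 1.1,
    "RandomAccess_GUPs": 0.02, "FFT_Gflops": 1.8, "Bandwidth_GBs": 0.9,
}
reference = {             # per-metric reference (e.g. theoretical peak) used to normalise
    "HPL_Tflops": 0.45, "STREAM_Triad_GBs": 5.0, "PTRANS_GBs": 1.5,
    "RandomAccess_GUPs": 0.05, "FFT_Gflops": 2.5, "Bandwidth_GBs": 1.2,
}

# Two hypothetical layers: compute-bound metrics and memory/network-bound metrics.
layers = {
    "compute": {"HPL_Tflops": 0.6, "FFT_Gflops": 0.4},
    "memory_network": {"STREAM_Triad_GBs": 0.4, "PTRANS_GBs": 0.2,
                       "RandomAccess_GUPs": 0.2, "Bandwidth_GBs": 0.2},
}
layer_weights = {"compute": 0.5, "memory_network": 0.5}

def layer_score(metrics):
    """Weighted sum of efficiencies (measured / reference) for one layer."""
    return sum(w * measured[m] / reference[m] for m, w in metrics.items())

scores = {name: layer_score(metrics) for name, metrics in layers.items()}
overall = sum(layer_weights[name] * s for name, s in scores.items())

for name, s in scores.items():
    print(f"{name:16s} efficiency: {s:.2f}")   # the lower layer hints at a bottleneck
print(f"overall layered score:    {overall:.2f}")
```

In an AHP-style hierarchy the weights would normally be derived from pairwise comparisons rather than fixed by hand; the abstract does not give those details.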

Relevance:

10.00%

Publisher:

Abstract:

Continuous biometric authentication schemes (CBAS) are built around biometrics derived from users' behavioural characteristics and continuously check the identity of the user throughout the session. The current literature on CBAS focuses primarily on the accuracy of the system in order to reduce false alarms. However, these attempts do not consider various issues that might affect practicality in real-world applications and continuous-authentication scenarios. One of the main issues is that existing CBAS rely on several samples of training data, either from both intruders and valid users or from the valid users' profiles only. This means that historical profiles of the legitimate users, or of possible attackers, must be available or collected before prediction time. However, in some cases it is impractical to obtain the biometric data of the user in advance (before detection time). Another issue is the variability of the user's behaviour between the registered profile obtained during enrollment and the profile observed during the testing phase. The aim of this paper is to identify the limitations of current CBAS in order to make them more practical for real-world applications. The paper also discusses a new application for CBAS that does not require any training data, either from intruders or from valid users.

Relevance:

10.00%

Publisher:

Abstract:

The term “cloud computing” has emerged as a major ICT trend and has been acknowledged by respected industry survey organizations as a key technology and market-development theme for the industry and ICT users in 2010. However, one of the major challenges facing the cloud computing concept and its global acceptance is how to secure and protect the data and processes that are the property of the user. The security of the cloud computing environment is a new research area requiring further development by both the academic and industrial research communities. Today, there are many diverse and uncoordinated efforts underway to address security issues in cloud computing, especially identity management issues. This paper introduces an architecture for a new approach to the necessary “mutual protection” in the cloud computing environment, based upon a concept of mutual trust and the specification of definable profiles in vector-matrix form. The architecture aims to achieve better, more generic and flexible authentication, authorization and control, based on a concept of mutuality, within that cloud computing environment.

Relevance:

10.00%

Publisher:

Abstract:

Popular wireless network standards, such as IEEE 802.11/15/16, are increasingly adopted in real-time control systems. However, they are not designed for real-time applications. Therefore, the performance of such wireless networks needs to be carefully evaluated before the systems are implemented and deployed. While efforts have been made to model general wireless networks with completely random traffic generation, there is a lack of theoretical investigations into the modelling of wireless networks with periodic real-time traffic. Considering the widely used IEEE 802.11 standard, with the focus on its distributed coordination function (DCF), for soft-real-time control applications, this paper develops an analytical Markov model to quantitatively evaluate the network quality-of-service (QoS) performance in periodic real-time traffic environments. Performance indices to be evaluated include throughput capacity, transmission delay and packet loss ratio, which are crucial for real-time QoS guarantee in real-time control applications. They are derived under the critical real-time traffic condition, which is formally defined in this paper to characterize the marginal satisfaction of real-time performance constraints.
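The abstract does not reproduce the model's equations, and the paper's Markov chain is built for periodic real-time traffic rather than saturated random traffic. Purely as an illustration of the kind of DCF fixed-point analysis involved, the sketch below iterates the classic Bianchi-style equations for the per-slot transmission probability tau and the conditional collision probability p under saturation; the station count and contention-window parameters are hypothetical.

```python
# Illustration only: a Bianchi-style saturation analysis of 802.11 DCF, not the
# periodic real-time traffic model described in the abstract. Parameters are
# hypothetical: n stations, minimum contention window W, maximum backoff stage m.

def dcf_fixed_point(n=10, W=32, m=5, iters=500):
    """Iterate (with damping) the coupled equations for the per-slot transmission
    probability tau and the conditional collision probability p."""
    tau = 0.1
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)      # collision if any other station transmits
        tau_new = (2.0 * (1.0 - 2.0 * p)) / (
            (1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** m)
        )
        tau = 0.5 * tau + 0.5 * tau_new        # damped update for stable convergence
    return tau, p

n = 10
tau, p = dcf_fixed_point(n=n)
p_tr = 1.0 - (1.0 - tau) ** n                  # prob. that a slot contains a transmission
p_s = n * tau * (1.0 - tau) ** (n - 1) / p_tr  # prob. that such a transmission succeeds
print(f"tau = {tau:.4f}, collision prob p = {p:.4f}, success prob = {p_s:.4f}")
```

From quantities like these, saturation throughput and delay expressions can be built; the paper's model instead derives throughput capacity, transmission delay and packet loss ratio under the critical real-time traffic condition it defines.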

Relevance:

10.00%

Publisher:

Abstract:

Researchers have long worked to make full use of computer resources so that software can run as close as possible to a machine's peak performance. One approach is to develop excellent compilers and use compiler techniques to optimize software performance; as a complement, another approach is to develop reusable core software packages, so that improving the performance of the routines in those packages improves the performance of all software that calls the core libraries. In scientific computing, however, tuning performance for a specific computer platform and a specific user problem remains difficult, and speed and portability are a pair of conflicting goals in numerical software development. Adaptive performance-optimization techniques were proposed precisely to address the portability problem of numerical software and to automate performance tuning: the aim is for numerical software to dynamically learn about changes in the computing environment and the characteristics of the problem to be solved, adapt itself as needed to these changes and to a variety of complex problem settings, and decide among multiple solution methods to select the best one for the problem at hand. This thesis surveys several well-known software packages that use adaptive performance optimization: ATLAS, SPARSITY, OSKI, and the fast Fourier transform package FFTW from the digital signal processing domain. On that basis, it analyses and compares the key techniques of adaptive performance optimization, focusing on how empirical search, algorithm selection and automatic code generation are applied in these packages to matrix multiplication, MPI communication operations and the fast Fourier transform. Building on this survey and on practical application requirements, the thesis also proposes a new evaluation metric for the adaptive optimization process that attempts to balance optimization benefit against optimization time cost, and evaluates the ATLAS tuning process experimentally on different platforms; the experiments show that considering optimization benefit and optimization cost together effectively reveals the characteristics of the ATLAS adaptive optimization process, and that applying this in real development and tuning can save optimization time without sacrificing performance. Finally, the thesis runs comparative tests with the HPCC benchmark suite on an IBM blade cluster, identifies a performance bottleneck in the test platform and eliminates it, showing that the HPCC suite can indeed effectively uncover performance bottlenecks in the platform under test.
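As a concrete illustration of the empirical-search idea behind packages such as ATLAS, the sketch below times a blocked matrix multiply for a few candidate block sizes and keeps the fastest one. The pure-Python kernel, candidate set and problem size are hypothetical; a real autotuner searches a far larger parameter space and generates specialised code.

```python
# Minimal sketch of empirical search, the core idea behind ATLAS-style autotuning:
# time a blocked matrix multiply for several candidate block sizes and keep the
# fastest. The kernel, candidate set and problem size are hypothetical.

import random
import time

N = 128
A = [[random.random() for _ in range(N)] for _ in range(N)]
B = [[random.random() for _ in range(N)] for _ in range(N)]

def blocked_matmul(A, B, bs):
    """Blocked matrix multiply with block size bs (for illustration, not speed)."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, bs):
        for kk in range(0, n, bs):
            for i in range(ii, min(ii + bs, n)):
                Ai, Ci = A[i], C[i]
                for k in range(kk, min(kk + bs, n)):
                    a, Bk = Ai[k], B[k]
                    for j in range(n):
                        Ci[j] += a * Bk[j]
    return C

best_bs, best_t = None, float("inf")
for bs in (16, 32, 64):                       # candidate block sizes searched empirically
    t0 = time.perf_counter()
    blocked_matmul(A, B, bs)
    elapsed = time.perf_counter() - t0
    print(f"block size {bs:2d}: {elapsed:.3f} s")
    if elapsed < best_t:
        best_bs, best_t = bs, elapsed

print(f"selected block size: {best_bs}")      # an autotuner would persist this choice
```

The thesis's proposed metric, which weighs optimization benefit against optimization time, would correspond here to trading off the improvement in best_t against the total time spent in the search loop.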

Relevance:

10.00%

Publisher:

Abstract:

The use of multicores is becoming widespread in the field of embedded systems, many of which have real-time requirements. Hence, ensuring that real-time applications meet their timing constraints is a prerequisite before deploying them on these systems. This necessitates considering the impact of contention for shared low-level hardware resources, such as the front-side bus (FSB), on the Worst-Case Execution Time (WCET) of the tasks. Towards this aim, this paper proposes a method to determine an upper bound on the number of bus requests that tasks executing on a core can generate in a given time interval. We show that our method yields tighter upper bounds than the state of the art. We then apply our method to compute the extra contention delay incurred by tasks when they are co-scheduled on different cores and access the shared main memory over a shared bus, access to which is granted using a round-robin (RR) arbitration protocol.
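The paper's actual request-bound and delay analysis is not reproduced in the abstract. The sketch below only illustrates, under simplifying assumptions (a fixed worst-case bus service time per request and hypothetical request bounds), the coarse structure of a round-robin contention bound in which each of a task's own requests can wait for at most one pending request from each other core.

```python
# Coarse sketch of a round-robin bus contention bound. This is not the paper's
# analysis; it only illustrates the structure of such a bound. All numbers are
# hypothetical, and every request is assumed to occupy the bus for L_BUS cycles.

L_BUS = 40          # worst-case bus service time of one request (cycles), hypothetical

def rr_contention_delay(own_requests, other_core_requests):
    """Upper-bound the extra delay a task suffers from bus contention.

    Under round-robin arbitration, each of the task's own requests waits for at
    most one pending request per other core, and no core can interfere with more
    requests than it can itself issue in the interval.
    """
    delay = 0
    for reqs in other_core_requests:
        # interference from this core is capped both per own request and by its own bound
        delay += min(own_requests, reqs) * L_BUS
    return delay

# Example: a task issuing at most 120 requests in the interval, co-scheduled with
# three other cores whose request bounds for the same interval are 80, 200 and 50.
print(rr_contention_delay(120, [80, 200, 50]), "cycles of extra contention delay")
```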

Relevance:

10.00%

Publisher:

Abstract:

Knowing exactly where a mobile entity is and monitoring its trajectory in real time has recently attracted a lot of interest from both the academic and industrial communities, owing to the large number of applications it enables; nevertheless, it remains one of the most challenging problems from scientific and technological standpoints. In this work we propose a tracking system based on the fusion of position estimates provided by different sources, which are combined to obtain a final estimate with better accuracy than that produced by each system individually. In particular, exploiting the availability of a Wireless Sensor Network as an infrastructure, a mobile entity equipped with an inertial system first obtains position estimates using both a Kalman Filter and a fully distributed positioning algorithm (the Enhanced Steepest Descent, which we recently proposed), and then combines the results using the Simple Convex Combination algorithm. Simulation results clearly show good performance in terms of the final accuracy achieved. Finally, the proposed technique is validated against real data taken from an inertial sensor provided by THALES ITALIA.
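The abstract does not give the exact weighting used by the Simple Convex Combination algorithm. The sketch below simply combines the Kalman-filter estimate and the Enhanced Steepest Descent estimate with weights that sum to one and are inversely proportional to assumed (hypothetical) error variances.

```python
# Minimal sketch of fusing two position estimates by a convex combination, with
# weights inversely proportional to each estimate's assumed error variance.
# Positions and variances are hypothetical; this is not necessarily the paper's
# exact Simple Convex Combination formulation.

def fuse(p_kf, var_kf, p_esd, var_esd):
    """Convex combination of a Kalman-filter estimate and an ESD estimate (2-D)."""
    w_kf = (1.0 / var_kf) / (1.0 / var_kf + 1.0 / var_esd)
    w_esd = 1.0 - w_kf                      # weights sum to 1 (convexity)
    return tuple(w_kf * a + w_esd * b for a, b in zip(p_kf, p_esd))

# Kalman-filter (inertial) estimate vs. Enhanced Steepest Descent (WSN) estimate
p_fused = fuse(p_kf=(12.3, 4.1), var_kf=0.8, p_esd=(11.7, 4.6), var_esd=1.5)
print(p_fused)    # lies between the two inputs, closer to the lower-variance one
```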

Relevance:

10.00%

Publisher:

Abstract:

Structured data represented in the form of graphs arises in several fields of science, and the growing amount of available data makes distributed graph-mining techniques particularly relevant. In this paper, we present a distributed approach to the frequent subgraph mining problem for discovering interesting patterns in molecular compounds. The problem is characterized by a highly irregular search tree, for which no reliable workload prediction is available. We describe the three main aspects of the proposed distributed algorithm, namely a dynamic partitioning of the search space, a distribution process based on a peer-to-peer communication framework, and a novel receiver-initiated load-balancing algorithm. The effectiveness of the distributed method has been evaluated on the well-known National Cancer Institute’s HIV-screening dataset, where the approach attains close-to-linear speedup in a network of workstations.
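The abstract does not detail the receiver-initiated protocol. The toy sketch below only illustrates the general idea, with hypothetical work items standing in for unexplored search-tree nodes: an idle worker asks a randomly chosen busy peer for half of its pending work.

```python
# Toy sketch of receiver-initiated load balancing: an idle worker requests part
# of a random peer's unexplored search-tree nodes. This is a simplification of
# the paper's peer-to-peer scheme, with hypothetical work items.

import random
from collections import deque

random.seed(1)
workers = [deque() for _ in range(4)]
workers[0].extend(range(20))                 # initially all work sits on one worker

def steal(idle_id):
    """Idle worker idle_id requests half of a random busy peer's pending work."""
    busy = [i for i, q in enumerate(workers) if i != idle_id and len(q) > 1]
    if not busy:
        return False
    victim = random.choice(busy)
    for _ in range(len(workers[victim]) // 2):
        workers[idle_id].append(workers[victim].pop())   # transfer from the victim's tail
    return True

done = 0
while done < 20:
    for i, q in enumerate(workers):
        if not q and steal(i):
            continue                         # work arrives; process it on the next pass
        if q:
            q.popleft()                      # "expand" one search-tree node
            done += 1

print("all 20 nodes processed; final queue sizes:", [len(q) for q in workers])
```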

Relevance:

10.00%

Publisher:

Abstract:

The establishment of the Housing and Property Directorate (HPD) and Claims Commission (HPCC) in Kosovo has reflected an increasing focus internationally on the post-conflict restitution of housing and property rights. In approximately three years of full-scale operation, the institutions have managed to make property rights determinations on almost all of the approximately 30,000 contested residential properties. As such, HPD and HPCC are being looked to by many in other post-conflict areas as an example of how to proceed. While the efficiency of the organizations is commendable, one of the key original goals - the return of displaced persons to their homes of origin - has to a large degree been left aside. The paper focuses on two distinct failures of the international community with respect to the functioning of HPD/HPCC and their possible effect on returns: a failure of coordination between HPD/HPCC and other organizations working on returns, and the isolation of residential property rights determinations from other aspects of building a property-rights-respecting culture in Kosovo.

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we present some practical experience in implementing an alert fusion mechanism from our project. After investigating most of the existing alert fusion systems, we found that the current body of work is either weighed down by insecure design or rarely deployed because of its complexity. As confirmed by our experimental analysis, unsuitable mechanisms can easily be submerged by an abundance of useless alerts. Even with methods that achieve a high fusion rate and low false positives, attacks are still possible. To find a solution, we analysed a series of alerts generated from well-known datasets as well as realistic alerts from the Australian Honey-Pot. One important finding is that an alert has a more than 85% chance of being fused with one of the following 5 alerts. Of particular importance is our design of a novel lightweight Cache-based Alert Fusion Scheme, called CAFS. CAFS not only reduces the quantity of useless alerts generated by an IDS (Intrusion Detection System), but also enhances the accuracy of alerts, thereby greatly reducing the cost of fusion processing. We also present reasonable and practical specifications for a target-oriented fusion policy that provides a quality guarantee on alert fusion and, as a result, seamlessly supports the process of successive correlation. Our experimental results show that CAFS easily attains the desired level of survivable, inescapable alert fusion. Furthermore, as a lightweight scheme, CAFS can easily be deployed and excels at large volumes of alert fusion, which improves the utilization of system resources. To the best of our knowledge, our work is a novel exploration of these problems from a survivable, inescapable and deployable point of view.
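The exact CAFS data structures are not given in the abstract. The sketch below is only a minimal cache-based fusion step motivated by the reported observation that an alert is very likely to fuse with one of the next few alerts: a new alert is folded into a matching alert held in a small cache of recent alerts, using a hypothetical fusion key of (source, destination, signature).

```python
# Minimal sketch of a cache-based alert fusion step. The fusion key and cache
# size are hypothetical, not the exact CAFS design.

from collections import deque

CACHE_SIZE = 5                      # keep only the most recent alerts (cf. the 5-alert window)
cache = deque(maxlen=CACHE_SIZE)

def fuse_alert(alert):
    """Return True if the alert fuses with a recently cached one, else cache it."""
    key = (alert["src"], alert["dst"], alert["sig"])
    for cached in cache:
        if (cached["src"], cached["dst"], cached["sig"]) == key:
            cached["count"] += 1    # fold the duplicate into the cached alert
            return True
    alert["count"] = 1
    cache.append(alert)
    return False

alerts = [
    {"src": "10.0.0.5", "dst": "10.0.0.9", "sig": "SCAN"},
    {"src": "10.0.0.5", "dst": "10.0.0.9", "sig": "SCAN"},   # fused with the first
    {"src": "10.0.0.7", "dst": "10.0.0.9", "sig": "DOS"},
]
print([fuse_alert(a) for a in alerts])      # [False, True, False]
```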