983 results for Trusted computing platform


Relevance: 90.00%

Publisher:

Abstract:

Background: Digital pathology provides a digital environment for the management and interpretation of pathological images and associated data. It is becoming increasingly popular to use modern computer-based tools and applications in pathology education, tissue-based research and clinical diagnosis. Uptake of this new technology is stymied by its single-user orientation and its reliance on a cumbersome combination of mouse and keyboard for navigation and annotation.

Methodology: In this study we developed SurfaceSlide, a dedicated viewing platform which enables the navigation and annotation of gigapixel digitised pathological images using fingertip touch. SurfaceSlide was developed using the Microsoft Surface, a 30-inch multi-touch tabletop computing platform. SurfaceSlide users can perform direct panning and zooming operations on digitised slide images. These images are downloaded onto the Microsoft Surface platform from a remote server on demand. Users can also draw annotations and key in text using an on-screen virtual keyboard. We also developed a smart caching protocol which caches the regions surrounding the current field of view at multiple resolutions, providing a smooth and responsive user experience and reducing the delay of downloading images over the internet. We compared the usability of SurfaceSlide against the Aperio ImageScope and PathXL online viewers.
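The abstract does not spell out the caching protocol, but the idea of prefetching the regions surrounding the current field of view at several resolutions can be sketched as below. This is a minimal illustration only; the names TileCache and tiles_to_prefetch are hypothetical and not part of SurfaceSlide.

```python
from collections import OrderedDict

class TileCache:
    """Minimal LRU cache for image tiles keyed by (zoom, col, row)."""
    def __init__(self, capacity=512):
        self.capacity = capacity
        self._tiles = OrderedDict()

    def get(self, key):
        if key in self._tiles:
            self._tiles.move_to_end(key)          # mark as recently used
            return self._tiles[key]
        return None

    def put(self, key, data):
        self._tiles[key] = data
        self._tiles.move_to_end(key)
        if len(self._tiles) > self.capacity:
            self._tiles.popitem(last=False)       # evict least recently used

def tiles_to_prefetch(zoom, col, row, margin=1):
    """Tiles worth caching around the current field of view: a ring of
    neighbours at the current resolution, the parent tile one level
    coarser, and the four child tiles one level finer."""
    wanted = set()
    for dc in range(-margin, margin + 1):
        for dr in range(-margin, margin + 1):
            wanted.add((zoom, col + dc, row + dr))
    wanted.add((zoom - 1, col // 2, row // 2))               # zoom-out candidate
    for dc in (0, 1):
        for dr in (0, 1):
            wanted.add((zoom + 1, 2 * col + dc, 2 * row + dr))  # zoom-in candidates
    return wanted
```

A background thread could fetch any of these tiles not already in the cache, so that panning and zooming rarely wait on the network.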

Conclusion: SurfaceSlide is intuitive, fast and easy to use. SurfaceSlide represents the most direct, effective and intimate human–digital slide interaction experience. It is expected that SurfaceSlide will significantly enhance digital pathology tools and applications in education and clinical practice.

Relevance: 90.00%

Publisher:

Abstract:

Wireless sensor node platforms are highly diverse and highly constrained, particularly in power consumption. When choosing or sizing a platform for a given application, it is necessary to be able to evaluate the impact of those choices at an early design stage. Applied to the computing platform implemented on the sensor node, this requires a good understanding of the workload it must perform. This workload, however, is highly application-dependent: it depends on the data sampling frequency together with application-specific data processing and management. It is thus necessary to have a model that can represent the workload of applications with various needs and characteristics. In this paper, we propose a workload model for wireless sensor node computing platforms. The model is based on a synthetic application that models the different computational tasks the computing platform must perform to process sensor data. It allows the workload of a variety of applications to be modelled by tuning the data sampling rate and processing. A case study models several applications and shows how the model can be used for workload characterization. © 2011 IEEE.
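The paper's synthetic application is not reproduced in the abstract; the following is a minimal sketch of what such a tunable workload model might look like, with hypothetical parameters for sampling rate, per-sample processing cost and report aggregation.

```python
from dataclasses import dataclass

@dataclass
class SyntheticWorkload:
    """Toy workload model for a sensor-node computing platform.

    sampling_hz        -- how often the sensor is read
    cycles_per_sample  -- processing cost of one sample (CPU cycles)
    cycles_per_report  -- cost of aggregating/packing data for transmission
    samples_per_report -- how many samples are batched into one report
    """
    sampling_hz: float
    cycles_per_sample: int
    cycles_per_report: int
    samples_per_report: int

    def cycles_per_second(self) -> float:
        reports_hz = self.sampling_hz / self.samples_per_report
        return (self.sampling_hz * self.cycles_per_sample
                + reports_hz * self.cycles_per_report)

    def cpu_utilisation(self, cpu_hz: float) -> float:
        return self.cycles_per_second() / cpu_hz

# Example: a hypothetical 100 Hz accelerometer application on an 8 MHz MCU
app = SyntheticWorkload(sampling_hz=100, cycles_per_sample=2_000,
                        cycles_per_report=50_000, samples_per_report=50)
print(f"utilisation: {app.cpu_utilisation(8e6):.1%}")
```

Changing the sampling rate or the per-sample processing cost immediately shows how the computing platform's utilisation, and hence its power budget, would shift for a different application.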

Relevance: 90.00%

Publisher:

Abstract:

This paper describes an end-user model for a domestic pervasive computing platform formed by regular home objects. The platform does not rely on pre-planned infrastructure; instead, it exploits objects that are already available in the home and exposes their joint sensing, actuating and computing capabilities to home automation applications. We advocate an incremental process of platform formation and introduce tangible, object-like artifacts for representing important platform functions. One of those artifacts, the application pill, is a tiny object with a minimal user interface, used to carry the application as well as to start and stop its execution and provide hints about its operational status. We also emphasize streamlining the user's interaction with the platform: the user engages any UI-capable object of their choice to configure applications, while applications issue notifications and alerts through whichever available objects are suitable for that purpose. Finally, the paper briefly describes an actual implementation of the presented end-user model. © (2010) by International Academy, Research, and Industry Association (IARIA).
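As a rough illustration of the end-user model, the sketch below gives a hypothetical software counterpart of the application pill, reduced to its carry/start/stop/status role; it is not taken from the paper's implementation.

```python
import enum

class PillStatus(enum.Enum):
    IDLE = "idle"
    RUNNING = "running"
    ERROR = "error"

class ApplicationPill:
    """Hypothetical software side of the 'application pill' artifact: it
    carries an application, starts and stops its execution, and exposes a
    coarse status that the physical pill could signal (e.g. via an LED)."""
    def __init__(self, application):
        self.application = application
        self.status = PillStatus.IDLE

    def start(self):
        try:
            self.application.start()
            self.status = PillStatus.RUNNING
        except Exception:
            self.status = PillStatus.ERROR

    def stop(self):
        self.application.stop()
        self.status = PillStatus.IDLE
```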

Relevance: 90.00%

Publisher:

Abstract:

The widespread adoption of cluster computing as a high-performance computing platform has seen the growth of data-intensive scientific, engineering and commercial applications such as digital libraries, climate modeling, computational chemistry, computational fluid dynamics and image repositories. However, I/O subsystem performance has not been keeping pace with processor and memory performance, and is fast becoming the dominant factor in overall system performance. Parallel I/O has thus become a necessity in the face of performance improvements in other areas of computing systems. This paper addresses the problem of parallel I/O scheduling on cluster computing systems in the presence of data replication. We propose two new I/O scheduling algorithms and evaluate the relative performance of the proposed policies against two existing approaches. Simulation results show that the proposed policies perform substantially better than the baseline policies.
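The two proposed scheduling algorithms are not described in the abstract. As a point of reference, the sketch below shows a generic replica-aware heuristic (shortest queue first) of the kind such policies are usually compared against; the function name and data layout are illustrative only.

```python
def schedule_requests(requests, replicas, queue_load):
    """Assign each I/O request to the least-loaded server that holds a
    replica of the requested block (shortest-queue-first heuristic).

    requests   -- list of block ids to read
    replicas   -- dict: block id -> list of server ids holding a copy
    queue_load -- dict: server id -> outstanding work on that server
    Returns a dict: server id -> list of block ids assigned to it.
    """
    assignment = {server: [] for server in queue_load}
    for block in requests:
        candidates = replicas[block]
        target = min(candidates, key=lambda s: queue_load[s])
        assignment[target].append(block)
        queue_load[target] += 1          # count one unit of work per request
    return assignment
```

Replication widens the set of candidate servers for each block, which is exactly what gives an I/O scheduler room to balance load across the cluster.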

Relevance: 90.00%

Publisher:

Abstract:

Grid users and their jobs need credentials to access grid resources and services, and it is important to minimize the exposure of those credentials to adversaries. A practical solution is needed that works with existing software and is easy to deploy, administer and maintain; credential management services are therefore the way forward for virtual organizations such as those in Grid computing. This paper describes the architecture of a scalable, secure and reliable on-line credential management service called SafeBox for the InterGrid computing platform. SafeBox provides InterGrid users with a secure mechanism for storing one or more credentials and accessing them on demand, at any time and from anywhere.
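SafeBox's internals are not given in the abstract. The sketch below only illustrates the general shape of an on-line credential store that keeps credentials encrypted at rest, using the third-party cryptography package; the names CredentialStore, deposit and retrieve are hypothetical.

```python
from cryptography.fernet import Fernet   # pip install cryptography

class CredentialStore:
    """Toy on-line credential store: credentials are kept encrypted at rest
    and released only to callers holding the wrapping key."""
    def __init__(self, wrapping_key: bytes):
        self._fernet = Fernet(wrapping_key)
        self._vault = {}                      # name -> encrypted credential

    def deposit(self, name: str, credential: bytes) -> None:
        self._vault[name] = self._fernet.encrypt(credential)

    def retrieve(self, name: str) -> bytes:
        return self._fernet.decrypt(self._vault[name])

key = Fernet.generate_key()                   # held by the user, not the service
store = CredentialStore(key)
store.deposit("grid-proxy", b"-----BEGIN CERTIFICATE-----...")
print(store.retrieve("grid-proxy")[:27])
```

Keeping the wrapping key on the user side means the service itself never sees plaintext credentials, which is the kind of exposure minimization the abstract argues for.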

Relevance: 90.00%

Publisher:

Abstract:

Cloud computing is becoming popular as the next-generation computing platform infrastructure. However, with data and business applications outsourced to a third party, protecting cloud data centers from numerous attacks has become a critical concern. In this paper, we propose a clustered cloud firewall framework and evaluate its performance and cost. To provide a quantitative performance analysis of the cloud firewall, a novel M/Geo/1 analytical model is established. The model allows cloud defenders to extract key system measures such as request response time, and to determine how many resources are needed to guarantee quality of service (QoS). Moreover, we give an insight into the financial cost of the proposed cloud firewall. Finally, our analytical results are verified by simulation experiments.
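The paper's M/Geo/1 model is not reproduced here. As a rough stand-in, the sketch below approximates the mean response time of a firewall node with geometric service times using the classical M/G/1 Pollaczek-Khinchine formula, and uses it to estimate how many nodes are needed to meet a response-time target; all function names and parameter values are illustrative assumptions.

```python
def mg1_response_time(arrival_rate, slot_time, p_complete):
    """Approximate mean request response time for a firewall node whose
    service time is geometric (completion probability p_complete per
    processing slot of length slot_time), treated as an M/G/1 queue via
    the Pollaczek-Khinchine formula. Returns None if overloaded."""
    mean_s = slot_time / p_complete                                     # E[S]
    second_moment_s = slot_time**2 * (2 - p_complete) / p_complete**2  # E[S^2]
    rho = arrival_rate * mean_s                                         # utilisation
    if rho >= 1.0:
        return None
    waiting = arrival_rate * second_moment_s / (2 * (1 - rho))
    return waiting + mean_s

def nodes_needed(total_rate, slot_time, p_complete, target_response):
    """Smallest number of firewall nodes (load split evenly) keeping the
    per-node mean response time under target_response."""
    n = 1
    while True:
        t = mg1_response_time(total_rate / n, slot_time, p_complete)
        if t is not None and t <= target_response:
            return n
        n += 1

print(nodes_needed(total_rate=5000, slot_time=1e-4,
                   p_complete=0.8, target_response=2e-3))
```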

Relevance: 90.00%

Publisher:

Abstract:

In this project, we propose the implementation of a 3D object recognition system optimized to operate under demanding time constraints. The system must be robust, so that objects can be recognized properly in poor lighting conditions and in cluttered scenes with significant levels of occlusion. An important requirement must be met: the system must exhibit reasonable performance running on a low-power mobile GPU computing platform (NVIDIA Jetson TK1), so that it can be integrated in mobile robotics systems, ambient intelligence or ambient assisted living applications. The acquisition system is based on color and depth (RGB-D) data streams provided by low-cost 3D sensors such as the Microsoft Kinect or PrimeSense Carmine. The range of algorithms and applications to be implemented and integrated is quite broad, ranging from the acquisition, outlier removal and filtering of the input data, and the segmentation and characterization of regions of interest in the scene, to object recognition and pose estimation themselves. Furthermore, in order to validate the proposed system, we will create a 3D object dataset. It will be composed of a set of 3D models, reconstructed from common household objects, as well as a handful of test scenes in which those objects appear. The scenes will be characterized by different levels of occlusion, diverse distances from the elements to the sensor and variations in the pose of the target objects. The creation of this dataset implies the additional development of 3D data acquisition and 3D object reconstruction applications. The resulting system has many possible applications, ranging from mobile robot navigation and semantic scene labeling to human-computer interaction (HCI) systems based on visual information.
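As an illustration of the early pipeline stages (filtering and segmentation of RGB-D data), the sketch below uses the Open3D library as a stand-in; the project itself would more likely target PCL or CUDA code on the Jetson TK1, and the file name and parameter values are assumptions.

```python
import open3d as o3d   # pip install open3d; stand-in for a PCL/CUDA pipeline

def preprocess_scene(path, voxel=0.005):
    """Early stages of the pipeline: load a point cloud derived from an
    RGB-D frame, downsample it, drop statistical outliers and remove the
    dominant support plane (e.g. a table top) so object clusters remain."""
    pcd = o3d.io.read_point_cloud(path)
    pcd = pcd.voxel_down_sample(voxel_size=voxel)
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    _, plane_idx = pcd.segment_plane(distance_threshold=0.01,
                                     ransac_n=3, num_iterations=1000)
    return pcd.select_by_index(plane_idx, invert=True)

objects = preprocess_scene("scene_0001.pcd")   # hypothetical test scene
print(objects)   # remaining points belong to candidate objects
```

The later stages (descriptor matching, pose estimation) would operate on the clusters left after this preprocessing.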

Relevance: 90.00%

Publisher:

Abstract:

The increasing complexity of computer systems and communication networks creates tremendous requirements for trust and security. This special issue includes topics on trusted computing, risk and reputation management, network security and survivable computer systems and networks. These issues have evolved into an active and important area of research and development. The past decade has witnessed a proliferation of concurrency and computation systems requiring high levels of trust, security and privacy, which have become key subjects in determining future research and development activities in many academic and industrial branches. This special issue aims to present and discuss advances in current research and development in all aspects of trusted computing and network security, and provides snapshots of contemporary academic work in the field of trusted network computing. We prepared and organized this special issue to record state-of-the-art research, novel developments and trends for future insight in this domain. Fourteen papers have been accepted for publication, demonstrating novel and original work in this field. A detailed overview of the selected works is given below.

Relevance: 80.00%

Publisher:

Abstract:

Public transportation is an environment with great potential for applying location-based services through mobile devices. The BusTracker study is looking at how real-time passenger information systems can provide a core platform to improve commuters’ experiences. These systems rely on mobile computing and GPS technology to provide accurate information on transport vehicle locations. BusTracker builds on this mobile computing platform and geospatial information. The pilot study is running on the open source BugLabs computing platform, using a GPS module for accurate location information.
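BusTracker's estimation logic is not described here; the sketch below only illustrates how a raw GPS fix can be turned into passenger-facing information, using the haversine distance and an assumed average in-service speed.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def eta_minutes(bus_fix, stop, avg_speed_mps=6.0):
    """Naive arrival estimate: straight-line distance divided by an
    assumed average in-service speed (road routing is ignored)."""
    d = haversine_m(bus_fix[0], bus_fix[1], stop[0], stop[1])
    return d / avg_speed_mps / 60.0

# Hypothetical bus position and stop location
print(round(eta_minutes((53.3441, -6.2675), (53.3498, -6.2603)), 1), "min")
```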

Relevance: 80.00%

Publisher:

Abstract:

Digital rights management allows information owners to control the use and dissemination of electronic documents via a machine-readable licence. Documents are distributed in a protected form such that they may only be used within trusted environments, and only in accordance with the terms and conditions stated in the licence. Digital rights management has found uses in protecting copyrighted audio-visual productions, private personal information, and companies' trade secrets and intellectual property. This chapter describes a general model of digital rights management together with the technologies used to implement each component of a digital rights management system, and shows how digital rights management can be applied to secure the distribution of electronic information in a variety of contexts.
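As a flavour of licence enforcement, the sketch below evaluates a hypothetical machine-readable licence before allowing an action on a protected document; the licence fields and the function name are illustrative, not the chapter's model.

```python
from datetime import datetime, timezone

# Hypothetical machine-readable licence: which actions are permitted,
# for whom, and until when.
licence = {
    "document": "report-2024.pdf",
    "licensee": "alice@example.org",
    "permissions": {"view", "print"},
    "expires": "2026-01-01T00:00:00+00:00",
}

def is_permitted(licence, user, action, now=None):
    """A trusted viewer would evaluate the licence like this before
    rendering or exporting the protected document."""
    now = now or datetime.now(timezone.utc)
    if user != licence["licensee"]:
        return False
    if now >= datetime.fromisoformat(licence["expires"]):
        return False
    return action in licence["permissions"]

print(is_permitted(licence, "alice@example.org", "print"))   # True
print(is_permitted(licence, "alice@example.org", "copy"))    # False
```

In a real system the licence would be cryptographically bound to the document and the check would run inside the trusted environment, so it cannot simply be bypassed.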

Relevance: 80.00%

Publisher:

Abstract:

The analysis of security protocols has been an active research area in recent years, and different types of tools have been developed to make the analysis process more precise, faster and easier. However, these tools treat security protocols as black boxes that cannot easily be composed, making low-level analysis, or the combination of different tools, difficult or impossible. This research uses Coloured Petri Nets (CPN) to analyze the OSAP trusted computing protocol. The OSAP protocol is modeled at different levels and analyzed using the state space method. The resulting model can be combined with models of other trusted computing protocols in future work.
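CPN models are normally built and analysed in dedicated tools (e.g. CPN Tools) rather than in general-purpose code; the sketch below only illustrates the state space method itself, i.e. enumerating reachable states breadth-first and checking a property, with hypothetical successors and is_bad functions standing in for the protocol model.

```python
from collections import deque

def explore(initial_state, successors, is_bad):
    """Breadth-first state-space exploration: enumerate every reachable
    protocol state and report the first one violating a property.

    initial_state -- hashable starting state (e.g. a tuple of token markings)
    successors    -- function: state -> iterable of next states
    is_bad        -- predicate flagging states that violate the property
    """
    seen = {initial_state}
    queue = deque([initial_state])
    while queue:
        state = queue.popleft()
        if is_bad(state):
            return state                      # counterexample found
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None                               # property holds on all reachable states
```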

Relevance: 80.00%

Publisher:

Abstract:

Usability in HCI (Human-Computer Interaction) is normally understood as the simplicity and clarity with which the interaction with a computer program or a web site is designed. Identity management systems need to provide adequate usability and should have a simple and intuitive interface. Such a system should not only be designed to satisfy service provider requirements; it also has to consider user requirements, otherwise it will lead to inconvenience and poor usability for users when managing their identities. With poor usability and a poorly designed security interface, it is highly likely that the system will also have poor security. The rapid growth in the number of online services leads to an increasing number of digital identities each user needs to manage. As a result, many people feel overloaded with credentials, which in turn negatively impacts their ability to manage them securely. Passwords are perhaps the most common type of credential used today. To avoid the tedious task of remembering difficult passwords, users often behave less securely by using low-entropy, weak passwords. Weak passwords and bad password habits represent security threats to online services. Some solutions have been developed to eliminate the need for users to create and manage passwords. A typical solution is based on generating one-time passwords, i.e. passwords valid for a single session or transaction. Unfortunately, most of these solutions do not satisfy scalability and/or usability requirements, or they are simply insecure. In this thesis, the security and usability aspects of contemporary methods for authentication based on one-time passwords (OTP) are examined and analyzed. In addition, more scalable solutions that provide a good user experience while preserving strong security are proposed.
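One widely used OTP scheme in this context is HOTP (RFC 4226), an event-based one-time password computed from an HMAC over a moving counter; a compact implementation is sketched below (the thesis's own proposed solutions are not reproduced here).

```python
import hmac, hashlib, struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Event-based one-time password (RFC 4226): HMAC-SHA1 of a moving
    counter, dynamically truncated to a short decimal code."""
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"          # RFC 4226 test secret
print(hotp(secret, 0))                    # "755224" per the RFC test vectors
```

The usability trade-off discussed in the abstract is visible even here: the user no longer remembers a password, but the shared secret and counter must be provisioned and kept in sync between token and server.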

Relevance: 80.00%

Publisher:

Abstract:

Many computationally intensive scientific applications involve repetitive floating point operations other than addition and multiplication, which may present a significant performance bottleneck due to the relatively large latency or low throughput involved in executing such arithmetic primitives on commodity processors. A promising alternative is to execute such primitives on Field Programmable Gate Array (FPGA) hardware acting as an application-specific custom co-processor in a high performance reconfigurable computing platform. The use of FPGAs can provide advantages such as fine-grain parallelism, but issues relating to code development in a hardware description language and efficient data transfer to and from the FPGA chip can present significant application development challenges. In this paper, we discuss our practical experiences in developing a selection of floating point hardware designs to be implemented using FPGAs. Our designs include some basic mathematical library functions which can be implemented for user-defined precisions, suitable for novel applications requiring non-standard floating point representation. We discuss the details of our designs along with results from performance and accuracy analysis tests.
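The designs themselves are written in a hardware description language and are not reproduced in the abstract; the Python sketch below only illustrates the effect of a user-defined mantissa width by quantizing a double-precision value, with the function name being an assumption.

```python
import math

def quantize(x: float, mantissa_bits: int) -> float:
    """Round x to a floating-point value with the given number of mantissa
    bits, emulating a user-defined precision narrower than IEEE double."""
    if x == 0.0 or not math.isfinite(x):
        return x
    mantissa, exponent = math.frexp(x)    # x = mantissa * 2**exponent, 0.5 <= |mantissa| < 1
    scale = 1 << mantissa_bits
    return math.ldexp(round(mantissa * scale) / scale, exponent)

# Error introduced by a hypothetical 10-bit-mantissa unit versus full precision
x = math.pi
print(x, quantize(x, 10), abs(x - quantize(x, 10)))
```

Sweeping the mantissa width in this way gives a quick feel for the accuracy/area trade-off that a non-standard FPGA floating point unit would make.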

Relevance: 80.00%

Publisher:

Abstract:

Modern applications comprise multiple components, such as browser plug-ins, often of unknown provenance and quality. Statistics show that failure of such components accounts for a high percentage of software faults. Enabling isolation of such fine-grained components is therefore necessary to increase the robustness and resilience of security-critical and safety-critical computer systems. In this paper, we evaluate whether such fine-grained components can be sandboxed through the use of the hardware virtualization support available in modern Intel and AMD processors. We compare the performance and functionality of such an approach to two previous software-based approaches. The results demonstrate that hardware isolation minimizes the difficulties encountered with software-based approaches, while also reducing the size of the trusted computing base, thus increasing confidence in the solution's correctness. We also show that our relatively simple implementation has equivalent run-time performance, with overheads of less than 34%, does not require custom tool chains, and provides enhanced functionality over software-only approaches, confirming that hardware virtualization technology is a viable mechanism for fine-grained component isolation.

Relevance: 80.00%

Publisher:

Abstract:

Many software applications extend their functionality by dynamically loading executable components into their allocated address space. Such components, exemplified by browser plugins and other software add-ons, not only enable reusability but also promote programming simplicity, as they reside in the same address space as their host application, supporting easy sharing of complex data structures and pointers. However, such components are also often of unknown provenance and quality and may be riddled with accidental bugs or, in some cases, deliberately malicious code. Statistics show that such component failures account for a high percentage of software crashes and vulnerabilities. Enabling isolation of such fine-grained components is therefore necessary to increase the stability, security and resilience of computer programs. This thesis addresses this issue by showing how host applications can create isolation domains for individual components, while preserving the benefits of a single address space, via a new architecture for software isolation called LibVM. Towards this end, we define a specification which outlines the functional requirements for LibVM, identify the conditions under which these functional requirements can be met, define an abstract Application Programming Interface (API) that encompasses the general problem of isolating shared libraries, thus separating policy from mechanism, and prove its practicality with two concrete implementations based on hardware virtualization and system call interposition, respectively. The results demonstrate that hardware isolation minimises the difficulties encountered with software-based approaches, while also reducing the size of the trusted computing base, thus increasing confidence in the solution's correctness. This thesis concludes that not only is it feasible to create such isolation domains for individual components, but that this should also be a fundamental abstraction supported by the operating system, which would lead to more stable and secure applications.
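The LibVM API is not listed in the abstract; the sketch below gives a hypothetical, LibVM-inspired interface that separates the policy (what a host application asks for) from the mechanism (hardware virtualization or system call interposition), purely for illustration.

```python
import abc

class IsolatedLibrary(abc.ABC):
    """Hypothetical, LibVM-inspired interface for running a shared library
    inside its own isolation domain while keeping a call-style API."""

    @abc.abstractmethod
    def load(self, path: str) -> None:
        """Map the component into a fresh isolation domain."""

    @abc.abstractmethod
    def call(self, symbol: str, *args):
        """Invoke an exported function; arguments are marshalled across the
        isolation boundary and faults in the component are contained."""

    @abc.abstractmethod
    def unload(self) -> None:
        """Tear down the domain and release its resources."""

# A concrete backend could realise this interface with hardware
# virtualization (as in the thesis) or with system call interposition.
```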