27 results for Time-sharing computer systems.


Relevance:

100.00%

Publisher:

Abstract:

Traditionally, the Internet provides only a “best-effort” service, treating all packets going to the same destination equally. However, providing differentiated services to different users based on their quality requirements is becoming an increasingly pressing issue. To do this, routers need the capability to distinguish and isolate traffic belonging to different flows; this ability to determine the flow each packet belongs to is called packet classification. Technology vendors are reluctant to support algorithmic solutions for classification due to their nondeterministic performance. Although content addressable memories (CAMs) are favoured by technology vendors for their deterministic, high lookup rates, they suffer from high power consumption and high silicon cost. This paper provides a new algorithmic-architectural solution for packet classification that combines CAMs with algorithms based on multilevel cutting of the classification space into smaller subspaces. The proposed solution exploits the geometrical distribution of rules in the classification space. It provides the deterministic performance of CAMs, support for dynamic updates, and added flexibility for system designers.
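To make the cutting idea concrete, the Python sketch below builds a small decision tree that recursively halves a two-dimensional (source, destination) rule space and keeps only a few rules at each leaf; in a CAM-assisted design, such small leaf rule lists are one natural place to use CAM blocks. This is a generic illustration of multilevel cutting, not the paper's specific algorithm; the rules, thresholds and dimensions are assumptions.

# Illustrative sketch only: a minimal decision-tree classifier that cuts a
# 2-D rule space (src, dst) into smaller regions, in the spirit of the
# multilevel-cutting approach described above. Rules and thresholds are
# assumptions for illustration.

LEAF_SIZE = 2   # regions with <= LEAF_SIZE rules stop being cut; in a
                # CAM-assisted design each leaf's rules could sit in a small CAM block

# A rule is (rule_id, (src_lo, src_hi), (dst_lo, dst_hi), priority)
RULES = [
    ("r1", (0, 127),   (0, 255),   1),
    ("r2", (64, 191),  (0, 127),   2),
    ("r3", (128, 255), (128, 255), 3),
    ("r4", (0, 255),   (0, 255),   4),   # default rule
]

def overlaps(rule, region):
    _, (slo, shi), (dlo, dhi), _ = rule
    rslo, rshi, rdlo, rdhi = region
    return slo <= rshi and shi >= rslo and dlo <= rdhi and dhi >= rdlo

def build(rules, region, depth=0):
    """Recursively cut the region in half along alternating dimensions."""
    if len(rules) <= LEAF_SIZE or depth > 8:
        return ("leaf", rules)
    rslo, rshi, rdlo, rdhi = region
    if depth % 2 == 0:                      # cut the src dimension
        mid = (rslo + rshi) // 2
        left, right = (rslo, mid, rdlo, rdhi), (mid + 1, rshi, rdlo, rdhi)
    else:                                   # cut the dst dimension
        mid = (rdlo + rdhi) // 2
        left, right = (rslo, rshi, rdlo, mid), (rslo, rshi, mid + 1, rdhi)
    return ("node", depth % 2, mid,
            build([r for r in rules if overlaps(r, left)], left, depth + 1),
            build([r for r in rules if overlaps(r, right)], right, depth + 1))

def classify(tree, src, dst):
    """Walk the tree, then return the highest-priority matching rule in the leaf."""
    while tree[0] == "node":
        _, dim, mid, left, right = tree
        tree = left if (src if dim == 0 else dst) <= mid else right
    matches = [r for r in tree[1]
               if r[1][0] <= src <= r[1][1] and r[2][0] <= dst <= r[2][1]]
    return min(matches, key=lambda r: r[3])[0] if matches else None

tree = build(RULES, (0, 255, 0, 255))
print(classify(tree, src=150, dst=40))   # -> "r2" (highest-priority matching rule)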

Relevance:

100.00%

Publisher:

Abstract:

Generation of hardware architectures directly from dataflow representations is increasingly being considered as research moves toward system-level design methodologies. Creating networks of IP cores to implement actor functionality is a common approach to the problem, but the memory sub-systems produced using these techniques are often inefficiently utilised. This paper explores some of the issues in memory organisation and access that arise when developing systems from these high-level representations. Using a template-matching design study, challenges such as modelling memory reuse and minimising buffer requirements are examined, yielding designs with significantly lower memory requirements and fewer costly off-chip memory accesses.
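As a rough illustration of why memory reuse matters for such kernels, the short Python sketch below compares buffering an entire frame on chip against the classic line-buffer reuse pattern for a K x K sliding-window (template-matching) kernel. The frame and template sizes are assumptions for illustration and are not taken from the paper.

# Illustrative sketch only: back-of-envelope buffer sizing for a K x K
# sliding-window kernel. Sizes below are assumed, not the paper's figures.

def full_frame_buffer(width, height, bytes_per_pixel=1):
    """Naive approach: buffer the whole frame on chip before processing."""
    return width * height * bytes_per_pixel

def line_buffer(width, k, bytes_per_pixel=1):
    """Window reuse: keep only K-1 full lines plus the K-pixel window column."""
    return ((k - 1) * width + k) * bytes_per_pixel

W, H, K = 720, 576, 8           # assumed frame and template dimensions
print(full_frame_buffer(W, H))  # 414720 bytes if the whole frame is buffered
print(line_buffer(W, K))        # 5048 bytes with line-buffer reuse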

Relevance:

100.00%

Publisher:

Abstract:

Realising high-performance image and signal processing applications on modern FPGAs presents a challenging implementation problem due to the large data frames streaming through these systems. Specifically, to meet the high bandwidth and data storage demands of these applications, complex hierarchical memory architectures must be manually specified at the Register Transfer Level (RTL). Automated approaches which convert high-level operation descriptions, for instance in the form of C programs, to an FPGA architecture are unable to realise such architectures automatically. This paper presents a solution to this problem: a compiler that automatically derives such memory architectures from a C program. By transforming the input C program into a unique dataflow modelling dialect, known as Valved Dataflow (VDF), a mapping and synthesis approach developed for this dialect can be exploited to automatically create high-performance image and video processing architectures. Memory-intensive C kernels for Motion Estimation (CIF frames at 30 fps), Matrix Multiplication (128x128 @ 500 iter/sec) and Sobel Edge Detection (720p @ 30 fps), which are unrealisable by current state-of-the-art C-based synthesis tools, are automatically derived from a C description of the algorithm.
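To give a sense of the bandwidth these kernels imply, the following Python sketch computes the raw input data rates from the figures quoted above, using the standard CIF (352x288) and 720p (1280x720) frame geometries. It is a back-of-envelope illustration, not output from the VDF compiler.

# Illustrative sketch only: raw data rates implied by the kernels listed above,
# to show the bandwidth the generated memory architectures must sustain.

CIF_W, CIF_H = 352, 288                   # standard CIF frame
HD_W, HD_H = 1280, 720                    # standard 720p frame
N = 128                                   # matrix dimension

motion_est_pixels = CIF_W * CIF_H * 30    # input pixel rate at 30 fps
sobel_pixels      = HD_W * HD_H * 30      # input pixel rate at 30 fps
matmul_macs       = N * N * N * 500       # multiply-accumulates at 500 iter/sec

print(f"Motion estimation input : {motion_est_pixels/1e6:.1f} Mpixel/s")   # ~3.0
print(f"Sobel 720p input        : {sobel_pixels/1e6:.1f} Mpixel/s")        # ~27.6
print(f"128x128 matmul @ 500/s  : {matmul_macs/1e9:.2f} GMAC/s")           # ~1.05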

Relevance:

100.00%

Publisher:

Abstract:

With the rapid expansion of the Internet and the increasing demand on Web servers, many techniques have been developed to overcome the hardware performance limitations of individual servers. Web server mirroring is one such technique, in which a number of servers carrying the same "mirrored" set of services are deployed and client access requests are distributed over the set of mirrored servers to even out the load. In this paper we present a generic reference software architecture for load balancing over mirrored Web servers. The architecture was designed adopting the latest NaSr architectural style [1] and described using the ADLARS [2] architecture description language. With minimal effort, different tailored product architectures can be generated from the reference architecture to serve different network protocols and server operating systems. An example product system is described and a sample Java implementation is presented.
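The sketch below illustrates only the basic idea of distributing client requests over a mirrored server set using a simple round-robin policy. It is written in Python for brevity and does not reproduce the NaSr-based reference architecture, the ADLARS description, or the paper's Java implementation; the server names and the dispatch policy are assumptions.

# Illustrative sketch only: spreading requests over mirrored servers.

import itertools

class MirroredServerPool:
    """Dispatches each incoming request to the next server in rotation."""

    def __init__(self, servers):
        self._rotation = itertools.cycle(servers)

    def dispatch(self, request):
        server = next(self._rotation)
        # A real system would forward the request over the network;
        # here we only report the routing decision.
        return f"{request} -> {server}"

pool = MirroredServerPool(["mirror-a", "mirror-b", "mirror-c"])
for i in range(5):
    print(pool.dispatch(f"GET /index.html #{i}"))
# Requests cycle a, b, c, a, b, evening out the load across the mirrors.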

Relevance:

100.00%

Publisher:

Abstract:

Background: One way to tackle health inequalities in resource-poor settings is to establish links between doctors and health professionals in those settings and specialists elsewhere using web-based telemedicine. One such system, run by the Swinfen Charitable Trust, has been in existence for 13 years, which is an unusually long time for such systems.

Objective: We wanted to gain some insights into whether and how this system might be improved.

Methods: We carried out a questionnaire survey of referrers and specialists over a six-month period.

Results: During the study period, a total of 111 cases were referred by 35 different practitioners, of whom 24% were not doctors. Survey replies were received concerning 67 cases, a response rate of 61%. Of the responding referrers, 87% found the telemedicine advice useful, and 78% were able to follow the advice provided. As a result of the advice received, the diagnosis was changed in 22% of all cases and confirmed in a further 18%; patient management was changed in 33%. There was no substantial difference between doctors and non-doctors. During the study period, the 111 cases were responded to by 148 specialists, from whom 108 replies to the questionnaire were received, a response rate of 73%. About half of the specialists (47%) felt that their advice had improved the management of the patients. There were 62 cases where it was possible to match the opinions of the referrer and the specialists about the value of a specific teleconsultation. In 34 cases (55%) the referrers and specialists agreed about the value; in the other 28 cases (45%) they did not, with specialists markedly underestimating the value of a consultation compared to referrers. Both referrers and specialists were extremely positive about the system, which appears to be working well. Minor changes, such as a clearer referral template and an improved web interface for specialists, may improve it further.

Relevance:

50.00%

Publisher:

Abstract:

The hybrid test method is a relatively recently developed dynamic testing technique that combines numerical modelling with simultaneous physical testing. The concept of substructuring allows the critical or highly nonlinear part of the structure, which is difficult to model numerically with accuracy, to be physically tested, whilst the remainder of the structure, which has a more predictable response, is numerically modelled. In this paper, a substructured soft real-time hybrid test is evaluated as an accurate means of performing seismic tests of complex structures. The structure analysed is a three-storey, two-by-one bay concentrically braced frame (CBF) steel structure subjected to seismic excitation. A ground-storey braced-frame substructure, whose response is critical to the overall response of the structure, is tested, whilst the remainder of the structure is numerically modelled. OpenSees is used for the numerical modelling and OpenFresco for the communication between the test equipment and the numerical model. A novel approach using OpenFresco to define the complex numerical substructure of an X-braced frame within a hybrid test is also presented. The results of the hybrid tests are compared to purely numerical models using OpenSees and to a simulated test using a combination of OpenSees and OpenFresco. The comparative results indicate that the test method provides an accurate and cost-effective procedure for performing full-scale seismic tests of complex structural systems.
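The following Python sketch outlines the feedback loop at the heart of a substructured hybrid test: a numerical model advances the equations of motion one step, sends a displacement command to the physical substructure (emulated here by a simple spring), and folds the measured restoring force back into the next step. The structural parameters, integration scheme and ground motion are assumptions for illustration; the sketch does not use the OpenSees or OpenFresco interfaces.

# Illustrative sketch only: the basic command/feedback loop of a substructured
# hybrid test, with the physical substructure emulated by a linear spring.

import math

M, C, K_NUM = 1000.0, 500.0, 2.0e6   # assumed mass, damping, numerical-part stiffness
K_PHYS = 1.5e6                       # assumed stiffness of the physically tested brace
DT = 0.01                            # integration time step (s)

def physical_substructure(displacement):
    # Stand-in for the actuator / load-cell loop: return the "measured" force.
    return K_PHYS * displacement

def ground_accel(t):
    # Toy ground-motion record: a short sine pulse, not a real earthquake.
    return 2.0 * math.sin(10.0 * t) if t < 1.0 else 0.0

u_prev, u = 0.0, 0.0                 # displacements at the previous and current step
for step in range(300):
    t = step * DT
    r_num = K_NUM * u                     # restoring force from the numerical substructure
    r_meas = physical_substructure(u)     # force fed back from the physical substructure
    v = (u - u_prev) / DT
    a = (-M * ground_accel(t) - C * v - r_num - r_meas) / M
    u_next = 2.0 * u - u_prev + a * DT * DT   # explicit central-difference update
    u_prev, u = u, u_next

print(f"displacement after {300 * DT:.1f} s: {u:.6f} m")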

Relevance:

50.00%

Publisher:

Abstract:

Inter-component communication has always been of great importance in the design of software architectures, and connectors have been considered first-class entities in many approaches [1][2][3]. We present a novel architectural style derived from the well-established domain of computer networks. The style adopts the inter-component communication protocol in a novel way that allows large-scale software reuse. It mainly targets real-time, distributed, concurrent, and heterogeneous systems.
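As a generic illustration of connectors treated as first-class entities, and not a rendering of the NaSr style or the style proposed in the paper, the toy Python sketch below routes messages between components through a connector that alone knows the topology, so components can be reused without referencing one another.

# Illustrative sketch only: a connector that mediates all inter-component
# communication through protocol-style messages. Names and structure are
# assumptions for illustration.

from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Message:
    source: str
    destination: str
    payload: object

@dataclass
class Connector:
    """Routes messages between registered components, like a tiny network layer."""
    handlers: Dict[str, Callable[[Message], None]] = field(default_factory=dict)

    def attach(self, name: str, handler: Callable[[Message], None]) -> None:
        self.handlers[name] = handler

    def send(self, msg: Message) -> None:
        # Components never reference each other directly; only the connector
        # knows the topology, which is what enables reuse of the components.
        self.handlers[msg.destination](msg)

bus = Connector()
bus.attach("logger", lambda m: print(f"[logger] from {m.source}: {m.payload}"))
bus.attach("sensor", lambda m: bus.send(Message("sensor", "logger", "reading=42")))
bus.send(Message("main", "sensor", "poll"))   # prints: [logger] from sensor: reading=42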