31 results for Server
in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
Repeat proteins have become increasingly important owing to their ability to bind almost any protein and their potential as an alternative therapy to monoclonal antibodies. In the past decade, repeat proteins have been designed to mediate specific protein-protein interactions. The tetratricopeptide and ankyrin repeat proteins are two classes of helical repeat proteins that form different binding pockets to accommodate various partners. It is important to understand the factors that define the folding and stability of repeat proteins in order to prioritize the most stable designed repeat proteins for further exploration of their potential binding affinities. Here we developed distance-dependent statistical potentials using two classes of alpha-helical repeat proteins, the tetratricopeptide and ankyrin repeat proteins, and evaluated their efficiency in predicting the stability of repeat proteins. We demonstrated that repeat-specific statistical potentials based on these two classes of repeat proteins showed superior accuracy compared with non-specific statistical potentials in 1) discriminating correct from incorrect models and 2) ranking the stability of designed repeat proteins. In particular, the statistical scores correlate closely with the equilibrium unfolding free energies of repeat proteins and can therefore serve as a novel tool for quickly prioritizing designed repeat proteins with high stability. The StaRProtein web server was developed for predicting the stability of repeat proteins.
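As a minimal illustration of how a distance-dependent statistical potential of this kind can be trained and used for scoring (a sketch only, not the StaRProtein implementation; the bin width, cutoff, and all data structures below are invented assumptions):

```python
import math
from collections import defaultdict

BIN_WIDTH = 0.5   # Angstroms per distance bin (assumed)
MAX_DIST = 15.0   # contacts beyond this distance ignored (assumed)

def bin_index(distance):
    """Map an inter-residue distance to its distance-bin index."""
    return int(distance // BIN_WIDTH)

def train(counts_per_pair, reference_counts):
    """Turn raw bin counts into inverse-Boltzmann log-odds energies.
    counts_per_pair: {(res_a, res_b): {bin: count}}, collected from a
    training set of solved repeat-protein structures.
    reference_counts: {bin: count} pooled over all residue pairs."""
    total_ref = sum(reference_counts.values())
    energies = defaultdict(dict)
    for pair, bins in counts_per_pair.items():
        total_pair = sum(bins.values())
        for b, n in bins.items():
            p_obs = n / total_pair
            p_ref = reference_counts[b] / total_ref
            # Lower energy for distances over-represented in native folds.
            energies[pair][b] = -math.log(p_obs / p_ref)
    return energies

def score(model_contacts, energies):
    """Sum energies over a model's (res_a, res_b, distance) contacts;
    lower scores suggest more native-like, presumably more stable models."""
    total = 0.0
    for res_a, res_b, dist in model_contacts:
        if dist <= MAX_DIST:
            total += energies.get((res_a, res_b), {}).get(bin_index(dist), 0.0)
    return total
```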
Abstract:
We present a rigorous methodology and new metrics for fair comparison of server and microserver platforms. Deploying our methodology and metrics, we compare a microserver with ARM cores against two servers with x86 cores running the same real-time financial analytics workload. We define workload-specific but platform-independent performance metrics for platform comparison, targeting both datacenter operators and end users. Our methodology establishes that a server based on the Xeon Phi co-processor delivers the highest performance and energy efficiency. However, by scaling out energy-efficient microservers, we achieve competitive or better energy efficiency than a power-equivalent server with two Sandy Bridge sockets, despite the microserver's slower cores. Using a new iso-QoS metric, we find that the ARM microserver scales enough to meet market throughput demand, that is, a 100% QoS in terms of timely option pricing, with as little as 55% of the energy consumed by the Sandy Bridge server.
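As an illustration of the kind of workload-specific but platform-independent metrics described (the definitions and numbers below are invented for the sketch, not the paper's exact formulations):

```python
def qos(latencies_s, deadline_s):
    """Fraction of option-pricing tasks that met their market deadline;
    an iso-QoS comparison holds this fraction equal across platforms."""
    return sum(1 for t in latencies_s if t <= deadline_s) / len(latencies_s)

def options_per_joule(options_priced, avg_power_w, elapsed_s):
    """Work per unit energy: a workload-specific, platform-independent metric."""
    return options_priced / (avg_power_w * elapsed_s)

# Invented example: at equal (100%) QoS, compare energy use of two platforms.
sandy_bridge_j = 250.0 * 3600                  # avg watts x seconds over an hour
arm_cluster_j = 0.55 * sandy_bridge_j          # abstract reports ~55% at iso-QoS
print(qos([0.8, 0.9, 0.7], deadline_s=1.0))    # -> 1.0, all within deadline
```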
Abstract:
A FORTRAN 90 program is presented which calculates the total cross sections, and the electron energy spectra of the singly and doubly differential cross sections, for the single ionization of neutral target atoms ranging from hydrogen up to and including argon. The code is applicable to both high and low Z projectile impact in fast ion-atom collisions. The theoretical models provided for the program user are based on two quantum mechanical approximations which have proved very successful in the study of ionization in ion-atom collisions: the continuum-distorted-wave (CDW) and continuum-distorted-wave eikonal-initial-state (CDW-EIS) approximations. The code presented here extends previously published codes for single ionization of target hydrogen [Crothers and McCartney, Comput. Phys. Commun. 72 (1992) 288], target helium [Nesbitt, O'Rourke and Crothers, Comput. Phys. Commun. 114 (1998) 385] and target atoms ranging from lithium to neon [O'Rourke, McSherry and Crothers, Comput. Phys. Commun. 131 (2000) 129]. Cross sections for all of these target atoms may be obtained as limiting cases from the present code.
Title of program: ARGON
Catalogue identifier: ADSE
Program summary URL: http://cpc.cs.qub.ac.uk/cpc/summaries/ADSE
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: none
Computers for which the program is designed and others on which it is operable: four-way 200 MHz Pentium Pro Linux server; DEC Alpha 21164; four-way 400 MHz Pentium II Xeon 450 Linux server; IBM SP2; SUN Enterprise 3500
Installations: Queen's University, Belfast
Operating systems under which the program has been tested: Red Hat Linux 5.2, Digital UNIX Version 4.0d, AIX, Solaris SunOS 5.7
Compilers: PGI workstations, DEC CAMPUS
Programming language used: FORTRAN 90 with MPI directives
No. of bits in a word: 64 (32 on the Linux servers)
Number of processors used: any number
Has the code been vectorized or parallelized?: parallelized using MPI
No. of bytes in distributed program, including test data, etc.: 32 189
Distribution format: tar gzip file
Keywords: single ionization, cross sections, continuum-distorted-wave model, continuum-distorted-wave eikonal-initial-state model, target atoms, wave treatment
Nature of physical problem: the code calculates total and differential cross sections for the single ionization of target atoms ranging from hydrogen up to and including argon by both light and heavy ion impact.
Method of solution: ARGON allows the user to calculate the cross sections using either the CDW or CDW-EIS [J. Phys. B 16 (1983) 3229] models within the wave treatment.
Restrictions on the complexity of the program: both the CDW and CDW-EIS models are two-state perturbative approximations.
Typical running time: times vary according to input data and number of processors. For one processor, the test input data for double differential cross sections (40 points) took less than one second, whereas the test input for total cross sections (20 points) took 32 minutes.
Unusual features of the program: none
© 2003 Elsevier B.V. All rights reserved.
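The summary notes the code is parallelized using MPI. A minimal sketch of that work-distribution pattern (written here in Python with mpi4py rather than the program's FORTRAN 90, and with an invented stand-in for the CDW/CDW-EIS evaluation): grid points are dealt out cyclically across ranks and the partial results gathered at rank 0.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def cross_section(energy_ev):
    """Placeholder for evaluating the model at one electron-energy point."""
    return 1.0 / (1.0 + energy_ev)  # invented stand-in, not physics

energies = [10.0 * (i + 1) for i in range(40)]   # 40-point test grid
mine = energies[rank::size]                       # cyclic distribution
local = [(e, cross_section(e)) for e in mine]

results = comm.gather(local, root=0)              # collect at rank 0
if rank == 0:
    for e, sigma in sorted(pt for chunk in results for pt in chunk):
        print(f"{e:8.1f} eV  {sigma:.6e}")
```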
Abstract:
Query processing over the Internet involving autonomous data sources is a major task in data integration. It requires estimates of the costs of candidate queries in order to select the one with the minimum cost. In this context, the cost of a query is affected by three factors: network congestion, the server contention state, and the complexity of the query. In this paper, we study the effects of network congestion and the server contention state on the cost of a query. We refer to these two factors together as system contention states. We present a new approach to determining the system contention states by clustering the costs of a sample query. For each system contention state, we construct two cost formulas, for unary and join queries respectively, using the multiple regression process. When a new query is submitted, its system contention state is first estimated using either the time-slides method or the statistical method. The cost of the query is then calculated using the corresponding cost formulas. The estimated cost of the query is further adjusted to improve its accuracy. Our experiments show that our methods produce quite accurate cost estimates for queries submitted to remote data sources over the Internet.
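A minimal sketch of the two-phase idea described above, with invented features and measurements: cluster the observed costs of a probing sample query into contention states, then fit one regression cost formula per state and apply the formula matching the current state.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

# Costs of a probing "sample query", measured over time, clustered into
# system contention states (numbers invented for illustration).
probe_costs = np.array([0.12, 0.15, 0.95, 1.10, 0.13, 1.02]).reshape(-1, 1)
states = KMeans(n_clusters=2, n_init=10).fit(probe_costs)

# Training data: query characteristics (here, invented: input cardinality
# and result size) against observed cost, each tagged with the probe cost
# seen at the time it ran.
X = np.array([[1000, 50], [5000, 200], [900, 45], [4800, 210]])
y = np.array([0.11, 0.48, 0.35, 1.60])
state_of_obs = states.predict(np.array([[0.12], [0.15], [0.95], [1.10]]))

# One multiple-regression cost formula per contention state.
formulas = {}
for s in set(state_of_obs):
    mask = state_of_obs == s
    formulas[s] = LinearRegression().fit(X[mask], y[mask])

# For a new query: classify the current contention state from the latest
# probe cost, then apply that state's cost formula.
current_state = states.predict([[0.14]])[0]
estimate = formulas[current_state].predict([[2000, 80]])[0]
print(f"estimated cost: {estimate:.3f}")
```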
Abstract:
Haptic information originates from a different human sense (touch); therefore, the quality of service (QoS) required to support haptic traffic is significantly different from that used to support conventional real-time traffic such as voice or video. Each type of network impairment has different (and severe) impacts on the user's haptic experience. There has been no specific provision of QoS parameters for haptic interaction. Previous research into distributed haptic virtual environments (DHVEs) has concentrated on synchronization of positions (haptic device or virtual objects), and is based on client-server architectures. We present a new peer-to-peer DHVE architecture that further extends this to enable force interactions between two users, whereby force data are sent to the remote peer in addition to positional information. The work presented involves both simulation and practical experimentation where multimodal data is transmitted over a QoS-enabled IP network. Both forms of experiment produce consistent results which show that the use of specific QoS classes for haptic traffic will reduce network delay and jitter, leading to improvements in users' haptic experiences with these types of applications.
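A sketch of what one peer-to-peer haptic update might look like on the wire (not the paper's implementation: the peer address and packet layout are invented, and DSCP marking via IP_TOS is just one common way to request a dedicated QoS class from the network):

```python
import socket
import struct

PEER = ("192.0.2.10", 9999)   # placeholder peer address
EF_DSCP = 0x2E << 2           # Expedited Forwarding DSCP in the TOS byte

# Small UDP datagrams suit haptic updates: low latency matters more than
# reliability, and the QoS class asks routers to minimise delay and jitter.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_DSCP)

def send_haptic_update(seq, position, force):
    """Pack a sequence number plus xyz position and xyz force, so the
    remote peer receives force data in addition to positional information."""
    payload = struct.pack("!I3f3f", seq, *position, *force)
    sock.sendto(payload, PEER)

send_haptic_update(1, (0.10, 0.02, 0.35), (0.0, -1.5, 0.2))
```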
Abstract:
A research project in Web-enabled collaborative design and manufacture has been conducted. The major tasks of the project include the development of a Web-enabled environment for collaboration, online collaborative CAD/CAM, remote execution of large-size programs (RELSP), and distributed product design. The tasks and Web/Internet techniques involved are presented first, followed by a detailed description of the two approaches developed to implement the research: (1) a client-server approach for RELSP, utilizing the following Internet techniques: CORBA, Microsoft's Internet Information Server, the Tomcat server, JDBC and ODBC; (2) Web-services-supported collaborative CAD, which enables geographically dispersed designers to jointly conduct a design task, speaking to and seeing each other while instantaneously modifying the CAD drawing online.
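As a hedged sketch of the client side of remote execution of a large-size program (the project's implementation uses CORBA with Internet Information Server and Tomcat; the URL, endpoint, and field names below are invented stand-ins for illustration only):

```python
import json
import urllib.request

SERVER = "http://design-server.example.com/relsp/jobs"   # invented endpoint

def submit_job(program, input_file):
    """Submit a long-running program for remote execution and return a
    job id that the client can poll later for results."""
    body = json.dumps({"program": program, "input": input_file}).encode()
    req = urllib.request.Request(
        SERVER, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["job_id"]

job_id = submit_job("fea_solver", "bracket_mesh.inp")   # invented job names
```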
Abstract:
We propose simple models to predict the performance degradation of disk requests due to storage device contention in consolidated virtualized environments. Model parameters can be deduced from measurements obtained inside Virtual Machines (VMs) from a system where a single VM accesses a remote storage server. The parameterized model can then be used to predict the effect of storage contention when multiple VMs are consolidated on the same server. We first propose a trace-driven approach that evaluates a queueing network with fair share scheduling using simulation. The model parameters consider Virtual Machine Monitor level disk access optimizations and rely on a calibration technique. We further present a measurement-based approach that allows a distinct characterization of read/write performance attributes. In particular, we define simple linear prediction models for I/O request mean response times, throughputs and read/write mixes, as well as a simulation model for predicting response time distributions. We found our models to be effective in predicting such quantities across a range of synthetic and emulated application workloads.
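A minimal sketch of the linear-prediction idea (measurements invented; the models described above also condition on read/write mix and throughput, and a simulation model covers response-time distributions):

```python
import numpy as np

# Mean I/O response time measured inside a VM at each consolidation level.
vms = np.array([1, 2, 3, 4], dtype=float)
mean_rt_ms = np.array([2.1, 3.9, 6.2, 8.0])   # invented measurements

# Fit response time as a linear function of the number of consolidated VMs.
slope, intercept = np.polyfit(vms, mean_rt_ms, 1)

def predict_rt(n_vms):
    """Predicted mean response time (ms) with n_vms sharing the storage server."""
    return slope * n_vms + intercept

print(predict_rt(6))   # extrapolate contention at six consolidated VMs
```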
Abstract:
Background: Digital pathology provides a digital environment for the management and interpretation of pathological images and associated data. It is becoming increasingly popular to use modern computer-based tools and applications in pathology education, tissue-based research and clinical diagnosis. Uptake of this new technology is stymied by its single-user orientation and its prerequisite, cumbersome combination of mouse and keyboard for navigation and annotation.
Methodology: In this study we developed SurfaceSlide, a dedicated viewing platform which enables the navigation and annotation of gigapixel digitised pathological images using fingertip touch. SurfaceSlide was developed using the Microsoft Surface, a 30-inch multi-touch tabletop computing platform. SurfaceSlide users can perform direct panning and zooming operations on digitised slide images, which are downloaded onto the Microsoft Surface platform from a remote server on demand. Users can also draw annotations and key in text using an on-screen virtual keyboard. We also developed a smart caching protocol which caches the regions surrounding the field of view at multiple resolutions, providing a smooth and vivid user experience and reducing the delay in downloading images from the internet. We compared the usability of SurfaceSlide against the Aperio ImageScope and PathXL online viewers.
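A sketch of the smart-caching idea (not SurfaceSlide's actual protocol): an LRU tile cache that prefetches the ring of tiles around the current field of view so panning stays smooth; a fuller version would also prefetch the matching region at adjacent zoom levels for multi-resolution caching.

```python
from collections import OrderedDict

class TileCache:
    def __init__(self, capacity=512, fetch=None):
        self.capacity = capacity
        self.fetch = fetch                 # callable that downloads one tile
        self.tiles = OrderedDict()         # (level, col, row) -> tile bytes

    def get(self, level, col, row):
        """Return a tile, downloading it on a cache miss and evicting the
        least recently used tile when the cache is full."""
        key = (level, col, row)
        if key not in self.tiles:
            self.tiles[key] = self.fetch(*key)
            if len(self.tiles) > self.capacity:
                self.tiles.popitem(last=False)
        else:
            self.tiles.move_to_end(key)    # mark as most recently used
        return self.tiles[key]

    def prefetch_around(self, level, col, row):
        """Warm the cache with the ring of tiles around the field of view."""
        for dc in (-1, 0, 1):
            for dr in (-1, 0, 1):
                if (dc, dr) != (0, 0):
                    self.get(level, col + dc, row + dr)
```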
Conclusion: SurfaceSlide is intuitive, fast and easy to use. SurfaceSlide represents the most direct, effective and intimate human–digital slide interaction experience. It is expected that SurfaceSlide will significantly enhance digital pathology tools and applications in education and clinical practice.
Abstract:
With the rapid expansion of the internet and the increasing demand on Web servers, many techniques have been developed to overcome the performance limitations of server hardware. Mirrored Web servers is one such technique, in which a number of servers carrying the same "mirrored" set of services are deployed, and client access requests are distributed over the set of mirrored servers to even out the load. In this paper we present a generic reference software architecture for load balancing over mirrored Web servers. The architecture was designed adopting the latest NaSr architectural style [1] and described using the ADLARS [2] architecture description language. With minimal effort, different tailored product architectures can be generated from the reference architecture to serve different network protocols and server operating systems. An example product system is described and a sample Java implementation is presented.
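As an illustration of the load-distribution idea at its simplest (a round-robin sketch with invented mirror names; the paper works at the architecture level in ADLARS, with a sample Java implementation):

```python
import itertools

MIRRORS = ["http://mirror1.example.com",   # invented mirrored servers
           "http://mirror2.example.com",
           "http://mirror3.example.com"]

_next_mirror = itertools.cycle(MIRRORS)

def assign(request_path):
    """Round-robin distribution: each client request goes to the next
    mirror in turn, evening out the load across identical servers."""
    return f"{next(_next_mirror)}{request_path}"

for path in ("/index.html", "/search", "/index.html"):
    print(assign(path))   # requests rotate across the three mirrors
```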