139 results for Trusted computing platform


Relevance: 20.00%

Abstract:

Inter-dealer trading in US Treasury securities is almost equally divided between two electronic trading platforms that have only slight differences in their relative liquidity and transparency. BrokerTec is more active in the trading of 2-, 5-, and 10-year T-notes, while eSpeed has more active trading in the 30-year bond. Over the period studied, eSpeed provides a more pre-trade transparent platform than BrokerTec. We examine the contribution of activity in the two platforms to ‘price discovery’ using high-frequency data. We find that price discovery does not derive equally from the two platforms and that the shares vary with term to maturity. This can be traced to differences in trading activity and transparency between the two platforms.

Relevance: 20.00%

Abstract:

We introduce a new parallel pattern derived from a specific application domain and show that it has applications beyond its domain of origin. The pool evolution pattern models the parallel evolution of a population subject to mutation, evolving so that a given fitness function is optimized. The pattern has proven suitable for capturing and modeling the parallel structure underpinning various evolutionary algorithms, as well as other parallel patterns typical of symbolic computation. In this paper we introduce the pattern, discuss its implementation on modern multi-/many-core architectures, and present experimental results obtained with FastFlow and Erlang implementations to assess its feasibility and scalability.
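
A minimal sketch of the pool evolution pattern in plain Python (illustrative only; the names are invented and this is not the FastFlow or Erlang implementation evaluated in the paper): candidates are evolved in parallel, and parents and offspring are then merged under a fitness-driven filter.

    # Minimal sketch of the pool evolution pattern (illustrative only; not the
    # FastFlow or Erlang implementations discussed above).
    from concurrent.futures import ProcessPoolExecutor
    import random

    def fitness(candidate):
        # Toy fitness: number of ones in the bit string.
        return sum(candidate)

    def mutate(candidate):
        # Evolution step: flip one randomly chosen gene.
        child = list(candidate)
        i = random.randrange(len(child))
        child[i] ^= 1
        return child

    def pool_evolution(population, generations=50, workers=4):
        with ProcessPoolExecutor(max_workers=workers) as pool:
            for _ in range(generations):
                # Evolve all selected individuals in parallel.
                offspring = list(pool.map(mutate, population))
                # Filter: keep the fittest of parents and offspring.
                merged = sorted(population + offspring, key=fitness, reverse=True)
                population = merged[:len(population)]
        return population

    if __name__ == "__main__":
        pop = [[random.randint(0, 1) for _ in range(32)] for _ in range(64)]
        print("best fitness:", fitness(pool_evolution(pop)[0]))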

Relevance: 20.00%

Abstract:

The design cycle for complex special-purpose computing systems is extremely costly and time-consuming. It involves a multiparametric design space exploration for optimization, followed by design verification. Designers of special-purpose VLSI implementations often need to explore parameters, such as optimal bitwidth and data representation, through time-consuming Monte Carlo simulations. A prominent example of this simulation-based exploration process is the design of decoders for error-correcting systems, such as the Low-Density Parity-Check (LDPC) codes adopted by modern communication standards, which involves thousands of Monte Carlo runs for each design point. Currently, high-performance computing offers a wide set of acceleration options that range from multicore CPUs to Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). Exploiting diverse target architectures typically entails developing multiple code versions, often using distinct programming paradigms. In this context, we evaluate the concept of retargeting a single OpenCL program to multiple platforms, thereby significantly reducing design time. A single OpenCL-based parallel kernel is used without modifications or code tuning on multicore CPUs, GPUs, and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL, to introduce FPGAs as a potential platform for efficiently executing simulations coded in OpenCL. We use LDPC decoding simulations as a case study. Experimental results were obtained by testing a variety of regular and irregular LDPC codes, ranging from short/medium-length (e.g., 8,000-bit) to long (e.g., 64,800-bit) DVB-S2 codes. We observe that, depending on the design parameters to be simulated and on the dimension and phase of the design, either the GPU or the FPGA may be the more suitable accelerator, providing different acceleration factors over conventional multicore CPUs.
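
As a hedged sketch of the single-source retargeting idea, assuming the pyopencl bindings and at least one installed OpenCL runtime (the paper itself uses native OpenCL plus SOpenCL for the FPGA path), one unmodified kernel can be dispatched to every device the installed platforms expose:

    # Illustrative sketch: one unmodified OpenCL kernel run on every available
    # device (CPU, GPU, or FPGA with a vendor OpenCL runtime). Assumes pyopencl;
    # this is not the SOpenCL flow described above.
    import numpy as np
    import pyopencl as cl

    KERNEL_SRC = """
    __kernel void saxpy(__global const float *x, __global float *y, float a) {
        int i = get_global_id(0);
        y[i] = a * x[i] + y[i];
    }
    """

    x = np.random.rand(1 << 20).astype(np.float32)
    y = np.random.rand(1 << 20).astype(np.float32)

    for platform in cl.get_platforms():
        for device in platform.get_devices():
            ctx = cl.Context([device])
            queue = cl.CommandQueue(ctx)
            prog = cl.Program(ctx, KERNEL_SRC).build()
            mf = cl.mem_flags
            x_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=x)
            y_buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=y)
            prog.saxpy(queue, x.shape, None, x_buf, y_buf, np.float32(2.0))
            out = np.empty_like(y)
            cl.enqueue_copy(queue, out, y_buf)
            print(device.name, "correct:", np.allclose(out, 2.0 * x + y))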

Relevance: 20.00%

Abstract:

Discriminating different target species within a single sensing platform offers many advantages, such as simplicity, speed, and cost effectiveness. Here we design a three-input colorimetric logic gate based on the aggregation and anti-aggregation of gold nanoparticles (Au NPs) for the sensing of melamine, cysteine, and Hg2+. The concept takes advantage of the highly specific coordination and ligand replacement reactions between melamine, cysteine, Hg2+, and Au NPs. Different outputs are obtained for different combinations of inputs to the logic gate, which can serve as a reference to discriminate different analytes within a single sensing platform. Furthermore, besides the intrinsic sensitivity and selectivity of Au NPs towards melamine-like compounds, the “INH” gates of melamine/cysteine and melamine/Hg2+ in this logic system can be employed for the sensitive and selective detection of cysteine and Hg2+, respectively.
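
A toy model of the gate logic alone, with the surface chemistry deliberately reduced to a single rule (this is a reading of the INH behaviour described above, not the authors' code): aggregation, and hence a colour change, occurs only when melamine is present and the inhibiting analyte is absent.

    # Toy truth table for an INHIBIT (INH) gate, the logic attributed above to
    # the melamine/cysteine and melamine/Hg2+ input pairs. Illustrative only;
    # the underlying nanoparticle chemistry is greatly simplified.
    def inh_gate(melamine: bool, inhibitor: bool) -> bool:
        # Output 1 (aggregation / colour change) only if melamine is present
        # and the inhibiting analyte (cysteine or Hg2+) is absent.
        return melamine and not inhibitor

    for melamine in (False, True):
        for inhibitor in (False, True):
            out = inh_gate(melamine, inhibitor)
            print(f"melamine={int(melamine)} inhibitor={int(inhibitor)} -> {int(out)}")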

Relevance: 20.00%

Abstract:

Embedded memories account for a large fraction of the overall silicon area and power consumption in modern SoCs. While embedded memories are typically realized with SRAM, alternative solutions, such as embedded dynamic memories (eDRAM), can provide higher density and/or reduced power consumption. One major challenge that impedes the widespread adoption of eDRAM is that it requires frequent refreshes, which can reduce the availability of the memory during periods of high activity and consume a significant amount of power. Reducing the refresh rate can lower this power overhead, but if refreshes are not performed in a timely manner, some cells may lose their content, potentially resulting in memory errors. In this paper, we consider extending the refresh period of gain-cell-based dynamic memories beyond the worst-case point of failure, assuming that the resulting errors can be tolerated when the use cases lie in the domain of inherently error-resilient applications. For example, we observe that for various data mining applications, a large number of memory failures can be accepted with tolerable imprecision in output quality. In particular, our results indicate that by allowing as many as 177 errors in a 16 kB memory, the maximum loss in output quality is 11%. We use this failure limit to study the impact of relaxing reliability constraints on memory availability and retention power for different technologies.
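
A quick back-of-the-envelope check of that failure budget (simple arithmetic, not part of the study's methodology): 177 faulty cells in a 16 kB array corresponds to a raw bit-error rate of roughly 0.14%.

    # 177 tolerated errors in a 16 kB memory expressed as a raw bit-error rate.
    MEMORY_BITS = 16 * 1024 * 8      # 16 kB = 131,072 bits
    TOLERATED_ERRORS = 177           # failure limit quoted above

    rate = TOLERATED_ERRORS / MEMORY_BITS
    print(f"{MEMORY_BITS} bits, tolerated bit-error rate = {rate:.4%}")  # ~0.1350%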

Relevance: 20.00%

Abstract:

Randomised trials are at the heart of evidence-based healthcare, but the methods and infrastructure for conducting these sometimes complex studies are largely evidence-free. Trial Forge (www.trialforge.org) is an initiative that aims to increase the evidence base for trial decision making and, in doing so, to improve trial efficiency.

This paper summarises a one-day workshop held in Edinburgh on 10 July 2014 to discuss Trial Forge and how to advance this initiative. We first outline the problem of inefficiency in randomised trials and go on to describe Trial Forge. We present participants' views on the processes in the life of a randomised trial that should be covered by Trial Forge.

There was general support at the workshop for the Trial Forge approach of increasing the evidence base for making randomised trial decisions and improving trial efficiency. Key processes agreed upon included choosing the right research question; logistical planning for delivery, training of staff, recruitment, and retention; data management and dissemination; and close-down. Linking to existing initiatives where possible was considered crucial. Trial Forge will not be a guideline or a checklist but a 'go to' website for research on randomised trial methods, with a linked programme of applied methodology research, coupled to an effective evidence-dissemination process. Moreover, it will support an informal network of interested trialists who meet virtually (online) and occasionally in person to build capacity and knowledge in the design and conduct of efficient randomised trials.

Some of the resources invested in randomised trials are wasted because of limited evidence upon which to base many aspects of design, conduct, analysis, and reporting of clinical trials. Trial Forge will help to address this lack of evidence.

Relevance: 20.00%

Abstract:

Microneedles (MNs) are a minimally invasive drug delivery platform, designed to enhance transdermal drug delivery by breaching the stratum corneum. For the first time, this study describes the simultaneous delivery of a combination of three drugs using a dissolving polymeric MN system. In the present study, aspirin, lisinopril dihydrate, and atorvastatin calcium trihydrate were used as exemplar cardiovascular drugs and formulated into MN arrays using two biocompatible polymers, poly(vinylpyrrolidone) and poly(methylvinylether/maleic acid). Following fabrication, dissolution, mechanical testing, and determination of drug recovery from the MN arrays, in vitro drug delivery studies were undertaken, followed by HPLC analysis. All three drugs were successfully delivered in vitro across neonatal porcine skin, with similar permeation profiles achieved from both polymer formulations. An average of 126.3 ± 18.1 μg of atorvastatin calcium trihydrate was delivered, notably lower than the 687.9 ± 101.3 μg of lisinopril and 3924 ± 1011 μg of aspirin, owing to the hydrophobic nature of the atorvastatin molecule and its consequently poor dissolution from the array. Polymer deposition into the skin may be an issue with repeat application of such an MN array; hence, future work will consider more appropriate MN systems for continuous use, alongside tailoring delivery to less hydrophilic compounds.

Relevance: 20.00%

Abstract:

The advent of microneedle (MN) technology has provided a revolutionary platform for the delivery of therapeutic agents, particularly in the field of gene therapy. For over 20 years, the area of gene therapy has undergone intense innovation and progress, advancing the technology from an experimental concept to a widely acknowledged strategy for the treatment and prevention of numerous disease states. However, the true potential of gene therapy has yet to be achieved due to limitations in formulation and delivery technologies beyond parenteral injection of the DNA. Microneedle-mediated delivery provides a unique platform for the clinical delivery of DNA therapeutics. It provides a means to overcome the skin barriers to gene delivery and deposit the DNA directly into the dermal layers, a key site for delivery of therapeutics to treat a wide range of skin and cutaneous diseases. Additionally, the skin is a tissue rich in immune sentinels, an ideal target for the delivery of a DNA vaccine directly to the desired target cell populations. This review details the advancement of MN-mediated DNA delivery from proof of concept to the delivery of DNA encoding clinically relevant proteins and antigens, and examines the key considerations for the improvement of the technology and its progress into a clinically applicable delivery system.

Relevance: 20.00%

Abstract:

Interesting wireless networking scenarios exist wherein network services must be guaranteed in a dynamic fashion for some priority users. For example, in disaster recovery, members need to be able to quickly block other users in order to gain sole use of the radio channel. As it is not always feasible to physically switch off other users, we propose a new approach, termed selective packet destruction (SPD), to ensure service for priority users. A testbed for SPD, based on the Rice University Wireless Open-Access Research Platform, has been created and used to examine the feasibility of our approach. Results from the testbed are presented to demonstrate the feasibility of SPD and show how a balance between performance and acknowledgement destruction rate can be achieved. A 90% reduction in TCP and UDP traffic is achieved for a 75% MAC ACK destruction rate.
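
A toy Monte Carlo sketch of the mechanism, with invented parameters (it does not model TCP congestion control and is not the testbed above): destroying MAC-layer ACKs forces retransmissions until the retry limit is exhausted, so channel accesses are wasted and goodput falls as the destruction rate rises.

    # Toy model of selective packet destruction: a frame's MAC ACK is destroyed
    # with probability p, forcing retransmissions up to a retry limit.
    # Illustrative only; not the testbed or the TCP/UDP results above.
    import random

    def goodput_fraction(ack_destruction_rate, retry_limit=4, frames=100_000):
        delivered, attempts = 0, 0
        for _ in range(frames):
            for _try in range(retry_limit + 1):
                attempts += 1
                if random.random() > ack_destruction_rate:  # ACK got through
                    delivered += 1
                    break
        # Useful frames per channel access (wasted retransmissions lower this).
        return delivered / attempts

    for p in (0.0, 0.25, 0.5, 0.75):
        print(f"ACK destruction {p:.0%}: goodput fraction {goodput_fraction(p):.2f}")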

Relevance: 20.00%

Abstract:

The sustainable control of animal parasitic nematodes requires the development of efficient functional genomics platforms to facilitate target validation and enhance anthelmintic discovery. Unfortunately, the utility of RNA interference (RNAi) for the validation of novel drug targets in nematode parasites remains problematic. Ascaris suum is an important veterinary parasite and a zoonotic pathogen. Here we show that adult A. suum is RNAi competent, and highlight the induction, spread and consistency of RNAi across multiple tissue types. This platform provides a new opportunity to undertake whole organism-, tissue- and cell-level gene function studies to enhance target validation processes for nematode parasites of veterinary/medical significance.

Relevance: 20.00%

Abstract:

The worldwide scarcity of women studying or employed in ICT, or in computing-related disciplines, continues to be a topic of concern for industry, the education sector, and governments. Within Europe, while females make up 46% of the workforce, only 17% of IT staff are female. A similar gender divide is repeated worldwide, with top technology employers in Silicon Valley, including Facebook, Google, Twitter, and Apple, reporting that only 30% of their workforce is female (Larson 2014). Previous research into this gender divide suggests that young women in Secondary Education display a more negative attitude towards computing than their male counterparts. It would appear that this negative female perception of computing has led to correspondingly low numbers of women studying ICT at a tertiary level and, consequently, an under-representation of females within the ICT industry. The aim of this study is to 1) establish a baseline understanding of the attitudes and perceptions of Secondary Education pupils with regard to computing and 2) statistically establish whether young females in Secondary Education really do have a more negative attitude towards computing.
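
Purely as an illustration of the kind of comparison the second aim implies (hypothetical scores and scale, not the study's instrument or analysis), attitude scores of two pupil groups could be compared with a non-parametric test:

    # Illustrative two-group comparison of attitude scores (hypothetical data).
    # Uses a Mann-Whitney U test, appropriate for ordinal Likert-style scores.
    from scipy.stats import mannwhitneyu

    female_scores = [2, 3, 3, 2, 4, 3, 2, 3, 3, 2]   # toy attitude scores (1-5)
    male_scores   = [4, 3, 4, 5, 3, 4, 4, 3, 5, 4]

    stat, p_value = mannwhitneyu(female_scores, male_scores, alternative="less")
    print(f"U={stat:.1f}, p={p_value:.4f}")
    # A small p-value would suggest female scores tend to be lower (more negative).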

Relevance: 20.00%

Abstract:

The increasing complexity and scale of cloud computing environments, due to widespread data centre heterogeneity, make measurement-based evaluations highly difficult to achieve. The use of simulation tools to support decision making in cloud computing environments is therefore an increasing trend. However, the data required to model cloud computing environments with an appropriate degree of accuracy are typically voluminous, difficult to collect without some form of automation, often unavailable in a suitable format, and time-consuming to gather manually. In this research, an automated method for cloud computing topology definition, data collection, and model creation is presented, within the context of a suite of tools that have been developed and integrated to support these activities.
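
As a hedged sketch of what an automatically collected topology might look like once serialised for a simulator (a hypothetical schema, not the actual tool suite described above):

    # Hypothetical topology schema emitted by an automated collection step and
    # consumed by a cloud simulator. Illustrative only.
    from dataclasses import dataclass, asdict, field
    import json

    @dataclass
    class Host:
        name: str
        cores: int
        ram_gb: int
        mips_per_core: float

    @dataclass
    class DataCentre:
        name: str
        hosts: list = field(default_factory=list)

    def to_simulator_config(datacentres):
        # One JSON document describing the whole heterogeneous topology.
        return json.dumps({"datacentres": [asdict(dc) for dc in datacentres]}, indent=2)

    topology = [
        DataCentre("dc-eu-1", [Host("h1", 16, 64, 2500.0), Host("h2", 32, 128, 2000.0)]),
        DataCentre("dc-us-1", [Host("h3", 8, 32, 3000.0)]),
    ]
    print(to_simulator_config(topology))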

Relevance: 20.00%

Abstract:

Recent research in industrial organisation has investigated the essential role that middlemen play in the networks that make up our global economy. In this paper we attempt to understand how such middlemen compete with each other through a game-theoretic analysis using novel techniques from decision-making under ambiguity.
We construct a purposely abstract, reduced model of a single middleman who provides a two-sided platform, mediating surplus-creating interactions between two users. The middleman evaluates uncertain outcomes under positional ambiguity, taking into account the possibility that an alternative middleman offering intermediary services to the two users may emerge.
Surprisingly, we find many situations in which the middleman will purposely extract maximal gains from her position. Only if there is a relatively low probability of a devastating loss of business under competition will the middleman adopt a more competitive attitude and extract less from her position.
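
As a toy illustration of evaluating strategies under ambiguity (the payoffs and the Hurwicz-style criterion below are invented for illustration and do not reproduce the paper's model or its conclusions):

    # Toy alpha-maxmin (Hurwicz) evaluation of two middleman stances when the
    # emergence of a rival platform is ambiguous. Invented payoffs; illustrative only.
    def hurwicz(worst, best, alpha):
        # alpha = degree of pessimism: 1.0 is pure maxmin, 0.0 is pure maxmax.
        return alpha * worst + (1 - alpha) * best

    # Payoffs (rival emerges, no rival emerges) for each stance.
    EXTRACT  = (0.0, 10.0)   # extract maximal gains from the position
    MODERATE = (4.0, 6.0)    # more competitive attitude, extract less

    for alpha in (0.2, 0.5, 0.8):
        e = hurwicz(*EXTRACT, alpha)
        m = hurwicz(*MODERATE, alpha)
        print(f"alpha={alpha}: extract={e:.1f} moderate={m:.1f} -> "
              f"{'extract' if e > m else 'moderate'}")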