4 results for Distributed replication system

at Duke University


Relevance: 30.00%

Abstract:

BACKGROUND: Individuals without prior immunity to a vaccine vector may be more sensitive to reactions following injection, but may also show optimal immune responses to vaccine antigens. To assess safety and maximal tolerated dose of an adenoviral vaccine vector in volunteers without prior immunity, we evaluated a recombinant replication-defective adenovirus type 5 (rAd5) vaccine expressing HIV-1 Gag, Pol, and multiclade Env proteins, VRC-HIVADV014-00-VP, in a randomized, double-blind, dose-escalation, multicenter trial (HVTN study 054) in HIV-1-seronegative participants without detectable neutralizing antibodies (nAb) to the vector. As secondary outcomes, we also assessed T-cell and antibody responses. METHODOLOGY/PRINCIPAL FINDINGS: Volunteers received one dose of vaccine at either 10^10 or 10^11 adenovector particle units, or placebo. T-cell responses were measured against pools of global potential T-cell epitope peptides. HIV-1 binding and neutralizing antibodies were assessed. Systemic reactogenicity was greater at the higher dose, but the vaccine was well tolerated at both doses. Although no HIV infections occurred, commercial diagnostic assays were positive in 87% of vaccinees one year after vaccination. More than 85% of vaccinees developed HIV-1-specific T-cell responses detected by IFN-γ ELISpot and ICS assays at day 28. T-cell responses were: CD8-biased; evenly distributed across the three HIV-1 antigens; not substantially increased at the higher dose; and detected at similar frequencies one year following injection. The vaccine induced binding antibodies against at least one HIV-1 Env antigen in all recipients. CONCLUSIONS/SIGNIFICANCE: This vaccine appeared safe and was highly immunogenic following a single dose in human volunteers without prior nAb against the vector. TRIAL REGISTRATION: ClinicalTrials.gov NCT00119873.

Relevance: 30.00%

Abstract:

An enterprise information system (EIS) is an integrated data-applications platform characterized by diverse, heterogeneous, and distributed data sources. For many enterprises, a number of business processes still depend heavily on static rule-based methods and extensive human expertise. Enterprises are faced with the need for optimizing operation scheduling, improving resource utilization, discovering useful knowledge, and making data-driven decisions.

This thesis research is focused on real-time optimization and knowledge discovery that addresses workflow optimization, resource allocation, as well as data-driven predictions of process-execution times, order fulfillment, and enterprise service-level performance. In contrast to prior work on data analytics techniques for enterprise performance optimization, the emphasis here is on realizing scalable and real-time enterprise intelligence based on a combination of heterogeneous system simulation, combinatorial optimization, machine-learning algorithms, and statistical methods.

On-demand digital-print service is a representative enterprise requiring a powerful EIS. We use real-life data from Reischling Press, Inc. (RPI), a digital-print-service provider (PSP), to evaluate our optimization algorithms.

In order to handle the increase in volume and diversity of demands, we first present a high-performance, scalable, and real-time production scheduling algorithm for production automation based on an incremental genetic algorithm (IGA). The objective of this algorithm is to optimize the order dispatching sequence and balance resource utilization. Compared to prior work, this solution is scalable for a high volume of orders and it provides fast scheduling solutions for orders that require complex fulfillment procedures. Experimental results highlight its potential benefit in reducing production inefficiencies and enhancing the productivity of an enterprise.
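To make the scheduling approach concrete, here is a minimal sketch of an incremental genetic algorithm over order dispatch sequences. It is not RPI's production scheduler: the greedy makespan-style fitness function, the machine model, and the seeding of the population from a previous schedule are illustrative assumptions.

```python
# Hedged sketch of an incremental genetic algorithm (IGA) for order dispatching.
import random

def makespan(sequence, proc_times, machines=4):
    """Greedy list-scheduling estimate of completion time for a dispatch sequence."""
    loads = [0.0] * machines
    for order in sequence:
        m = loads.index(min(loads))            # assign each order to the least-loaded machine
        loads[m] += proc_times[order]
    return max(loads)

def evolve(orders, proc_times, seed=None, pop_size=50, generations=200):
    """Evolve a dispatch sequence; `seed` lets new orders reuse the previous schedule."""
    base = seed + [o for o in orders if o not in seed] if seed else list(orders)
    population = [random.sample(base, len(base)) for _ in range(pop_size - 1)] + [base]
    for _ in range(generations):
        population.sort(key=lambda s: makespan(s, proc_times))
        survivors = population[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(len(base))
            child = p1[:cut] + [o for o in p2 if o not in p1[:cut]]   # order crossover
            i, j = random.sample(range(len(base)), 2)
            child[i], child[j] = child[j], child[i]                   # swap mutation
            children.append(child)
        population = survivors + children
    return min(population, key=lambda s: makespan(s, proc_times))

# Hypothetical example: schedule five orders, then re-plan when two more arrive.
times = {f"o{i}": t for i, t in enumerate([3.0, 7.0, 2.0, 8.0, 4.0])}
best = evolve(list(times), times)
times.update({"o5": 6.0, "o6": 1.0})
best = evolve(list(times), times, seed=best)   # search restarts near the previous solution
```

Seeding the new population with the previous best sequence is what makes the search incremental: when new orders arrive, the algorithm refines a known-good schedule instead of starting from scratch, which is how a sketch like this would keep scheduling fast at high order volumes.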

We next discuss analysis and prediction of different attributes involved in hierarchical components of an enterprise. We start from a study of the fundamental processes related to real-time prediction. Our process-execution-time and process-status prediction models integrate statistical methods with machine-learning algorithms; in addition to improved prediction accuracy compared to stand-alone machine-learning algorithms, they also provide a probabilistic estimate of the predicted status. An order generally consists of multiple series and parallel processes. We next introduce an order-fulfillment prediction model that combines the advantages of multiple classification models by incorporating flexible decision-integration mechanisms. Experimental results show that adopting due dates recommended by the model can significantly reduce the enterprise's late-delivery ratio. Finally, we investigate service-level attributes that reflect the overall performance of an enterprise. We analyze and decompose time-series data into different components according to their hierarchical periodic nature, perform correlation analysis, and develop univariate prediction models for each component as well as multivariate models for correlated components. Predictions for the original time series are aggregated from the predictions of its components. In addition to a significant increase in mid-term prediction accuracy, this distributed modeling strategy also improves short-term time-series prediction accuracy.
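As a rough illustration of this component-wise strategy (not the thesis's actual models), the sketch below decomposes a periodic demand series into trend, seasonal, and residual components, forecasts each component separately, and aggregates the component forecasts back into a forecast of the original series. The use of statsmodels, the weekly period, and the per-component models are assumptions made for the example.

```python
# Hedged sketch: decompose, predict per component, then aggregate the predictions.
import numpy as np
import pandas as pd
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.tsa.seasonal import seasonal_decompose

def forecast_by_components(series, horizon=7, period=7):
    parts = seasonal_decompose(series, model="additive", period=period)
    # Univariate autoregressive model for the smoother trend component.
    trend = parts.trend.dropna()
    trend_fc = np.asarray(AutoReg(trend, lags=period).fit().forecast(horizon))
    # The seasonal component simply repeats with the chosen period.
    seasonal_fc = np.tile(parts.seasonal.values[-period:], horizon // period + 1)[:horizon]
    # The residual component is modelled here by its mean (could be another model).
    resid_fc = np.full(horizon, parts.resid.dropna().mean())
    # Aggregate the component forecasts into a forecast of the original series.
    return pd.Series(trend_fc + seasonal_fc + resid_fc,
                     index=pd.RangeIndex(len(series), len(series) + horizon))

# Synthetic example: 120 days of demand with weekly seasonality on a slow upward trend.
days = np.arange(120)
demand = pd.Series(100 + 0.5 * days + 10 * np.sin(2 * np.pi * days / 7)
                   + np.random.normal(0, 2, len(days)))
print(forecast_by_components(demand, horizon=7))
```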

In summary, this thesis research has led to a set of characterization, optimization, and prediction tools for an EIS to derive insightful knowledge from data and use it as guidance for production management. It is expected to provide solutions for enterprises to increase reconfigurability, automate more procedures, and obtain data-driven recommendations that support effective decisions.

Relevance: 30.00%

Abstract:

Secure Access For Everyone (SAFE) is an integrated system for managing trust using a logic-based declarative language. Logical trust systems authorize each request by constructing a proof from a context: a set of authenticated logic statements representing credentials and policies issued by various principals in a networked system. A key barrier to practical use of logical trust systems is the problem of managing proof contexts: identifying, validating, and assembling the credentials and policies that are relevant to each trust decision.

SAFE addresses this challenge by (i) proposing a distributed authenticated data repository for storing the credentials and policies, and (ii) introducing a programmable credential discovery and assembly layer that generates the appropriate tailored context for a given request. The authenticated data repository is built upon a scalable key-value store whose contents are named by secure identifiers and certified by the issuing principal. The SAFE language provides scripting primitives to generate and organize logic sets representing credentials and policies, materialize the logic sets as certificates, and link them to reflect delegation patterns in the application. The authorizer fetches the logic sets on demand, then validates and caches them locally for further use. Upon each request, the authorizer constructs the tailored proof context and provides it to the SAFE inference engine for certified validation.
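The toy sketch below (which does not use SAFE's actual API or language) illustrates the assembly step this paragraph describes: certified logic sets live in a key-value store under secure identifiers, link to other sets to express delegation, and are fetched, validated, and cached on demand to produce a tailored context for one request. The identifiers, statement strings, and `verify` stub are invented for illustration.

```python
# Toy illustration of delegation-driven context assembly (not SAFE's real API).
from dataclasses import dataclass, field

@dataclass
class LogicSet:
    issuer: str                                  # principal that certified the set
    statements: list                             # credential / policy statements
    links: list = field(default_factory=list)    # secure ids of delegated logic sets

STORE = {}   # stand-in for the distributed, authenticated key-value store

def verify(entry):
    """Placeholder for checking the issuer's certification of the set's contents."""
    return True

def assemble_context(root_id, cache=None):
    """Fetch, validate, and cache logic sets by following delegation links."""
    cache = cache if cache is not None else {}
    context, pending, seen = [], [root_id], set()
    while pending:
        set_id = pending.pop()
        if set_id in seen:
            continue
        seen.add(set_id)
        entry = cache.get(set_id)
        if entry is None:
            entry = STORE[set_id]                # a network fetch in the real system
            if not verify(entry):                # drop sets that fail certification
                continue
            cache[set_id] = entry
        context.extend(entry.statements)
        pending.extend(entry.links)              # follow delegation links
    return context

# Hypothetical example: an authorizer policy set that delegates to a credential set.
STORE["id:authorizer-policy"] = LogicSet("authorizer",
                                         ["grant access if the requesting principal is trusted"],
                                         links=["id:delegated-credentials"])
STORE["id:delegated-credentials"] = LogicSet("registry", ["the requesting principal is trusted"])
print(assemble_context("id:authorizer-policy"))
```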

Delegation-driven credential linking with certified data distribution provides flexible and dynamic policy control, enabling security and trust infrastructure to be agile while addressing the perennial problems of today's certificate infrastructure: automated credential discovery, scalable revocation, and issuing credentials without relying on a centralized authority.

We envision SAFE as a new foundation for building secure network systems. We used SAFE to build secure services based on case studies drawn from practice: (i) a secure name-service resolver similar to DNS that resolves a name across multi-domain federated systems; (ii) a secure proxy shim to delegate access-control decisions in a key-value store; (iii) an authorization module for a networked infrastructure-as-a-service system with a federated trust structure (NSF GENI initiative); and (iv) a secure cooperative data analytics service that adheres to individual secrecy constraints while disclosing the data. We present an empirical evaluation based on these case studies and demonstrate that SAFE supports a wide range of applications with low overhead.

Relevance: 30.00%

Abstract:

Distributed computing frameworks belong to a class of programming models that allow developers to launch workloads on large clusters of machines. Due to the dramatic increase in the volume of data gathered by ubiquitous computing devices, data-analytic workloads have become a common case among distributed computing applications, making Data Science an entire field of Computer Science. We argue that a data scientist's concerns lie in three main components: a dataset, a sequence of operations to apply to this dataset, and constraints related to the work (performance, QoS, budget, etc.). However, without domain expertise it is extremely difficult to perform data science: one needs to select the right amount and type of resources, pick a framework, and configure it. Moreover, users often run their applications in shared environments governed by schedulers that expect them to specify their resource needs precisely. Because of the distributed and concurrent nature of these frameworks, monitoring and profiling are hard, high-dimensional problems that prevent users from making the right configuration choices and determining the amount of resources they need. Paradoxically, the system gathers a large amount of monitoring data at runtime that remains unused.

In the ideal abstraction we envision for data scientists, the system is adaptive: it exploits monitoring data to learn about workloads and processes user requests into a tailored execution context. In this work, we study techniques that have been used to take steps toward such system awareness, and we explore a new way to do so by applying machine-learning techniques to recommend a specific subset of system configurations for Apache Spark applications.
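One simple way such a recommendation step could work (a hypothetical sketch, not the model developed in this work) is to describe each past run by a few workload features, then suggest for a new workload the configuration that performed best on the most similar past run. The feature set, the example data, and the use of a nearest-neighbour search are assumptions made for illustration.

```python
# Hypothetical nearest-neighbour recommender for Spark configurations.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Past runs described as (input size in GB, shuffle ratio, stage count),
# each paired with the configuration that performed best for it.
past_features = np.array([
    [10.0,  0.1,  5.0],
    [200.0, 0.6, 40.0],
    [50.0,  0.3, 12.0],
])
past_configs = [
    {"spark.executor.instances": "4",  "spark.executor.cores": "2", "spark.executor.memory": "4g"},
    {"spark.executor.instances": "32", "spark.executor.cores": "4", "spark.executor.memory": "16g"},
    {"spark.executor.instances": "8",  "spark.executor.cores": "4", "spark.executor.memory": "8g"},
]

index = NearestNeighbors(n_neighbors=1).fit(past_features)

def recommend(features):
    """Return the configuration of the most similar past run."""
    _, neighbour = index.kneighbors(np.array([features]))
    return past_configs[neighbour[0][0]]

print(recommend([60.0, 0.25, 10.0]))   # expected to suggest the mid-sized configuration
```

A real recommender would normalize the features, draw on many more runs, and check the suggestion against the user's constraints (performance, QoS, budget) before applying it.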

Furthermore, we present an in-depth study of Apache Spark executor configuration, which highlights the complexity of choosing the best one for a given workload.
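For reference, the sketch below shows the kind of executor settings such a study concerns, expressed as PySpark session configuration; the values are illustrative choices for a hypothetical job, not recommendations.

```python
# Illustrative executor configuration for a hypothetical Spark job.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("executor-config-example")
    .config("spark.executor.instances", "8")    # how many executors to launch
    .config("spark.executor.cores", "4")        # concurrent tasks per executor
    .config("spark.executor.memory", "8g")      # heap available to each executor
    .config("spark.memory.fraction", "0.6")     # share of heap for execution and storage
    .getOrCreate()
)
```

The same job can run well or poorly depending on how total cores and memory are split across executors, which is why choosing these values is treated here as a problem worth automating rather than a fixed rule of thumb.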