919 results for Drop on Demand


Relevance:

80.00%

Publisher:

Abstract:

At the Large Hadron Collider (LHC), more than 30 petabytes of collision data are collected in each year of data taking. Processing these data requires producing a large volume of simulated events through Monte Carlo techniques. In addition, physics analysis requires daily access to derived data formats for hundreds of users. The Worldwide LHC Computing Grid (WLCG) is an international collaboration of scientists and computing centres that has met the technological challenges of the LHC, making its scientific programme possible. As data taking continues, and with the recent approval of ambitious projects such as the High-Luminosity LHC, the limits of current computing capacity will soon be reached. One of the keys to overcoming these challenges in the coming decade, also in light of the budget constraints of the various national funding agencies, is to use the available computing resources as efficiently as possible. This work aims to develop and evaluate tools that improve the understanding of how both production and analysis data are monitored in CMS. The work therefore comprises two parts. The first, concerning distributed analysis, is the development of a tool that quickly analyses the log files of completed job submissions so that, on the next submission, the user can make better use of the computing resources. The second part, concerning the monitoring of both production and analysis jobs, exploits Big Data technologies to provide a more efficient and flexible monitoring service. A notable aspect of these improvements is the possibility of avoiding a high level of data aggregation at an early stage, collecting monitoring data at a high granularity that still allows later reprocessing and "on-demand" aggregation.
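The on-demand aggregation described above can be illustrated with a minimal sketch: raw, high-granularity monitoring records are kept unaggregated, and views are grouped only when requested. The record fields and site names below are illustrative assumptions, not the actual CMS monitoring schema.

```python
from collections import defaultdict

# Hypothetical fine-grained job-monitoring records; the field names and
# site names are invented for illustration.
records = [
    {"site": "T2_IT_Bari", "status": "finished", "wall_time_s": 3600},
    {"site": "T2_IT_Bari", "status": "failed",   "wall_time_s": 120},
    {"site": "T1_US_FNAL", "status": "finished", "wall_time_s": 7200},
]

def aggregate(records, key_fields):
    """Aggregate raw records on demand, grouping by the requested fields."""
    groups = defaultdict(lambda: {"jobs": 0, "wall_time_s": 0})
    for rec in records:
        key = tuple(rec[f] for f in key_fields)
        groups[key]["jobs"] += 1
        groups[key]["wall_time_s"] += rec["wall_time_s"]
    return dict(groups)

# The same raw data supports different aggregations without re-collection.
by_site = aggregate(records, ["site"])
by_site_status = aggregate(records, ["site", "status"])
```

Because the raw records are preserved, any new aggregation (by status, by site, or both) can be recomputed later, which is the flexibility the abstract contrasts with aggregating too early.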

Relevance:

80.00%

Publisher:

Abstract:

The digital society embraces us in every aspect of daily life, and a significant part of the population lives connected across multiple platforms. With the instantaneity of communication flows, we live a routine in which many services are just a "click" or a touch away. Television, the dominant medium for several decades, takes on in its digital transition a role beyond the TV we knew, acting as an interactive display that connects to, and absorbs content from, multiple sources. The established worldwide models of audiovisual distribution, especially broadcast, suffer the consequences of changes in audience behaviour driven by new opportunities to access content, now interactive and on demand. In this context, the models of broadband-connected Smart TVs offer differentiated options and claim an ever larger space in the connection with all other displays. Against this background, the present study seeks to describe and analyse the new offerings of content, applications, possibilities and trends in the hybridisation of sources for the future of TV.

Relevance:

80.00%

Publisher:

Abstract:

Work-life balance (WLB) is a key issue in societies in which there is increasing pressure to be permanently available on demand and to work more intensively, and in which, owing to technological change, the borders between work and private life appear to be dissolving. However, the social, institutional and normative frames of a region have a huge impact on how people experience work and private life, where the borders between these spheres lie, and how much control individuals have in managing these borders. Based on these arguments, this editorial to the special issue Work-life balance/imbalance: individual, organisational and social experiences in Intersections. EEJSP draws attention to the social institutions, frameworks and norms which affect experiences, practices and expectations concerning work-life balance. Concerning the time horizon, this editorial takes the change of regime as its reference point, since the socialist and post-socialist eras differ significantly, although there is still some continuity between them. The authors of this introduction offer an overview of the situation in CEE (Central and Eastern Europe), based mainly on examples from the Visegrad countries.

Relevance:

80.00%

Publisher:

Abstract:

Nucleic acid hairpins have been a subject of study for the last four decades. They are composed of a single strand that is hybridized to itself, with the central section forming an unhybridized loop. In nature, they stabilize single-stranded RNA and serve as nucleation sites for RNA folding, protein recognition signals, mRNA localization, and regulators of mRNA degradation. DNA hairpins in biological contexts, on the other hand, have been studied with respect to the formation of cruciform structures that can regulate gene expression. The use of DNA hairpins as fuel for synthetic molecular devices, including locomotion, was proposed and experimentally demonstrated in 2003. They are interesting because they provide an on-demand energy/information supply mechanism: the energy/information is hidden (from hybridization) in the hairpin's loop until required, and is harnessed by opening the stem region and exposing the single-stranded loop section. The loop region is then free for hybridization, helping to move the system into a thermodynamically favourable state. This hidden energy and information, coupled with programmability, provides a further capability: selectively choosing which reactions to hide and which to allow to proceed, which helps develop a topological sequence of events.

Hairpins have been utilized as a source of fuel for many different DNA devices. In this thesis, we program four different molecular devices using DNA hairpins and validate them experimentally in the laboratory. 1) The first device: a novel enzyme-free autocatalytic self-replicating system, composed entirely of DNA, that operates isothermally. 2) The second device: time-responsive DNA circuits with two properties: a) asynchrony, meaning the final output is always correct regardless of differences in the arrival times of the inputs; and b) renewability, meaning the circuits can be used multiple times without major degradation of the gate motifs (so if the inputs change over time, the DNA-based circuit can re-compute the output correctly based on the new inputs). 3) The third device: activatable tiles, a theoretical extension of the tile assembly model that enhances its robustness by protecting the sticky sides of tiles until a tile is partially incorporated into a growing assembly. 4) The fourth device: controlled amplification of a DNA catalytic system, a device in which the amplification does not run uncontrollably until the system runs out of fuel, but instead achieves a finite amount of gain.

Nucleic acid circuits with the ability to perform complex logic operations have many potential practical applications, for example in point-of-care diagnostics. We discuss the designs of our DNA hairpin molecular devices, the results we have obtained, and the challenges we have overcome to make them truly functional.

Relevance:

80.00%

Publisher:

Abstract:

Secure Access For Everyone (SAFE) is an integrated system for managing trust using a logic-based declarative language. Logical trust systems authorize each request by constructing a proof from a context: a set of authenticated logic statements representing credentials and policies issued by various principals in a networked system. A key barrier to practical use of logical trust systems is the problem of managing proof contexts: identifying, validating, and assembling the credentials and policies that are relevant to each trust decision.

SAFE addresses this challenge by (i) proposing a distributed authenticated data repository for storing credentials and policies, and (ii) introducing a programmable credential discovery and assembly layer that generates the appropriately tailored context for a given request. The authenticated data repository is built upon a scalable key-value store whose contents are named by secure identifiers and certified by the issuing principal. The SAFE language provides scripting primitives to generate and organize logic sets representing credentials and policies, materialize the logic sets as certificates, and link them to reflect delegation patterns in the application. The authorizer fetches the logic sets on demand, then validates and caches them locally for further use. Upon each request, the authorizer constructs the tailored proof context and provides it to the SAFE inference engine for certified validation.

Delegation-driven credential linking with certified data distribution provides flexible and dynamic policy control, enabling the security and trust infrastructure to be agile while addressing the perennial problems of today's certificate infrastructure: automated credential discovery, scalable revocation, and issuing credentials without relying on a centralized authority.

We envision SAFE as a new foundation for building secure network systems. We used SAFE to build secure services based on case studies drawn from practice: (i) a secure name service resolver, similar to DNS, that resolves a name across multi-domain federated systems; (ii) a secure proxy shim to delegate access control decisions in a key-value store; (iii) an authorization module for a networked infrastructure-as-a-service system with a federated trust structure (the NSF GENI initiative); and (iv) a secure cooperative data analytics service that adheres to individual secrecy constraints while disclosing data. We present an empirical evaluation based on these case studies and demonstrate that SAFE supports a wide range of applications with low overhead.
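The context-assembly idea above can be sketched in a few lines: starting from a root logic set, the authorizer transitively follows links to gather only the statements relevant to one request. The store layout, identifiers, and statement syntax below are invented for illustration; they are not SAFE's actual data model or API.

```python
# Toy store of named logic sets; in SAFE these would live in a
# distributed key-value store, named by secure identifiers and
# certified by their issuing principals.
store = {
    "alice:policy": {"statements": ["may_access(bob, repo)"], "links": ["bob:cred"]},
    "bob:cred":     {"statements": ["is_member(bob, lab)"],   "links": ["lab:roster"]},
    "lab:roster":   {"statements": ["member_list(lab)"],      "links": []},
    "carol:unused": {"statements": ["irrelevant(...)"],       "links": []},
}

def assemble_context(root, store):
    """Fetch the root logic set and transitively follow its links,
    collecting only the statements relevant to this request."""
    context, stack, seen = [], [root], set()
    while stack:
        name = stack.pop()
        if name in seen:
            continue
        seen.add(name)
        entry = store[name]   # in SAFE: fetch, validate, and cache by secure id
        context.extend(entry["statements"])
        stack.extend(entry["links"])
    return context

ctx = assemble_context("alice:policy", store)
```

Note how the unlinked set (`carol:unused`) is never fetched: delegation-driven linking is what keeps the proof context tailored to the request rather than the whole repository.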

Relevance:

80.00%

Publisher:

Abstract:

Emerging technologies have expanded a new dimension of the self, the 'technoself', driven by socio-technical innovations, and have taken an important step forward in pervasive learning. Technology Enhanced Learning (TEL) research has increasingly focused on emergent technologies such as Augmented Reality (AR) for augmented learning, mobile learning, and game-based learning, in order to improve the self-motivation and self-engagement of learners in enriched multimodal learning environments. These research efforts take advantage of technological innovations in hardware and software across different platforms and devices, including tablets, phablets and even game consoles, and their increasing popularity for pervasive learning, alongside the significant development of personalization processes that place the student at the center of the learning process. In particular, augmented reality research has matured to a level that facilitates augmented learning, defined as an on-demand learning technique in which the learning environment adapts to the needs of, and inputs from, learners. In this paper we first study the role of the Technology Acceptance Model (TAM), one of the most influential theories applied in TEL to how learners come to accept and use a new technology. We then present the design methodology of the technoself approach for pervasive learning and introduce technoself enhanced learning as a novel pedagogical model to improve student engagement by shaping personal learning focus and settings. Furthermore, we describe the design and development of an AR-based interactive digital interpretation system for augmented learning and discuss its key features. By incorporating mobiles, game simulation, voice recognition, and multimodal interaction through Augmented Reality, the learning content can be geared toward learners' needs, and learners can stimulate discovery and gain greater understanding.
The system demonstrates that Augmented Reality can provide a rich contextual learning environment and content tailored to individuals. Augmented learning via AR can bridge the gap between theoretical and practical learning, and focus on how the real and the virtual can be combined to fulfill different learning objectives, requirements, and even environments. Finally, we validate and evaluate the AR-based technoself enhanced learning approach to enhancing student motivation and engagement in the learning process through experimental learning practices. The results show that Augmented Reality is well aligned with constructive learning strategies, as learners can control their own learning and manipulate objects that are not real in the augmented environment to derive and acquire understanding and knowledge across a broad diversity of learning practices, including constructive and analytical activities.

Relevance:

80.00%

Publisher:

Abstract:

Various historical and ideological considerations have hindered the dissemination of Sraffa's fundamental work, Production of Commodities by Means of Commodities. Sraffa's work has been studied mainly in its mathematical aspects by economists such as Garegnani, Abraham-Frois, Pasinetti, Steedman, Kurz, Roncaglia, etc., with hardly any development of its theoretical and economic aspects. This article offers some considerations on possible developments of Sraffa's book, starting from his own framework of thought.

Relevance:

80.00%

Publisher:

Abstract:

Graphics Processing Units (GPUs) are becoming popular accelerators in modern High-Performance Computing (HPC) clusters. Installing GPUs on each node of the cluster is inefficient, resulting in high costs and power consumption as well as underutilisation of the accelerators. The research reported in this paper is motivated by the use of a few physical GPUs, providing cluster nodes with on-demand access to remote GPUs for a financial risk application. We hypothesise that sharing GPUs between several nodes, referred to as multi-tenancy, reduces the execution time and energy consumed by an application. Two data transfer modes between the CPU and the GPUs, namely concurrent and sequential, are explored. The key result from the experiments is that multi-tenancy with few physical GPUs using sequential data transfers lowers the execution time and the energy consumed, thereby improving the overall performance of the application.
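The multi-tenancy idea, in which many cluster nodes draw on a few physical GPUs on demand, can be sketched with a counting semaphore standing in for the remote-GPU broker. The 2-GPU/6-node setup and all names here are illustrative assumptions, not the paper's actual framework.

```python
import threading

NUM_GPUS = 2    # few physical GPUs...
NUM_NODES = 6   # ...shared by many cluster nodes

gpu_pool = threading.Semaphore(NUM_GPUS)  # at most NUM_GPUS holders at once
completed = []
completed_lock = threading.Lock()

def node_task(node_id):
    """One cluster node acquiring a remote GPU on demand."""
    with gpu_pool:
        # ... CPU-to-GPU data transfer and kernel execution would go here ...
        with completed_lock:
            completed.append(node_id)

threads = [threading.Thread(target=node_task, args=(i,)) for i in range(NUM_NODES)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Every node eventually runs, but never more than NUM_GPUS at a time hold a GPU.
```

The semaphore captures only the sharing discipline; the paper's actual contribution concerns how the data transfer mode (sequential vs. concurrent) affects execution time and energy under this sharing.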

Relevance:

80.00%

Publisher:

Abstract:

Thesis (Master's)--University of Washington, 2016-08

Relevance:

80.00%

Publisher:

Abstract:

Background: Interprofessionalism, understood as collaboration between medical professionals, has gained prominence over recent decades, and evidence for its impact has grown. The steadily increasing number of residents in nursing homes will challenge medical care and the interaction across professions, especially between nurses and general practitioners (GPs). The nursing home visit, a key element of medical care, has been underrepresented in research. This study explores GP perspectives on interprofessional collaboration, with a focus on their visits to nursing homes, in order to understand their experiences and expectations. This research represents one aspect of the interprof study, which explores medical care needs as well as collaboration and communication as perceived by nursing home residents, their families, GPs and nurses. Methods: Open guideline interviews covering interprofessional collaboration and the visit process were conducted with 30 GPs in three study centers and analyzed with grounded theory methodology. GPs were recruited via postal request and the existing networks of the research partners. Results: Four different types of nursing home visits were found: visits on demand, periodical visits, nursing home rounds, and ad-hoc-decision-based visits. We identified the core category of the "productive performance" of home visits in nursing homes, which stands for the balance between GPs' individual efforts and rewards. GPs used different strategies to perform a productive home visit: preparing strategies, on-site strategies and investing strategies. Conclusion: We compiled a theory of GP home visits in nursing homes in Germany. The findings will be useful for research, scientific and management purposes, to generate a deeper understanding of GP perspectives and thereby improve interprofessional collaboration to ensure a high quality of care.

Relevance:

80.00%

Publisher:

Abstract:

The evolution and maturation of Cloud Computing has created an opportunity for the emergence of new Cloud applications. High-Performance Computing (HPC), a class of complex problem solving, arises as a new business consumer by taking advantage of the Cloud's premises and leaving behind expensive datacenter management and difficult grid development. Now at an advanced stage of maturity, today's Cloud has shed many of its drawbacks, becoming more and more efficient and widespread. Performance enhancements, price drops due to massification, and customizable services on demand have drawn marked attention from other markets. HPC, despite being a very well established field, traditionally has a narrow deployment frontier and runs on dedicated datacenters or large computing grids. The main problem with this common placement is the initial cost and the inability to fully use the resources, which not all research labs can afford. The main objective of this work was to investigate new technical solutions to allow the deployment of HPC applications in the Cloud, with particular emphasis on private on-premise resources, the lower end of the chain, which reduces costs. The work includes many experiments and analyses to identify obstacles and technology limitations. The feasibility of the objective was tested with new modeling, a new architecture, and the migration of several applications. The final application integrates a simplified incorporation of both public and private Cloud resources, as well as HPC application scheduling, deployment and management. It uses a well-defined user role strategy, based on federated authentication, and a procedure seamless enough for daily use that balances low cost and performance.

Relevance:

80.00%

Publisher:

Abstract:

With the advent of connected objects, the required bandwidth exceeds the capacity of electrical interconnects and wireless interfaces in access networks as well as in core networks. High-capacity photonic systems located in access networks and using radio-over-fibre technology have been proposed as a solution for fifth-generation wireless networks. To maximise the use of server and network resources, cloud computing and storage services are being deployed. In this way, centralised resources can be delivered dynamically as the end user wishes. Since every exchange requires synchronisation between the server and its infrastructure, an optical physical layer allows the cloud to support network virtualisation and software-defined networking. Reflective semiconductor optical amplifiers (RSOAs) are a key technology for the ONUs (optical network units) in passive optical access networks (PONs). We examine here the possibility of using an RSOA and radio-over-fibre technology to transport wireless signals as well as a digital signal over a PON. Radio over fibre can easily be implemented thanks to the wavelength insensitivity of the RSOA. The wavelength used at the physical layer is, however, chosen in layers 2/3 of the OSI model. Interactions between the physical layer and network switching can be handled by adding an SDN controller that includes optical-layer managers. Network virtualisation could thus benefit from a flexible optical layer through dynamic and adapted network resources. In this thesis, we study a system with an RSOA-based optical physical layer.
It allows us simultaneously to send wireless signals and to transport a digital signal in on-off keying (OOK) modulation format in a WDM (wavelength-division multiplexing) PON system. The RSOA was characterised to show its ability to handle the high dynamic range of the analogue wireless signal. The RF-over-fibre and IF-over-fibre configurations are then compared, with their respective advantages and drawbacks. Finally, we experimentally demonstrate a point-to-point WDM link with full-duplex transmission of an analogue Wi-Fi signal together with a downstream OOK signal. By introducing two RF mixers in the upstream link, we solved the incompatibility with TDD (time-division duplexing) based wireless systems.

Relevance:

80.00%

Publisher:

Abstract:

Recent paradigms in wireless communication architectures describe environments where nodes present highly dynamic behavior (e.g., User Centric Networks). In such environments, routing is still performed according to the regular packet-switched behavior of store-and-forward. Although sufficient to compute at least an adequate path between a source and a destination, such routing behavior cannot adequately sustain the highly nomadic lifestyle that Internet users experience today. This thesis aims to analyse the impact of node mobility on routing scenarios. It also aims to develop forwarding concepts that help in message forwarding across graphs where nodes exhibit human mobility patterns, as is the case in most user-centric wireless networks today. The first part of the work analysed the impact of mobility on routing; we found that node mobility can significantly affect routing performance, depending on link length, distance, and the mobility patterns of nodes. The study of current mobility parameters showed that they capture mobility only partially. The robustness of a routing protocol to node mobility depends on the sensitivity of its routing metric to node mobility. Accordingly, mobility-aware routing metrics were devised to increase routing robustness to node mobility. The two categories of routing metrics proposed are time-based and spatial-correlation-based. For the validation of the metrics, several mobility models were used, including ones that mimic human mobility patterns. The metrics were implemented in the Network Simulator tool using two widely used multi-hop routing protocols, Optimized Link State Routing (OLSR) and Ad hoc On-Demand Distance Vector (AODV). Using the proposed metrics, we reduced the path re-computation frequency compared to the benchmark metric, meaning that more stable nodes were used to route data.
The time-based routing metrics generally performed well across the different node mobility scenarios used. We also noted variation in the performance of the metrics, including the benchmark metric, under different mobility models, owing to differences in the rules governing node mobility in each model.
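A time-based, mobility-aware metric of the kind described can be sketched as follows: score each link by the fraction of recent sampling windows in which it was observed up, and prefer the most stable neighbor as the next hop. The history format and the scoring rule are assumptions for illustration, not the thesis's exact OLSR/AODV metric definitions.

```python
def link_stability(history):
    """Time-based link score: fraction of recent sampling windows in
    which the link was observed up (history is a list of booleans)."""
    return sum(history) / len(history) if history else 0.0

def best_next_hop(neighbors):
    """Pick the neighbor with the most stable link.
    neighbors: {node_id: link up/down history}."""
    return max(neighbors, key=lambda n: link_stability(neighbors[n]))

neighbors = {
    "A": [True, True, False, True],    # intermittent link
    "B": [True, True, True, True],     # stable link
    "C": [False, True, False, False],  # highly mobile/flaky link
}
chosen = best_next_hop(neighbors)
```

Routing over the most stable links is what reduces path re-computation frequency: a path through "B" is less likely to break, and hence to trigger a route rediscovery, than one through "C".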

Relevance:

80.00%

Publisher:

Abstract:

Rapid, sensitive and selective detection of chemical hazards and biological pathogens is of growing importance in the fields of homeland security, public safety and personal health. Over the past two decades, efforts have focused on performing point-of-care chemical and biological detection using miniaturized biosensors. These sensors convert target-molecule binding events into measurable electrical signals for quantifying target-molecule concentration. However, low receptor density and the use of complex surface chemistry for immobilizing receptors on transducers are common bottlenecks in current biosensor development, adding cost, complexity and time. This dissertation presents the development of a selective macromolecular Tobacco mosaic virus-like particle (TMV VLP) biosensing receptor, and the microsystem integration of VLPs into microfabricated electrochemical biosensors for rapid and performance-enhanced chemical and biological sensing. Two constructs of VLPs carrying different receptor peptides, targeting either the explosive 2,4,6-trinitrotoluene (TNT) or the anti-FLAG antibody, were successfully bioengineered. The VLP-based TNT electrochemical sensor utilizes a unique diffusion modulation method enabled by the biological binding between target TNT and receptor VLP. The method avoids the influence of interfering species and environmental background signals, making it extremely suitable for directly quantifying the TNT level in a sample. It is also rapid, requiring no sensor surface functionalization. For antibody sensing, VLPs carrying both antibody-binding peptides and cysteine residues are assembled onto the gold electrodes of an impedance microsensor. With two-phase immunoassays, the VLP-based impedance sensor is able to quantify antibody concentrations down to 9.1 ng/mL.
A microsystem integrating capillary microfluidics with the impedance sensor was developed to further accelerate VLP assembly on the sensors and improve sensitivity. Open-channel capillary micropumps and stop-valves facilitate localized, evaporation-assisted VLP assembly on the sensor electrodes within 6 minutes. The VLP-functionalized impedance sensor is capable of label-free sensing of antibodies with a detection limit of 8.8 ng/mL within 5 minutes of sensor functionalization, demonstrating the great potential of VLP-based sensors for rapid, on-demand chemical and biological sensing.

Relevance:

80.00%

Publisher:

Abstract:

In many major cities, fixed-route transit systems such as bus and rail serve millions of trips per day. These systems have people gather at common locations (the station or stop) and board at common times (for example, according to a predetermined schedule or headway). By using common service locations and times, these modes can consolidate many trips that have similar origins and destinations or overlapping routes. However, the routes are not sensitive to changing travel patterns and have no way of identifying which trips are going unserved, or are poorly served, by the existing routes. At the opposite end of the spectrum, personal modes of transportation, such as the private vehicle or taxi, offer service to and from the exact origin and destination of a rider, at close to exactly the time they desire to travel. Despite the apparent increase in convenience to users, the presence of a large number of small vehicles results in a disorganized, and potentially congested, road network during high-demand periods. The focus of the research presented in this paper is to develop a system that possesses both the on-demand nature of a personal mode and the efficiency of shared modes. In this system, users submit their request for travel but are asked to make small compromises in their origin and destination locations, by walking to a nearby meeting point and slightly modifying their time of travel, in order to accommodate other passengers. Because the origin and destination locations of the request can be adjusted, this is a more general case of the Dial-a-Ride problem with time windows. The solution methodology uses a graph clustering algorithm coupled with a greedy insertion technique. A case study is presented using actual requests for taxi trips in Washington, DC, and shows a significant decrease in the number of vehicles required to serve the demand.
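The greedy insertion step can be sketched as follows: each new stop is tried at every position in the current route, and the position that adds the least travel distance is kept. The planar coordinates and Euclidean cost below are simplifying assumptions; the paper's method additionally handles time windows, pickup/drop-off pairing, and the clustering of nearby requests.

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def route_length(route):
    """Total travel distance along an ordered list of stops."""
    return sum(dist(route[i], route[i + 1]) for i in range(len(route) - 1))

def greedy_insert(route, stop):
    """Try every insertion position (keeping the endpoints fixed)
    and return the cheapest resulting route."""
    best = None
    for i in range(1, len(route)):
        candidate = route[:i] + [stop] + route[i:]
        cost = route_length(candidate)
        if best is None or cost < best[0]:
            best = (cost, candidate)
    return best[1]

route = [(0, 0), (10, 0)]           # depot -> terminal
route = greedy_insert(route, (5, 1))
route = greedy_insert(route, (2, -1))
```

Each insertion is evaluated in O(route length); repeating it over clustered requests is what lets one vehicle absorb many trips with similar origins and destinations.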