925 results for Secure Data Storage


Relevance:

30.00%

Publisher:

Abstract:

The CMS experiment at the LHC collected large volumes of data during Run-1 and is using the long shutdown period (LS1) to evolve its computing system. Among the possible improvements, there is substantial room for optimizing storage usage at the Tier-2 computing centres, which, within the Worldwide LHC Computing Grid (WLCG), constitute the core of the resources dedicated to distributed analysis on the Grid. This thesis presents a study of the popularity of CMS data in distributed Grid analysis at the Tier-2 centres. The goal of the work is to equip the CMS computing system with a tool for systematically evaluating the amount of disk space that is written but never accessed at the Tier-2 centres, contributing to the construction of an advanced dynamic data management system that can adapt elastically to changing operating conditions (removing unnecessary data replicas, or adding replicas of the most "popular" data) and thus, ultimately, increase the overall analysis throughput. Chapter 1 provides an overview of the CMS experiment at the LHC. Chapter 2 describes the CMS Computing Model in general terms, focusing mainly on data management and the related infrastructure. Chapter 3 describes the CMS Popularity Service, giving an overview of the data popularity services already present in CMS before this work began. Chapter 4 describes the architecture of the toolkit developed for this thesis, laying the groundwork for the following chapter. Chapter 5 presents and discusses the data popularity studies conducted on the data collected through the previously developed infrastructure. Appendix A collects two examples of the code created to manage the toolkit through which the data are collected and processed.
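The thesis's core metric, disk space written but never (or not recently) accessed at a Tier-2 site, can be sketched as a simple aggregation over replica access records. The snippet below is a minimal illustration only: the dataset names, site names, and record layout are hypothetical, and it does not reproduce the toolkit described in Chapter 4.

```python
from datetime import datetime, timedelta

# Hypothetical records: (dataset, site, size_gb, last_access or None).
replicas = [
    ("/ZMM/Run2012/AOD", "T2_IT_Bologna", 1200.0, datetime(2013, 1, 10)),
    ("/TTbar/Summer12/AODSIM", "T2_IT_Bologna", 800.0, None),
    ("/ZMM/Run2012/AOD", "T2_US_MIT", 1200.0, datetime(2012, 6, 2)),
]

def unaccessed_space(replicas, now, window_days=180):
    """Sum the size of replicas never accessed, or not accessed
    within the given window, grouped by site."""
    cutoff = now - timedelta(days=window_days)
    per_site = {}
    for dataset, site, size_gb, last_access in replicas:
        if last_access is None or last_access < cutoff:
            per_site[site] = per_site.get(site, 0.0) + size_gb
    return per_site

now = datetime(2013, 7, 1)
print(unaccessed_space(replicas, now))
# {'T2_IT_Bologna': 800.0, 'T2_US_MIT': 1200.0}
```

A dynamic data manager could use such per-site totals to pick candidate replicas for deletion, freeing space for additional copies of popular data.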

Relevance:

30.00%

Publisher:

Abstract:

Big data is the term used to describe a collection of data so large in volume, velocity, and variety that it requires specific technologies and analytical methods to extract meaningful value. More and more systems are built around and characterized by enormous volumes of data to manage, originating from highly heterogeneous sources with widely differing formats, as well as extremely variable data quality. Another requirement in these systems can be the time factor: a growing number of systems need to extract meaningful information from Big Data as soon as possible, and increasingly the input to be handled is a continuous stream of information. Online Stream Processing solutions address precisely these cases. The goal of this thesis is to propose a working prototype that processes Instant Coupon data coming from different sources, with different information formats and transmission protocols, and that stores the processed data efficiently so as to provide real-time answers. The information sources can be of two types: XMPP and Eddystone. Once the system receives the incoming information, it extracts and processes it until it yields meaningful data that can be used by third parties. These data are stored in Apache Cassandra. The biggest problem to solve was that Apache Storm does not rebalance resources automatically, while in this specific case the distribution of customers over the day is highly variable and full of peaks. The internal rebalancing system exploits metrics and, based on throughput and execution latency, decides whether to increase or decrease the number of resources, or simply do nothing if the statistics fall within the desired threshold values.
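The threshold logic described at the end of the abstract (scale up, scale down, or do nothing based on throughput and execution latency) can be sketched as a simple decision function. The threshold values below are illustrative placeholders, not those used in the thesis, and the actual system would feed this decision into Storm's rebalance mechanism.

```python
def rebalance_decision(throughput, latency_ms,
                       tp_low=500, tp_high=5000, lat_max=200):
    """Decide whether to change the number of workers based on
    measured throughput (tuples/s) and execution latency (ms).
    Thresholds are hypothetical, for illustration only."""
    if latency_ms > lat_max or throughput > tp_high:
        return "scale_up"      # saturated: add executors
    if throughput < tp_low and latency_ms < lat_max / 2:
        return "scale_down"    # under-used: release resources
    return "no_change"         # within the desired envelope

print(rebalance_decision(throughput=6000, latency_ms=180))  # scale_up
print(rebalance_decision(throughput=300, latency_ms=40))    # scale_down
print(rebalance_decision(throughput=2000, latency_ms=90))   # no_change
```

Keeping a "no change" band between the scale-up and scale-down thresholds avoids oscillating reconfigurations when the load hovers near a single cutoff.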

Relevance:

30.00%

Publisher:

Abstract:

AIM: The purpose of this study was to evaluate, by Knoop microhardness (KHN), the activation of a resin-modified glass ionomer restorative material (RMGI, Vitremer-3M-ESPE, A3) by halogen lamp (QTH) or light-emitting diode (LED) at two storage times (24 hrs and 6 months) and two depths (0 and 2 mm). MATERIALS AND METHODS: The specimens were randomly divided into 3 experimental groups (n=10) according to activation form and evaluated in depth after 24 hrs and after 6 months of storage. Activation was performed with QTH for 40 s (700 mW/cm2) and for 40 or 20 s with LED (1,200 mW/cm2). After 24 hrs and 6 months of storage at 37°C in relative humidity in a lightproof container, the Knoop microhardness test was performed. STATISTICS: Data were analysed by three-way ANOVA and Tukey post-hoc tests (p<0.05). RESULTS: All evaluated factors showed significant differences (p<0.05). After 24 hrs there were no differences within the experimental groups. KHN at 0 mm was significantly higher than at 2 mm. After 6 months, microhardness values increased for all groups, with the LED-activated groups higher than the QTH-activated ones. CONCLUSION: Light-activation with LED positively influenced the KHN of the RMGI evaluated after 6 months.

Relevance:

30.00%

Publisher:

Abstract:

In the middle of the twentieth century, banks changed from ‘closed’ designs signifying wealth, security, and safety to ‘open’ designs signifying hospitality, honesty, and transparency as the perception of money changed from a passive physical substance to be slowly accumulated to an active notational substance to be kept in motion. If money is saved, customers must trust that the bank is secure and their money will be there when they want it; if money is invested, customers must trust that it is being done openly and honestly and they are being well-advised. Architecture visually communicates that the institution can be trusted in the requisite way.

Relevance:

30.00%

Publisher:

Abstract:

Animal coloration often serves as a signal that may communicate traits about the individual such as toxicity, status, or quality. Colorful ornaments in many animals are often honest signals of quality assessed by mates, and different colors may be produced by different biochemical pigments. Investigations of the mechanisms responsible for variation in color expression among birds are best when they include a geographically and temporally broad sample. To obtain such a sample, studies like this one often use museum specimens; however, for museum specimens to serve as an accurate replacement, they must accurately represent living birds, or we must understand the ways in which they differ. In this thesis, I investigated the link between feather corticosterone, a hormone secreted in response to stress, and carotenoid-based coloration in the Red-winged Blackbird (Agelaius phoeniceus) in order to explore a mechanistic link between physiological state and color expression. Male Red-winged Blackbirds with lower feather corticosterone had significantly brighter red epaulets than birds with higher feather corticosterone, while I found no significant changes in red chroma. I also performed a methodological comparison of color change in museum specimens among different pigment types (carotenoid and psittacofulvin) and pigments in different body locations (feather and bill carotenoids) in order to quantify color change over time. Carotenoids and psittacofulvins showed significant reductions in red brightness and chroma over time in the collection, and carotenoid color changed significantly faster than psittacofulvin color. Both bill and feather carotenoids showed significant reductions in red brightness and red chroma over time, but both changed at a similar rate in feathers and bills. In order to use museum specimens for ecological research on bird coloration, specimen age must be accounted for before the data can be used; once this is accomplished, however, museum-based color data may be used to draw conclusions about wild populations.

Relevance:

30.00%

Publisher:

Abstract:

The diet of early human ancestors has received renewed theoretical interest since the discovery of elevated δ13C values in the enamel of Australopithecus africanus and Paranthropus robustus. As a result, the hominin diet is hypothesized to have included C4 grass or the tissues of animals which themselves consumed C4 grass. On mechanical grounds, such a diet is incompatible with the dental morphology and dental microwear of early hominins. Most inferences, particularly for Paranthropus, favor a diet of hard or mechanically resistant foods. This discrepancy has invigorated the longstanding hypothesis that hominins consumed plant underground storage organs (USOs). Plant USOs are attractive candidate foods because many bulbous grasses and cormous sedges use C4 photosynthesis. Yet mechanical data for USOs, or for any putative hominin food, are scarce. To fill this empirical void we measured the mechanical properties of USOs from 98 plant species from across sub-Saharan Africa. We found that rhizomes were the most resistant to deformation and fracture, followed by tubers, corms, and bulbs. An important result of this study is that corms exhibited low toughness values (mean = 265.0 J m-2) and relatively high Young’s modulus values (mean = 4.9 MPa). This combination of properties fits many descriptions of the hominin diet as consisting of hard-brittle objects. Compared to corms, bulbs are tougher (mean = 325.0 J m-2) and less stiff (mean = 2.5 MPa). Again, this combination of traits resembles dietary inferences, especially for Australopithecus, which is predicted to have consumed soft-tough foods. Lastly, we observed the roasting behavior of Hadza hunter-gatherers and measured the effects of roasting on the toughness of undomesticated tubers. Our results support assumptions that roasting lessens the work of mastication and, by inference, the cost of digestion. Together these findings provide the first mechanical basis for discussing the adaptive advantages of roasting tubers and the plausibility of USOs in the diet of early hominins.

Relevance:

30.00%

Publisher:

Abstract:

The role of platelets as inflammatory cells is demonstrated by the fact that they can release many growth factors and inflammatory mediators, including chemokines, when they are activated. The best-known platelet chemokine family members are platelet factor 4 (PF4) and beta-thromboglobulin (beta-TG), which are synthesized in megakaryocytes, stored as preformed proteins in alpha-granules, and released from activated platelets. However, platelets also contain many other chemokines such as interleukin-8 (IL-8), growth-regulating oncogene-alpha (GRO-alpha), epithelial neutrophil-activating protein 78 (ENA-78), regulated on activation, normal T cell expressed and secreted (RANTES), macrophage inflammatory protein-1alpha (MIP-1alpha), and monocyte chemotactic protein-3 (MCP-3). They also express chemokine receptors such as CCR4, CXCR4, CCR1 and CCR3. Platelet activation is a feature of many inflammatory diseases such as heparin-induced thrombocytopenia, acquired immunodeficiency syndrome, and congestive heart failure. Substantial amounts of PF4, beta-TG and RANTES are released from platelets on activation, which may occur during storage. Although very few data are available on the in vivo effects of transfused chemokines, it has been suggested that the high incidence of adverse reactions often observed after platelet transfusions may be attributed to the chemokines present in the plasma of stored platelet concentrates.

Relevance:

30.00%

Publisher:

Abstract:

Understanding the canopy cover of an urban environment leads to better estimates of carbon storage and more informed management decisions by urban foresters. The most commonly used method for assessing urban forest cover type extent is ground surveys, which can be both time-consuming and expensive. The analysis of aerial photos is an alternative method that is faster, cheaper, and can cover a larger number of sites, but may be less accurate. The objectives of this paper were (1) to compare three methods of cover type assessment for Los Angeles, CA: hand-delineation of aerial photos in ArcMap, supervised classification of aerial photos in ERDAS Imagine, and ground-collected data using the Urban Forest Effects (UFORE) model protocol; (2) to determine how well remote sensing methods estimate carbon storage as predicted by the UFORE model; and (3) to explore the influence of tree diameter and tree density on carbon storage estimates. Four major cover types (bare ground, fine vegetation, coarse vegetation, and impervious surfaces) were determined from 348 plots (0.039 ha each) randomly stratified according to land-use. Hand-delineation was better than supervised classification at predicting ground-based measurements of cover type and UFORE model-predicted carbon storage. Most error in supervised classification resulted from shadow, which was interpreted as unknown cover type. Neither tree diameter nor tree density per plot significantly affected the relationship between carbon storage and canopy cover. The efficiency of remote sensing rather than in situ data collection allows urban forest managers to quickly assess a city and plan accordingly while also preserving their often-limited budget.

Relevance:

30.00%

Publisher:

Abstract:

The selective catalytic reduction system is a well-established technology for NOx emissions control in diesel engines. A one-dimensional, single-channel selective catalytic reduction (SCR) model was previously developed using Oak Ridge National Laboratory (ORNL) reactor data for an iron-zeolite catalyst system. Calibration of this model to fit the experimental reactor data collected at ORNL for a copper-zeolite SCR catalyst is presented. Initially, a test protocol was developed in order to investigate the different phenomena responsible for the SCR system response. An SCR model with two distinct types of storage sites was used. The calibration process started with storage capacity calculations for the catalyst sample. Then the chemical kinetics occurring in each segment of the protocol were investigated. The reactions included in this model were adsorption, desorption, standard SCR, fast SCR, slow SCR, NH3 oxidation, NO oxidation, and N2O formation. The reaction rates were identified for each temperature using a time-domain optimization approach. Assuming an Arrhenius form of the reaction rates, activation energies and pre-exponential parameters were fit to the reaction rates. The results indicate that the Arrhenius form is appropriate and that the reaction scheme used allows the model to fit the experimental data and to be used in real-world engine studies.
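Fitting activation energies and pre-exponential factors to per-temperature reaction rates, as described above, amounts to a linear least-squares fit of ln(k) against 1/T, since the Arrhenius form k = A·exp(-Ea/(R·T)) is linear in those coordinates. A minimal sketch on synthetic data follows; the temperatures and rate values are invented for illustration and are not the ORNL measurements.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def fit_arrhenius(temps_K, rates):
    """Least-squares fit of ln(k) = ln(A) - Ea/(R*T).
    Returns (A, Ea)."""
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(k) for k in rates]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return math.exp(intercept), -slope * R

# Synthetic rates generated from A = 1e8, Ea = 60 kJ/mol.
temps = [473.0, 523.0, 573.0, 623.0]
rates = [1e8 * math.exp(-60000.0 / (R * T)) for T in temps]
A, Ea = fit_arrhenius(temps, rates)
print(round(Ea))  # 60000
```

With noiseless synthetic data the fit recovers the generating parameters exactly; with identified rates from a real protocol, the residuals of this fit indicate how well the Arrhenius form holds over the temperature range.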

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we investigate content-centric data transmission in the context of short opportunistic contacts and base our work on an existing content-centric networking architecture. In case of short interconnection times, file transfers may not be completed and the received information is discarded. Caches in content-centric networks are used for short-term storage and do not guarantee persistence. We implemented a mechanism to extend caching on persistent storage enabling the completion of disrupted content transfers. The mechanisms have been implemented in the CCNx framework and have been evaluated on wireless mesh nodes. Our evaluations using multicast and unicast communication show that the implementation can support content transfers in opportunistic environments without significant processing and storing overhead.

Relevance:

30.00%

Publisher:

Abstract:

We examined actor and partner effects of self-esteem on relationship satisfaction, using the actor-partner interdependence model and data from five independent samples of couples. The results indicated that self-esteem predicted the individual’s own relationship satisfaction (i.e., an actor effect) and the relationship satisfaction of his or her partner (i.e., a partner effect), controlling for the effect of the partner’s self-esteem. Gender, age, and length of relationship did not moderate the effect sizes. Moreover, using one of the samples, we tested whether secure attachment to the current partner (assessed as low attachment-related anxiety and avoidance) mediated the effects. The results showed that attachment-related anxiety and avoidance independently mediated both the actor and the partner effect of self-esteem on relationship satisfaction.

Relevance:

30.00%

Publisher:

Abstract:

The current state of health and biomedicine includes an enormous number of heterogeneous data ‘silos’, collected for different purposes and represented differently, that are presently impossible to share or analyze in toto. The greatest challenge for large-scale and meaningful analyses of health-related data is to achieve a uniform data representation for data extracted from heterogeneous source representations. Based upon an analysis and categorization of heterogeneities, a process for achieving comparable data content by using a uniform terminological representation is developed. This process addresses the types of representational heterogeneities that commonly arise in healthcare data integration problems. Specifically, it uses a reference terminology and associated "maps" to transform heterogeneous data into a standard representation for comparability and secondary use. Capturing the quality and precision of the "maps" between local terms and reference terminology concepts enhances the meaning of the aggregated data, empowering end users with better-informed queries for subsequent analyses. A data integration case study in the domain of pediatric asthma illustrates the development and use of a reference terminology for creating comparable data from heterogeneous source representations. The contribution of this research is a generalized process for the integration of data from heterogeneous source representations, and this process can be applied and extended to other problems where heterogeneous data need to be merged.
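The transformation step described above, rewriting local terms to reference-terminology concepts while preserving map quality, can be sketched as a lookup-and-rewrite over per-site term maps. Everything in this sketch is hypothetical: the concept IDs, the site vocabularies, and the quality labels are invented, and the actual process in the study is not reproduced here.

```python
# Hypothetical local-to-reference maps: term -> (concept ID, map quality).
SITE_A = {"asthma, mild": ("REF:0001", "exact"),
          "wheezing":     ("REF:0002", "narrower")}
SITE_B = {"mild asthma":  ("REF:0001", "exact"),
          "asthma NOS":   ("REF:0001", "broader")}

def to_reference(records, term_map):
    """Rewrite each (patient_id, local_term) record into the uniform
    reference representation, keeping the map quality so downstream
    queries can filter on precision."""
    out = []
    for pid, term in records:
        concept, quality = term_map[term]
        out.append((pid, concept, quality))
    return out

# Records from two sites become comparable under the same concept ID.
merged = (to_reference([("a1", "asthma, mild")], SITE_A) +
          to_reference([("b7", "mild asthma")], SITE_B))
print(merged)
# [('a1', 'REF:0001', 'exact'), ('b7', 'REF:0001', 'exact')]
```

An end user could then restrict an aggregate query to, say, "exact" maps only, trading coverage for precision as the paper suggests.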

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: To determine whether algorithms developed for the World Wide Web can be applied to the biomedical literature in order to identify articles that are important as well as relevant. DESIGN AND MEASUREMENTS A direct comparison of eight algorithms: simple PubMed queries, clinical queries (sensitive and specific versions), vector cosine comparison, citation count, journal impact factor, PageRank, and machine learning based on polynomial support vector machines. The objective was to prioritize important articles, defined as being included in a pre-existing bibliography of important literature in surgical oncology. RESULTS Citation-based algorithms were more effective than noncitation-based algorithms at identifying important articles. The most effective strategies were simple citation count and PageRank, which on average identified over six important articles in the first 100 results compared to 0.85 for the best noncitation-based algorithm (p < 0.001). The authors saw similar differences between citation-based and noncitation-based algorithms at 10, 20, 50, 200, 500, and 1,000 results (p < 0.001). Citation lag affects performance of PageRank more than simple citation count. However, in spite of citation lag, citation-based algorithms remain more effective than noncitation-based algorithms. CONCLUSION Algorithms that have proved successful on the World Wide Web can be applied to biomedical information retrieval. Citation-based algorithms can help identify important articles within large sets of relevant results. Further studies are needed to determine whether citation-based algorithms can effectively meet actual user information needs.
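As a rough sketch of the citation-based approach, PageRank can be run over a citation graph given as an adjacency list: an article's rank is shared among the articles it cites, so frequently cited work accumulates rank. The toy three-article graph below is illustrative only and is not the paper's actual implementation.

```python
def pagerank(graph, damping=0.85, iters=50):
    """Iterative PageRank over a citation graph given as
    {article: [cited_articles]}."""
    nodes = set(graph) | {v for outs in graph.values() for v in outs}
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iters):
        new = {node: (1 - damping) / n for node in nodes}
        for node, outs in graph.items():
            if outs:
                share = damping * rank[node] / len(outs)
                for cited in outs:
                    new[cited] += share
        # Articles citing nothing spread their rank uniformly.
        dangling = sum(rank[node] for node in nodes if not graph.get(node))
        for node in nodes:
            new[node] += damping * dangling / n
        rank = new
    return rank

# A and B both cite C, so C should rank highest.
citations = {"A": ["C"], "B": ["C"], "C": []}
ranks = pagerank(citations)
assert max(ranks, key=ranks.get) == "C"
```

Unlike a raw citation count, the rank an article receives is weighted by the rank of its citers, which is why the two measures can order large result sets differently.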

Relevance:

30.00%

Publisher:

Abstract:

Information overload is a significant problem for modern medicine. Searching MEDLINE for common topics often retrieves more relevant documents than users can review. Therefore, we must identify documents that are not only relevant, but also important. Our system ranks articles using citation counts and the PageRank algorithm, incorporating data from the Science Citation Index. However, citation data is usually incomplete. Therefore, we explore the relationship between the quantity of citation information available to the system and the quality of the result ranking. Specifically, we test the ability of citation count and PageRank to identify "important articles" as defined by experts from large result sets with decreasing citation information. We found that PageRank performs better than simple citation counts, but both algorithms are surprisingly robust to information loss. We conclude that even an incomplete citation database is likely to be effective for importance ranking.
