892 results for Mannheim metric


Relevance:

20.00%

Publisher:

Abstract:

This research focuses on automatically adapting the size of a search engine in response to fluctuations in query workload. Deploying a search engine in an Infrastructure as a Service (IaaS) cloud makes it easy to allocate computer resources to the engine or deallocate them from it. Our contribution is an adaptive search engine that repeatedly re-evaluates its load and, when appropriate, switches over to a different number of active processors. We focus on three aspects, broken out into three sub-problems: Continually determining the Number of Processors (CNP), the New Grouping Problem (NGP) and the Regrouping Order Problem (ROP). CNP is the problem of determining, in light of changes in the query workload, the ideal number of processors p to keep active at any given time. NGP arises once a change in the number of processors has been decided: it must then be determined how groups of search data are distributed across the processors. ROP is the problem of redistributing this data onto the processors while keeping the engine responsive and minimising both the switchover time and the incurred network load. We propose solutions for these sub-problems. For NGP we propose an algorithm for incrementally adjusting the index to fit the varying number of virtual machines. For ROP we present an efficient method for redistributing data among processors while keeping the search engine responsive. For CNP we propose an algorithm that determines the new size of the search engine by re-evaluating its load. We tested the solution's performance using a custom-built prototype search engine deployed in the Amazon EC2 cloud. Our experiments show that, compared with computing the index from scratch, the incremental NGP algorithm speeds up index computation by a factor of 2 to 10 while maintaining similar search performance. The chosen redistribution method is 25% to 50% faster than the alternatives and reduces the network load by around 30%. For CNP we present a deterministic algorithm that shows a good ability to determine the new size of the search engine. Combined, these algorithms yield an adaptive algorithm that adjusts the search engine size under a variable workload.
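The CNP component described above is essentially a feedback loop: measure the load, then pick a new processor count. A minimal sketch of such a loop follows; the function name, the thresholds and the doubling/halving policy are illustrative assumptions, not the thesis's actual deterministic algorithm.

```python
# Hedged sketch of a CNP-style scaling decision. The thresholds and the
# doubling/halving policy are illustrative assumptions, not the thesis's
# actual algorithm.

def choose_processor_count(current: int, utilisation: float,
                           low: float = 0.3, high: float = 0.8,
                           p_min: int = 1, p_max: int = 64) -> int:
    """Return a new active-processor count from the measured utilisation."""
    if utilisation > high:            # overloaded: grow the engine
        return min(current * 2, p_max)
    if utilisation < low:             # underloaded: shrink to free resources
        return max(current // 2, p_min)
    return current                    # load acceptable: keep the current size
```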

Relevance:

20.00%

Publisher:

Abstract:

Attempts at carrying out terrorist attacks have become more prevalent. As a result, an increasing number of countries have become particularly vigilant against the means by which terrorists raise funds to finance their heinous acts against human life and property. Among the many counter-terrorism agencies in operation, governments have set up financial intelligence units (FIUs) within their borders for the purpose of tracking down terrorists' funds. By investigating reported suspicious transactions, FIUs attempt to weed out financial criminals who use these illegal funds to finance terrorist activity. The prominent role played by FIUs means that their performance is always under the spotlight. By interviewing experts and conducting surveys of those associated with the fight against financial crime, this study investigated perceptions of FIU performance on a comparative basis between American and non-American FIUs. The target group of experts included financial institution personnel, civilian agents, law enforcement personnel, academicians, and consultants. Questions for the interviews and surveys were based on Kaplan and Norton's Balanced Scorecard (BSC) methodology. One of the objectives of this study was to help determine the suitability of the BSC to this arena. While the FIUs in this study have concentrated on performance by measuring outputs, such as the number of suspicious transaction reports investigated, this study calls for a focus on outcomes involving all the parties responsible for financial criminal investigations. It is only through such an integrated approach that these various entities will be able to improve performance in solving financial crime. Experts in financial intelligence strongly believed that the quality and timeliness of intelligence were more important than keeping track of the number of suspicious transaction reports. Finally, this study concluded that the BSC can be appropriately applied to the arena of financial crime prevention, even though the emphasis is markedly different from that in the private sector. While priority in the private sector is given to financial outcomes, in this arena employee growth and internal processes were perceived as most important in achieving a satisfactory outcome.

Relevance:

20.00%

Publisher:

Abstract:

Objective
Scant evidence is available on the discordance between loneliness and social isolation among older adults. We aimed to investigate this discordance and any health implications that it may have.

Method
Using nationally representative datasets from ageing cohorts in Ireland (TILDA) and England (ELSA), we created a metric of discordance between loneliness and social isolation, which we refer to as Social Asymmetry. This metric was the categorised difference between standardised scores on a scale of loneliness and a scale of social isolation, giving the categories Concordantly Lonely and Isolated, Discordant: Robust to Loneliness, and Discordant: Susceptible to Loneliness. We used regression and multilevel modelling to identify potential relationships between Social Asymmetry and cognitive outcomes.
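A minimal sketch of how such a discordance score could be computed is given below; the standardisation and the cut-off used to form the categories are illustrative assumptions, since the abstract does not specify them.

```python
import numpy as np

def social_asymmetry(loneliness, isolation, cut=0.5):
    """Hedged sketch: categorised difference of standardised scores.

    The cut-off and the band assignments are illustrative; the abstract
    does not give the paper's actual thresholds.
    """
    def z(x):
        x = np.asarray(x, dtype=float)
        return (x - x.mean()) / x.std()        # standardise each scale

    diff = z(loneliness) - z(isolation)        # lonelier than isolated -> positive
    labels = np.where(diff > cut, "Discordant: Susceptible to Loneliness",
             np.where(diff < -cut, "Discordant: Robust to Loneliness",
                      "Concordant"))
    return diff, labels
```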

Results
Social Asymmetry predicted cognitive outcomes cross-sectionally and at a two-year follow-up, such that Discordant: Robust to Loneliness individuals were superior performers, but we failed to find evidence for Social Asymmetry as a predictor of cognitive trajectory over time.

Conclusions
We present a new metric and preliminary evidence of a relationship with clinical outcomes. Further research validating this metric in different populations, and evaluating its relationship with other outcomes, is warranted.

Relevance:

20.00%

Publisher:

Abstract:

Resilience is widely accepted as a desirable system property for cyber-physical systems. However, there are no metrics for measuring the resilience of cyber-physical systems (CPS) that account for the multi-dimensional nature of performance in these systems. In this work, we present first results towards a resilience metric framework. The key contributions of this framework are threefold. First, it allows resilience to be evaluated with respect to the different performance indicators of interest. Second, complexities that are relevant to the performance indicators of interest can be intentionally abstracted. Third and finally, it supports the identification of reasons for good or bad resilience, in order to improve system design.
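The abstract gives no formula, so the following is not the paper's metric: it is the common "performance loss" view of resilience (area under a normalised performance curve over a disruption window), shown only to make the idea of evaluating resilience per performance indicator concrete.

```python
import numpy as np

def performance_resilience(t, p, p_nominal=1.0):
    """Generic resilience score for a single performance indicator.

    NOT the paper's metric (the abstract states no formula): this is the
    common area-under-the-performance-curve approach, where 1.0 means no
    degradation over the window and 0.0 means total loss of performance.
    """
    t = np.asarray(t, dtype=float)
    p = np.asarray(p, dtype=float)
    area = np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(t))   # trapezoidal rule
    ideal = p_nominal * (t[-1] - t[0])                   # undisturbed service
    return area / ideal
```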

Relevance:

20.00%

Publisher:

Abstract:

This project of the School of Library, Documentation and Information of the National University supports UNESCO's initiative to build the Memory of the World (UNESCO, 1995) and helps provide universal access to documentation. To this end, the School has encouraged Library Science students to devote their final graduation projects to the documentary control of domestic production. The project has the following objectives:

1. Map the national documentary output through the identification, analysis, organization of, and access to the documentary heritage of Costa Rica, thereby contributing to the Memory of the World.
2. Perform bibliometric analysis of the documentary records contained in the integrated databases.

The project invites undergraduate students approaching graduation to carry out their final projects on document control. Students may focus on the documentary production of Costa Rica on a specific theme or in a specific geographical area of the country. Desk audits aim to identify each document using access points and to describe its contents so that users can retrieve it. The result is the production of a new, secondary document distinct from the original: the bibliography. The database records from each completed documentary-control project will be merged into a single database published on the EBDI website, for consultation by researchers and interested users.

Relevance:

20.00%

Publisher:

Abstract:

In this contribution, we propose a first general definition of rank-metric convolutional codes for multi-shot network coding. To this end, we introduce a suitable concept of distance and establish a generalized Singleton bound for this class of codes.
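For context, the rank metric in question is the standard one on vectors over an extension field viewed as matrices; the classical block-code version of the Singleton bound is recalled below (the paper's generalization to the convolutional setting is not reproduced here).

```latex
% Rank distance between codewords X, Y \in \mathbb{F}_{q^m}^{n},
% viewed as m x n matrices over \mathbb{F}_q:
d_R(X, Y) = \operatorname{rank}(X - Y).
% Classical Singleton bound for a linear [n, k] rank-metric block code:
d_R(\mathcal{C}) \le n - k + 1.
```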

Relevance:

20.00%

Publisher:

Abstract:

The aim of this paper is to provide a comprehensive study of some linear non-local diffusion problems in metric measure spaces. These include, for example, open subsets in ℝ^N, graphs, manifolds, multi-structures and some fractal sets. For this, we study regularity, compactness, positivity and the spectrum of the stationary non-local operator. We then study the solutions of linear evolution non-local diffusion problems, with emphasis on similarities and differences with the standard heat equation in smooth domains. In particular, we prove weak and strong maximum principles and describe the asymptotic behaviour using spectral methods.
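The linear non-local evolution problems referred to are typically of the following standard form; the paper's precise hypotheses on the kernel J and on the metric measure space (X, d, μ) are not reproduced here.

```latex
% Standard linear non-local diffusion equation on a metric measure
% space (X, d, \mu), with J \ge 0 an integrable kernel; compare with
% the local heat equation u_t = \Delta u:
u_t(x,t) = \int_X J(x,y)\,\bigl(u(y,t) - u(x,t)\bigr)\,d\mu(y),
\qquad x \in X,\; t > 0.
```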

Relevance:

20.00%

Publisher:

Abstract:

Information entropy measured from acoustic emission (AE) waveforms is shown to be an indicator of fatigue damage in a high-strength aluminum alloy. Several tension-tension fatigue experiments were performed with dogbone samples of the aluminum alloy Al7075-T6, a material commonly used in aerospace structures. Unlike previous studies, in which fatigue damage is measured simply from visible crack growth, this work investigated fatigue damage prior to crack initiation through the degradation of the instantaneous elastic modulus. Three methods of measuring the AE information entropy, regarded as a direct measure of microstructural disorder, are proposed and compared with traditional damage-related AE features. Results show that one of the three entropy measurement methods appears to assess damage better than the traditional AE features, while the other two entropies have unique trends that can differentiate between small and large cracks.
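The abstract does not detail the three entropy estimators, so the sketch below shows only one plausible variant: Shannon entropy computed from the amplitude histogram of a digitised AE waveform.

```python
import numpy as np

def waveform_entropy(samples, bins=128):
    """Shannon information entropy of an AE waveform's amplitude
    distribution. A generic estimator; the paper's three specific
    measurement methods are not detailed in the abstract."""
    hist, _ = np.histogram(samples, bins=bins)
    p = hist / hist.sum()                  # empirical bin probabilities
    p = p[p > 0]                           # ignore empty bins
    return -np.sum(p * np.log2(p))         # entropy in bits
```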

Relevance:

20.00%

Publisher:

Abstract:

Increases in the resolution of numerical weather prediction models have allowed increasingly realistic forecasts of atmospheric parameters. Owing to the growing variability of the predicted fields, traditional verification methods are not always able to describe model skill, because they are based on grid-point-by-grid-point matching between observation and prediction. Recently, new spatial verification methods have been developed with the aim of demonstrating the benefit of high-resolution forecasts. Within the MesoVICT international project, the initial aim of this work is to compare the new techniques, highlighting their advantages and disadvantages. First of all, the basic MesoVICT examples, represented by synthetic precipitation fields, have been examined. Providing an error evaluation in terms of structure, amplitude and location of the precipitation fields, the SAL method has been studied more thoroughly than the other approaches, and has been implemented for the core cases of the project. The verification procedure concerned precipitation fields over central Europe: comparisons were made between the forecasts of the 00z COSMO-2 model and the VERA (Vienna Enhanced Resolution Analysis) analyses. The study of these cases revealed some weaknesses of the methodology; in particular, a correlation between the optimal domain size and the extent of the precipitation systems was highlighted. To improve the skill of SAL, the original domain was subdivided into three subdomains and the method applied again. Some limits were found in cases in which at least one of the two fields shows no precipitation. The overall results for the subdomains are summarized in scatter plots. To identify systematic errors of the model, the variability of the three parameters was studied for each subdomain.
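Of SAL's three components, the amplitude component has a simple closed form (Wernli et al. 2008): the normalised difference of the domain-averaged precipitation. A sketch follows; the structure and location components require identifying precipitation objects and are omitted here.

```python
import numpy as np

def sal_amplitude(forecast, observed):
    """Amplitude component of SAL (Wernli et al. 2008): normalised
    difference of domain-averaged precipitation, bounded in [-2, 2].
    Positive values mean the model overestimates total precipitation."""
    d_mod = np.mean(forecast)
    d_obs = np.mean(observed)
    return (d_mod - d_obs) / (0.5 * (d_mod + d_obs))
```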

Relevance:

20.00%

Publisher:

Abstract:

Describes three units of time helpful for understanding and evaluating classificatory structures: long time (versions and states of classification schemes), short time (the act of indexing as repeated ritual or form), and micro-time (where stages of the interpretation process of indexing are separated out and inventoried). Concludes with a short discussion of how time and the impermanence of classification also conjure up an artistic conceptualization of indexing, and briefly uses that to question the seemingly dominant understanding of classification practice as an outcome of scientific management and assembly-line thought.

Relevance:

20.00%

Publisher:

Abstract:

As the universe of knowledge and its subjects change over time, indexing languages, such as classification schemes, accommodate that change by restructuring. Restructuring indexing languages affects indexer and cataloguer work. Subjects may split or lump together. They may disappear only to reappear later. And new subjects may emerge that were assumed to be already present but not clearly articulated (Miksa, 1998). In this context we have the complex relationship between the indexing language, the text being described, and the already described collection (Tennis, 2007). It is possible to imagine indexers placing a document into an outdated class because it is the one they have already used for their collection. However, doing this erases the semantics of the present indexing language. Given this range of choice in the context of indexing language change, the question arises: what does this look like in practice? How often does this occur? Further, what does this phenomenon tell us about subjects in indexing languages? Does the practice we observe in reaction to indexing language change provide us with evidence of conceptual models of subjects and subject creation? If that evidence is incomplete but gets us close, what evidence do we still require?

Relevance:

20.00%

Publisher:

Abstract:

After the first cases of Covid-19 emerged in China in the autumn of 2019, at the beginning of 2020 the entire planet was plunged into a global pandemic that upended our lives, with consequences not experienced since the Spanish flu. The enormous number of scientific papers continuously being published on the coronavirus and related viruses led to the creation of a single dynamic dataset called CORD19, distributed free of charge. The need to retrieve useful information from this mass of data has further turned the spotlight on information retrieval systems, which can quickly and effectively recover valuable information in response to a user request known as a query. Of particular note was the TREC-COVID Challenge, a competition for developing an IR system trained and tested on the CORD19 dataset. The main problem is that this large collection of documents is entirely unlabelled, so neural network models cannot be trained on it directly. To work around this problem we devised new self-supervised solutions, to which we applied the state of the art in deep metric learning and NLP. Deep metric learning, which has been enormously successful above all in computer vision, trains a model to "pull together" similar images and "push apart" different ones. Since both images and text are represented as vectors of real numbers (embeddings), the same techniques can be used to pull together relevant textual elements (e.g. a query and a paragraph) and push apart non-relevant ones. We therefore trained a SciBERT model with several losses that currently represent the state of the art in deep metric learning, in a completely self-supervised manner, directly and exclusively on the CORD19 dataset, and then evaluated it on the formal TREC-COVID set through an IR system, obtaining interesting results.
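A minimal sketch of the "pull together / push apart" objective described above, written as a generic triplet loss over text embeddings; the margin, the cosine distance and the function names are illustrative, not the exact state-of-the-art losses used in the thesis.

```python
import torch
import torch.nn.functional as F

def triplet_loss(query, positive, negative, margin=0.2):
    """Generic triplet loss over L2-normalised text embeddings:
    pull the relevant paragraph towards the query, push the
    non-relevant one away. Illustrative only; the thesis trains
    SciBERT with several metric-learning losses on CORD19."""
    q = F.normalize(query, dim=-1)
    p = F.normalize(positive, dim=-1)
    n = F.normalize(negative, dim=-1)
    d_pos = 1 - (q * p).sum(-1)       # cosine distance to relevant text
    d_neg = 1 - (q * n).sum(-1)       # cosine distance to irrelevant text
    return F.relu(d_pos - d_neg + margin).mean()
```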