201 results for "Hiding-proofness"
Abstract:
Sharing data among organizations often leads to mutual benefit. Recent technology in data mining has enabled efficient extraction of knowledge from large databases. This, however, increases the risk of disclosing sensitive knowledge when the database is released to other parties. To address this privacy issue, one may sanitize the original database so that the sensitive knowledge is hidden. The challenge is to minimize the side effects on the quality of the sanitized database so that non-sensitive knowledge can still be mined. In this paper, we study such a problem in the context of hiding sensitive frequent itemsets by judiciously modifying the transactions in the database. To preserve the non-sensitive frequent itemsets, we propose a border-based approach to efficiently evaluate the impact of any modification to the database during the hiding process. The quality of the database can be well maintained by greedily selecting the modifications with minimal side effect. Experimental results are also reported to show the effectiveness of the proposed approach. © 2005 IEEE
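The border-based evaluation itself is not reproducible from the abstract, but the basic hiding operation it guides can be sketched. Below is a minimal, hypothetical Python illustration: support of a sensitive itemset is pushed below the mining threshold by deleting one of its items from supporting transactions, with shortest-transaction-first used as a crude stand-in for the paper's side-effect evaluation.

```python
def support(transactions, itemset):
    """Number of transactions containing every item of `itemset`."""
    return sum(1 for t in transactions if itemset <= t)

def hide_itemset(transactions, sensitive, min_support):
    """Lower the support of `sensitive` below `min_support` by removing
    one of its items from supporting transactions.  Transactions are
    picked shortest-first as a crude proxy for 'least side effect'
    (the paper uses a border-based evaluation for this choice)."""
    txs = [set(t) for t in transactions]
    while support(txs, sensitive) >= min_support:
        # choose the supporting transaction expected to cause least damage
        victim = min((t for t in txs if sensitive <= t), key=len)
        victim.discard(next(iter(sensitive)))  # drop one sensitive item
    return txs
```

After sanitization the sensitive itemset is no longer frequent, while transactions not touched by the loop keep their non-sensitive itemsets intact.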
Abstract:
Partial information leakage in deterministic public-key cryptosystems refers to a problem that arises when information about either the plaintext or the key is leaked in subtle ways. Quite a common case is where there is a small number of possible messages that may be sent. An attacker may be able to crack the scheme simply by enumerating all the possible ciphertexts. Two methods are proposed for addressing the partial information leakage problem in RSA; both incorporate a random element into the encrypted message to increase the number of possible ciphertexts. The resulting scheme is, effectively, an RSA-like cryptosystem which exhibits probabilistic encryption. The first method involves encrypting several similar messages with RSA and then using the Quadratic Residuosity Problem (QRP) to mark the intended one. In this way, an adversary who has correctly guessed two or more of the ciphertexts is still in doubt about which message is the intended one. The cryptographic strength of the combined system is equal to the computational difficulty of factorising a large integer; ideally, this should be infeasible. The second scheme uses error-correcting codes to accommodate the random component. The plaintext is processed with an error-correcting code and deliberately corrupted before encryption. The introduced corruption lies within the error-correcting ability of the code, so as to enable the recovery of the original message. The random corruption offers a vast number of possible ciphertexts corresponding to a given plaintext; hence an attacker cannot deduce any useful information from it. The proposed systems are compared to other cryptosystems sharing similar characteristics, in terms of execution time and ciphertext size, so as to determine their practical utility. Finally, parameters which determine the characteristics of the proposed schemes are also examined.
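The enumeration attack on deterministic encryption, and the effect of adding a random component, are easy to demonstrate. The toy Python sketch below uses textbook RSA with deliberately tiny parameters; the randomised encoding is a crude stand-in for the paper's QRP- and ECC-based constructions, not a reproduction of them.

```python
import secrets

# Toy RSA parameters -- purely illustrative, far too small for real use.
p, q = 61, 53
n, e = p * q, 17                      # n = 3233
d = pow(e, -1, (p - 1) * (q - 1))     # private exponent

def encrypt(m: int) -> int:
    return pow(m, e, n)

# Deterministic encryption leaks everything when the message space is
# small: the attacker simply encrypts every candidate and compares.
messages = {1: "attack", 2: "retreat"}
c = encrypt(1)
recovered = next(txt for m, txt in messages.items() if encrypt(m) == c)

# Randomised encoding: pack fresh random bits next to the message, so
# equal plaintexts map to many different ciphertexts and enumeration
# becomes impractical.
def encrypt_randomised(m: int, r_bits: int = 8) -> int:
    return encrypt((m << r_bits) | secrets.randbits(r_bits))

def decrypt_randomised(c: int, r_bits: int = 8) -> int:
    return pow(c, d, n) >> r_bits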
Abstract:
Abstract not available
Abstract:
Degos' disease, or malignant atrophic papulosis, is a rare vasculopathy characterized by the presence of a typical skin lesion and visceral vascular involvement of small vessels, mainly of the digestive tract or central nervous system. The most interesting fact in this disease is the benign appearance of the cutaneous lesion, hiding the occlusion of skin and visceral vessels. The author reports the case of a female patient with systemic lupus erythematosus for eight years. During her follow-up, generalized skin papules were observed on the trunk and limbs, sparing her face, hands and feet, compatible with Degos' disease. Additional imaging investigation excluded systemic involvement of the disease. Treatment with acetylsalicylic acid prevented the appearance of new cutaneous manifestations and the patient remains clinically stable at the Outpatient Clinic, without complications, to date. Malignant atrophic papulosis is a rare disease with a poor prognosis. However, its association with systemic lupus erythematosus seems to follow a more benign course, without the typical visceral involvement.
Abstract:
Mammals show extensive interspecific variation in the form of maternal care. Among ungulates, there is a dichotomy between species in which offspring follow the mother (following strategy) versus species in which offspring remain concealed (hiding strategy). Here we reveal that the same dichotomy exists among macropods (kangaroos, wallabies and allies). We test three traditional adaptive explanations and one new life-history hypothesis, and find very similar patterns among both ungulates and macropods. The three traditional explanations that we tested were that a "following" strategy is associated with (1) open habitat, (2) large mothers, and (3) gregariousness. Our new life-history hypothesis is that a "following" strategy is associated with delayed weaning, and thus with the slow end of the slow-fast mammalian life-history continuum, because offspring devote resources to locomotion rather than rapid growth. Our comparative test strongly supports the habitat structure hypothesis and provides some support for the new delayed-weaning hypothesis in both ungulates and macropods. We propose that sedentary young in closed habitats benefit energetically by having milk brought to them. In open habitats, predation pressure will select against hiding. Followers will suffer slower growth to independence. Taken together, therefore, our results provide the first quantitative evidence that macropods and ungulates are convergent with respect to interspecific variation in maternal care strategy. In both clades, differences between species in the form of parental care are due to a similar interaction between habitat, social behavior, and life history.
Abstract:
Program slicing is a well known family of techniques used to identify code fragments which depend on or are depended upon specific program entities. They are particularly useful in the areas of reverse engineering, program understanding, testing and software maintenance. Most slicing methods, usually oriented towards the imperative or object paradigms, are based on some sort of graph structure representing program dependencies. Slicing techniques amount, therefore, to (sophisticated) graph traversal algorithms. This paper proposes a completely different approach to the slicing problem for functional programs. Instead of extracting program information to build an underlying dependencies' structure, we resort to standard program calculation strategies, based on the so-called Bird-Meertens formalism. The slicing criterion is specified either as a projection or a hiding function which, once composed with the original program, leads to the identification of the intended slice. Going through a number of examples, the paper suggests this approach may be an interesting, even if not completely general, alternative to slicing functional programs.
Abstract:
Program slicing is a well known family of techniques used to identify code fragments which depend on or are depended upon specific program entities. They are particularly useful in the areas of reverse engineering, program understanding, testing and software maintenance. Most slicing methods, usually targeting either the imperative or the object-oriented paradigms, are based on some sort of graph structure representing program dependencies. Slicing techniques amount, therefore, to (sophisticated) graph traversal algorithms. This paper proposes a completely different approach to the slicing problem for functional programs. Instead of extracting program information to build an underlying dependencies' structure, we resort to standard program calculation strategies, based on the so-called Bird-Meertens formalism. The slicing criterion is specified either as a projection or a hiding function which, once composed with the original program, leads to the identification of the intended slice. Going through a number of examples, the paper suggests this approach may be an interesting, even if not completely general, alternative to slicing functional programs.
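The key idea in these two abstracts, composing the program with a projection that expresses the slicing criterion, can be sketched outside the Bird-Meertens setting. The hypothetical Python analogue below only *specifies* the slice as a composition; the calculational step (fusion) would then simplify that composition into a residual program computing just the projected result.

```python
# A tiny "program" producing several results from its input.
def program(xs):
    return {"sum": sum(xs), "max": max(xs), "len": len(xs)}

# The slicing criterion expressed as a projection function ...
def project(keys):
    return lambda out: {k: out[k] for k in keys}

# ... whose composition with the program specifies the slice.  Program
# calculation would simplify `project({"sum"}) . program` to a residual
# program that computes only the sum.
def slice_sum(xs):
    return project({"sum"})(program(xs))
```

The point of the calculational approach is that this simplification is driven by algebraic laws rather than by a dependence graph.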
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa in fulfilment of the requirements for the degree of Master in Informatics Engineering (Engenharia Informática)
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa in fulfilment of the requirements for the degree of Master in Informatics Engineering (Engenharia Informática)
Abstract:
Geocaching is a game, created by Groundspeak, that consists of hiding and finding geolocated objects known as geocaches. The search for geocaches is in fact an adventure that promotes new experiences, socialising among users, the discovery of new places in nature, and the playing of games in real time and real settings, among other things. There are geocaches scattered all over the world and thousands of users are already registered in the game. Beyond being a pastime, Geocaching can serve as a digital marketing tool, both for Groundspeak itself and for different companies/institutions around the world, usually associated with the location of the geocaches. Groundspeak is naturally the main beneficiary since, with practically no investment in advertising, it has made the game attract ever more players. Its promotion is essentially carried out by the users themselves, whether through direct communication with non-users, through social networks and organised events, or through other companies that have developed applications with extra features offering users a better experience. The goal of this dissertation was to demonstrate how Geocaching can be used as a digital marketing tool. Initially, digital marketing and its tools were analysed, focusing on Geocaching and its worldwide reach, explaining the different types of caches and how they can be used as marketing tools. As a validation element, a wherigo (a type of geocache) was designed, developed and validated; it consists of a virtual game in which the player's progress depends on the tasks performed and on their geolocated movement.
The wherigo created within the project is a digital marketing medium for promoting the Castelo de Santa Maria da Feira in an interactive and fun way, through quizzes, challenges and fantasy. The game encourages players to explore the gardens surrounding the castle as well as its interior, and also grants players admission to the castle with a geocacher discount on the ticket. The objectives initially proposed were fully met: the game is already available to be played by geocachers and has been rated very positively by them.
Abstract:
In recent years, volunteers have contributed massively to what we know nowadays as Volunteered Geographic Information. This huge amount of data might be hiding a vast geographical richness, and therefore research needs to be conducted to explore its potential and use it in the solution of real-world problems. In this study we conduct an exploratory analysis of data from the OpenStreetMap initiative. Using the Corine Land Cover database as reference and continental Portugal as the study area, we establish a possible correspondence between both classification nomenclatures, evaluate the quality of OpenStreetMap polygon feature classification against Corine Land Cover classes from the level 1 nomenclature, and analyze the spatial distribution of OpenStreetMap classes over continental Portugal. A global classification accuracy of around 76% and interesting coverage-area values are remarkable and promising results that encourage future research on this topic.
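The global accuracy figure quoted above is, in essence, the fraction of features whose OSM class agrees with the reference CLC class. A minimal sketch, with hypothetical class labels (the actual study works with CLC level 1 classes and may weight by area):

```python
def overall_accuracy(pairs):
    """Fraction of features whose OSM class matches the reference CLC
    class.  An area-weighted variant would weight each pair by polygon
    area; this sketch simply counts features."""
    agree = sum(1 for osm, clc in pairs if osm == clc)
    return agree / len(pairs)

# Hypothetical (osm_class, clc_class) pairs for four polygons.
pairs = [("artificial", "artificial"), ("forest", "forest"),
         ("forest", "agricultural"), ("water", "water")]
```

Here three of the four hypothetical polygons agree, giving an overall accuracy of 0.75.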
Abstract:
The Intel Xeon Phi is the first processor based on Intel's MIC (Many Integrated Cores) architecture. It is a co-processor specially tailored for data-parallel computations, whose basic architectural design is similar to that of GPUs (Graphics Processing Units), leveraging many integrated low-power computational cores to perform parallel computations. The main novelty of the MIC architecture, relative to GPUs, is its compatibility with the Intel x86 architecture. This enables the use of many of the tools commonly available for the parallel programming of x86-based architectures, which may lead to a smaller learning curve. However, programming the Xeon Phi still entails aspects intrinsic to accelerator-based computing in general, and to the MIC architecture in particular. In this thesis we advocate the use of algorithmic skeletons for programming the Xeon Phi. Algorithmic skeletons abstract the complexity inherent to parallel programming, hiding details such as resource management, parallel decomposition and inter-execution-flow communication, thus removing these concerns from the programmer's mind. In this context, the goal of the thesis is to lay the foundations for the development of a simple but powerful and efficient skeleton framework for the programming of the Xeon Phi processor. For this purpose we build upon Marrow, an existing framework for the orchestration of OpenCL computations in multi-GPU and CPU environments. We extend Marrow to execute both OpenCL and C++ parallel computations on the Xeon Phi. To evaluate the newly developed framework, several well-known benchmarks, such as Saxpy and N-Body, are used both to compare its performance to the existing framework when executing on the co-processor and to assess the performance of the Xeon Phi versus a multi-GPU environment.
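What an algorithmic skeleton hides from the programmer can be illustrated with a minimal map skeleton. The Python sketch below is not Marrow's API (Marrow targets OpenCL/C++): it only shows the idea that the user supplies the per-element function while decomposition, worker management and result collection stay inside the skeleton.

```python
from concurrent.futures import ThreadPoolExecutor

class MapSkeleton:
    """Minimal data-parallel map skeleton: the caller provides only the
    per-element function; parallel decomposition, worker management and
    result collection are hidden inside the skeleton."""
    def __init__(self, fn, workers=4):
        self.fn = fn
        self.workers = workers

    def __call__(self, data):
        with ThreadPoolExecutor(max_workers=self.workers) as pool:
            return list(pool.map(self.fn, data))

# Saxpy (y = a*x + y, with a = 2.0) expressed through the skeleton.
saxpy = MapSkeleton(lambda xy: 2.0 * xy[0] + xy[1])
```

A framework like the one described would route the same pattern to OpenCL kernels or C++ threads on the co-processor rather than to host threads.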
Abstract:
In the Ecuadorian Amazon, Crematogaster ants (Myrmicinae) were observed constructing shelters of debris and plant trichomes that cover and hide the extrafloral nectaries of Passiflora auriculata vines. This is seen as an advanced way of excluding competing ants from a food source.