356 results for Similarity queries
Abstract:
Predicate encryption (PE) is a new primitive which supports flexible control over access to encrypted data. In PE schemes, users' decryption keys are associated with predicates f and ciphertexts encode attributes a that are specified during the encryption procedure. A user can successfully decrypt if and only if f(a) = 1. In this thesis, we investigate several properties that are crucial to PE. We focus on the expressiveness of PE, Revocable PE and Hierarchical PE (HPE) with forward security. For all proposed systems, we provide a security model and analysis using the widely accepted computational complexity approach. Our first contribution is to explore the expressiveness of PE. Existing PE supports a wide class of predicates such as conjunctions of equality, comparison and subset queries, disjunctions of equality queries, and more generally, arbitrary combinations of conjunctive and disjunctive equality queries. We advance PE to evaluate more expressive predicates, e.g., disjunctive comparison or disjunctive subset queries. Such expressiveness is achieved at the cost of computational and space overhead. To improve the performance, we appropriately revise the PE to reduce the computational and space cost. Furthermore, we propose a heuristic method to reduce disjunctions in the predicates. Our schemes are proved secure in the standard model. We then introduce the concept of Revocable Predicate Encryption (RPE), which extends the previous PE setting with revocation support: private keys can be used to decrypt an RPE ciphertext only if they match the decryption policy (defined via attributes encoded into the ciphertext and predicates associated with private keys) and were not revoked by the time the ciphertext was created. We propose two RPE schemes. Our first scheme, termed Attribute-Hiding RPE (AH-RPE), offers attribute-hiding, which is the standard PE property. Our second scheme, termed Full-Hiding RPE (FH-RPE), offers even stronger privacy guarantees, i.e., apart from possessing the attribute-hiding property, the scheme also ensures that no information about revoked users is leaked from a given ciphertext. The proposed schemes are also proved secure under well-established assumptions in the standard model. Secrecy of decryption keys is an important prerequisite for the security of (H)PE, and compromised private keys must be immediately replaced. The notion of Forward Security (FS) reduces the damage from compromised keys by guaranteeing confidentiality of messages that were encrypted prior to the compromise event. We present the first Forward-Secure Hierarchical Predicate Encryption (FS-HPE) scheme that is proved secure in the standard model. Our FS-HPE scheme offers several desirable properties: time-independent delegation of predicates (to support dynamic behavior for delegation of decryption rights to new users), local update of users' private keys (i.e., no master authority needs to be contacted), forward security, and an encryption process that does not require knowledge of predicates at any level, including when those predicates join the hierarchy.
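To make the decryption condition concrete, the following minimal sketch (in Python, with assumed names; it is not the thesis's construction and offers no actual security) shows the usual predicate encryption interface, where a key embeds a predicate f, a ciphertext embeds an attribute a, and decryption succeeds exactly when f(a) = 1.

```python
# Minimal interface sketch (assumed names, insecure stand-in): illustrates only
# the predicate-encryption decryption condition f(a) == 1.

class ToyPredicateEncryption:
    def keygen(self, predicate):
        return {"f": predicate}                  # secret key associated with predicate f

    def encrypt(self, attribute, message):
        return {"a": attribute, "m": message}    # a real scheme would hide both a and m

    def decrypt(self, key, ciphertext):
        if key["f"](ciphertext["a"]) == 1:       # f(a) = 1  =>  decryption succeeds
            return ciphertext["m"]
        return None                              # otherwise nothing is learned

pe = ToyPredicateEncryption()
key = pe.keygen(lambda a: 1 if a >= 18 else 0)   # e.g. a comparison predicate "age >= 18"
ct = pe.encrypt(attribute=21, message="records")
print(pe.decrypt(key, ct))                       # prints "records"
```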
Abstract:
Since 1 December 2002, the New Zealand Exchange’s (NZX) continuous disclosure listing rules have operated with statutory backing. To test the effectiveness of the new corporate disclosure regime, we compare the change in the quantity of market announcements (overall, non-routine, non-procedural and external) released to the NZX before and after the introduction of statutory backing. We also extend our study by investigating whether the effectiveness of the new corporate disclosure regime is diminished or augmented by corporate governance mechanisms, including board size, providing separate roles for CEO and Chairman, board independence, board gender diversity and audit committee independence. Our findings provide qualified support for the effectiveness of the new corporate disclosure regime regarding the quantity of market disclosures. There is strong evidence that the effectiveness of the new corporate disclosure regime was augmented by providing separate roles for CEO and Chairman, board gender diversity and audit committee independence, and diminished by board size. In addition, there is significant evidence that share price queries do impact corporate disclosure behaviour and that this impact is significantly influenced by corporate governance mechanisms. Our findings provide important implications for corporate regulators in their quest for a superior disclosure regime.
Abstract:
Database security techniques are widely available. Among these techniques, encryption is a well-certified and established technology for protecting sensitive data. However, once encrypted, the data can no longer be easily queried. The performance of the database depends on how the sensitive data are encrypted and on the approach implemented for efficient search and retrieval. In this paper we analyze database queries and data properties and propose a suitable mechanism for querying the encrypted database. We propose and analyze a new database encryption algorithm that uses a Bloom filter with the bucket index method. Finally, we demonstrate the superiority of the proposed algorithm through several experiments; the results should be useful for database encryption related research and application activities.
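As a rough illustration of how a bucket index and a Bloom filter can support queries over encrypted values, the sketch below (an assumed toy design, not the paper's algorithm; the cipher is a placeholder) prunes non-matching rows with the filter and the bucket labels before any decryption takes place.

```python
# Illustrative sketch (assumed design, not the paper's algorithm): equality
# queries over encrypted values via a coarse bucket index plus a Bloom filter.
import hashlib

def bucket_of(value, width=100):
    """Coarse bucket label stored in the clear alongside the ciphertext."""
    return value // width

class BloomFilter:
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, bytearray(m)
    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m
    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1
    def might_contain(self, item):
        return all(self.bits[p] for p in self._positions(item))

def toy_encrypt(value, key=0x5A):    # placeholder, NOT a real cipher
    return value ^ key
def toy_decrypt(ct, key=0x5A):
    return ct ^ key

# Encrypted table: (ciphertext, bucket) rows plus a Bloom filter over buckets.
plaintexts = [42, 137, 150, 901]
rows = [(toy_encrypt(v), bucket_of(v)) for v in plaintexts]
bf = BloomFilter()
for _, b in rows:
    bf.add(b)

def query_equal(target):
    """Prune by Bloom filter and bucket label, decrypt only the candidates."""
    b = bucket_of(target)
    if not bf.might_contain(b):      # definite miss, no decryption needed
        return []
    candidates = [ct for ct, rb in rows if rb == b]
    return [toy_decrypt(ct) for ct in candidates if toy_decrypt(ct) == target]

print(query_equal(137))              # [137]
```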
Abstract:
The rapid growth of visual information on the Web has led to immense interest in multimedia information retrieval (MIR). While advances in MIR systems have achieved some success in specific domains, particularly through content-based approaches, general Web users still struggle to find the images they want. Despite successes in content-based object recognition and concept extraction, the major problem in current Web image searching remains the querying process. Since most online users express their needs only in semantic terms or objects, systems that utilize visual features (e.g., color or texture) to search images create a semantic gap which hinders general users from fully expressing their needs. In addition, query-by-example (QBE) retrieval imposes extra obstacles for exploratory search because users may not always have a representative image at hand or in mind when starting a search (i.e., the page-zero problem). As a result, the majority of current online image search engines (e.g., Google, Yahoo, and Flickr) still primarily use textual queries to search. The problem with query-based retrieval systems is that they only capture users’ information needs in terms of formal queries; the implicit and abstract parts of users’ information needs are inevitably overlooked. Hence, users often struggle to formulate queries that best represent their needs, and some compromises have to be made. Studies of Web search logs suggest that multimedia searches are more difficult than textual Web searches, and that Web image searching is the most difficult compared to video or audio searches. Hence, online users need to put in more effort when searching multimedia content, especially for image searches. Most interactions in Web image searching occur during query reformulation. While log analysis provides intriguing views of how the majority of users search, their search needs and motivations are ultimately neglected. User studies on image searching have attempted to understand users’ search contexts in terms of users’ background (e.g., knowledge, profession, motivation for search and task types) and the search outcomes (e.g., use of retrieved images, search performance). However, these studies typically focused on particular domains with a selective group of professional users. General users’ Web image searching contexts and behaviors are little understood, although they represent the majority of online image searching activities nowadays. We argue that only by understanding Web image users’ contexts can current Web search engines further improve their usefulness and provide more efficient searches. In order to understand users’ search contexts, a user study was conducted based on university students’ Web image searching in News, Travel, and commercial Product domains. The three search domains were deliberately chosen to reflect image users’ interests in people, time, event, location, and objects. We investigated participants’ Web image searching behavior, with a focus on query reformulation and search strategies. Participants’ search contexts, such as their search background, motivation for search, and search outcomes, were gathered by questionnaires. The searching activity was recorded along with participants’ think-aloud data for analyzing significant search patterns. The relationships between participants’ search contexts and corresponding search strategies were identified using a Grounded Theory approach.
Our key findings include the following aspects:
- Effects of users’ interactive intents on query reformulation patterns and search strategies
- Effects of task domain on task specificity and task difficulty, as well as on some specific searching behaviors
- Effects of searching experience on result expansion strategies
A contextual image searching model was constructed based on these findings. The model helped us understand Web image searching from the user’s perspective, and introduced a context-aware searching paradigm for current retrieval systems. A query recommendation tool was also developed to demonstrate how users’ query reformulation contexts can potentially contribute to more efficient searching.
Abstract:
A user’s query is considered to be an imprecise description of their information need. Automatic query expansion is the process of reformulating the original query with the goal of improving retrieval effectiveness. Many successful query expansion techniques ignore information about the dependencies that exist between words in natural language. However, more recent approaches have demonstrated that explicitly modeling associations between terms can achieve significant improvements in retrieval effectiveness over approaches that ignore these dependencies. State-of-the-art dependency-based approaches have been shown to primarily model syntagmatic associations. Syntagmatic associations capture the likelihood that two terms co-occur more often than by chance. However, structural linguistics relies on both syntagmatic and paradigmatic associations to deduce the meaning of a word. Given the success of dependency-based approaches and the reliance on word meanings in the query formulation process, we argue that modeling both syntagmatic and paradigmatic information in the query expansion process will improve retrieval effectiveness. This article develops and evaluates a new query expansion technique based on a formal, corpus-based model of word meaning that captures both syntagmatic and paradigmatic associations. We demonstrate that when sufficient statistical information exists, as in the case of longer queries, including paradigmatic information alone provides significant improvements in retrieval effectiveness across a wide variety of data sets. More generally, when our new query expansion approach is applied to large-scale web retrieval, it demonstrates significant improvements in retrieval effectiveness over a strong baseline system based on a commercial search engine.
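For intuition, the following toy sketch (assumed corpus and scoring, not the article's corpus-based model) contrasts a syntagmatic score, pointwise mutual information over co-occurrence, with a paradigmatic score based on the similarity of the contexts two words appear in; either could be used to rank candidate expansion terms.

```python
# Toy contrast of syntagmatic vs paradigmatic association (assumed corpus):
# syntagmatic = co-occurring more often than chance (PMI over documents),
# paradigmatic = appearing in similar contexts (cosine of context profiles).
import math
from collections import Counter
from itertools import combinations

docs = [
    "cheap flights to rome",
    "cheap hotel rome city centre",
    "budget flights rome airport",
    "budget hotel near airport",
]
tokenized = [d.split() for d in docs]
word_freq = Counter(w for d in tokenized for w in set(d))
pair_freq = Counter(frozenset(p) for d in tokenized for p in combinations(set(d), 2))
n_docs = len(tokenized)

def pmi(w1, w2):
    """Syntagmatic score: do w1 and w2 co-occur more often than by chance?"""
    joint = pair_freq[frozenset((w1, w2))] / n_docs
    if joint == 0:
        return float("-inf")
    return math.log(joint / ((word_freq[w1] / n_docs) * (word_freq[w2] / n_docs)))

def contexts(word):
    """Bag of words appearing in the same documents as `word`."""
    return Counter(w for d in tokenized if word in d for w in d if w != word)

def paradigmatic(w1, w2):
    """Paradigmatic score: cosine similarity of the two words' context profiles."""
    c1, c2 = contexts(w1), contexts(w2)
    dot = sum(c1[w] * c2[w] for w in c1)
    norm = math.sqrt(sum(v * v for v in c1.values())) * math.sqrt(sum(v * v for v in c2.values()))
    return dot / norm if norm else 0.0

# "cheap" and "rome" co-occur (syntagmatic); "cheap" and "budget" never co-occur
# here but share contexts (paradigmatic), so both are plausible expansion terms.
print(pmi("cheap", "rome"), paradigmatic("cheap", "budget"))
```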
Abstract:
The degree of diversity or similarity detected in comets depends primarily on the lifetimes of the individual cometary nuclei at the time of analysis. It is inherent in our understanding of cometary orbital dynamics and the seminal model of comet origins that cometary evolution is the natural order of events in our Solar System. Thus, predictions of cometary behaviour in terms of bulk physical, mineralogical or chemical parameters should contain an appreciation of temporal variation(s). Previously, Rietmeijer and Mackinnon [1987] developed mineralogical bases for the chemical evolution of cometary nuclei, primarily with regard to the predominantly silicate fraction of comet nuclei. We suggested that alteration of solids in cometary nuclei should be expected and that indications of likely reactants and products can be derived from judicious comparison with terrestrial diagenetic environments, which include hydrocryogenic and low-temperature aqueous alterations. In a further development of this concept, Rietmeijer [1988] provides indirect evidence for the formation of sulfides and oxides in comet nuclei. Furthermore, Rietmeijer [1988] noted that timescales for hydrocryogenic and low-temperature reactions involving liquid water are probably adequate for relatively mature comets, e.g., comet P/Halley. In this paper, we address the evolution of comet nuclei physical parameters such as solid particle grain size, porosity and density. In natural environments, chemical evolution (e.g., mineral reactions) is often accompanied by changes in physical properties. These concurrent changes are well documented in the terrestrial geological literature, especially in studies of sediment diagenesis, and we suggest that similar basic principles apply within the upper few meters of active comet nuclei. The database for prediction of comet nuclei physical parameters is, in principle, the same as that used for the proposition of chemical evolution. We use detailed mineralogical studies of chondritic interplanetary dust particles (IDPs) as a guide to the likely constitution of mature comets traversing the inner Solar System. While there is, as yet, no direct proof that a specific sub-group or type of chondritic IDP is derived from a specific comet, it is clear that these particles are extraterrestrial in origin and that a certain portion of the interplanetary flux received by the Earth is cometary in origin. Two chondritic porous (CP) IDPs, sample numbers W701OA2 and W7029CI, from the Johnson Space Center Cosmic Dust Collection have been selected for this study of putative cometary physical parameters. This particular type of particle is considered a likely candidate for a cometary origin on the basis of mineralogy, bulk composition and morphology. While many IDPs have been subjected to intensive study over the past decade, we can develop a physical parameter model on only these two CP IDPs because few others have been studied in sufficient detail.
Abstract:
An estuary is formed at the mouth of a river where the tides meet a freshwater flow, and it may be classified as a function of the salinity distribution and density stratification. An overview of the broad characteristics of the estuaries of South-East Queensland (Australia) is presented herein, where the small peri-urban estuaries may provide a useful indicator of potential changes which might occur in larger systems with growing urbanisation. Small peri-urban estuaries exhibit many of the key hydrological features and associated ecosystem types of larger estuaries, albeit at smaller scales, often with a greater extent of urban development as a proportion of catchment area. We explore the potential for some smaller peri-urban estuaries to be used as natural laboratories to gain some much needed information on estuarine processes, although any dynamic similarity is presently limited by the critical absence of in-depth physical investigation of larger estuarine systems. The absence of detailed turbulence and sedimentary data hampers the understanding and modelling of the estuarine zones. The interactions between the various stakeholders are likely to define the vision for the future of South-East Queensland's peri-urban estuaries. This will require a solid understanding of the bio-physical function and capacity of the peri-urban estuaries. Based upon this knowledge gap, it is recommended that an adaptive trial-and-error approach be adopted for future investigation and management strategies.
Abstract:
Process-aware information systems (PAISs) can be configured using a reference process model, which is typically obtained via expert interviews. Over time, however, contextual factors and system requirements may cause the operational process to start deviating from this reference model. While a reference model should ideally be updated to remain aligned with such changes, this is a costly and often neglected activity. We present a new process mining technique that automatically improves the reference model on the basis of the observed behavior as recorded in the event logs of a PAIS. We discuss how to balance the four basic quality dimensions for process mining (fitness, precision, simplicity and generalization) and a new dimension, namely the structural similarity between the reference model and the discovered model. We demonstrate the applicability of this technique using a real-life scenario from a Dutch municipality.
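One simple way to picture the balancing act between the quality dimensions is a weighted score over candidate repaired models, as in the sketch below (illustrative only; the weights, scores and scoring form are assumed, not taken from the paper).

```python
# Illustrative sketch (assumed weights and scores, not the paper's technique):
# balancing the four process-mining quality dimensions plus structural
# similarity to the reference model when comparing candidate repaired models.

def model_score(fitness, precision, simplicity, generalization, similarity,
                weights=(0.4, 0.2, 0.1, 0.1, 0.2)):
    """Weighted average of the five dimensions; each score assumed in [0, 1]."""
    dims = (fitness, precision, simplicity, generalization, similarity)
    return sum(w * d for w, d in zip(weights, dims))

# A candidate that fits the log slightly worse but stays structurally close to
# the reference model can outrank one that drifts far from it.
candidate_a = model_score(0.95, 0.80, 0.70, 0.60, 0.40)   # strong fit, drifted structure
candidate_b = model_score(0.90, 0.78, 0.75, 0.65, 0.90)   # close to reference model
print(candidate_a, candidate_b)
```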
Abstract:
Predicate encryption is a new primitive that supports flexible control over access to encrypted data. We study predicate encryption systems that evaluate a wide class of predicates. Our systems are more expressive than the existing attribute-hiding systems in the sense that the proposed constructions support not only all existing predicate evaluations but also arbitrary conjunctions and disjunctions of comparison and subset queries. Toward our goal, we propose encryption schemes supporting multi-inner-product predicates and provide a formal security analysis. We show how to apply the proposed schemes to achieve all those predicate evaluations.
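A toy view of the underlying idea (assumed here for illustration; the actual constructions are considerably more involved and operate on blinded vectors under encryption) is that simple predicates can be encoded so that evaluating them reduces to checking whether an inner product is zero, which is what inner-product predicate encryption tests.

```python
# Toy illustration (assumed, simplified): encoding an equality predicate as an
# inner product. Real schemes evaluate the inner product on encrypted/blinded
# vectors; here only the plaintext encoding is shown so the logic is visible.

def attribute_vector(a):
    """Attribute x = a is encoded as (1, a)."""
    return (1, a)

def equality_predicate_vector(c):
    """Predicate "x == c" is encoded as (-c, 1), so the inner product is a - c."""
    return (-c, 1)

def inner_product(u, v):
    return sum(x * y for x, y in zip(u, v))

# Decryption succeeds exactly when the inner product is zero, i.e. a == c.
# Conjunctions of such checks lead to the multi-inner-product setting.
for a in (5, 7):
    ok = inner_product(attribute_vector(a), equality_predicate_vector(7)) == 0
    print(a, "matches predicate x == 7:", ok)
```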
Abstract:
Most security models for authenticated key exchange (AKE) do not explicitly model the associated certification system, which includes the certification authority (CA) and its behaviour. However, there are several well-known and realistic attacks on AKE protocols which exploit various forms of malicious key registration and which therefore lie outside the scope of these models. We provide the first systematic analysis of AKE security incorporating certification systems (ASICS). We define a family of security models that, in addition to allowing different sets of standard AKE adversary queries, also permit the adversary to register arbitrary bitstrings as keys. For this model family we prove generic results that enable the design and verification of protocols that achieve security even if some keys have been produced maliciously. Our approach is applicable to a wide range of models and protocols; as a concrete illustration of its power, we apply it to the CMQV protocol in the natural strengthening of the eCK model to the ASICS setting.
Abstract:
With the increasing popularity and adoption of building information modeling (BIM), the amount of digital information available about a building is overwhelming. Enormous challenges remain, however, in identifying meaningful and required information from a complex BIM model to support a particular construction management (CM) task. Detailed specifications of the information required by different construction domains, together with expressive and easy-to-use BIM reasoning mechanisms, are seen as an important means of addressing these challenges. This paper analyzes some of the characteristics and requirements of component-specific construction knowledge in relation to current work practice and BIM-based applications. It is argued that domain ontologies and information extraction approaches, such as queries, could bring much-needed support for knowledge sharing and the integration of information between design, construction and facility management.
Abstract:
Purpose: Flat-detector, cone-beam computed tomography (CBCT) has enormous potential to improve the accuracy of treatment delivery in image-guided radiotherapy (IGRT). To assist radiotherapists in interpreting these images, we use a Bayesian statistical model to label each voxel according to its tissue type. Methods: The rich sources of prior information in IGRT are incorporated into a hidden Markov random field (MRF) model of the 3D image lattice. Tissue densities in the reference CT scan are estimated using inverse regression and then rescaled to approximate the corresponding CBCT intensity values. The treatment planning contours are combined with published studies of physiological variability to produce a spatial prior distribution for changes in the size, shape and position of the tumour volume and organs at risk (OAR). The voxel labels are estimated using the iterated conditional modes (ICM) algorithm. Results: The accuracy of the method has been evaluated using 27 CBCT scans of an electron density phantom (CIRS, Inc. model 062). The mean voxel-wise misclassification rate was 6.2%, with a Dice similarity coefficient of 0.73 for liver, muscle, breast and adipose tissue. Conclusions: By incorporating prior information, we are able to successfully segment CBCT images. This could be a viable approach for automated, online image analysis in radiotherapy.
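For reference, the Dice similarity coefficient reported above compares a predicted segmentation against a reference labelling; the short sketch below (toy arrays, not the study's data or pipeline) shows the computation on binary masks.

```python
# Illustrative sketch (toy masks, not the study's pipeline): Dice similarity
# coefficient between a predicted and a reference binary segmentation.
import numpy as np

def dice_coefficient(pred, ref):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks A (pred) and B (ref)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * intersection / denom if denom else 1.0

pred = np.array([[0, 1, 1], [0, 1, 0]])
ref  = np.array([[0, 1, 0], [1, 1, 0]])
print(dice_coefficient(pred, ref))   # 2*2 / (3+3) ≈ 0.667
```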
Abstract:
Democratic governments raise taxes and charges and spend the revenue on delivering peace, order and good government. The delivery process begins with a legislature, which can provide a framework of legally enforceable rules enacted according to the government’s constitution. These rules confer rights and obligations that allow particular people to carry on particular functions at particular places and times. Metadata standards as applied to public records contain information about the functioning of government as distinct from the non-government sector of society. Metadata standards apply to database construction. Data entry, storage, maintenance, interrogation and retrieval depend on a controlled vocabulary needed to enable accurate retrieval of suitably catalogued records in a global information environment. Queensland’s socioeconomic progress now depends in part on technical efficiency in database construction to address queries about who does what, where and when; under what legally enforceable authority; and how the evidence of those facts is recorded. The Survey and Mapping Infrastructure Act 2003 (Qld) addresses technical aspects of the "where" questions – typically the officially recognised name of a place and a description of its boundaries. The current 10-year review of the Survey and Mapping Regulation 2004 provides a valuable opportunity to consider whether the Regulation makes sense in the context of a number of later laws concerned with the management of Public Sector Information (PSI), as well as policies for ICT hardware and software procurement. Removing ambiguities about how official place names are to be regarded on a whole-of-government basis can achieve some short-term goals. Longer-term goals depend on a more holistic approach to information management – and current aspirations for more open government and community engagement are unlikely to be realised without such a longer-term vision.
Abstract:
This paper details the participation of the Australian e-Health Research Centre (AEHRC) in the ShARe/CLEF 2013 eHealth Evaluation Lab – Task 3. This task aims to evaluate the use of information retrieval (IR) systems to aid consumers (e.g. patients and their relatives) in seeking health advice on the Web. Our submissions to the ShARe/CLEF challenge are based on language models generated from the web corpus provided by the organisers. Our baseline system is a standard Dirichlet smoothed language model. We enhance the baseline by identifying and correcting spelling mistakes in queries, as well as expanding acronyms using AEHRC's Medtex medical text analysis platform. We then consider the readability and the authoritativeness of web pages to further enhance the quality of the document ranking. Measures of readability are integrated into the language models used for retrieval via prior probabilities. Prior probabilities are also used to encode authoritativeness information derived from a list of top-100 consumer health websites. Empirical results show that correcting spelling mistakes and expanding acronyms found in queries significantly improves the effectiveness of the language model baseline. Readability priors seem to increase retrieval effectiveness for graded relevance at early ranks (nDCG@5, but not precision), but no improvements are found at later ranks and when considering binary relevance. The authoritativeness prior does not appear to provide retrieval gains over the baseline: this is likely to be because of the small overlap between websites in the corpus and those in the top-100 consumer health websites we acquired.
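A minimal sketch of the retrieval mechanism described here, a Dirichlet-smoothed query-likelihood model with a per-document prior, is given below (the toy collection and prior values are assumptions; this is not the AEHRC system).

```python
# Illustrative sketch (assumed toy collection, not the AEHRC system):
# Dirichlet-smoothed query likelihood with a per-document log-prior, the
# mechanism by which readability/authoritativeness can be folded into ranking.
import math
from collections import Counter

docs = {
    "d1": "diabetes symptoms include thirst and fatigue".split(),
    "d2": "type two diabetes mellitus pathophysiology review".split(),
}
collection = Counter(w for d in docs.values() for w in d)
coll_len = sum(collection.values())

def log_score(query, doc_id, mu=2000, log_prior=0.0):
    """log P(d) + sum over query terms of log P(w | d) with Dirichlet smoothing."""
    tf = Counter(docs[doc_id])
    dl = len(docs[doc_id])
    score = log_prior
    for w in query.split():
        p_coll = collection[w] / coll_len
        if p_coll == 0:
            continue   # ignore terms unseen in the collection in this toy sketch
        score += math.log((tf[w] + mu * p_coll) / (dl + mu))
    return score

# A higher readability prior can lift a simpler document for a consumer query.
query = "diabetes symptoms"
print(log_score(query, "d1", log_prior=math.log(0.7)),
      log_score(query, "d2", log_prior=math.log(0.3)))
```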
Abstract:
Mathematical models of mosquito-borne pathogen transmission originated in the early twentieth century to provide insights into how to most effectively combat malaria. The foundations of the Ross–Macdonald theory were established by 1970. Since then, there has been a growing interest in reducing the public health burden of mosquito-borne pathogens and an expanding use of models to guide their control. To assess how theory has changed to confront evolving public health challenges, we compiled a bibliography of 325 publications from 1970 through 2010 that included at least one mathematical model of mosquito-borne pathogen transmission and then used a 79-part questionnaire to classify each of 388 associated models according to its biological assumptions. As a composite measure to interpret the multidimensional results of our survey, we assigned a numerical value to each model that measured its similarity to 15 core assumptions of the Ross–Macdonald model. Although the analysis illustrated a growing acknowledgement of geographical, ecological and epidemiological complexities in modelling transmission, most models during the past 40 years closely resemble the Ross–Macdonald model. Modern theory would benefit from an expansion around the concepts of heterogeneous mosquito biting, poorly mixed mosquito-host encounters, spatial heterogeneity and temporal variation in the transmission process.
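For reference, a standard textbook form of the Ross–Macdonald basic reproduction number (notation assumed; this expression is illustrative and not quoted from the survey itself) is:

```latex
% A standard form of the Ross--Macdonald basic reproduction number
% (notation assumed: m mosquitoes per human, a human-biting rate, b and c
% transmission probabilities per bite, g mosquito death rate, n extrinsic
% incubation period, r human recovery rate).
\[
  R_0 = \frac{m \, a^{2} \, b \, c \, e^{-g n}}{r \, g}
\]
```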