948 results for Information Model
Abstract:
The integration of separate, yet complementary, cortical pathways appears to play a role in visual perception and action when intercepting objects. The ventral system is responsible for object recognition and identification, while the dorsal system facilitates continuous regulation of action. This dual-system model implies that empirically manipulating different visual information sources during performance of an interceptive action might lead to the emergence of distinct gaze and movement pattern profiles. To test this idea, we recorded hand kinematics and eye movements of participants as they attempted to catch balls projected from a novel apparatus that synchronised or de-synchronised accompanying video images of a throwing action and ball trajectory. Results revealed that ball catching performance was less successful when patterns of hand movements and gaze behaviours were constrained by the absence of advance perceptual information from the thrower's actions. Under these task constraints, participants began tracking the ball later, followed less of its trajectory, and adapted their actions by initiating movements later and moving the hand faster. There were no performance differences when the throwing action image and ball speed were synchronised or de-synchronised, since hand movements were closely linked to information from the ball's trajectory. Results are interpreted relative to the two-visual-system hypothesis, demonstrating that accurate interception requires integration of advance visual information from the kinematics of the throwing action and from the ball's flight trajectory.
Abstract:
A long query provides more useful hints for retrieving relevant documents, but it is also likely to introduce noise that hurts retrieval performance. To mitigate this adverse effect, it is important to reduce noisy terms and to introduce and boost additional relevant terms. This paper presents a comprehensive framework, called the Aspect Hidden Markov Model (AHMM), which integrates query reduction and expansion for retrieval with long queries. It optimizes the probability distribution of query terms by exploiting intra-query term dependencies as well as the relationships between query terms and words observed in relevance feedback documents. Empirical evaluation on three large-scale TREC collections demonstrates that our approach, which is fully automatic, achieves salient improvements over various strong baselines and reaches performance comparable to a state-of-the-art method based on users' interactive query term reduction and expansion.
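To make the mechanism concrete, the sketch below (not the paper's actual AHMM; the aspect count, parameters, and terms are all hypothetical) shows how hidden "aspect" states can re-weight the terms of a long query via the forward-backward algorithm, down-weighting terms that are best explained by a noise aspect:

```python
# Minimal illustrative sketch: re-weight long-query terms with a tiny HMM
# whose hidden states act as "aspects". All names and numbers are hypothetical.
import numpy as np

query_terms = ["solar", "trough", "cheap", "collector", "efficiency"]

# Hypothetical HMM parameters with 2 aspect states.
start = np.array([0.6, 0.4])                 # initial state distribution
trans = np.array([[0.7, 0.3],                # state transition matrix
                  [0.4, 0.6]])
# Emission probability of each query term under each aspect state,
# e.g. estimated from relevance-feedback documents.
emit = np.array([[0.30, 0.05],   # solar
                 [0.25, 0.05],   # trough
                 [0.05, 0.40],   # cheap (noisy term: favoured by state 1)
                 [0.25, 0.10],   # collector
                 [0.15, 0.40]])  # efficiency

# Forward-backward: posterior state occupancy at each term position.
T, S = len(query_terms), len(start)
alpha = np.zeros((T, S))
beta = np.zeros((T, S))
alpha[0] = start * emit[0]
for t in range(1, T):
    alpha[t] = (alpha[t - 1] @ trans) * emit[t]
beta[-1] = 1.0
for t in range(T - 2, -1, -1):
    beta[t] = trans @ (emit[t + 1] * beta[t + 1])
gamma = alpha * beta
gamma /= gamma.sum(axis=1, keepdims=True)

# Weight each term by how strongly it belongs to the dominant aspect (state 0);
# terms mostly explained by the "noise" aspect are down-weighted.
weights = gamma[:, 0] / gamma[:, 0].sum()
for term, w in sorted(zip(query_terms, weights), key=lambda x: -x[1]):
    print(f"{term:10s} {w:.3f}")
```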
Abstract:
Too often the relationship between client and external consultants is perceived as one of protagonist versus antagonist. Stories of dramatic, failed consultancies abound, as do related anecdotal quips. A contributing factor in many "apparently" failed consultancies is a poor appreciation, by both the client and the consultant, of the client's true goals for the project and of how to assess progress toward these goals. This paper presents and analyses a measurement model for assessing client success when engaging an external consultant. Three main areas of assessment are identified: (1) the consultant's recommendations, (2) client learning, and (3) consultant performance. Engagement success is empirically measured along these dimensions through a series of case studies and a subsequent survey of clients and consultants involved in 85 computer-based information system selection projects. Validation of the model constructs suggests the existence of six distinct and individually important dimensions of engagement success. Both clients and consultants are encouraged to attend to these dimensions in pre-engagement proposal and selection processes, and in post-engagement evaluation of outcomes.
Abstract:
In this paper we introduce a formalization of Logical Imaging applied to IR in terms of Quantum Theory, through an analogy between the states of a quantum system and the terms in text documents. Our formalization relies upon the Schrödinger Picture, creating an analogy between the dynamics of a physical system and the kinematics of the probabilities generated by Logical Imaging. By using Quantum Theory, it is possible to model contextual information more precisely, in a seamless and principled fashion, within the Logical Imaging process. While further work is needed to empirically validate this, the foundations for doing so are provided.
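In rough terms, the analogy can be pictured as follows (a hedged illustration with assumed notation, not the paper's own formalism): in the Schrödinger Picture the state evolves while observables stay fixed, and Logical Imaging likewise revises the probability "state" over terms while the propositions (query and document) stay fixed.

```latex
% Hedged illustration; U(t) is the unitary evolution of the quantum state,
% and Im_d denotes the imaging revision of the term distribution on d.
\[
  |\psi(t)\rangle = U(t)\,|\psi(0)\rangle
  \qquad\longleftrightarrow\qquad
  P_d(\cdot) = \operatorname{Im}_d\!\big[P(\cdot)\big]
\]
```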
Abstract:
Retrieval with Logical Imaging is derived from belief revision and provides a novel mechanism for estimating the relevance of a document through logical implication (i.e. P(q -> d)). In this poster, we perform the first comprehensive evaluation of Logical Imaging (LI) in Information Retrieval (IR) across several TREC test collections. When compared against standard baseline models, we show that LI fails to improve performance. This failure can be attributed to a nuance within the model that means non-relevant documents are promoted in the ranking while relevant documents are demoted. This is an important contribution because it not only contextualizes the effectiveness of LI, but crucially explains why it fails. By addressing this nuance, future LI models could be significantly improved.
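For reference, one standard formulation of the imaging estimate (after Crestani and van Rijsbergen; the notation here is illustrative and not necessarily the poster's) moves each term's prior probability onto the most similar term occurring in the antecedent before the consequent is evaluated:

```latex
\[
  P(q \to d) \;=\; \sum_{t \in V} P(t)\, d\big(\sigma_q(t)\big)
\]
% V is the vocabulary; d(.) equals 1 on terms occurring in the document and
% 0 otherwise; sigma_q(t) is the term of q most similar to t. It is this
% probability-mass transfer that can end up promoting non-relevant documents.
```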
Abstract:
Social tagging systems are shown to exhibit a well-known cognitive heuristic, the guppy effect, which arises from the combination of different concepts. We present some empirical evidence of this effect, drawn from a popular social tagging Web service. The guppy effect is then described using a quantum-inspired formalism that has already been successfully applied to model the conjunction fallacy and probability judgement errors. Key to the formalism is the concept of interference, which is able to capture and quantify the strength of the guppy effect.
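For reference, in the style of Aerts' quantum concept-combination models (the exact notation here is assumed rather than quoted), the membership weight of a conjunction is the classical average of the component weights plus an interference term whose magnitude quantifies the guppy effect:

```latex
\[
  \mu(A \,\text{and}\, B)
  \;=\; \frac{\mu(A) + \mu(B)}{2}
        \;+\; \underbrace{\sqrt{\mu(A)\,\mu(B)}\,\cos\theta}_{\text{interference}}
\]
% mu(A), mu(B) are the membership weights of the two concepts and theta is
% the phase angle between their state vectors; theta = pi/2 recovers the
% classical average, while other phases over- or under-extend it.
```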
Abstract:
The notion of certificateless public-key encryption (CL-PKE) was introduced by Al-Riyami and Paterson in 2003 to avoid the drawbacks of both traditional PKI-based public-key encryption (i.e., the need to establish a public-key infrastructure) and identity-based encryption (i.e., key escrow). Thus CL-PKE, like identity-based encryption, is certificate-free and, unlike identity-based encryption, is key-escrow-free. In this paper, we introduce a simple and efficient CCA-secure CL-PKE scheme based on (hierarchical) identity-based encryption. Our construction is of both theoretical and practical interest. First, our generic transformation gives a new way of constructing CCA-secure CL-PKE. Second, instantiating our transformation with lattice-based primitives yields a more efficient CCA-secure CL-PKE scheme than its counterpart introduced by Dent in 2008.
Abstract:
Complex numbers are a fundamental aspect of the mathematical formalism of quantum physics. Quantum-like models developed outside physics have often overlooked the role of complex numbers; in particular, previous models in Information Retrieval (IR) ignored them. We argue that to advance the use of quantum models in IR, one has to lift the constraint of real-valued representations of the information space and package more information within the representation by means of complex numbers. As a first attempt, we propose a complex-valued representation for IR that explicitly uses complex-valued Hilbert spaces, in which terms, documents and queries are represented as complex-valued vectors. The proposal consists of integrating distributional semantics evidence within the real component of a term vector, whereas ontological information is encoded in the imaginary component. Our proposal has the merit of lifting the role of complex numbers from a computational byproduct of the model to the very mathematical texture that unifies different levels of semantic information. An empirical instantiation of our proposal is tested in the TREC Medical Record task of retrieving cohorts for clinical studies.
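As a minimal sketch of this representation (the vocabulary, weights, and scoring rule below are hypothetical, not those of the evaluated system), the two evidence sources can be packed into one complex weight per term and scored with a Hermitian inner product:

```python
# Hypothetical sketch of a complex-valued term representation: the real part
# of each component carries distributional-semantics evidence, the imaginary
# part ontological evidence.
import numpy as np

vocab = ["diabetes", "insulin", "glucose", "therapy"]

def term_vector(distributional, ontological):
    """Pack the two evidence sources into one complex weight per term."""
    return np.array(distributional) + 1j * np.array(ontological)

# Hypothetical weights for a document and a query over the same vocabulary.
doc   = term_vector([0.8, 0.5, 0.3, 0.1], [0.9, 0.7, 0.6, 0.0])
query = term_vector([0.7, 0.0, 0.4, 0.0], [0.8, 0.0, 0.5, 0.0])

# One natural scoring choice: the modulus of the Hermitian inner product,
# so both levels of semantic evidence contribute to the ranking score.
score = np.abs(np.vdot(query, doc))   # vdot conjugates its first argument
print(f"score(q, d) = {score:.3f}")
```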
Abstract:
Numeric set watermarking is a way to provide ownership proof for numerical data. Numerical data can be considered primitives for multimedia types such as images and videos, since these are organized forms of numeric information. Thus, the capability to watermark numerical data directly implies the capability to watermark multimedia objects and to discourage information theft on social networking sites and the Internet in general. Unfortunately, there has been very limited research in the field of numeric set watermarking, owing to underlying limitations on the number of items in the set and on the least significant bits (LSBs) available for watermarking in each item. In 2009, Gupta et al. proposed a numeric set watermarking model that embeds watermark bits in the items of the set based on a hash value of the items' most significant bits (MSBs). If an item is chosen for watermarking, a watermark bit is embedded in its least significant bits, and the replaced bit is inserted in the fractional value to provide reversibility. The authors show their scheme to be resilient against the traditional subset addition, deletion, and modification attacks, as well as against secondary watermarking attacks. In this paper, we present a bucket attack on this watermarking model. The attack consists of creating buckets of items with the same MSBs and determining whether the items of each bucket carry watermark bits. Experimental results show that the bucket attack is very strong and destroys the entire watermark with close to a 100% success rate. We examine the inherent weaknesses in the watermarking model of Gupta et al. that leave it vulnerable to the bucket attack, and we propose potential safeguards that can provide resilience against this attack.
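The core step of the attack can be sketched as follows (bit widths and data are illustrative, not the paper's experimental setup): because the hash that selects items for embedding depends only on the preserved MSBs, an attacker can bucket items by their MSBs and overwrite every candidate embedding bit without knowing the secret key.

```python
# Simplified sketch of a bucket attack. Items sharing the same MSBs form a
# "bucket" that the hash-based selection marked as a group or left alone, so
# perturbing the low-order bits of every bucket member destroys any embedded
# watermark bit while leaving the selection-relevant MSBs intact.
import random

MSB_BITS, LSB_BITS, BIT_WIDTH = 8, 2, 16

def bucket_attack(items, seed=0):
    rng = random.Random(seed)
    buckets = {}
    for x in items:
        msb = x >> (BIT_WIDTH - MSB_BITS)       # bucket key: the shared MSBs
        buckets.setdefault(msb, []).append(x)
    attacked = []
    for msb, members in buckets.items():
        for x in members:
            # Randomise the candidate embedding bits; the MSBs are untouched,
            # so utility loss is small while every possible watermark bit is
            # overwritten.
            noise = rng.getrandbits(LSB_BITS)
            attacked.append((x & ~((1 << LSB_BITS) - 1)) | noise)
    return attacked

print(bucket_attack([52314, 52317, 40961, 40963]))
```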
Abstract:
This thesis presents novel techniques for addressing the problems of continuous change and inconsistencies in large process model collections. The developed techniques treat process models as a collection of fragments and facilitate version control, standardization and automated process model discovery using fragment-based concepts. Experimental results show that the presented techniques are beneficial in consolidating large process model collections, specifically when there is a high degree of redundancy.
Abstract:
Spatially explicit modelling of grassland classes is important to site-specific planning for improving grassland and environmental management over large areas. In this study, a climate-based grassland classification model, the Comprehensive and Sequential Classification System (CSCS), was integrated with spatially interpolated climate data to classify grassland in Gansu province, China. The study area is characterized by complex topographic features imposed by plateaus, high mountains, basins and deserts. To improve the quality of the interpolated climate data and of the spatial classification over this complex topography, three linear regression methods for interpolating climate variables were evaluated: an analytic method based on multiple regression and residuals (AMMRR); a modification of AMMRR that adds the effects of slope and aspect to the interpolation analysis (M-AMMRR); and a method that replaces the IDW approach for residual interpolation in M-AMMRR with an ordinary kriging approach (I-AMMRR). The interpolation outcomes from the best method were then used in the CSCS model to classify the grassland in the study area. The interpolated climate variables were the annual cumulative temperature and the annual total precipitation. The results indicated that the AMMRR and M-AMMRR methods generated acceptable climate surfaces, but the best model fit and cross-validation results were achieved by the I-AMMRR method. Twenty-six grassland classes were identified for the study area. The four grassland vegetation classes that covered more than half of the total study area were "cool temperate-arid temperate zonal semi-desert", "cool temperate-humid forest steppe and deciduous broad-leaved forest", "temperate-extra-arid temperate zonal desert", and "frigid per-humid rain tundra and alpine meadow". The vegetation classification map generated in this study provides spatial information on the locations and extents of the different grassland classes. This information can be used to facilitate government agencies' decision-making in land-use planning and environmental management, and for vegetation and biodiversity conservation. It can also assist land managers in estimating safe carrying capacities, which will help to prevent overgrazing and land degradation.
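As a rough sketch of the regression-plus-residual idea behind these interpolators (the data are synthetic, and ordinary kriging is stood in for by a Gaussian process, its usual statistical formulation):

```python
# Hedged sketch of AMMRR-style interpolation: fit a linear trend on
# topographic covariates, then interpolate the residuals spatially.
# All station data below are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
n = 200
# Synthetic stations: longitude, latitude, elevation -> annual precipitation.
X = np.column_stack([rng.uniform(92, 109, n),     # lon (deg)
                     rng.uniform(32, 43, n),      # lat (deg)
                     rng.uniform(800, 4500, n)])  # elevation (m)
precip = 900 - 0.12 * X[:, 2] + 15 * (X[:, 1] - 32) + rng.normal(0, 25, n)

# Step 1: multiple linear regression on the covariates (the trend surface).
trend = LinearRegression().fit(X, precip)
residuals = precip - trend.predict(X)

# Step 2: interpolate the residuals spatially. I-AMMRR uses ordinary kriging
# here (approximated by a Gaussian process); M-AMMRR would use IDW instead.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0), alpha=25.0)
gp.fit(X[:, :2], residuals)

# Prediction at an unsampled site = trend + interpolated residual.
site = np.array([[100.0, 36.0, 2200.0]])
estimate = trend.predict(site) + gp.predict(site[:, :2])
print(f"estimated annual precipitation: {estimate[0]:.1f} mm")
```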
Abstract:
In today's business world, where competition among companies is intense, quality in manufacturing and in the provision of products and services can be considered a means by which companies pursue excellence and success in the competitive arena. With the arrival of the e-commerce era and the emergence of new production systems and new organizational structures, traditional management and quality assurance systems have been challenged. Consequently, the quality information system has gained a special place as one of the new tools of quality management. In this paper, the quality information system is studied through a review of the literature, and the role and position of the quality information system (QIS) among the other information systems of an organization is investigated. Existing quality information system models are analyzed and assessed, and on that basis a conceptual, hierarchical model of the quality information system is proposed and studied. As a case study, the hierarchical model is developed by evaluating hierarchical models presented in the field of quality information systems, based on Shetabkar Co.
Abstract:
The parabolic trough concentrating collector is the most mature, proven and widespread technology for exploiting solar energy on a large scale in medium-temperature applications. Assessment of the opportunities and possibilities of the collector system relies on its optical performance. A reliable Monte Carlo ray-tracing model of a parabolic trough collector is developed using the Zemax software. The optical performance of an ideal collector depends on the solar spectral distribution and sunshape, and on the spectral selectivity of the associated components. Therefore, each step of the model, including the spectral distribution of the solar energy, the trough reflectance, the glazing anti-reflection coating and the absorber selective coating, is explained and verified. The radiation flux distribution around the receiver and the optical efficiency, two basic outputs of the optical simulation, are calculated using the model and verified against a widely accepted analytical profile and measured values, respectively; reasonably good agreement is obtained. Further investigations are carried out to analyse the characteristics of the radiation distribution around the receiver tube under different insolation levels, envelope conditions and receiver selective coatings, and the impact of light scattered from the receiver surface on the efficiency. Moreover, the model is capable of analysing the optical performance under variable sunshape, tracking error and collector imperfections, including absorber misalignment with the focal line and absorber de-focus, as well as different rim angles and geometric concentrations. The current optical model can play a significant role in understanding the optical behaviour of a trough collector and can be employed to extract useful information on optical performance. In the long run, this optical model will pave the way for the construction of a low-cost standalone photovoltaic and thermal hybrid collector in Australia for small-scale domestic hot water and electricity production.
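As a toy illustration of the Monte Carlo ray-tracing principle involved (a didactic 2-D sketch with assumed dimensions, not the paper's Zemax model):

```python
# Toy 2-D Monte Carlo ray trace of an ideal parabolic trough: rays with a
# simple Gaussian sunshape hit the mirror, reflect specularly, and are binned
# by where they strike the receiver tube. All dimensions are hypothetical.
import numpy as np

f, half_aperture, R = 1.71, 2.5, 0.035   # focal length, half aperture, tube radius (m)
sun_sigma = 4.65e-3                      # assumed sunshape spread (rad)
rng = np.random.default_rng(1)
n_rays = 100_000

x = rng.uniform(-half_aperture, half_aperture, n_rays)   # hit point on mirror
y = x**2 / (4 * f)                                       # parabola y = x^2/(4f)
delta = rng.normal(0.0, sun_sigma, n_rays)               # angular sun spread
d = np.column_stack([np.sin(delta), -np.cos(delta)])     # incoming directions

# Mirror normal at each hit point and specular reflection r = d - 2(d.n)n.
n = np.column_stack([-x / (2 * f), np.ones(n_rays)])
n /= np.linalg.norm(n, axis=1, keepdims=True)
r = d - 2 * np.sum(d * n, axis=1, keepdims=True) * n

# Intersect reflected rays with the receiver tube centred on the focus (0, f).
oc = np.column_stack([x, y - f])          # ray origin relative to tube centre
b = np.sum(oc * r, axis=1)
c = np.sum(oc * oc, axis=1) - R**2
disc = b**2 - c
hit = disc > 0
t = -b[hit] - np.sqrt(disc[hit])          # distance to nearest intersection
p = oc[hit] + t[:, None] * r[hit]         # hit point on the tube surface

# Circumferential flux distribution; theta = 0 at the point facing the mirror.
theta = np.arctan2(p[:, 0], -p[:, 1])
counts, edges = np.histogram(theta, bins=36, range=(-np.pi, np.pi))
print("intercept factor:", hit.mean())
print("peak bin centre (deg):", np.degrees(edges[counts.argmax()] + np.pi / 36))
```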
Abstract:
This paper presents ongoing work toward constructing an efficient, completely non-malleable public-key encryption scheme based on lattices in the standard (common reference string) model. An encryption scheme is completely non-malleable if it requires attackers to have negligible advantage even when they are allowed to transform the public key under which the related message is encrypted. Ventre and Visconti proposed two inefficient constructions of completely non-malleable schemes: one in the common reference string model using non-interactive zero-knowledge proofs, and another using interactive encryption schemes. Recently, two efficient public-key encryption schemes have been proposed, both of them based on pairing-based identity-based encryption.