916 results for Rough Set
Abstract:
Over the last decade, the majority of existing search techniques have been either keyword-based or category-based, resulting in unsatisfactory effectiveness. Meanwhile, studies have illustrated that more than 80% of users prefer personalized search results. As a result, many studies have devoted a great deal of effort (referred to as collaborative filtering) to investigating personalized notions for enhancing retrieval performance. One of the fundamental yet most challenging steps is to capture precise user information needs. Most Web users are inexperienced or lack the capability to express their needs properly, whereas existing retrieval systems are highly sensitive to vocabulary.

Researchers have increasingly proposed the utilization of ontology-based techniques to improve current mining approaches. The related techniques are not only able to refine search intentions within specific generic domains, but also to access new knowledge by tracking semantic relations. In recent years, some researchers have attempted to build ontological user profiles according to discovered user background knowledge. The knowledge is drawn from both global and local analyses, which aim to produce tailored ontologies from a group of concepts. However, a key problem that has not been addressed is how to accurately match diverse local information to universal global knowledge.

This research conducts a theoretical study on the use of personalized ontologies to enhance text mining performance. The objective is to understand user information needs by a "bag-of-concepts" rather than "words". The concepts are gathered from a general world knowledge base named the Library of Congress Subject Headings. To return desirable search results, a novel ontology-based mining approach is introduced to discover accurate search intentions and learn personalized ontologies as user profiles. The approach can not only pinpoint users' individual intentions in a rough hierarchical structure, but can also interpret their needs by a set of acknowledged concepts. Along with global and local analyses, a solid concept matching approach is carried out to address the mismatch between local information and world knowledge. Relevance features produced by the Relevance Feature Discovery model are taken as representatives of local information. These features have been proven to be the best alternative to user queries for avoiding ambiguity, and they consistently outperform the features extracted by other filtering models.

The two proposed approaches are both evaluated in a scientific evaluation with the standard Reuters Corpus Volume 1 testing set. A comprehensive comparison is made with a number of state-of-the-art baseline models, including TF-IDF, Rocchio, Okapi BM25, the deploying Pattern Taxonomy Model, and an ontology-based model. The gathered results indicate that top precision can be improved remarkably with the proposed ontology mining approach, and that the matching approach is successful, achieving significant improvements on most information filtering measures.

This research contributes to the fields of ontological filtering, user profiling, and knowledge representation. The related outputs are critical when systems are expected to return proper mining results and provide personalized services. The scientific findings have the potential to facilitate the design of advanced preference mining models that impact people's daily lives.
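To make the "bag-of-concepts" idea concrete, here is a minimal toy sketch (an illustration only, not the thesis implementation; the profile weights, the example concept labels, and the scoring function are all assumptions):

    # Toy sketch: rank a document by the overlap between its concepts and a
    # personalized concept profile learned from user feedback (all values
    # hypothetical, not from the thesis).
    def concept_score(doc_concepts, profile):
        return sum(profile.get(c, 0.0) for c in doc_concepts)

    profile = {"Data mining": 0.9, "Ontologies": 0.7, "Rough sets": 0.4}
    print(concept_score({"Ontologies", "Rough sets", "Libraries"}, profile))  # 1.1

The point of such a representation is that matching happens at the level of acknowledged subject concepts rather than raw query words, which sidesteps the vocabulary sensitivity noted above.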
Abstract:
In Thomas Mann’s tetralogy of the 1930s and 1940s, Joseph and His Brothers, the narrator declares history is not only “that which has happened and that which goes on happening in time,” but it is also “the stratified record upon which we set our feet, the ground beneath us.” By opening up history to its spatial, geographical, and geological dimensions Mann both predicts and encapsulates the twentieth century’s “spatial turn,” a critical shift that divested geography of its largely passive role as history’s “stage” and brought to the fore intersections between the humanities and the earth sciences. In this paper, I draw out the relationships between history, narrative, geography, and geology revealed by this spatial turn and the questions these pose for thinking about the disciplinary relationship between geography and the humanities. As Mann’s statement exemplifies, the spatial turn itself has often been captured most strikingly in fiction, and I would argue nowhere more so than in Graham Swift’s Waterland (1983) and Anne Michaels’s Fugitive Pieces (1996), both of which present space, place, and landscape as having a palpable influence on history and memory. The geographical/geological line that runs through both Waterland and Fugitive Pieces continues through Tim Robinson’s non-fictional, two-volume “topographical” history Stones of Aran. Robinson’s Stones of Aran—which is not history, not geography, and not literature, and yet is all three—constructs an imaginative geography that renders inseparable geography, geology, history, memory, and the act of writing.
Abstract:
Numeric set watermarking is a way to provide ownership proof for numerical data. Numerical data can be considered a primitive for multimedia types such as images and videos, since these are organized forms of numeric information. Thereby, the capability to watermark numerical data directly implies the capability to watermark multimedia objects and to discourage information theft on social networking sites and the Internet in general. Unfortunately, very limited research has been done in the field of numeric set watermarking, owing to underlying limitations on the number of items in the set and the least significant bits (LSBs) in each item available for watermarking. In 2009, Gupta et al. proposed a numeric set watermarking model that embeds watermark bits in the items of the set based on a hash value of the items’ most significant bits (MSBs). If an item is chosen for watermarking, a watermark bit is embedded in the LSBs, and the replaced bit is inserted in the fractional value to provide reversibility. The authors show their scheme to be resilient against the traditional subset addition, deletion, and modification attacks, as well as secondary watermarking attacks. In this paper, we present a bucket attack on this watermarking model. The attack consists of creating buckets of items with the same MSBs and determining whether the items of each bucket carry watermark bits. Experimental results show that the bucket attack is very strong and destroys the entire watermark with close to 100% success rate. We examine the inherent weaknesses in the watermarking model of Gupta et al. that leave it vulnerable to the bucket attack, and propose potential safeguards that can provide resilience against this attack.
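A minimal sketch of the bucketing idea behind such an attack (illustrative assumptions only: the fixed bit-width, the MSB prefix length, and the LSB mask are hypothetical parameters, and the real attack additionally tests statistically which buckets carry marks before deciding):

    from collections import defaultdict

    WIDTH = 16      # assumed fixed bit-width of each item
    MSB_BITS = 6    # assumed MSB prefix used for bucketing
    LSB_BITS = 2    # assumed LSB positions where marks could hide

    def bucket_attack(items):
        """Group items by their MSB prefix, then perturb every item's LSBs;
        any watermark bits stored in the LSBs are destroyed."""
        buckets = defaultdict(list)
        for x in items:
            buckets[x >> (WIDTH - MSB_BITS)].append(x)
        mask = (1 << LSB_BITS) - 1
        return {k: [x ^ mask for x in v] for k, v in buckets.items()}

    # Two items sharing an MSB prefix land in the same bucket:
    print(bucket_attack([0b1010100000000011, 0b1010100000000101,
                         0b0001000000000000]))

Because items with identical MSBs hash identically, they receive identical embedding decisions, which is the structural weakness the bucketing exploits.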
Abstract:
Motivated by the need for private set operations in a distributed environment, we extend the two-party private matching problem proposed by Freedman, Nissim and Pinkas (FNP) at Eurocrypt’04 to the distributed setting. Using a secret sharing scheme, we provide a distributed solution to the FNP private matching problem, called distributed private matching. In our distributed private matching scheme, we use a polynomial to represent one party’s dataset, as in FNP, and then distribute the polynomial to multiple servers. We extend our solution to distributed set intersection and the cardinality of the intersection, and we further show how to apply distributed private matching to compute the distributed subset relation. Our work extends the private matching and set intersection primitives of Freedman et al. Our distributed construction may be of great value when the dataset is outsourced and its privacy is the main concern: in such cases, our distributed solutions preserve the utility of these set operations without compromising dataset privacy. Compared with previous works, we achieve a more computationally efficient solution. All protocols constructed in this paper are provably secure against a semi-honest adversary under the Decisional Diffie-Hellman assumption.
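The polynomial set representation used here follows the well-known FNP idea: a dataset is encoded as the roots of a polynomial, so membership reduces to polynomial evaluation, and the coefficients are what get distributed to the servers. A minimal sketch of the algebraic core over a toy prime field (illustrative only; the actual protocol additionally blinds the evaluation under homomorphic encryption, and the distributed variant secret-shares the coefficients):

    # Sketch of the FNP-style set encoding (toy field, no encryption or
    # secret sharing shown; names are illustrative).
    P = 2**31 - 1  # toy prime modulus

    def poly_from_set(S, p=P):
        """Coefficients (low degree first) of prod_{s in S} (x - s) mod p."""
        coeffs = [1]
        for s in S:
            shifted = [0] + coeffs                          # x * poly
            scaled = [(-s * c) % p for c in coeffs] + [0]   # -s * poly
            coeffs = [(a + b) % p for a, b in zip(shifted, scaled)]
        return coeffs

    def eval_poly(coeffs, x, p=P):
        acc = 0
        for c in reversed(coeffs):  # Horner's rule
            acc = (acc * x + c) % p
        return acc

    c = poly_from_set({3, 7, 11})
    assert eval_poly(c, 7) == 0    # member: the polynomial vanishes
    assert eval_poly(c, 8) != 0    # non-member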
Abstract:
This paper proposes a method for designing set-point regulation controllers for a class of underactuated mechanical systems in Port-Hamiltonian System (PHS) form. A new set of potential shape variables in closed loop is proposed, which can replace the set of open-loop shape variables (the configuration variables that appear in the kinetic energy). With this choice, the closed-loop potential energy contains free functions of the new variables. By expressing the regulation objective in terms of these new potential shape variables, the desired equilibrium can be assigned, and there is freedom to reshape the potential energy to achieve performance whilst maintaining the PHS form in closed loop. This complements contemporary results in the literature, which preserve the open-loop shape variables. As a case study, we consider a robotic manipulator mounted on a flexible base and compensate for the motion of the base while positioning the end effector with respect to the ground reference. We compare the proposed control strategy with special cases that correspond to other energy-shaping strategies previously proposed in the literature.
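For orientation, the generic input-state-output port-Hamiltonian form underlying this class of designs (a standard template from the literature, not the paper's specific model) is

    \dot{x} = [J(x) - R(x)] \nabla H(x) + g(x) u, \qquad y = g(x)^\top \nabla H(x)

where J(x) = -J(x)^\top is the interconnection matrix, R(x) = R(x)^\top \ge 0 the dissipation matrix, H(x) the total (kinetic plus potential) energy, and g(x) the input map. Energy-shaping control seeks a closed loop of the same form with a new Hamiltonian H_d whose minimum sits at the desired equilibrium; the freedom described above comes from being able to choose the potential part of H_d as a free function of the new shape variables.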
Abstract:
Background: Flavonoids such as anthocyanins, flavonols and proanthocyanidins play a central role in fruit colour, flavour and health attributes. In peach and nectarine (Prunus persica) these compounds vary during fruit growth and ripening. Flavonoids are produced by a well-studied pathway that is transcriptionally regulated by members of the MYB and bHLH transcription factor families. We have isolated nectarine flavonoid regulating genes and examined their expression patterns, which suggests a critical role in the regulation of flavonoid biosynthesis.

Results: In nectarine, expression of the genes encoding enzymes of the flavonoid pathway correlated with the concentration of proanthocyanidins, which strongly increases at mid-development. In contrast, the only gene which showed a pattern similar to anthocyanin concentration was UDP-glucose-flavonoid-3-O-glucosyltransferase (UFGT), which was high at the beginning and end of fruit growth and remained low during the other developmental stages. Expression of flavonol synthase (FLS1) correlated with flavonol levels, both temporally and in a tissue-specific manner. The pattern of UFGT gene expression may be explained by the involvement of different transcription factors, which either up-regulate (MYB10, MYB123, and bHLH3) or repress (MYB111 and MYB16) the transcription of the biosynthetic genes. The expression of a potential proanthocyanidin-regulating transcription factor, MYBPA1, corresponded with proanthocyanidin levels. Functional assays of these transcription factors were used to test their specificity for flavonoid regulation.

Conclusions: MYB10 positively regulates the promoters of UFGT and dihydroflavonol 4-reductase (DFR) but not leucoanthocyanidin reductase (LAR). In contrast, MYBPA1 trans-activates the promoters of DFR and LAR, but not UFGT. This suggests exclusive roles for anthocyanin regulation by MYB10 and proanthocyanidin regulation by MYBPA1. Further, these transcription factors appeared to be responsive to both developmental and environmental stimuli.
Abstract:
There’s a diagram that does the rounds online that neatly sums up the difference between the quality of equipment used in the studio to produce music, and the quality of the listening equipment used by the consumer...
Abstract:
Existing multi-model approaches for image set classification extract local models by clustering each image set individually only once, with the fixed clusters then used for matching with other image sets. However, this may result in the two closest clusters representing different characteristics of an object, due to undesirable environmental conditions such as variations in illumination and pose. To address this problem, we propose to constrain the clustering of each query image set by forcing the clusters to resemble the clusters in the gallery image sets. We first define a Frobenius norm distance between subspaces over Grassmann manifolds based on reconstruction error. We then extract local linear subspaces from a gallery image set via sparse representation. For each local linear subspace, we adaptively construct the corresponding closest subspace from the samples of a probe image set by joint sparse representation. We show that by minimising the sparse representation reconstruction error, we approach the nearest point on a Grassmann manifold. Experiments on the Honda, ETH-80 and Cambridge-Gesture datasets show that the proposed method consistently outperforms several other recent techniques, such as Affine Hull based Image Set Distance (AHISD), Sparse Approximated Nearest Points (SANP) and Manifold Discriminant Analysis (MDA).
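As a point of reference for the subspace distance mentioned above, here is a sketch of the standard projection Frobenius-norm metric between two linear subspaces on a Grassmann manifold (a textbook formulation under my own naming, not the paper's code or its reconstruction-error variant):

    # Projection-metric distance between subspaces spanned by orthonormal bases.
    import numpy as np

    def grassmann_projection_distance(A, B):
        """A, B: (d x k) matrices with orthonormal columns."""
        PA = A @ A.T   # orthogonal projector onto span(A)
        PB = B @ B.T
        return np.linalg.norm(PA - PB, "fro") / np.sqrt(2)

    # Example: two random 1-D subspaces of R^3
    A = np.linalg.qr(np.random.randn(3, 1))[0]
    B = np.linalg.qr(np.random.randn(3, 1))[0]
    print(grassmann_projection_distance(A, B))

Representing each local model as a point on the Grassmann manifold is what lets a reconstruction error over sample sets be related to a distance between subspaces.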
Abstract:
In CB Richard Ellis (C) Pty Ltd v Wingate Properties Pty Ltd [2005] QDC 399 McGill DCJ examined whether the court now has a discretion to set aside an irregularly entered default judgment.
Abstract:
In C & E Pty Ltd v Corrigan [2006] QCA 47, the Queensland Court of Appeal considered whether r103 of the Uniform Civil Procedure Rules applied to the service of an application to set aside a statutory demand under s459G of the Corporations Act 2001 (Cth). The decision provides analysis and clarification of an issue that has clearly been one of some uncertainty.
Abstract:
Numeric sets can be used to store and distribute important information such as currency exchange rates and stock forecasts. It is useful to watermark such data to prove ownership in case of illegal distribution. This paper analyzes the numerical set watermarking model presented by Sion et al. in “On watermarking numeric sets”, identifies its weaknesses, and proposes a novel scheme that overcomes these problems. One of the weaknesses of Sion’s watermarking scheme is the requirement of a normally-distributed set, which does not hold for many numeric sets such as forecast figures. Experiments indicate that the scheme is also susceptible to subset addition and secondary watermarking attacks. The watermarking model we propose can be used for numeric sets with arbitrary distribution. Theoretical analysis and experimental results show that the scheme is strongly resilient against sorting, subset selection, subset addition, distortion, and secondary watermarking attacks.
Abstract:
Ever since Cox et al. published their paper “A Secure, Robust Watermark for Multimedia” in 1996 [6], there has been tremendous progress in multimedia watermarking. The same pattern re-emerged when Agrawal and Kiernan published their work “Watermarking Relational Databases” in 2001 [1]. However, little attention has been given to primitive data collections, with only a handful of research works known to the authors [11, 10]. This is primarily due to the absence of an attribute that differentiates marked items from unmarked items during the insertion and detection processes. This paper presents a distribution-independent watermarking model that is secure against secondary watermarking in addition to conventional attacks such as data addition, deletion, and distortion. Low false positives and high capacity provide additional strength to the scheme. These claims are backed by the experimental results provided in the paper.
Abstract:
Favel Parrett’s second novel, When the Night Comes, opens with its teenage protagonist Isla lying awake in her bunk on a night ferry to Tasmania in the mid-1980s, ‘waiting for the rough seas’. Her younger brother sleeps beside her, and her distracted, emotionally distant mother – the kind of woman who is ‘always sitting places by herself in the night’ – is smoking on deck. Together, the three are weathering the roiling overnight passage in order to escape a violent past and make a new life in Hobart. The rough seas the novel goes on to navigate are, as one might expect, both literal and metaphorical...
Abstract:
Measurements of half-field beam penumbra were taken using EBT2 film for a variety of blocking techniques. It was shown that minimizing the SSD reduces the penumbra, as the effects of beam divergence are diminished. The addition of a lead block directly on the surface provides optimal results, with a 10-90% penumbra of 0.53 ± 0.02 cm. To resolve the uncertainties encountered in film measurements, future Monte Carlo simulations of half-field penumbras are to be conducted.
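For readers unfamiliar with the 10-90% penumbra figure quoted above, it is the lateral distance over which the dose profile rises from 10% to 90% of its plateau value. A minimal sketch of reading it off a sampled profile (the synthetic sigmoid edge and all parameter values are made up for illustration, not the paper's data):

    import numpy as np

    def penumbra_width(x, dose):
        """10-90% width of a monotonically rising field edge, by interpolation."""
        d = dose / dose.max()          # normalise to the plateau maximum
        x10 = np.interp(0.10, d, x)    # position of the 10% dose level
        x90 = np.interp(0.90, d, x)    # position of the 90% dose level
        return abs(x90 - x10)

    x = np.linspace(0.0, 2.0, 201)                     # cm, toy axis
    dose = 1.0 / (1.0 + np.exp(-(x - 1.0) / 0.08))     # synthetic field edge
    print(round(penumbra_width(x, dose), 3))           # ~0.35 for this toy edge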
Abstract:
With the overwhelming increase in the amount of data on the web and in databases, many text mining techniques have been proposed for mining useful patterns in text documents. Extracting closed sequential patterns using the Pattern Taxonomy Model (PTM) is one pruning method for removing noisy, inconsistent, and redundant patterns. However, the PTM treats each extracted pattern as a whole without considering its constituent terms, which can affect the quality of the extracted patterns. This paper proposes an innovative and effective method that extends random sets to accurately weight patterns based on their distribution in the documents and the distribution of their terms within patterns. The proposed approach then finds the specific closed sequential patterns (SCSP) based on the newly calculated weights. Experimental results on the Reuters Corpus Volume 1 (RCV1) data collection and TREC topics show that the proposed method significantly outperforms other state-of-the-art methods on several popular measures.
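To illustrate the kind of term-level weighting this contrasts with pattern-as-a-whole treatment, here is a toy sketch in the spirit of pattern deploying (a generic illustration under assumed names and values, not the paper's SCSP formulation): each pattern distributes its support evenly over its terms, and term weights accumulate across patterns.

    from collections import defaultdict

    def deploy(patterns):
        """patterns: list of (terms, support) pairs; returns term -> weight."""
        w = defaultdict(float)
        for terms, support in patterns:
            for t in terms:
                w[t] += support / len(terms)   # share support among terms
        return dict(w)

    print(deploy([({"mine", "text"}, 0.4),
                  ({"text", "pattern", "set"}, 0.3)]))
    # "text" accumulates weight from both patterns: 0.2 + 0.1 = 0.3

Weighting at the term level is what allows two patterns with equal support but different internal term distributions to be ranked differently.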