17 results for Syntax
at QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
Measuring the degree of inconsistency of a belief base is an important issue in many real-world applications. It has been increasingly recognized that deriving syntax-sensitive inconsistency measures for a belief base from its minimal inconsistent subsets is a natural way forward. Most current proposals along this line do not take into account the impact of the size of each minimal inconsistent subset. However, as illustrated by the well-known Lottery Paradox, as the size of a minimal inconsistent subset increases, the degree of its inconsistency decreases. Another gap in current studies concerns the role of the free formulas of a belief base in measuring its degree of inconsistency, which has not yet been well characterized. Adding free formulas to a belief base enlarges the set of consistent subsets of that base. However, consistent subsets also have an impact on syntax-sensitive normalized measures of the degree of inconsistency, because each consistent subset can be considered a distinctive plausible perspective reflected by that belief base, whilst each minimal inconsistent subset projects a distinctive view of the inconsistency. To address these two issues, we propose a normalized framework for measuring the degree of inconsistency of a belief base which unifies the impact of both consistent subsets and minimal inconsistent subsets. We also show that this normalized framework satisfies all the properties deemed necessary by common consent to characterize an intuitively satisfactory measure of the degree of inconsistency for belief bases. Finally, we use a simple but explanatory example in requirements engineering to illustrate the application of the normalized framework.
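A minimal sketch of the underlying machinery this abstract builds on: computing the minimal inconsistent subsets of a tiny propositional belief base by brute force, and one size-sensitive measure from this line of work (summing 1/|M| over each minimal inconsistent subset M, so larger conflicts count for less, in the spirit of the Lottery Paradox). This is not the paper's normalized framework; the belief base and helper names are invented for illustration.

```python
from itertools import combinations, product

def satisfiable(formulas, atoms):
    """Brute-force check: does some truth assignment satisfy every formula?"""
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(f(v) for f in formulas):
            return True
    return False

def minimal_inconsistent_subsets(base, atoms):
    """All subset-minimal inconsistent subsets of a belief base.

    Iterating by increasing size means any unsatisfiable subset that
    contains no previously found MIS is itself minimal."""
    mis = []
    for r in range(1, len(base) + 1):
        for subset in combinations(base, r):
            if satisfiable([f for _, f in subset], atoms):
                continue
            if not any(set(m) <= set(subset) for m in mis):
                mis.append(subset)
    return mis

atoms = ["a", "b"]
# Belief base {a, ¬a, b}: b is a "free" formula (it occurs in no MIS).
base = [("a", lambda v: v["a"]),
        ("¬a", lambda v: not v["a"]),
        ("b", lambda v: v["b"])]

mis = minimal_inconsistent_subsets(base, atoms)
# Size-sensitive measure: each MIS M contributes 1/|M|.
measure = sum(1 / len(m) for m in mis)
print([[name for name, _ in m] for m in mis], measure)  # [['a', '¬a']] 0.5
```

The free formula b never appears in a minimal inconsistent subset, yet adding it enlarges the set of consistent subsets, which is exactly the effect the paper's normalized framework takes into account.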
Abstract:
Many plain text information hiding techniques demand deep semantic processing, and so suffer in reliability. In contrast, syntactic processing is a more mature and reliable technology. Assuming a perfect parser, this paper evaluates a set of automated and reversible syntactic transforms that can hide information in plain text without changing the meaning or style of a document. A large representative collection of newspaper text is fed through a prototype system. In contrast to previous work, the output is subjected to human testing to verify that the text has not been significantly compromised by the information hiding procedure, yielding a success rate of 96% and bandwidth of 0.3 bits per sentence. © 2007 SPIE-IS&T.
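A toy illustration of the general idea, not the paper's parser-based system: one reversible, broadly meaning-preserving syntactic transform (fronting a "because"-clause) carries one bit per eligible sentence, and the decoder recovers the bit stream purely from surface syntax. The transform choice and sentences are invented.

```python
def transform(sentence):
    """Move a trailing 'because'-clause to the front (meaning-preserving)."""
    main, _, reason = sentence.rstrip(".").partition(" because ")
    return f"Because {reason}, {main[0].lower()}{main[1:]}."

def can_carry(sentence):
    """A sentence can carry a bit if the transform applies and is undone-able."""
    return " because " in sentence and not sentence.startswith("Because")

def embed(sentences, bits):
    """Hide one bit per transformable sentence: 1 -> transform, 0 -> keep."""
    out, bit_stream = [], iter(bits)
    for s in sentences:
        if can_carry(s):
            b = next(bit_stream, None)
            s = transform(s) if b == 1 else s
        out.append(s)
    return out

def extract(sentences):
    """Recover the bit stream: a fronted 'Because'-clause signals a 1."""
    bits = []
    for s in sentences:
        if s.startswith("Because "):
            bits.append(1)
        elif " because " in s:
            bits.append(0)
    return bits

cover = ["The match was cancelled because it rained.",
         "Nothing here carries a bit.",
         "She stayed home because the trains stopped."]
stego = embed(cover, [1, 0])
print(extract(stego))  # [1, 0]
```

Only some sentences are transformable, which is why the reported bandwidth is a fraction of a bit per sentence rather than a full bit.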
Abstract:
Stanley Fish in his monumental study argued that the reader of Paradise Lost is “surprised by sin” as he or she in the course of engaging with the text falls, like Adam and Eve, into sin and error and is brought up short. Through a “programme of reader harassment” the experience of the fall is re-enacted in the process of reading, wherein lies the poem’s meaning. And reader response criticism was born. But if for Fish the twentieth-century reader is “surprised by sin,” might not the twenty-first century reader, an all too frequently Latinless reader, be surprised by syntax, a syntax which in spite of (or maybe because of) its inherent Latinity and associated linguistic alterity functions as a seductively attractive other? The reader, like Eve, is indeed surprised: enchanted, bemused, seduced by the abundant classicism, by the formal Latinate rhetoric achieved by a Miltonic unison of “Voice and Verse” and also by the language of a Satanic tempter who is—in the pejorative sense of the Latin adjective bilinguis—“double-tongued, deceitful, treacherous.” It is hardly an accident that this adjective (with which Milton qualifies hellish betrayal in his Latin gunpowder epic) was typically applied to the forked tongue of a serpent. This study argues that key to the success of the double-tongued Miltonic serpens bilinguis is his use and abuse of Latinate language and rhetoric. It posits the possible case that this is mirrored in the linguistic methodology of the poeta bilinguis, the geminus Miltonus. For if, like Eve, the twenty-first century reader of Paradise Lost is surprised by syntax, by the Miltonic use and the Satanic abuse of a Latinate voice, might not he or she also be surprised by the text’s bilingual speaking voice?
Abstract:
In this paper, we propose an adaptive approach to merging possibilistic knowledge bases that deploys multiple operators instead of a single operator in the merging process. The merging approach consists of two steps: one is called the splitting step and the other is called the combination step. The splitting step splits each knowledge base into two subbases and then in the second step, different classes of subbases are combined using different operators. Our approach is applied to knowledge bases which are self-consistent and the result of merging is also a consistent knowledge base. Two operators are proposed based on two different splitting methods. Both operators result in a possibilistic knowledge base which contains more information than that obtained by the t-conorm (such as the maximum) based merging methods. In the flat case, one of the operators provides a good alternative to syntax-based merging operators in classical logic.
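The two-step shape of the approach can be sketched in a heavily simplified form: each weighted base is split into the formulas that clash with the other base and the rest, and the two classes are then combined with different operators. Everything concrete here is a toy stand-in, not the paper's operators: conflict detection handles only directly negated literals, weights play the role of necessity degrees, and the clash class is resolved by keeping the more entrenched side.

```python
def conflicts(f, g):
    """Toy clash test: unit literals p and ¬p conflict (no real prover here)."""
    return f == "¬" + g or g == "¬" + f

def split(base, other):
    """Splitting step: separate formulas clashing with the other base from the rest."""
    clash = {f: w for f, w in base.items()
             if any(conflicts(f, g) for g in other)}
    free = {f: w for f, w in base.items() if f not in clash}
    return clash, free

def merge_adaptive(b1, b2):
    """Combination step (schematic): non-conflicting formulas are unioned at
    full strength; a direct clash is arbitrated in favour of the formula with
    the higher weight (an invented operator, for illustration only)."""
    clash1, free1 = split(b1, b2)
    clash2, free2 = split(b2, b1)
    merged = dict(free1)
    for f, w in free2.items():
        merged[f] = max(w, merged.get(f, 0.0))
    for f, wf in clash1.items():
        for g, wg in clash2.items():
            if conflicts(f, g):
                winner, w = (f, wf) if wf >= wg else (g, wg)
                merged[winner] = max(w, merged.get(winner, 0.0))
    return merged

b1 = {"p": 0.8, "q": 0.6}   # weights stand in for necessity degrees
b2 = {"¬p": 0.5, "r": 0.9}
result = merge_adaptive(b1, b2)
print(result)
```

The point the sketch preserves is the one the abstract makes: by treating the two classes of subbases differently, the non-conflicting information (q and r) survives at full strength instead of being uniformly weakened, as a single t-conorm-based operator applied to the whole bases would do.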
Abstract:
This article offers a critical conceptual discussion and refinement of Chomsky’s (2000, 2001, 2007, 2008) phase system, addressing many of the problematic aspects highlighted in the critique of Boeckx & Grohmann (2007) and seeking to resolve these issues, in particular the stipulative and arbitrary properties of phases and phase edges encoded in the (various versions of the) Phase Impenetrability Condition (PIC). Chomsky’s (2000) original conception of phases as lexical subarrays is demonstrated to derive these properties straightforwardly once a single assumption about the pairwise composition of phases is made, and the PIC is reduced to its necessary core under the Strong Minimalist Thesis (SMT)—namely, the provision of an edge. Finally, a comparison is undertaken of the lexical-subarray conception of phases with the feature-inheritance system of Chomsky 2007, 2008, in which phases are simply the locus of uninterpretable features (probes). Both conceptions are argued to conform to the SMT, and both converge on a pairwise composition of phases. However, the two conceptions of phases are argued to be mutually incompatible in numerous fundamental ways, with no current prospect of unification. The lexical-subarray conception of phases is then to be preferred on grounds of greater empirical adequacy.
Abstract:
In the throes of her mimetic exposure of the lie of phallocratic discursive unity in 'Speculum of the Other Woman', Irigaray paused on the impossibility of woman’s voice and remarked that ‘it [was] still better to speak only in riddles, allusions, hints, parables.’ Even if asked to clarify a few points. Even if people plead that they just don’t understand. After all, she said, ‘they never have understood.’ (Irigaray 1985, 143).
That the law has never understood a uniquely feminine narrative is hardly controversial, but that this erasure continues to have real and substantive consequences for justice is a reality that feminists have been compelled to remain vigilant in exposing. How does the authority of the word compound law’s exclusionary matrix? How does law remain impervious to woman’s voice and how might it hear woman’s voice? Is there capacity for a dialogic engagement between woman, parler femme, and law?
This paper will explore these questions with particular reference to the experience of women testifying to trauma during the rape trial. It will argue that a logically linked historical genealogy can be traced through which law has come to posit itself as an originary discourse by which thinking is very much conflated with being, or in other terms, law is conflated with justice. This has consequences not only for women’s capacity to speak or represent the harm of rape to law, but also for law’s ability to ‘hear’ woman’s voice and objectively adjudicate in cases of rape. It will suggest that justice requires that law acknowledge the presence of two distinct and different subjects, and that this must be done not only at the symbolic level but also at the level of the parole, syntax and discourse.
Abstract:
Architecture Description Languages (ADLs) have emerged in recent years as a tool for providing high-level descriptions of software systems in terms of their architectural elements and the relationships among them. Most of the current ADLs exhibit limitations which prevent their widespread use in industrial applications. In this paper, we discuss these limitations and introduce ALI, an ADL that has been developed to address such limitations. The ALI language provides a rich and flexible syntax for describing component interfaces, architectural patterns, and meta-information. Multiple graphical architectural views can then be derived from ALI's textual notation.
Abstract:
The BDI architecture, where agents are modelled based on their beliefs, desires and intentions, provides a practical approach to develop large scale systems. However, it is not well suited to model complex Supervisory Control And Data Acquisition (SCADA) systems pervaded by uncertainty. In this paper we address this issue by extending the operational semantics of Can(Plan) into Can(Plan)+. We start by modelling the beliefs of an agent as a set of epistemic states where each state, possibly using a different representation, models part of the agent's beliefs. These epistemic states are stratified to make them commensurable and to reason about the uncertain beliefs of the agent. The syntax and semantics of a BDI agent are extended accordingly and we identify fragments with computationally efficient semantics. Finally, we examine how primitive actions are affected by uncertainty and we define an appropriate form of lookahead planning.
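The core idea of heterogeneous epistemic states made commensurable by stratification can be sketched minimally: each state keeps its own representation but answers queries on a shared rank scale (0 = certain, larger = less plausible, None = no opinion), and the agent consults states in priority order. Class names, the rank convention, and the SCADA-flavoured facts are invented for illustration; this is not the Can(Plan)+ semantics.

```python
class EpistemicState:
    """One partial model of the agent's beliefs, in its own representation."""
    def rank(self, query):
        raise NotImplementedError

class FactStore(EpistemicState):
    """Crisp sensor facts: rank 0 if stored, no opinion otherwise."""
    def __init__(self, facts):
        self.facts = set(facts)
    def rank(self, query):
        return 0 if query in self.facts else None

class RankedDefaults(EpistemicState):
    """Uncertain default knowledge with explicit plausibility ranks."""
    def __init__(self, ranks):
        self.ranks = dict(ranks)
    def rank(self, query):
        return self.ranks.get(query)

class StratifiedBeliefs:
    """Stratified stack of epistemic states: states are consulted in
    priority order, and the first one with an opinion answers, so all
    answers land on one commensurable rank scale."""
    def __init__(self, states):
        self.states = states
    def rank(self, query):
        for s in self.states:
            r = s.rank(query)
            if r is not None:
                return r
        return None

beliefs = StratifiedBeliefs([
    FactStore({"pump_on"}),                                   # most reliable
    RankedDefaults({"sensor_faulty": 1, "pipe_leaking": 2}),  # fallback
])
print(beliefs.rank("pump_on"), beliefs.rank("pipe_leaking"))  # 0 2
```

A plan's context condition can then be tested against ranks rather than crisp truth values, which is what makes acting under uncertain SCADA readings expressible.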
Abstract:
While video surveillance systems have become ubiquitous in our daily lives, they have introduced concerns over privacy invasion. Recent research to address these privacy issues includes a focus on privacy region protection, whereby existing video scrambling techniques are applied to specific regions of interest (ROI) in a video while the background is left unchanged. Most previous work in this area has only focussed on encrypting the sign bits of nonzero coefficients in the privacy region, which produces a relatively weak scrambling effect. In this paper, to enhance the scrambling effect for privacy protection, it is proposed to encrypt the intra prediction modes (IPM) in addition to the sign bits of nonzero coefficients (SNC) within the privacy region. A major issue with utilising encryption of IPM is that drift error is introduced outside the region of interest. Therefore, a re-encoding method, which is integrated with the encryption of IPM, is also proposed to remove drift error. Compared with a previous technique that uses encryption of IPM, the proposed re-encoding method offers savings in the bitrate overhead while completely removing the drift error. Experimental results and analysis based on H.264/AVC were carried out to verify the effectiveness of the proposed methods. In addition, a spiral binary mask mechanism is proposed that can reduce the bitrate overhead incurred by flagging the position of the privacy region. A definition of the syntax structure for the spiral binary mask is given. As a result of the proposed techniques, the privacy regions in a video sequence can be effectively protected by the enhanced scrambling effect with no drift error and a lower bitrate overhead.
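The paper defines the actual syntax structure for its spiral binary mask; as an illustration of why a spiral scan of the block-level mask is economical, here is a toy sketch. The grid size, privacy region, and seed block are invented: scanning the macroblock mask in an outward spiral from a seed inside the region clusters all the 1-bits at the front of the bit stream, so the stream can be truncated after the last 1 instead of flagging every block.

```python
def outward_spiral(rows, cols, start):
    """Visit every cell of a rows x cols grid in an outward spiral from
    `start`, skipping positions that fall outside the grid."""
    r, c = start
    yield r, c
    seen = 1
    moves = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # right, down, left, up
    step, d = 1, 0
    while seen < rows * cols:
        for _ in range(2):                      # two legs per step length
            dr, dc = moves[d % 4]
            for _ in range(step):
                r, c = r + dr, c + dc
                if 0 <= r < rows and 0 <= c < cols:
                    yield r, c
                    seen += 1
            d += 1
        step += 1

rows, cols = 6, 8                        # e.g. a frame of 16x16 macroblocks
roi = {(2, 3), (2, 4), (3, 3), (3, 4)}   # privacy region (a face, say)
seed = (2, 3)                            # region position, signalled once

order = list(outward_spiral(rows, cols, seed))
bits = [1 if cell in roi else 0 for cell in order]
last_one = max(i for i, b in enumerate(bits) if b)
# Compact ROI => the 1-bits cluster at the front of the spiral stream,
# so only last_one + 1 bits need transmitting instead of rows * cols.
print(last_one + 1, "of", rows * cols)   # 4 of 48
```

For this compact region the truncated spiral stream is 4 bits against 48 for a full raster-order mask, which is the flavour of bitrate-overhead saving the abstract reports.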