984 results for heterocyclic bases
Abstract:
The main result of the note is a characterization of 1-amenability of Banach algebras of approximable operators, for a class of Banach spaces with 1-unconditional bases, in terms of a new basis property. It is also shown that amenability and symmetric amenability are equivalent concepts for Banach algebras of approximable operators, and that a type of Banach space long suspected to lack property A does in fact have the property. Some further ideas on the problem of whether amenability (in this setting) implies property A are discussed.
Abstract:
We present a first-principles molecular dynamics study of an excess electron in condensed-phase models of solvated DNA bases. Calculations on increasingly large microsolvated clusters taken from liquid-phase simulations show that adiabatic electron affinities increase systematically upon solvation, as they do for optimized gas-phase geometries. Dynamical simulations after vertical attachment indicate that the excess electron, initially delocalized, localizes around the nucleobases on a time scale of about 15 fs. This transition requires only small rearrangements in the geometry of the bases.
Abstract:
Measuring the degree of inconsistency of a belief base is an important issue in many real-world applications. It has been increasingly recognized that deriving syntax-sensitive inconsistency measures for a belief base from its minimal inconsistent subsets is a natural way forward. Most current proposals along this line do not take into account the size of each minimal inconsistent subset. However, as illustrated by the well-known Lottery Paradox, the degree of inconsistency of a minimal inconsistent subset decreases as its size increases. Another gap in current studies concerns the role of the free formulas of a belief base in measuring its degree of inconsistency, which has not yet been characterized well. Adding free formulas to a belief base enlarges the set of consistent subsets of that base. Moreover, consistent subsets of a belief base also have an impact on syntax-sensitive normalized measures of the degree of inconsistency: each consistent subset can be considered a distinctive plausible perspective reflected by the belief base, whilst each minimal inconsistent subset projects a distinctive view of its inconsistency. To address these two issues, we propose a normalized framework for measuring the degree of inconsistency of a belief base which unifies the impact of both consistent subsets and minimal inconsistent subsets. We also show that this framework satisfies all the properties deemed necessary, by common consent, to characterize an intuitively satisfactory measure of the degree of inconsistency for belief bases. Finally, we use a simple but explanatory example in requirements engineering to illustrate the application of the framework.
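The size-sensitivity idea can be made concrete with a small sketch. Everything below is a hypothetical toy, not the framework proposed in the paper: a formula is modelled as the set of possible worlds it allows, a set of formulas is consistent iff some world satisfies all of them, and each minimal inconsistent subset (MIS) of size n contributes 1/n, so larger MISes (as in the Lottery Paradox) count for less; dividing by the number of formulas is one simple normalization.

```python
from itertools import combinations

def consistent(formulas):
    # Toy semantics (an assumption of this sketch): each formula is the
    # frozenset of worlds it allows; consistency = some shared world.
    if not formulas:
        return True
    return bool(frozenset.intersection(*formulas))

def minimal_inconsistent_subsets(base):
    # Brute-force enumeration, exponential in |base|; fine for toy examples.
    mises = []
    for r in range(1, len(base) + 1):
        for sub in combinations(base, r):
            s = set(sub)
            if not consistent(s) and not any(m < s for m in mises):
                mises.append(s)
    return mises

def inconsistency_degree(base):
    # Each MIS of size n contributes 1/n (a bigger MIS means a milder
    # conflict); dividing by |base| is a simple, hypothetical normalization.
    if not base:
        return 0.0
    return sum(1.0 / len(m)
               for m in minimal_inconsistent_subsets(base)) / len(base)

# Three pairwise-consistent formulas that are jointly inconsistent form a
# single MIS of size 3, so the degree here is (1/3) / 3.
base = [frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2})]
degree = inconsistency_degree(base)
```

Note how a free formula (one belonging to no MIS) leaves the sum unchanged while increasing the base size, so, as in the abstract's discussion, adding free formulas lowers this normalized degree.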
Abstract:
It is increasingly recognized that identifying the degree of blame or responsibility of each formula for the inconsistency of a knowledge base (i.e. a set of formulas) is useful for making rational decisions to resolve inconsistency in that knowledge base. Most current techniques for measuring the blame of each formula with regard to an inconsistent knowledge base focus on classical knowledge bases only. Proposals for measuring the blame of formulas with regard to an inconsistent prioritized knowledge base have not yet been given much consideration. However, the notion of priority is important in inconsistency-tolerant reasoning. This article investigates this issue and presents a family of measurements for the degree of blame of each formula in an inconsistent prioritized knowledge base, defined from the minimal inconsistent subsets of that knowledge base. First, we present a set of intuitive postulates as general criteria that characterize rational measurements of the blame of formulas in an inconsistent prioritized knowledge base. Then we present a family of measurements for the blame of each formula, guided by the principle of proportionality, one of those postulates. We also demonstrate that each of these measurements possesses the properties it ought to have. Finally, we use a simple but explanatory example in requirements engineering to illustrate the application of these measurements. Compared with related work, the postulates presented in this article take into account the special characteristics of minimal inconsistent subsets as well as the priority levels of formulas. This makes them better suited to characterizing inconsistency measures defined from minimal inconsistent subsets for prioritized knowledge bases as well as classical ones, and the measures guided by these postulates can intuitively capture the inconsistency of prioritized knowledge bases.
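The proportionality principle can be illustrated with a toy sketch (this is a hypothetical construction, not the measurements defined in the article): assume each minimal inconsistent subset carries one unit of blame, shared among its members in proportion to a numeric weight, where a larger weight is taken to mean lower priority, i.e. a less trusted formula.

```python
from collections import defaultdict

def blame_per_formula(mises, weight):
    # Each MIS distributes one unit of blame among its members,
    # proportionally to weight[f]; formulas outside every MIS (free
    # formulas) receive no blame at all.
    blame = defaultdict(float)
    for m in mises:
        total = sum(weight[f] for f in m)
        for f in m:
            blame[f] += weight[f] / total
    return dict(blame)

# Toy prioritized base {p, ~p, q}: the only MIS is {p, ~p}; q is free.
# ~p is assumed three times less trusted than p, so it absorbs 3/4 of
# the blame while p absorbs 1/4 and q none.
mises = [frozenset({"p", "~p"})]
weight = {"p": 1.0, "~p": 3.0, "q": 2.0}
shares = blame_per_formula(mises, weight)
```

The weight dictionary and the "larger weight = less trusted" reading are purely illustrative assumptions; the article's actual measurements are defined axiomatically from the postulates.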
Abstract:
Belief merging is an important but difficult problem in Artificial Intelligence, especially when sources of information are pervaded with uncertainty. Many merging operators have been proposed to deal with this problem in possibilistic logic, a weighted logic that is powerful for handling inconsistency and dealing with uncertainty. They often result in a possibilistic knowledge base, which is a set of weighted formulas. Although possibilistic logic is inconsistency tolerant, it suffers from the well-known "drowning effect". Therefore, we may still want to obtain a consistent possibilistic knowledge base as the result of merging. In such a case, we argue that it is not always necessary to keep weighted information after merging. In this paper, we define a merging operator that maps a set of possibilistic knowledge bases and a formula representing the integrity constraints to a classical knowledge base by using lexicographic ordering. We show that it satisfies nine postulates that generalize the basic postulates for propositional merging given in [11]. These postulates capture the principle of minimal change in some sense. We then provide an algorithm for generating the resulting knowledge base of our merging operator. Finally, we discuss the compatibility of our merging operator with propositional merging and establish the advantage of our merging operator over existing semantic merging operators in the propositional case.
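The lexicographic idea can be sketched as follows. This is a toy reconstruction under stated assumptions, not the paper's operator: every source is stratified into the same number of certainty levels, an interpretation's profile lists, level by level from the most certain down, how many formulas it violates across all sources, and the merged classical result is the set of lexicographically best interpretations among those satisfying the integrity constraint.

```python
from itertools import product

def lex_merge(sources, constraint, variables):
    # sources: list of stratified bases; each base is a list of strata
    # (most certain first), each stratum a list of predicates over an
    # interpretation (a dict variable -> bool). All sources are assumed
    # to share the same number of strata, and the integrity constraint
    # is assumed satisfiable.
    interps = [dict(zip(variables, bits))
               for bits in product([True, False], repeat=len(variables))]
    interps = [m for m in interps if constraint(m)]

    def profile(m):
        # Violation count per certainty level, pooled over all sources;
        # Python compares these tuples lexicographically.
        return tuple(
            sum(0 if f(m) else 1 for stratum in level for f in stratum)
            for level in zip(*sources)
        )

    best = min(profile(m) for m in interps)
    return [m for m in interps if profile(m) == best]

# Two sources disagree on a at the top level but agree on b below it,
# so the merged classical base is equivalent to b (a stays undetermined).
s1 = [[lambda m: m["a"]], [lambda m: m["b"]]]
s2 = [[lambda m: not m["a"]], [lambda m: m["b"]]]
result = lex_merge([s1, s2], lambda m: True, ["a", "b"])
```

Because the unavoidable top-level conflict on a costs every interpretation equally, the comparison falls through to the next level, where b decides; this is the sense in which the weighted information is consumed by the ordering and need not be kept in the result.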
Abstract:
Heterocyclic aromatic amines (HCAs) are carcinogenic mutagens formed during cooking of proteinaceous foods, particularly meat. To assist in the ongoing search for biomarkers of HCA exposure in blood, a method is described for the extraction from human plasma of the most abundant HCAs: 2-amino-1-methyl-6-phenylimidazo[4,5-b]pyridine (PhIP), 2-amino-3,8-dimethylimidazo[4,5-f]quinoxaline (MeIQx) and 2-amino-3,4,8-trimethylimidazo[4,5-f]quinoxaline (4,8-DiMeIQx) (and its isomer 7,8-DiMeIQx), using Hollow Fibre Membrane Liquid-Phase Microextraction. This technique employs 2.5 cm lengths of porous polypropylene fibres impregnated with organic solvent to facilitate simultaneous extraction from an alkaline aqueous sample into a low-volume acidic acceptor phase. This low-cost protocol is extensively optimised for fibre length, extraction time, sample pH and volume. Detection is by UPLC-MS/MS using positive-mode electrospray ionisation with a 3.4 min runtime, with optimum peak shape, sensitivity and baseline separation achieved at pH 9.5. To our knowledge this is the first description of HCA chromatography under alkaline conditions. Application of fixed ion ratio tolerances for confirmation of analyte identity is discussed. Assay precision is between 4.5 and 8.8%, while lower limits of detection, between 2 and 5 pg/mL, are below the concentrations postulated for acid-labile HCA-protein adducts in blood.
Abstract:
There is extensive theoretical work on measures of inconsistency for arbitrary formulae in knowledge bases. Many of these are defined in terms of the set of minimal inconsistent subsets (MISes) of the base. However, few have been implemented or experimentally evaluated to support their viability, since computing all MISes is intractable in the worst case. Fortunately, recent work on a related problem of minimal unsatisfiable sets of clauses (MUSes) offers a viable solution in many cases. In this paper, we begin by drawing connections between MISes and MUSes through algorithms based on a MUS generalization approach and a new optimized MUS transformation approach to finding MISes. We implement these algorithms, along with a selection of existing measures for flat and stratified knowledge bases, in a tool called mimus. We then carry out an extensive experimental evaluation of mimus using randomly generated arbitrary knowledge bases. We conclude that these measures are viable for many large and complex random instances. Moreover, they represent a practical and intuitive tool for inconsistency handling.
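As a small illustration of the MUS side of this connection, here is the classic deletion-based shrinking loop (a standard textbook algorithm, not mimus itself), with a brute-force satisfiability oracle standing in for the SAT solver a real tool would use:

```python
from itertools import product

def brute_unsat(clauses):
    # Toy oracle: clauses are frozensets of integer literals (-v negates v).
    # A real implementation would call a SAT solver here instead.
    vars_ = sorted({abs(l) for c in clauses for l in c})
    for bits in product([False, True], repeat=len(vars_)):
        val = dict(zip(vars_, bits))
        if all(any((l > 0) == val[abs(l)] for l in c) for c in clauses):
            return False   # found a satisfying assignment
    return True

def shrink_to_mus(clauses, unsat):
    # Deletion-based shrinking: try dropping each clause in turn, and keep
    # the deletion whenever the remainder stays unsatisfiable. What
    # survives is a minimal unsatisfiable subset (MUS): removing any
    # single remaining clause restores satisfiability.
    core = list(clauses)
    i = 0
    while i < len(core):
        candidate = core[:i] + core[i + 1:]
        if unsat(candidate):
            core = candidate   # clause i was redundant for unsatisfiability
        else:
            i += 1             # clause i is necessary; keep it
    return core

# {x1} and {~x1} conflict; the other clauses are shrunk away.
clauses = [frozenset({1}), frozenset({-1}), frozenset({2}), frozenset({1, 2})]
mus = shrink_to_mus(clauses, brute_unsat)
```

Each shrinking pass makes linearly many oracle calls, which is why delegating the consistency checks to an efficient MUS/SAT engine, as the paper does, is what makes MIS computation viable in practice.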
Abstract:
Knowledge is an important component in many intelligent systems. Since items of knowledge in a knowledge base can be conflicting, especially if there are multiple sources contributing to the knowledge in the base, significant research effort has been devoted to developing inconsistency measures for knowledge bases and to developing merging approaches. Most of these efforts start with flat knowledge bases. However, in many real-world applications, items of knowledge are not perceived as equally important; rather, weights (which can indicate importance or priority) are associated with items of knowledge. Therefore, measuring the inconsistency of a knowledge base with weighted formulae, as well as merging such bases, is an important but difficult task. In this paper, we derive a numerical characteristic function from each knowledge base with weighted formulae, based on the Dempster-Shafer theory of evidence. Using these functions, we are able to measure the inconsistency of a knowledge base in a convenient and rational way, and to merge multiple knowledge bases with weighted formulae, even if the knowledge in these bases is inconsistent. Furthermore, by examining whether multiple knowledge bases are dependent or independent, they can be combined in different ways using their characteristic functions, which cannot be handled (or at least has never been considered) in classic knowledge-base merging approaches in the literature.
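One way to picture the Dempster-Shafer machinery behind such characteristic functions is Dempster's rule of combination itself. The sketch below is a generic textbook implementation, and the encoding of weighted formulas as mass functions is our assumption, not the paper's construction; the mass that lands on the empty set measures the conflict between the sources, which is the kind of quantity one can read as a degree of inconsistency.

```python
def dempster_combine(m1, m2):
    # m1, m2: mass functions, dicts mapping frozensets of worlds to masses
    # summing to 1. Returns the combined mass function and the conflict K.
    raw, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                raw[inter] = raw.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # mass lost to the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting sources")
    return {s: v / (1.0 - conflict) for s, v in raw.items()}, conflict

# Frame {w1, w2}: w1 is the world where p holds, w2 the world where it fails.
# Source 1 asserts p with weight 0.8, source 2 asserts not-p with weight 0.6;
# reading these weights directly as masses (rest on the whole frame) is a
# hypothetical encoding for illustration only.
theta = frozenset({"w1", "w2"})
m1 = {frozenset({"w1"}): 0.8, theta: 0.2}
m2 = {frozenset({"w2"}): 0.6, theta: 0.4}
combined, k = dempster_combine(m1, m2)   # k quantifies the p / not-p conflict
```

Combining independent sources this way differs from, say, averaging masses for dependent sources, which mirrors the abstract's point that the dependence or independence of the bases dictates how their characteristic functions should be combined.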
Abstract:
Intake of heterocyclic amines (HCAs, carcinogens produced during cooking of meat/fish, the most abundant being PhIP, DiMeIQx and MeIQx) is influenced by many factors including type/thickness of meat and cooking method/temperature/duration. Thus, assessment of HCA dietary exposure is difficult. Protein adducts of HCAs have been proposed as potential medium-term biomarkers of exposure, e.g. PhIP adducted to serum albumin or haemoglobin. However, evidence is still lacking that HCA adducts are viable biomarkers in humans consuming normal diets. The FoodCAP project, supported by World Cancer Research Fund, developed a highly sensitive mass spectrometric method for hydrolysis, extraction and detection of acid-labile HCAs in blood and assessed their validity as biomarkers of exposure. Multiple acid/alkaline hydrolysis conditions were assessed, followed by liquid-liquid extraction, clean-up by cation-exchange SPE and quantification by UPLC-ESI-MS/MS. Blood was analysed from volunteers who completed food diaries to estimate HCA intake based on the US National Cancer Institute's CHARRED database. Standard HCAs were recovered quantitatively from fortified blood. In addition, PhIP/MeIQx adducts bound to albumin and haemoglobin prepared in vitro using a human liver microsome system were also detectable in blood fortified at low ppt concentrations. However, except for one sample (5 pg/ml PhIP), acid-labile PhIP, 7,8-DiMeIQx, 4,8-DiMeIQx and MeIQx were not observed above the 2 pg/ml limit of detection in plasma (n=35), or in serum, whole blood or purified albumin, even in volunteers with high meat consumption (nominal HCA intake >2 µg/day). It is concluded that HCA blood protein adducts are not viable biomarkers of exposure. Untargeted metabolomic analyses may facilitate discovery of suitable markers.