195 results for Set-valued map


Relevance: 20.00%

Abstract:

Fuzzy answer set programming (FASP) is a generalization of answer set programming to continuous domains. However, because FASP cannot readily take uncertainty into account, it is not suitable as a basis for approximate reasoning and cannot easily be used to derive conclusions from imprecise information. To cope with this, we propose an extension of FASP based on possibility theory. The resulting framework allows us to reason about uncertain information in continuous domains, and thus also about information that is imprecise or vague. We propose a syntactic procedure, based on an immediate consequence operator, and provide a characterization in terms of minimal models, which allows us to straightforwardly implement our framework using existing FASP solvers.
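
As a rough illustration of the kind of syntactic procedure mentioned above, the sketch below iterates an immediate-consequence-style operator to a fixpoint for certainty-weighted fuzzy rules. The rule representation, the use of min as the fuzzy conjunction, and the way certainties cap derived degrees are assumptions made for the example, not the operator defined in the paper.

```python
# Toy fixpoint computation for certainty-weighted fuzzy rules.
# Assumptions (not from the paper): rules are (head, body_atoms, certainty),
# fuzzy conjunction is min, and a rule's certainty caps the degree it derives.

def immediate_consequence(rules, interp):
    """One operator application: each head receives at least
    min(rule certainty, minimum degree of its body atoms)."""
    new = dict(interp)
    for head, body, certainty in rules:
        body_degree = min((interp.get(a, 0.0) for a in body), default=1.0)
        new[head] = max(new.get(head, 0.0), min(certainty, body_degree))
    return new

def least_fixpoint(rules, atoms, eps=1e-9):
    """Iterate from the all-zero interpretation until nothing changes."""
    interp = {a: 0.0 for a in atoms}
    while True:
        nxt = immediate_consequence(rules, interp)
        if all(abs(nxt[a] - interp[a]) <= eps for a in atoms):
            return nxt
        interp = nxt

# Hypothetical program: 'rain' holds to degree 0.6; 'wet' follows from 'rain'
# with certainty 0.8, so it ends up with degree min(0.8, 0.6) = 0.6.
rules = [("rain", [], 0.6), ("wet", ["rain"], 0.8)]
print(least_fixpoint(rules, {"rain", "wet"}))
```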

Relevance: 20.00%

Abstract:

Conventional practice in Regional Geochemistry includes, as a final step of any geochemical campaign, the generation of a series of maps showing the spatial distribution of each of the components considered. Such maps, though necessary, do not comply with the compositional, relative nature of the data, which unfortunately makes any conclusion based on them sensitive to spurious correlation problems. This is one of the reasons why these maps are never interpreted in isolation. This contribution aims at gathering a series of statistical methods to produce individual maps of multiplicative combinations of components (logcontrasts), much in the flavour of equilibrium constants, which are designed on purpose to capture certain aspects of the data.
We distinguish between supervised and unsupervised methods, where the former require an external, non-compositional variable (besides the compositional geochemical information) available in an analogous training set. This external variable can be a quantity (soil density, collocated magnetics, collocated ratio of Th/U spectral gamma counts, proportion of clay particle fraction, etc.) or a category (rock type, land use type, etc.). In the supervised methods, a regression-like model between the external variable and the geochemical composition is derived on the training set, and this model is then mapped onto the whole region. This case is illustrated with the Tellus dataset, covering Northern Ireland at a density of one soil sample per 2 square km, where we map the presence of blanket peat and the underlying geology. The unsupervised methods considered include principal components and principal balances (Pawlowsky-Glahn et al., CoDaWork 2013), i.e. logcontrasts of the data that are devised to capture very large variability or else to be quasi-constant. Using the Tellus dataset again, it is found that geological features are highlighted by the quasi-constant ratios Hf/Nb and their ratio against SiO2; Rb/K2O and Zr/Na2O and the balance between these two groups of two variables; the balance of Al2O3 and TiO2 vs. MgO; or the balance of Cr, Ni and Co vs. V and Fe2O3. The largest variability appears to be related to the presence/absence of peat.
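
For readers unfamiliar with logcontrasts, the sketch below computes a standard isometric log-ratio balance between two groups of parts, the kind of quantity such maps display: sqrt(r·s/(r+s)) · ln(g(numerator)/g(denominator)), where g is the geometric mean and r, s are the group sizes. The sample values are invented, and the grouping (Hf and Nb against SiO2) is borrowed from the abstract purely for illustration.

```python
import math

def geometric_mean(values):
    return math.exp(sum(math.log(v) for v in values) / len(values))

def balance(sample, numerator, denominator):
    """Standard isometric log-ratio balance between two groups of parts:
    sqrt(r*s/(r+s)) * ln(g(numerator parts) / g(denominator parts))."""
    r, s = len(numerator), len(denominator)
    num = geometric_mean([sample[p] for p in numerator])
    den = geometric_mean([sample[p] for p in denominator])
    return math.sqrt(r * s / (r + s)) * math.log(num / den)

# Hypothetical soil sample (concentrations in consistent units).
sample = {"Hf": 4.2, "Nb": 9.1, "SiO2": 55.0}
print(balance(sample, ["Hf", "Nb"], ["SiO2"]))  # one mappable logcontrast value
```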

Relevance: 20.00%

Abstract:

We study the computational complexity of finding maximum a posteriori configurations in Bayesian networks whose probabilities are specified by logical formulas. This approach leads to a fine-grained study in which local information such as context-sensitive independence and determinism can be considered. It also allows us to characterize more precisely the jump from tractability to NP-hardness and beyond, and to consider the complexity introduced by evidence alone.
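
To make the object being computed concrete, here is a brute-force sketch over a toy two-variable network with hypothetical conditional probability tables. It returns the most probable full configuration consistent with the evidence (an MPE-style query); the paper's MAP setting may additionally marginalise over non-query variables, and realistic instances of course cannot be enumerated this way.

```python
from itertools import product

# Toy Bayesian network over binary variables A -> B, with hypothetical CPTs.
p_a = {0: 0.7, 1: 0.3}
p_b_given_a = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8}

def joint(a, b):
    return p_a[a] * p_b_given_a[(a, b)]

def map_configuration(evidence):
    """Enumerate all configurations consistent with the evidence and return
    the one with the highest (unnormalised) probability."""
    best, best_p = None, -1.0
    for a, b in product([0, 1], repeat=2):
        assignment = {"A": a, "B": b}
        if any(assignment[v] != val for v, val in evidence.items()):
            continue
        p = joint(a, b)
        if p > best_p:
            best, best_p = assignment, p
    return best, best_p

print(map_configuration({"B": 1}))  # most probable completion given B = 1
```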

Relevance: 20.00%

Abstract:

Online forums are becoming a popular way of finding useful information on the web. So far, search over forums for existing discussion threads has been limited to keyword-based search, owing to the minimal effort it demands of users. However, it is often not possible to capture all the relevant context of a complex query in a small number of keywords. Example-based search, which retrieves similar discussion threads given one exemplary thread, is an alternative approach that lets the user provide richer context and can vastly improve forum search results. In this paper, we address the problem of finding threads similar to a given thread. Towards this, we propose a novel methodology to estimate similarity between discussion threads. Our method exploits the thread structure to decompose threads into sets of weighted, overlapping components. It then estimates pairwise thread similarities by quantifying how well the information in the threads is mutually contained within each other, using lexical similarities between their underlying components. We compare our proposed methods on real datasets against state-of-the-art thread retrieval mechanisms and show that our techniques outperform them by large margins on popular retrieval evaluation measures such as NDCG, MAP, Precision@k and MRR. In particular, consistent improvements of up to 10% are observed on all evaluation measures.
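
The following toy sketch captures the general idea: split each thread into components (here, simply its posts), compute lexical similarities between components, and aggregate them into a containment-style score in each direction. The decomposition, weighting and aggregation used in the paper are more sophisticated; everything here, including the example threads, is illustrative only.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def containment(thread_a, thread_b):
    """How well each component of thread_a is covered by some component
    of thread_b (components here are simply posts split into words)."""
    comps_a = [Counter(post.lower().split()) for post in thread_a]
    comps_b = [Counter(post.lower().split()) for post in thread_b]
    return sum(max(cosine(ca, cb) for cb in comps_b) for ca in comps_a) / len(comps_a)

def thread_similarity(thread_a, thread_b):
    # Symmetrise the two containment directions.
    return 0.5 * (containment(thread_a, thread_b) + containment(thread_b, thread_a))

# Hypothetical threads: each is a list of posts.
t1 = ["laptop will not boot after update", "try booting in safe mode"]
t2 = ["pc fails to boot after windows update", "safe mode worked for me"]
print(thread_similarity(t1, t2))
```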

Relevance: 20.00%

Abstract:

Predicting the next location of a user based on their previous visiting pattern is one of the primary tasks over data from location-based social networks (LBSNs) such as Foursquare. Many different aspects of a user's so-called “check-in” profile have been used for this task, including the spatial and temporal information of check-ins as well as the user's social network. Building more sophisticated prediction models by enriching these check-in data with information from other sources is challenging because of the limited data that LBSNs expose due to privacy concerns. In this paper, we propose a framework that takes location data from LBSNs and combines it with data from maps to associate a set of venue categories with each location. For example, if the user is found to be checking in at a mall that, according to the map, contains cafes, cinemas and restaurants, all of this category information is associated with the check-in. The category information is then leveraged to predict the user's next check-in location. Our experiments with a publicly available check-in dataset show that this approach improves on state-of-the-art methods for location prediction.
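
A minimal sketch of the idea follows: venues are mapped to category sets (as would be obtained from a map source), category-to-category transition counts are learned from the user's check-in history, and candidate venues are scored by how well their categories follow the categories of the most recent check-in. The venue names, categories and scoring rule are hypothetical and much simpler than the model evaluated in the paper.

```python
from collections import Counter

# Hypothetical mapping from venues to categories, as obtained from a map source.
venue_categories = {
    "CityMall":    {"cafe", "cinema", "restaurant"},
    "OldTownCafe": {"cafe"},
    "RiverCinema": {"cinema"},
}

def category_transitions(checkins):
    """Count category -> category transitions in a user's check-in history."""
    counts = Counter()
    for prev, nxt in zip(checkins, checkins[1:]):
        for c_prev in venue_categories[prev]:
            for c_next in venue_categories[nxt]:
                counts[(c_prev, c_next)] += 1
    return counts

def predict_next(checkins, candidates):
    """Score each candidate venue by how well its categories follow the
    categories of the most recent check-in."""
    counts = category_transitions(checkins)
    last_cats = venue_categories[checkins[-1]]
    def score(venue):
        return sum(counts[(c_prev, c_next)]
                   for c_prev in last_cats
                   for c_next in venue_categories[venue])
    return max(candidates, key=score)

history = ["OldTownCafe", "RiverCinema", "CityMall", "OldTownCafe"]
print(predict_next(history, ["RiverCinema", "CityMall"]))
```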

Relevance: 20.00%

Abstract:

Many problems in artificial intelligence can be encoded as answer set programs (ASP) in which some rules are uncertain. ASP programs with incorrect rules may have erroneous conclusions, but due to the non-monotonic nature of ASP, omitting a correct rule may also lead to errors. To derive the most certain conclusions from an uncertain ASP program, we thus need to consider all situations in which some, none, or all of the least certain rules are omitted. This corresponds to treating some rules as optional and reasoning about which conclusions remain valid regardless of the inclusion of these optional rules. While a version of possibilistic ASP (PASP) based on this view has recently been introduced, no implementation is currently available. In this paper we propose a simulation of the main reasoning tasks in PASP using (disjunctive) ASP programs, allowing us to take advantage of state-of-the-art ASP solvers. Furthermore, we identify how several interesting AI problems can be naturally seen as special cases of the considered reasoning tasks, including cautious abductive reasoning and conformant planning. As such, the proposed simulation enables us to solve instances of the latter problem types that are more general than what current solvers can handle.
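
The brute-force sketch below illustrates the underlying reasoning task in the simplest possible case: enumerate every subset of the optional (least certain) rules, forward-chain the resulting definite program, and keep only the conclusions that hold in every case (the cautious conclusions). It ignores negation, disjunction and certainty levels entirely, so it is a conceptual illustration rather than the ASP-based simulation proposed in the paper.

```python
from itertools import combinations

def consequences(facts, rules):
    """Forward-chain a definite (Horn) program: rules are (head, body) pairs."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

def cautious_conclusions(facts, certain_rules, optional_rules):
    """Conclusions that remain derivable no matter which optional rules are kept."""
    result = None
    for k in range(len(optional_rules) + 1):
        for kept in combinations(optional_rules, k):
            model = consequences(facts, certain_rules + list(kept))
            result = model if result is None else result & model
    return result

# Hypothetical program: rule ('b', ['a']) is uncertain, ('c', ['a']) is certain.
facts = {"a"}
print(cautious_conclusions(facts, [("c", ["a"])], [("b", ["a"])]))  # {'a', 'c'}
```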

Relevance: 20.00%

Abstract:

Background: Search filters are combinations of words and phrases designed to retrieve an optimal set of records on a particular topic (subject filters) or study design (methodological filters). Information specialists are increasingly turning to reusable filters to focus their searches. However, the extent of the academic literature on search filters is unknown. We provide a broad overview of the academic literature on search filters.
Objectives: To map the academic literature on search filters from 2004 to 2015 using a novel form of content analysis.
Methods: We conducted a comprehensive search for literature between 2004 and 2015 across eight databases using a subjectively derived search strategy. We identified key words from titles, grouped them into categories, and examined their frequency and co-occurrences.
Results: The majority of records were housed in Embase (n = 178) and MEDLINE (n = 154). Over the last decade, both databases appeared to exhibit a bimodal distribution, with the number of publications on search filters rising until 2006, dipping in 2007, and then steadily increasing until 2012. Few articles appeared in social science databases over the same time frame (e.g. Social Services Abstracts, n = 3).
Unsurprisingly, the term ‘search’ appeared in most titles and, quite often, was used as a noun adjunct for the words ‘filter’ and ‘strategy’. Across the papers, the purpose of searches as a means of ‘identifying’ information and gathering ‘evidence’ from ‘databases’ emerged quite strongly. Other terms relating to the methodological assessment of search filters, such as precision and validation, also appeared, albeit less frequently.
Conclusions: Our findings show surprising commonality across the papers with regard to the literature on search filters. Much of the literature seems to be focused on developing search filters to identify and retrieve information, as opposed to testing or validating such filters. Furthermore, the literature is mostly housed in health-related databases, namely MEDLINE, CINAHL, and Embase, implying that it is medically driven. Relatively few papers focus on the use of search filters in the social sciences.
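
The core of the content analysis described in the Methods can be sketched as a simple frequency and co-occurrence count over title keywords. The titles and stop-word list below are invented, and the manual grouping of keywords into categories is not reproduced.

```python
from collections import Counter
from itertools import combinations

# Hypothetical titles and a tiny stop-word list; the paper's categories were
# derived manually, so this only illustrates the frequency/co-occurrence step.
titles = [
    "Development of a search filter for adverse effects",
    "Validation of a MEDLINE search strategy for systematic reviews",
    "A search filter to identify economic evaluations",
]
stopwords = {"a", "of", "for", "to", "the"}

def keywords(title):
    return sorted({w.lower() for w in title.split() if w.lower() not in stopwords})

term_freq, co_occurrence = Counter(), Counter()
for title in titles:
    words = keywords(title)
    term_freq.update(words)
    co_occurrence.update(combinations(words, 2))

print(term_freq.most_common(5))      # most frequent title keywords
print(co_occurrence.most_common(5))  # most frequent keyword pairs
```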

Relevance: 20.00%

Abstract:

Seafloor massive sulfide (SMS) mining will likely occur at hydrothermal systems in the near future. Alongside their mineral wealth, SMS deposits also have considerable biological value. Active SMS deposits host endemic hydrothermal vent communities, whilst inactive deposits support communities of deep-water corals and other suspension feeders. Mining activities are expected to remove all large organisms and suitable habitat in the immediate area, making vent-endemic organisms particularly at risk from habitat loss and localised extinction. As part of environmental management strategies designed to mitigate the effects of mining, areas of seabed need to be protected to preserve biodiversity that is lost at the mine site and to preserve communities that support connectivity among populations of vent animals in the surrounding region. These "set-aside" areas need to be biologically similar to the mine site and suitably connected, mostly by transport of larvae, to neighbouring sites to ensure exchange of genetic material among the remaining populations. Establishing suitable set-asides can be a formidable task for environmental managers; however, the application of genetic approaches can aid set-aside identification, suitability assessment and monitoring. There are many genetic tools available, including analysis of mitochondrial DNA (mtDNA) sequences (e.g. COI or other suitable mtDNA genes) and appropriate nuclear DNA markers (e.g. microsatellites, single nucleotide polymorphisms), environmental DNA (eDNA) techniques and microbial metagenomics. When used in concert with traditional biological survey techniques, these tools can help to identify species, assess the genetic connectivity among populations and assess the diversity of communities. How these techniques can be applied to set-aside decision making is discussed, and recommendations are made for the genetic characteristics of set-aside sites. A checklist for environmental regulators forms a guide to aid decision making on the suitability of set-aside design and assessment using genetic tools. This non-technical primer document represents the views of participants in the VentBase 2014 workshop.

Relevance: 20.00%

Abstract:

BaH (and its isotopomers) is an attractive molecular candidate for laser cooling to ultracold temperatures and a potential precursor for the production of ultracold gases of hydrogen and deuterium. The theoretical challenge is to simulate the laser cooling cycle as reliably as possible, and this paper addresses the generation of a highly accurate ab initio $^{2}\Sigma^+$ potential for such studies. The performance of various basis sets within the multi-reference configuration-interaction (MRCI) approximation with the Davidson correction (MRCI+Q) is tested and taken to the Complete Basis Set (CBS) limit. It is shown that the molecular constants calculated using a 46-electron Effective Core Potential (ECP) and even-tempered augmented polarized core-valence basis sets (aug-pCV$n$Z-PP, $n$ = 4 and 5), but including only three active electrons in the MRCI calculation, are in excellent agreement with the available experimental values. The predicted dissociation energy $D_e$ for the X$^2\Sigma^+$ state (extrapolated to the CBS limit) is 16895.12 cm$^{-1}$ (2.094 eV), which agrees to within 0.1$\%$ of a revised experimental value of <16910.6 cm$^{-1}$, while the calculated $r_e$ is within 0.03 pm of the experimental result.
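
The abstract does not state which extrapolation scheme was used to reach the CBS limit, so the sketch below shows a generic two-point inverse-cubic extrapolation of the form E(n) = E_CBS + A/n^3, solved from energies at two cardinal numbers (e.g. n = 4 and 5). The energies in the example are placeholders, not values from the paper.

```python
def cbs_two_point(e_n, e_m, n, m):
    """Two-point inverse-cubic extrapolation E(x) = E_CBS + A / x**3,
    solved from energies at cardinal numbers n and m."""
    return (m**3 * e_m - n**3 * e_n) / (m**3 - n**3)

# Placeholder correlation energies (hartree) at the quadruple- and
# quintuple-zeta levels; not values from the paper.
e_q, e_5 = -0.23021, -0.23310
print(cbs_two_point(e_q, e_5, 4, 5))  # extrapolated CBS-limit estimate
```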

Relevance: 20.00%

Abstract:

This article attempts to push Mauss’ work on the sociality of prayer (1909) to its fullest conclusion by arguing that prayer can be viewed anthropologically as providing a map for social and emotional relatedness. Based on fieldwork among deep-sea fisher families living in Gamrie, North-East Scotland (home to 700 people and six Protestant churches), the author takes as his primary ethnographic departure the ritual of the ‘mid-week prayer meeting’. Among the self-proclaimed ‘fundamentalists’ of Gamrie’s Brethren and Presbyterian churches, attending the prayer meeting means praying for salvation. Yet, contrary to the stereotype of Protestant soteriology as highly individualist, in the context of Gamrie, salvation is not principally focused upon the self, but is instead sought on behalf of the ‘unconverted’ other. Locally, this ‘other’ is made sense of with reference to three different categories of relatedness: the family, the village and the nation. The author’s argument is that each category of relatedness carries with it a different affective quality: anguish for one’s family, resentment toward one’s village, and resignation towards one’s nation. As such, prayers for salvation establish and maintain not only vertical – human-divine – relatedness, but also horizontal relatedness between persons, while also giving them their emotional tenor. In ‘fundamentalist’ Gamrie, these human relationships, and crucially their affective asymmetries, may be mapped, therefore, by treating prayers as social phenomena that seek to engage with a world dichotomised into vice and virtue, rebellion and submission, and, ultimately, damnation and salvation.

Relevance: 20.00%

Abstract:

Background: Gene expression connectivity mapping has proven to be a powerful and flexible tool for research. Its application has been shown in a broad range of research topics, most commonly as a means of identifying potential small molecule compounds, which may be further investigated as candidates for repurposing to treat diseases. The public release of voluminous data from the Library of Integrated Network-Based Cellular Signatures (LINCS) programme further enhanced the utilities and potentials of gene expression connectivity mapping in biomedicine.
Results: We describe QUADrATiC (http://go.qub.ac.uk/QUADrATiC), a user-friendly tool for the exploration of gene expression connectivity on the subset of the LINCS data set corresponding to FDA-approved small molecule compounds. It enables the identification of compounds with potential for therapeutic repurposing. The software is designed to cope with a larger volume of data than existing tools by taking advantage of multicore computing architectures, providing a scalable solution that may be installed and operated on a range of computers, from laptops to servers. This scalability comes from the modern concurrent programming model offered by the Akka framework. The QUADrATiC Graphical User Interface (GUI) has been developed using advanced JavaScript frameworks, providing novel visualization capabilities for further analysis of connections. There is also a web services interface, allowing integration with other programs or scripts.
Conclusions: QUADrATiC has been shown to provide an improvement over existing connectivity map software in terms of scope (based on the LINCS data set), applicability (using FDA-approved compounds), usability and speed. It offers biological researchers the means to analyze transcriptional data and generate candidate therapeutics for focussed study in the lab. QUADrATiC represents a step change in the process of investigating gene expression connectivity and provides more biologically relevant results than previous alternative solutions.
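
As background for readers new to connectivity mapping, the sketch below computes a simplified signed-rank connection score between a query gene signature and a ranked reference expression profile. This only illustrates the general scoring idea; it is not QUADrATiC's actual algorithm, and the gene names are placeholders.

```python
def connection_score(reference_ranking, signature):
    """Signed-rank connection score: reference_ranking lists genes from most
    up-regulated to most down-regulated; signature maps gene -> +1 (up) or -1 (down).
    The score is normalised to [-1, 1]; positive means the reference mimics the signature."""
    n = len(reference_ranking)
    # Centred rank weights: positive at the top of the ranking, negative at the bottom.
    rank_weight = {g: (n - 1) / 2.0 - i for i, g in enumerate(reference_ranking)}
    raw = sum(sign * rank_weight[g] for g, sign in signature.items() if g in rank_weight)
    max_possible = sum(sorted((abs(w) for w in rank_weight.values()),
                              reverse=True)[:len(signature)])
    return raw / max_possible if max_possible else 0.0

# Placeholder example: a reference profile over five genes and a two-gene signature.
reference = ["GENE_A", "GENE_B", "GENE_C", "GENE_D", "GENE_E"]
signature = {"GENE_A": +1, "GENE_E": -1}
print(connection_score(reference, signature))  # 1.0: the reference fully mimics the signature
```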

Relevance: 20.00%

Abstract:

There is widespread acceptance that clinical educators should be trained to teach, but faculty development for clinicians is undermined by poor attendance and inadequate learning transfer. As a result there has been growing interest in situating teacher development initiatives in clinical workplaces. The relationship between becoming a teacher and clinical workplace contexts is under-theorised. In response, this qualitative research set out to explore how clinicians become teachers in relation to clinical communities and institutions. Using communities of practice (CoP) as a conceptual framework, this research employed the sensitising concepts of regimes of competence and vertical (managerial) and horizontal (professional) planes of accountability to elucidate structural influences on teacher development. Fourteen hospital physicians completed developmental timelines and underwent semi-structured interviews exploring their development as teachers. Despite having very different developmental pathways, participants’ descriptions of their teacher identities and practice were remarkably congruent. Two types of CoP occupied the horizontal plane of accountability, i.e. clinical teams (Firms) and communities of junior doctors (Fraternities). Participants reproduced teacher identities and practice that were congruent with CoPs’ regimes of competence in order to gain recognition and legitimacy. Participants also constructed their teacher identities in relation to institutions in the vertical plane of accountability (i.e. hospitals and medical schools). Institutions that valued teaching supported the development of teacher identities along institutionally defined lines. Where teaching was less valued, clinicians adapted their teacher identities and practices to suit institutional norms. Becoming a clinical educator entails continually negotiating one’s identity and practice between two potentially conflicting planes of accountability. Clinical CoPs are largely conservative and reproductive of teaching practice, whereas accountability to institutions is potentially disruptive of teacher identity and practice.

Relevance: 20.00%

Abstract:

Morphological changes in the retinal vascular network are associated with future risk of many systemic and vascular diseases. However, uncertainty over the presence and nature of some of these associations exists. Analysis of data from large population-based studies will help to resolve these uncertainties. The QUARTZ (QUantitative Analysis of Retinal vessel Topology and siZe) retinal image analysis system allows automated processing of large numbers of retinal images. However, an image quality assessment module is needed to achieve full automation. In this paper, we propose such an algorithm, which uses the segmented vessel map to determine the suitability of retinal images for use in the creation of vessel morphometric data suitable for epidemiological studies. The approach combines an effective 3-dimensional feature set with support vector machine classification. A random subset of 800 retinal images from UK Biobank (a large prospective study of 500,000 middle-aged adults, of whom 68,151 underwent retinal imaging) was used to examine the performance of the image quality algorithm. The algorithm achieved a sensitivity of 95.33% and a specificity of 91.13% for the detection of inadequate images. The strong performance of this image quality algorithm will make rapid automated analysis of vascular morphometry feasible on the entire UK Biobank dataset (and other large retinal datasets), with minimal operator involvement and at low cost.
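
A minimal scikit-learn sketch of the classification step is given below: a binary SVM is trained on a 3-dimensional feature vector per image and evaluated by sensitivity and specificity for detecting inadequate images. The synthetic features stand in for summaries derived from the segmented vessel map; none of the data, kernel settings or results correspond to those reported in the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Synthetic stand-in data: 3 features per image and a label,
# 1 = inadequate image, 0 = adequate. Real features would come from
# the QUARTZ vessel segmentation, not from random numbers.
rng = np.random.default_rng(0)
adequate = rng.normal(loc=[0.4, 0.6, 0.5], scale=0.1, size=(200, 3))
inadequate = rng.normal(loc=[0.1, 0.2, 0.3], scale=0.1, size=(200, 3))
X = np.vstack([adequate, inadequate])
y = np.array([0] * 200 + [1] * 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)

tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
print("sensitivity:", tp / (tp + fn))   # detection rate for inadequate images
print("specificity:", tn / (tn + fp))
```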

Relevance: 20.00%

Abstract:

The research presented investigates the optimal set of operational codes (opcodes) that creates a robust indicator of malicious software (malware), and also determines how long a program must execute for accurate classification of benign and malicious software. The features extracted from the dataset are opcode density histograms, collected during program execution. The classifier used is a support vector machine, configured to select the features that produce the optimal classification of malware over different program run lengths. The findings demonstrate that malware can be detected using dynamic analysis with relatively few opcodes.
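
As an illustration of the feature representation, the sketch below converts a recorded opcode trace into a density histogram over a fixed opcode vocabulary; such a vector would then be passed to the support vector machine. The opcode list and trace are placeholders, and the feature-selection and classification steps are omitted.

```python
from collections import Counter

# Fixed opcode vocabulary defining the feature order; a real system would use
# the full instruction set observed across the dataset.
OPCODES = ["mov", "push", "pop", "call", "ret", "jmp", "cmp", "xor"]

def opcode_density_histogram(trace):
    """Normalised opcode frequencies over a recorded execution trace,
    so that traces of different run lengths are directly comparable."""
    counts = Counter(op for op in trace if op in OPCODES)
    total = sum(counts.values()) or 1
    return [counts[op] / total for op in OPCODES]

# Placeholder trace captured during program execution.
trace = ["mov", "mov", "push", "call", "xor", "xor", "ret", "mov", "jmp"]
features = opcode_density_histogram(trace)
print(dict(zip(OPCODES, features)))  # this vector would be fed to the SVM
```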