937 results for Distributional semantics
Abstract:
A core activity in information systems development involves understanding the conceptual model of the domain that the information system supports. Any conceptual model is ultimately created using a conceptual-modeling (CM) grammar. Accordingly, just as high-quality conceptual models facilitate high-quality systems development, high-quality CM grammars facilitate high-quality conceptual modeling. This paper seeks to provide a new perspective on improving the quality of CM grammar semantics. For the past twenty years, the leading approach to this topic has drawn on ontological theory. However, the ontological approach captures just half of the story. It needs to be coupled with a logical approach. We show how ontological quality and logical quality interrelate, and we outline three contributions of a logical approach: the ability to see familiar conceptual-modeling problems in simpler ways, the illumination of new problems, and the ability to prove the benefit of modifying CM grammars.
Abstract:
The book’s main contribution is the bringing together of varied discourses concerning the social policy impact of ageing within the context of fiscal austerity. As the editors rightly state, the economic recession has sharpened the focus of governments on the implications of demographic ageing. It is vital, therefore, that the social policy implications of societal ageing are studied and understood within a wider political economy of austerity. Of course, the fiscal crisis of the 1970s and the ensuing first wave of neo-liberalism in the Anglo-Saxon countries [in the 1980s] gave us a foretaste of the various ways in which the public burden thesis has been applied with great force to the older population. This recession is different, certainly in Ireland, but a combination of neo-liberal ideology and neo-classical economics is enforcing severe budgetary constraint on a range of countries (within and outside of the Eurozone) in the name of funding deficits. Policy makers appear to be uninterested in both the origins of the 2008 financial crisis and the distributional consequences of their austerity policies. In the absence of official concern, social science research has a key role to play.
Abstract:
The BDI architecture, where agents are modelled based on their beliefs, desires and intentions, provides a practical approach to develop large scale systems. However, it is not well suited to model complex Supervisory Control And Data Acquisition (SCADA) systems pervaded by uncertainty. In this paper we address this issue by extending the operational semantics of Can(Plan) into Can(Plan)+. We start by modelling the beliefs of an agent as a set of epistemic states where each state, possibly using a different representation, models part of the agent's beliefs. These epistemic states are stratified to make them commensurable and to reason about the uncertain beliefs of the agent. The syntax and semantics of a BDI agent are extended accordingly and we identify fragments with computationally efficient semantics. Finally, we examine how primitive actions are affected by uncertainty and we define an appropriate form of lookahead planning.
Abstract:
The spatial distribution of a species can be characterized at many different spatial scales, from fine-scale measures of local population density to coarse-scale geographical-range structure. Previous studies have shown a degree of correlation in species' distribution patterns across narrow ranges of scales, making it possible to predict fine-scale properties from coarser-scale distributions. To test the limits of such extrapolation, we have compiled distributional information on 16 species of British plants, at scales ranging across six orders of magnitude in linear resolution (1 m to 100 km). As expected, the correlation between patterns at different spatial scales tends to degrade as the scales become more widely separated. There is, however, an abrupt breakdown in cross-scale correlations at intermediate (ca. 0.5 km) scales, suggesting that local and regional patterns are influenced by essentially non-overlapping sets of processes. The scaling discontinuity may also reflect characteristic scales of human land use in Britain, suggesting a novel method for analysing the 'footprint' of humanity on a landscape.
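The cross-scale comparison described above can be sketched as follows. This is a toy illustration, not the study's data or code: the grids, species counts, prevalences and scales are all synthetic. Each simulated species gets a presence/absence grid, occupancy is measured at a series of coarser resolutions, and occupancies at different scales are then correlated across species.

```python
import numpy as np

rng = np.random.default_rng(1)

def occupancy(grid, factor):
    """Fraction of occupied cells when the presence/absence grid is
    viewed at a coarser resolution (a coarse cell counts as occupied
    if any fine-scale cell inside it is occupied)."""
    n = grid.shape[0] // factor
    blocks = grid[:n * factor, :n * factor].reshape(n, factor, n, factor)
    return blocks.any(axis=(1, 3)).mean()

# 16 synthetic "species" with different prevalences on a 64x64 grid
species = [rng.random((64, 64)) < p for p in np.linspace(0.001, 0.05, 16)]

scales = [1, 2, 4, 8]  # linear resolution, in grid-cell units
occ = np.array([[occupancy(g, f) for f in scales] for g in species])

# correlation of occupancy between scales, taken across species;
# off-diagonal decay mimics the degradation discussed in the abstract
corr = np.corrcoef(occ.T)
print(np.round(corr, 2))
```

With real survey data the synthetic grids would be replaced by recorded presence/absence maps, but the aggregation and correlation steps are the same.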
Abstract:
In Britain, the majority of Lower and Middle Paleolithic archaeological finds come from river terrace deposits. The impressive “staircase” terrace sequences of southeast England, and the research facilitated by aggregate extraction, have provided a considerable body of knowledge about the terrace chronology and associated archaeology in that area. Such research has been essential in considering rates of uplift, climatic cycles, archaeological chronologies, and the landscapes in which hominins lived. It has also promoted the view that southeast England was a major hominin route into Britain. By contrast, the terrace deposits of the southwest have been little studied. The Palaeolithic Rivers of South West Britain (PRoSWEB) project employed a range of geoarchaeological methodologies to address similar questions at different scales, focusing on the rivers Exe, Axe, Otter, and the paleo-Doniford, all of which were located south of the maximum Pleistocene glacial limit (marine oxygen isotope stage [MIS] 4–2). Preliminary analysis of the fieldwork results suggests that although the evolution of these catchments is complex, most conform to a standard staircase-type model, with the exception of the Axe and, to a lesser extent, the paleo-Doniford, which are anomalous. Although the terrace deposits are less extensive than in southeast Britain, differentiation between terraces does exist, and new dates show that some of these terraces are of great antiquity (MIS 10+). The project also reexamined the distribution of artifacts in the region and confirms the distributional bias toward the river valleys, particularly the rivers draining southward to the paleo–Channel River system. This distribution is consistent with a model of periodic occupation of the British peninsula along and up the major river valleys from the paleo–Channel River corridor. These data have a direct impact on our understanding of the paleolandscapes of the southwest region, and therefore on our interpretations of the Paleolithic occupation of the edge of the continental landmass.
Abstract:
This book provides a comprehensive tutorial on similarity operators. The authors systematically survey the set of similarity operators, primarily focusing on their semantics, while also touching upon mechanisms for processing them effectively.
The book starts off by providing introductory material on similarity search systems, highlighting the central role of similarity operators in such systems. This is followed by a systematic, categorized overview of the variety of similarity operators that have been proposed in the literature over the last two decades, including advanced operators such as RkNN, Reverse k-Ranks, Skyline k-Groups and K-N-Match. Since indexing is a core technology in the practical implementation of similarity operators, various indexing mechanisms are summarized. Finally, current research challenges are outlined, so as to enable interested readers to identify potential directions for future investigations.
In summary, this book offers a comprehensive overview of the field of similarity search operators, allowing readers to understand the area of similarity operators as it stands today, and in addition providing them with the background needed to understand recent novel approaches.
Abstract:
Although Answer Set Programming (ASP) is a powerful framework for declarative problem solving, it cannot intuitively handle situations in which some rules are uncertain, or in which it is more important to satisfy some constraints than others. Possibilistic ASP (PASP) is a natural extension of ASP in which a certainty weight is associated with each rule. In this paper we contrast two different views on interpreting the weights attached to rules. Under the first view, weights reflect the certainty with which we can conclude the head of a rule when its body is satisfied. Under the second view, weights reflect the certainty that a given rule restricts the considered epistemic states of an agent in a valid way, i.e. the certainty that the rule itself is correct. The first view gives rise to a set of weighted answer sets, whereas the second view gives rise to a weighted set of classical answer sets.
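The first view described above, in its simplest positive-program form, lets a rule with weight w support its head to degree min(w, certainty of the body), where the body's certainty is the minimum over its literals. A minimal sketch of that reading, on an invented example program (not from the paper, and ignoring negation entirely):

```python
# Naive fixpoint for positive weighted rules: the certainty of a
# derived atom is the best value of min(rule weight, body certainty)
# over all rules deriving it. Facts are rules with empty bodies.

def pasp_fixpoint(rules):
    """rules: list of (head, [body atoms], weight in (0, 1])."""
    cert = {}
    changed = True
    while changed:
        changed = False
        for head, body, w in rules:
            body_cert = min((cert.get(b, 0.0) for b in body), default=1.0)
            new = min(w, body_cert)
            if new > cert.get(head, 0.0):
                cert[head] = new
                changed = True
    return cert

rules = [
    ("wet", [], 0.9),                      # fact with its own certainty
    ("rained", ["wet"], 0.7),              # wet ground suggests rain
    ("slippery", ["wet", "rained"], 0.8),  # weakest premise dominates
]
print(pasp_fixpoint(rules))
# {'wet': 0.9, 'rained': 0.7, 'slippery': 0.7}
```

Note how "slippery" inherits 0.7 from its weakest premise rather than its own rule weight 0.8, which is the min-based propagation characteristic of possibilistic logic.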
Abstract:
Answer Set Programming (ASP) is a popular framework for modelling combinatorial problems. However, ASP cannot easily be used for reasoning about uncertain information. Possibilistic ASP (PASP) is an extension of ASP that combines possibilistic logic and ASP. In PASP a weight is associated with each rule, and this weight is interpreted as the certainty with which the conclusion can be established when the body is known to hold. As such, PASP allows us to model and reason about uncertain information in an intuitive way. In this paper we present new semantics for PASP in which rules are interpreted as constraints on possibility distributions. Special models of these constraints are then identified as possibilistic answer sets. In addition, since ASP is a special case of PASP in which all rules are entirely certain, we obtain a new characterization of ASP in terms of constraints on possibility distributions. This allows us to uncover a new form of disjunction, called weak disjunction, which has not previously been considered in the literature. In addition to introducing and motivating the semantics of weak disjunction, we also pinpoint its computational complexity. In particular, while the complexity of most reasoning tasks coincides with that of standard disjunctive ASP, we find that brave reasoning for programs with weak disjunctions is easier.
Abstract:
In this study, we introduce an original distance definition for graphs, called the Markov-inverse-F measure (MiF). This measure enables the integration of classical graph-theory indices with new knowledge pertaining to structural feature extraction from semantic networks. MiF improves on the conventional Jaccard and/or Simpson indices, and reconciles geodesic information (random walks) with co-occurrence adjustment (degree balance and distribution). We measure the effectiveness of graph-based coefficients by applying linguistic graph information to neural activity recorded during conceptual processing in the human brain. Specifically, the MiF distance is computed between each of the nouns used in a previous neural experiment and each of the in-between words in a subgraph derived from the Edinburgh Word Association Thesaurus of English. From the MiF-based information matrix, a machine learning model can accurately obtain a scalar parameter that specifies the degree to which each voxel in (the MRI image of) the brain is activated by each word or each principal component of the intermediate semantic features. Furthermore, by correlating the voxel information with the MiF-based principal components, a new computational neurolinguistics model with a network connectivity paradigm is created. This allows two dimensions of context space to be incorporated with both semantic and neural distributional representations.
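The MiF measure itself is not fully specified in this summary, but the classical neighbourhood-overlap indices it improves on are standard and easy to illustrate. In the sketch below the mini association graph is invented; in the paper the neighbourhoods would come from the Edinburgh Word Association Thesaurus subgraph.

```python
# Jaccard and Simpson similarity between two words' association
# neighbourhoods (toy data, for illustration only).

neighbours = {
    "dog": {"cat", "bone", "bark", "pet"},
    "cat": {"dog", "pet", "mouse"},
    "car": {"road", "wheel", "drive"},
}

def jaccard(a, b):
    """Shared neighbours, normalized by the union of both neighbourhoods."""
    na, nb = neighbours[a], neighbours[b]
    return len(na & nb) / len(na | nb)

def simpson(a, b):
    """Shared neighbours, normalized by the smaller neighbourhood."""
    na, nb = neighbours[a], neighbours[b]
    return len(na & nb) / min(len(na), len(nb))

print(jaccard("dog", "cat"), simpson("dog", "cat"))  # ≈ 0.167, ≈ 0.333
print(jaccard("dog", "car"))                         # 0.0: no shared neighbours
```

Both indices look only at direct co-occurrence overlap; the abstract's point is that MiF additionally folds in geodesic (random-walk) information and degree adjustment, which these plain set ratios ignore.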
Abstract:
The last three decades have seen social enterprises in the United Kingdom pushed to the forefront of welfare delivery, workfare and area-based regeneration. For critics, this repositions the sector around a neoliberal politics that privileges marketization, state roll-back and the disciplining of community groups to become more self-reliant. Successive governments have developed bespoke products, fiscal instruments and intermediaries to enable and extend the social finance market. Such assemblages are critical to roll-out tactics, but they are also necessary and useful for more reformist understandings of economic alterity. The issue is not social finance itself but how it is used, which inevitably entangles social enterprises in a legitimation crisis between the need to satisfy financial returns and the need to keep community interests on board. This paper argues that social finance, and the ways in which it is used, politically domesticated and made to achieve re-distributional outcomes, is a necessary component of counter-hegemonic strategies. Such assemblages are as important to radical community development as they are to neoliberalism, and the analysis concludes by highlighting the need to develop a better understanding of finance, the ethics of its use and the tactical compromises involved in scaling it as an alternative to public and private markets.
Abstract:
In the context of monolingual and bilingual retrieval, Simple Knowledge Organisation System (SKOS) datasets can play a dual role as knowledge bases for semantic annotations and as language-independent resources for translation. With no existing record of formal evaluations of these aspects for datasets in SKOS format, we describe a case study on the use of the Thesaurus for the Social Sciences in SKOS format in a retrieval setup based on the CLEF 2004-2006 Domain-Specific Track topics, documents and relevance assessments. Results show a mixed picture, with significant system-level improvements in terms of mean average precision in the bilingual runs. Our experiments set a new and improved baseline for using SKOS-based datasets with the GIRT collection and are an example of component-based evaluation.
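The evaluation measure named above, mean average precision (MAP), is a standard ranked-retrieval metric and can be computed as follows. The document ids and relevance sets in this sketch are invented, not the CLEF/GIRT data.

```python
# Average precision rewards rankings that place relevant documents
# early; MAP averages it over topics.

def average_precision(ranking, relevant):
    """ranking: ranked list of doc ids; relevant: set of relevant ids."""
    hits, score = 0, 0.0
    for rank, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            score += hits / rank   # precision at this relevant hit
    return score / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """runs: list of (ranking, relevant set), one pair per topic."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

runs = [
    (["d1", "d2", "d3", "d4"], {"d1", "d3"}),  # AP = (1/1 + 2/3) / 2
    (["d9", "d5", "d7"], {"d5"}),              # AP = (1/2) / 1
]
print(mean_average_precision(runs))  # (0.8333 + 0.5) / 2 ≈ 0.667
```

In practice tools such as trec_eval compute this over full runs and relevance assessments, but the arithmetic is exactly the above.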
Abstract:
This paper reports on issues at the interface between semantics and lexicography that arose out of the data collection and classification of vocabulary in Anglo-Norman and Middle English in order to create a bilingual thesaurus of everyday life in medieval England. The Bilingual Thesaurus project is based at Birmingham City University and the University of Westminster. Issues to be resolved included the definition of an occupational domain; the creation of a methodology of data collection; the delimitation of domain-specific vocabulary; making distinctions between sense and usage; and the categorisation of the lexical items. Some of these issues are general to thesaurus-making, some are specific to the making of historical thesauruses, while some are unique to the production of a thesaurus of two languages whose use overlapped for several centuries in the late medieval period in England.
Abstract:
After a historical introduction, the bulk of the thesis concerns the study of a declarative semantics for logic programs. The main original contributions are:
- WFSX (Well-Founded Semantics with eXplicit negation), a new semantics for logic programs with explicit negation (i.e. extended logic programs), which compares favourably in its properties with other extant semantics.
- A generic characterization schema that facilitates comparisons among a diversity of semantics of extended logic programs, including WFSX.
- An autoepistemic and a default logic corresponding to WFSX, which solve existing problems of the classical approaches to autoepistemic and default logics, and clarify the meaning of explicit negation in logic programs.
- A framework for defining a spectrum of semantics of extended logic programs based on the abduction of negative hypotheses. This framework allows for the characterization of different levels of scepticism/credulity, consensuality, and argumentation. One of the semantics of abduction coincides with WFSX.
- O-semantics, a semantics that uniquely adds more CWA hypotheses to WFSX. The techniques used for doing so are applicable as well to the well-founded semantics of normal logic programs.
- By introducing explicit negation into logic programs, contradiction may appear. I present two approaches for dealing with contradiction, and show their equivalence. One approach consists in avoiding contradiction, and is based on restrictions in the adoption of abductive hypotheses. The other consists in removing contradiction, and is based on a transformation of contradictory programs into noncontradictory ones, guided by the reasons for contradiction.
Abstract:
My thesis consists of three essays in which I consider equilibrium asset prices and investment strategies when the market is likely to experience crashes and possibly sharp windfalls. Although each part is written as an independent and self-contained article, the papers share a common behavioral approach in representing investors' preferences regarding extremal returns. Investors' utility is defined over their relative performance rather than over their final wealth position, a method first proposed by Markowitz (1952b) and by Kahneman and Tversky (1979), which I extend to incorporate preferences over extremal outcomes. With the failure of the traditional expected utility models to reproduce the observed stylized features of financial markets, the prospect theory of Kahneman and Tversky (1979) offered the first significant alternative to the expected utility paradigm by considering that people focus on gains and losses rather than on final positions. Under this setting, Barberis, Huang, and Santos (2000) and McQueen and Vorkink (2004) were able to build a representative-agent optimization model whose solution reproduced some of the observed risk premium and excess volatility. Research in behavioral finance is relatively new and its potential is still to be explored. The three essays composing my thesis propose to use and extend this setting to study investors' behavior and investment strategies in a market where crashes and sharp windfalls are likely to occur. In the first paper, the preferences of a representative agent relative to time-varying positive and negative extremal thresholds are modelled and estimated. A new utility function that reconciles expected utility maximization with tail-related performance measures is proposed. The model estimation shows that the representative agent's preferences reveal a significant level of crash aversion and lottery-pursuit. Assuming a single-risky-asset economy, the proposed specification is able to reproduce some of the distributional features exhibited by financial return series. The second part proposes and illustrates a preference-based asset allocation model taking into account investors' crash aversion. Using the skewed t distribution, optimal allocations are characterized as a tradeoff between the distribution's four moments. The specification highlights the preference for odd moments and the aversion to even moments. Optimal portfolios are analyzed qualitatively in terms of firm characteristics and, in a setting that reflects real-time asset allocation, a systematic over-performance is obtained compared to the aggregate stock market. Finally, in my third article, dynamic option-based investment strategies are derived and illustrated for investors exhibiting downside loss aversion. The problem is solved in closed form when the stock market exhibits stochastic volatility and jumps. The specification of downside loss-averse utility functions allows the corresponding terminal wealth profiles to be expressed as options on the stochastic discount factor, contingent on the loss aversion level. Dynamic strategies therefore reduce to a replicating portfolio using exchange-traded, well-selected options and the risky stock.
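The four-moment tradeoff described in the second essay (preference for odd moments, aversion to even moments) can be sketched with a stylized scoring function. This is an illustration, not the thesis model: the coefficients a, b, c and the synthetic return series are assumptions.

```python
import numpy as np

def four_moment_score(returns, a=2.0, b=0.5, c=0.1):
    """Stylized objective: reward mean and skewness (odd moments),
    penalize variance and kurtosis (even moments)."""
    r = np.asarray(returns, dtype=float)
    mu = r.mean()
    sig2 = r.var()
    sig = np.sqrt(sig2)
    skew = ((r - mu) ** 3).mean() / sig ** 3
    kurt = ((r - mu) ** 4).mean() / sig ** 4
    return mu - a * sig2 + b * skew - c * kurt

rng = np.random.default_rng(0)
sym = rng.normal(0.01, 0.05, 10_000)                # symmetric returns
crashy = sym - 0.5 * (rng.random(10_000) < 0.01)    # rare large losses
print(four_moment_score(sym) > four_moment_score(crashy))  # True
```

The crash-prone series loses on all four terms at once (lower mean, higher variance, negative skew, fat left tail), which is the sense in which such a specification encodes crash aversion.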
Abstract:
Through diving and surveys of the intertidal and subtidal zones of the Áncash region (9°58’08’’S 78°38’34’’W and 10°34’06’’S 77°54’30’’W) between 2003 and 2010, 135 species of invertebrates were collected, identified and photographed, belonging to the groups Cnidaria (6 species), Annelida (11 species), Brachiopoda (1 species), Mollusca (70 species), Arthropoda (34 species), Echinodermata (10 species), Sipunculida (1 species) and Chordata (2 species). Of the total, Sipunculus (Austrosiphon) mundanus is considered a new record for Peru, four species extended their known distribution northward, and nine southward. Each species is placed taxonomically, and information is provided on common name, diagnosis, habitat, depth, bioecological aspects, geographic distribution, localities in the Áncash region, other localities in Peru, remarks and references.