391 results for Textual complexity for Romanian language


Relevance:

30.00%

Publisher:

Abstract:

The design and development of process-aware information systems is often supported by specifying requirements as business process models. Although this approach is generally accepted as an effective strategy, it remains a fundamental challenge to adequately validate these models given the diverging skill sets of domain experts and system analysts. As domain experts often do not feel confident in judging the correctness and completeness of process models that system analysts create, the validation often has to regress to a discourse using natural language. In order to support such a discourse appropriately, so-called verbalization techniques have been defined for different types of conceptual models. However, there is currently no sophisticated technique available that is capable of generating natural-looking text from process models. In this paper, we address this research gap and propose a technique for generating natural language texts from business process models. A comparison with manually created process descriptions demonstrates that the generated texts are superior in terms of completeness, structure, and linguistic complexity. An evaluation with users further demonstrates that the texts are very understandable and effectively allow the reader to infer the process model semantics. Hence, the generated texts represent a useful input for process model validation.
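
As a rough illustration of the verbalization idea, the minimal Python sketch below renders a toy sequential process model as plain English sentences. It is not the technique proposed in the paper; the Activity fields, the connector wording, and the naive "+s" verb inflection are assumptions invented for this example.

from dataclasses import dataclass

@dataclass
class Activity:
    role: str   # performing resource, e.g. "clerk"
    verb: str   # action in base form, e.g. "record" (regular verbs only, see "+s" below)
    obj: str    # business object, e.g. "order"

def verbalize(process_name: str, activities: list[Activity]) -> str:
    # Walk the activities in control-flow order and emit one sentence each.
    sentences = [f"The process '{process_name}' works as follows."]
    last = len(activities) - 1
    for i, act in enumerate(activities):
        connector = "First," if i == 0 else ("Finally," if i == last else "Afterwards,")
        sentences.append(f"{connector} the {act.role} {act.verb}s the {act.obj}.")
    return " ".join(sentences)

if __name__ == "__main__":
    model = [
        Activity("clerk", "record", "order"),
        Activity("manager", "approve", "order"),
        Activity("warehouse worker", "ship", "order"),
    ]
    print(verbalize("Order handling", model))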

Relevance:

30.00%

Publisher:

Abstract:

Objective To synthesise recent research on the use of machine learning approaches to mining textual injury surveillance data. Design Systematic review. Data sources The electronic databases searched included PubMed, CINAHL, MEDLINE, Google Scholar, and ProQuest. The bibliographies of all relevant articles were examined and associated articles were identified using a snowballing technique. Selection criteria For inclusion, articles were required to meet the following criteria: (a) used a health-related database, (b) focused on injury-related cases, and (c) used machine learning approaches to analyse textual data. Methods The papers identified through the search were screened, resulting in 16 papers selected for review. Articles were reviewed to describe the databases and methodology used, the strengths and limitations of different techniques, and the quality assurance approaches used. Due to heterogeneity between studies, meta-analysis was not performed. Results Occupational injuries were the focus of half of the machine learning studies, and the most common methods described were Bayesian probability or Bayesian network-based methods used either to predict injury categories or to extract common injury scenarios. Models were evaluated through comparison with gold-standard data, content-expert evaluation, or statistical measures of quality. Machine learning was found to provide high precision and accuracy when predicting a small number of categories, and was valuable for visualisation of injury patterns and prediction of future outcomes. However, difficulties related to generalizability, source data quality, model complexity, and the integration of content and technical knowledge were discussed. Conclusions The use of narrative text for injury surveillance has grown in popularity, complexity and quality over recent years. With advances in data mining techniques, increased capacity for the analysis of large databases, involvement of computer scientists in the injury prevention field, and more comprehensive use and description of quality assurance methods in text mining approaches, it is likely that we will see continued growth and advancement of text mining knowledge in the injury field.
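
For readers unfamiliar with the kind of Bayesian classification the reviewed studies describe, the short Python sketch below maps injury narratives to categories with a multinomial naive Bayes model. It uses scikit-learn; the narratives, category labels, and feature choices are invented for illustration and are not taken from any of the reviewed papers.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented example narratives and category labels (not from any reviewed study).
narratives = [
    "worker fell from ladder while painting ceiling",
    "slipped on wet floor in kitchen area",
    "hand caught in conveyor belt during maintenance",
    "finger cut by box cutter while opening cartons",
]
labels = ["fall", "fall", "caught_in_machinery", "cut"]

# Bag-of-words features feeding a multinomial naive Bayes classifier.
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(narratives, labels)

# Predict the injury category of an unseen narrative.
print(model.predict(["employee fell off scaffolding while cleaning windows"])[0])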

Relevance:

30.00%

Publisher:

Abstract:

In this paper, Bree Hadley discusses The Ex/centric Fixations Project, a practice-led research project which explores the inadequacy of language as a technology for expressing human experiences of difference, discrimination or marginalisation within mainstream cultures. The project asks questions about the way experience, memory and the public discourses available to express them are bound together, about the silences, failures and falsehoods embedded in any effort to convey human experience via public discourses, and about how these failures might form the basis of a performative writing method. It has, to date, focused on developing a method that expresses experience through improvised, intertextual and discontinuous collages of language drawn from a variety of public discourses. Aesthetically, this method works with what Hans-Thies Lehmann (Postdramatic Theatre, p. 17) calls a “textual variant” of the postdramatic, “in which language appears not as the speech of characters – if there are still definable characters at all – but as an autonomous theatricality” (ibid., p. 18). It is defined by what Lehmann, following Julia Kristeva, calls a “polylogue”, which presents experience as a conflicted, discontinuous and circular phenomenon, akin to a musical fugue, to break away from “an order centred on one logos” (ibid., p. 32). The texts function simultaneously as a series of parts and as wholes, interwoven voices seeming almost to connect, almost to respond to each other, and almost to tell – or to challenge each other’s telling of – a story. In this paper, Hadley offers a performative demonstration, together with descriptions of the way spectators respond, including the way the texts’ playful, polyvocal texture impacts on engagement, and the way the presence or absence of performing bodies to which the depicted experiences can be attached impacts on engagement. She suggests that the improvised, intertextual and experimental enactments of self embodied in the texts encourage spectators to engage at an emotional level and to make meaning based primarily on memories they recall in the moment, and thus have the potential to counter the risk that people may read depictions of experiences radically different from their own in reductive, essentialised ways.

Relevance:

20.00%

Publisher:

Abstract:

Metaphor is a multi-stage programming language extension to an imperative, object-oriented language in the style of C# or Java. This paper discusses some issues we faced when applying multi-stage language design concepts to an imperative base language and run-time environment. The issues range from dealing with pervasive references and open code to garbage collection and implementing cross-stage persistence.
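
To make the staging idea concrete, the Python sketch below is offered as a conceptual analogy only; it does not show Metaphor's actual syntax or semantics. A first stage generates specialized source code and a second stage compiles and runs it; the function names and the unrolled power example are invented for illustration.

def gen_power(n: int) -> str:
    # Stage 1 (code generator): build Python source for a power function with
    # the loop unrolled. The value of n is embedded into the generated code,
    # loosely analogous to cross-stage persistence.
    body = " * ".join(["x"] * n) if n > 0 else "1"
    return f"def power_{n}(x):\n    return {body}\n"

source = gen_power(3)

# Stage 2: compile and execute the generated source, then use the new function.
namespace = {}
exec(compile(source, "<staged>", "exec"), namespace)
power_3 = namespace["power_3"]

print(source)
print(power_3(5))  # 125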

Relevance:

20.00%

Publisher:

Abstract:

Language is a unique aspect of human communication because it can be used to discuss itself in its own terms. For this reason, human societies potentially have greater capacities for co-ordination, reflexive self-correction, and innovation than other animal, physical or cybernetic systems. However, this analysis also reveals that language is interconnected with the economically and technologically mediated social sphere and hence is vulnerable to abstraction, objectification, reification, and therefore ideology – all of which are antithetical to its reflexive function, whilst paradoxically being a fundamental part of it. In particular, in capitalism, language is increasingly commodified within the social domains created and affected by ubiquitous communication technologies. The advent of the so-called ‘knowledge economy’ implicates exchangeable forms of thought (language) as the fundamental commodities of this emerging system. The historical point at which a ‘knowledge economy’ emerges, then, is the critical point at which thought itself becomes a commodified ‘thing’, and language becomes its “objective” means of exchange. However, the processes by which such commodification and objectification occur obscure the unique social relations within which these language commodities are produced. The latest economic phase of capitalism – the knowledge economy – and the obfuscating trajectory which accompanies it are, we argue, destroying the reflexive capacity of language, particularly through the process of commodification. This can be seen in the fact that the language practices that have emerged in conjunction with digital technologies are increasingly non-reflexive and therefore less capable of self-critical, conscious change.

Relevance:

20.00%

Publisher:

Abstract:

We generalize the classical notion of Vapnik–Chervonenkis (VC) dimension to ordinal VC-dimension, in the context of logical learning paradigms. Logical learning paradigms encompass the numerical learning paradigms commonly studied in Inductive Inference. A logical learning paradigm is defined as a set W of structures over some vocabulary, and a set D of first-order formulas that represent data. The sets of models of ϕ in W, where ϕ varies over D, generate a natural topology over W. We show that if D is closed under boolean operators, then the notion of ordinal VC-dimension offers a perfect characterization of the problem of predicting the truth of the members of D in a member of W, with an ordinal bound on the number of mistakes. This shows that the notion of VC-dimension has a natural interpretation in Inductive Inference when cast into a logical setting. We also study the relationships between predictive complexity, selective complexity—a variation on predictive complexity—and mind change complexity. The assumptions that D is closed under boolean operators and that W is compact often play a crucial role in establishing connections between these concepts. We then consider a computable setting with effective versions of the complexity measures, and show that the equivalence between ordinal VC-dimension and predictive complexity fails. More precisely, we prove that the effective ordinal VC-dimension of a paradigm can be defined when all other effective notions of complexity are undefined. On a better note, when W is compact, all effective notions of complexity are defined, though they are not related as in the noncomputable version of the framework.
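
For reference, the classical notion being generalized can be stated as follows; this is a standard textbook definition, not quoted from the paper itself.

% Classical VC dimension (standard definition, for reference).
% A concept class $\mathcal{C} \subseteq 2^{X}$ shatters a finite set $S \subseteq X$ if
% every subset of $S$ is picked out by some member of $\mathcal{C}$:
\[
  \{\, C \cap S : C \in \mathcal{C} \,\} = 2^{S}.
\]
% The VC dimension of $\mathcal{C}$ is the size of the largest shattered set:
\[
  \operatorname{VCdim}(\mathcal{C}) = \sup \{\, |S| : S \subseteq X \text{ finite and } \mathcal{C} \text{ shatters } S \,\}.
\]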

Relevance:

20.00%

Publisher:

Abstract:

Enterprise Application Integration (EAI) is a challenging area that is attracting growing attention from the software industry and the research community. A landscape of languages and techniques for EAI has emerged and is continuously being enriched with new proposals from different software vendors and coalitions. However, little or no effort has been dedicated to systematically evaluating and comparing these languages and techniques. The work reported in this paper is a first step in this direction. It presents an in-depth analysis of the Business Modeling Language, a language specifically developed for EAI. The framework used for this analysis is based on a number of workflow and communication patterns. This framework provides a basis for evaluating the advantages and drawbacks of EAI languages with respect to recurrent problems and situations.