Abstract:
This paper aims to demonstrate how a derived approach to case file analysis, influenced by the work of Michel Foucault and Dorothy E. Smith, can offer innovative means by which to study the relations between discourse and practices in child welfare. The article explores text-based forms of organization in histories of child protection in Finland and in Northern Ireland. It focuses on case file records in different organizational child protection contexts in two jurisdictions. Building on a previous article (Author 1 & 2: 2011), we attempt to demonstrate how the relations between practices and discourses, a theme of major importance for understanding child welfare social work, can be effectively analysed using a combination of two approaches. This article is based on three different empirical studies from our two jurisdictions, Northern Ireland (UK) and Finland: one study used Foucault, another used Smith, and the third sought to combine the methods. This article reports on ongoing work in developing, for child welfare studies, 'a history that speaks back', as we have described it.
Abstract:
In most previous research on distributional semantics, Vector Space Models (VSMs) of words are built either from topical information (e.g., the documents in which a word is present) or from syntactic/semantic types of words (e.g., the dependency parse links of a word in sentences), but not both. In this paper, we explore the utility of combining these two representations to build VSMs for the task of semantic composition of adjective-noun phrases. Through extensive experiments on benchmark datasets, we find that even though a type-based VSM is effective for semantic composition, it is often outperformed by a VSM built using a combination of topic- and type-based statistics. We also introduce a new evaluation task wherein we predict the composed vector representation of a phrase from the brain activity of a human subject reading that phrase. We exploit a large syntactically parsed corpus of 16 billion tokens to build our VSMs, with vectors for both phrases and words, and make them publicly available.
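The combination of topic- and type-based representations described in this abstract can be illustrated with a minimal sketch. All vectors, words, and the additive composition function below are illustrative assumptions for exposition; the paper's actual models and corpus statistics are not reproduced here.

```python
# Minimal sketch (assumed, not the paper's model): concatenate a
# topic-based and a type-based vector for each word, then compose an
# adjective-noun phrase by elementwise addition of the combined vectors.
import math

def cosine(u, v):
    """Cosine similarity, the usual comparison measure in VSM work."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy topic-based vectors (e.g., document co-occurrence counts).
topic = {"red": [2.0, 0.0, 1.0], "car": [1.0, 3.0, 0.0]}
# Toy type-based vectors (e.g., dependency-link counts).
typed = {"red": [0.0, 1.0], "car": [2.0, 1.0]}

def combined(word):
    """Concatenate the two representations into one combined vector."""
    return topic[word] + typed[word]

def compose(adj, noun):
    """Additive composition of an adjective-noun phrase."""
    return [a + n for a, n in zip(combined(adj), combined(noun))]

phrase = compose("red", "car")  # a 5-dimensional combined phrase vector
```

Additive composition is only one simple choice; the point of the sketch is that the composed phrase vector draws on both topical and type-based dimensions at once.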