Abstract:
This study adopts the framework of Systemic Functional Grammar (SFG; Halliday, 1994/2000; Halliday & Matthiessen, 2004) to investigate thematic features in messages sent to an electronic bulletin board system (BBS) in mainland China. As a concept derived from the Prague School, theme in SFG has been identified as “the point of departure of the message; it is that which locates and orients the clause within its context” (Halliday & Matthiessen, 2004, p. 64). Thematic features in the Chinese data are found to relate to the situational features of the BBS, which are analyzed using the frameworks of Biber (1988) and Herring (2007). The relevant situational features are further generalized into the three components of context, field, tenor, and mode (Halliday & Hasan, 1985), in order to examine the relation between thematic features and situational features. The study’s findings show that thematic features are more closely related to the field (the nature of the activity) than to the mode, contrary to Halliday’s (1978/2001) claim that theme, as a realization of textual meaning, is determined by the mode (medium). The conclusion explores this discrepancy.
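By way of illustration only (this is not the study's procedure, and the constituent labels are hypothetical), the theme/rheme split the abstract relies on can be sketched as follows: the theme of a declarative clause extends up to and including its first experiential constituent, and the remainder is the rheme.

```python
# Toy sketch of SFG-style theme/rheme segmentation; clauses are assumed to
# arrive pre-segmented into constituents labelled 'textual', 'interpersonal',
# or 'experiential' (labels are hypothetical, not from the study).

def split_theme_rheme(constituents):
    """Return (theme, rheme): the theme runs up to and including the first
    experiential (topical) constituent; everything after is the rheme."""
    theme, rheme = [], []
    topical_found = False
    for text, role in constituents:
        if topical_found:
            rheme.append(text)
        else:
            theme.append(text)
            topical_found = (role == "experiential")
    return " ".join(theme), " ".join(rheme)

# "However, yesterday the server crashed." -> textual + topical theme
clause = [("However", "textual"),
          ("yesterday", "experiential"),
          ("the server crashed", "experiential")]
print(split_theme_rheme(clause))  # ('However yesterday', 'the server crashed')
```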
Abstract:
A great share of the literature on social exclusion has been based mainly on the analysis of official survey data. While these efforts have provided insights into the characteristics and conditions of people living at the margins of mainstream social relations, they have failed to encompass those who live beyond these very margins. Meanwhile, research on these hidden subpopulations, such as homeless people and other vulnerable groups, remains generally scarce and is significantly detached from the theoretical core of the debate on social exclusion. Concern about these shortcomings lies at the heart of our research. We seek to shed some light on the area by using data made available by an organization that provides services to people experiencing homelessness in Barcelona (Spain). The sample contains clients in early stages of exclusion as well as others in chronic situations. We thus attempt to identify some of the variables that operate to prevent the "chronification" of individuals in situations of social exclusion. Our findings suggest that variables such as educational level, income, and housing type, which are considered central predictors in the analysis of poverty, behave differently when differences between stages of social exclusion are analyzed. Although these results cannot be extrapolated to the Spanish or European reality as a whole, they could provide useful insights for future investigations on this topic.
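As a rough sketch of the kind of analysis described (the client records are not public, so the data here are synthetic and all variable names are assumptions), one could compare early-stage and chronic clients with a logistic regression:

```python
# Synthetic illustration: do classic poverty predictors separate early-stage
# from chronic exclusion? Variable names and effect sizes are invented.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "education_years": rng.integers(0, 18, n),
    "monthly_income":  rng.exponential(400.0, n),
    "stable_housing":  rng.integers(0, 2, n),
})
# Hypothetical outcome: 1 = chronic exclusion, 0 = early stage.
logit = (1.0 - 0.10 * df["education_years"]
             - 0.002 * df["monthly_income"]
             - 0.80 * df["stable_housing"])
df["chronic"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = df[["education_years", "monthly_income", "stable_housing"]]
model = LogisticRegression().fit(X, df["chronic"])
for name, coef in zip(model.feature_names_in_, model.coef_[0]):
    print(f"{name:16s} {coef:+.3f}")  # sign and magnitude per predictor
```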
Abstract:
Mixed Reality (MR) aims to link virtual entities with the real world and has many applications, for example in the military and medical domains [JBL+00, NFB07]. In many MR systems, and more precisely in augmented scenes, the application needs to render the virtual part accurately at the right time. To achieve this, such systems acquire data about the real world from a set of sensors before rendering the virtual entities. A suitable system architecture should minimize the delays to keep the overall system delay (also called end-to-end latency) within the requirements for real-time performance. In this context, we propose a compositional modeling framework for MR software architectures in order to specify, simulate, and formally validate the time constraints of such systems. Our approach is based, first, on a functional decomposition of such systems into generic components. The elements obtained, as well as their typical interactions, give rise to generic representations in terms of timed automata. A whole system is then obtained as a composition of the components so defined. To write specifications, a textual language named MIRELA (MIxed REality LAnguage) is proposed, along with the corresponding compilation tools. The generated output contains timed automata in UPPAAL format for simulation and verification of time constraints. These automata may also be used to generate source-code skeletons for an implementation on an MR platform. The approach is illustrated first on a small example; a realistic case study, modeled by several timed automata synchronizing through channels and including a large number of time constraints, is then developed. Both systems have been simulated in UPPAAL and checked against the required behavioral properties.
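MIRELA's actual syntax and the paper's automata are not reproduced here, but the core property the automata verify, that end-to-end latency stays within a real-time budget, can be sketched on a hypothetical sensing-to-rendering pipeline (all stage names and delays below are assumptions):

```python
# Minimal latency-budget check for an assumed MR pipeline. Model checking in
# UPPAAL explores all timed interleavings; this sketch only sums the declared
# worst-case delay of each sequential stage.

PIPELINE_MS = [            # hypothetical worst-case delays, in milliseconds
    ("sensor_acquisition", 12.0),
    ("pose_estimation",     8.5),
    ("scene_composition",   5.0),
    ("rendering",          11.0),
]
LATENCY_BUDGET_MS = 40.0   # assumed real-time requirement

def worst_case_latency(stages):
    """End-to-end latency if every stage hits its worst case."""
    return sum(delay for _, delay in stages)

total = worst_case_latency(PIPELINE_MS)
print(f"worst-case end-to-end latency: {total:.1f} ms")
assert total <= LATENCY_BUDGET_MS, "real-time latency budget violated"
```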
Abstract:
Methodological approaches to collecting data on nonverbal behavior usually involve interpretive methods in which raters must assign behavior to a set of predefined categories. However, present knowledge about the qualitative aspects of head movement behavior calls for detailed transcriptions of behavior. Such records are a prerequisite for investigating the function and meaning of head movement patterns. A method for directly collecting data on head movement behavior is introduced. Small ultrasonic transducers are attached to various parts of an index person's body (head and shoulders), and a microcomputer determines the receiver–transducer distances. Three-dimensional positions are then calculated by triangulation. These data serve as the basis for further calculations concerning the angular orientation of the head and the direction, size, and speed of head movements (in the rotational, lateral, and sagittal dimensions).
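The triangulation step admits a compact sketch: given the known receiver positions and the measured receiver–transducer distances, subtracting one sphere equation from the others linearizes the system, which can then be solved in the least-squares sense (the receiver coordinates below are invented; the original hardware pipeline is not reproduced):

```python
# Position from distances by linearized trilateration (numpy only).
import numpy as np

def trilaterate(receivers, distances):
    """Solve |x - r_i| = d_i for x in 3-D. Subtracting the first sphere
    equation from the rest gives the linear system
        2 (r_i - r_0) . x = |r_i|^2 - |r_0|^2 - d_i^2 + d_0^2,
    solved here by least squares (needs >= 4 non-coplanar receivers)."""
    r = np.asarray(receivers, dtype=float)
    d = np.asarray(distances, dtype=float)
    A = 2.0 * (r[1:] - r[0])
    b = (np.sum(r[1:] ** 2, axis=1) - np.sum(r[0] ** 2)
         - d[1:] ** 2 + d[0] ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

receivers = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]  # assumed layout (m)
true_pos = np.array([0.3, 0.2, 0.5])
dists = [np.linalg.norm(true_pos - np.array(p)) for p in receivers]
print(trilaterate(receivers, dists))  # ~ [0.3 0.2 0.5]
```

With several transducers fixed to the head, the same positions over time also yield the head's angular orientation, as the abstract describes.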
Abstract:
Stemmatology, or the reconstruction of the transmission history of texts, is a field that stands particularly to gain from digital methods. Many scholars already take stemmatic approaches that rely heavily on computational analysis of the collated text (e.g. Robinson and O’Hara 1996; Salemans 2000; Heikkilä 2005; Windram et al. 2008, among many others). Although there is great value in computationally assisted stemmatology, providing as it does a reproducible result and access to the relevant methodological advances in related fields such as evolutionary biology, computational stemmatics is not without its critics. The current state of the art effectively forces scholars either to impose a preconceived judgment of the significance of textual differences (the Lachmannian or neo-Lachmannian approach, and the weighted phylogenetic approach) or to make no judgment at all (the unweighted phylogenetic approach). Some basis for judging the significance of variation is sorely needed, for medieval text criticism in particular. By this we mean that there is a need for an empirical statistical profile of the text-genealogical significance of the different sorts of variation in different sorts of medieval texts. The rules that apply to copies of Greek and Latin classics may not apply to copies of medieval Dutch story collections; the practices of copying authoritative texts such as the Bible will most likely have differed from the practices of copying the Lives of local saints and other commonly adapted texts. It is nevertheless imperative that we have a consistent, flexible, and analytically tractable model for capturing these phenomena of transmission. In this article, we present a computational model that captures most of the phenomena of text variation, and a method for analyzing one or more stemma hypotheses against the variation model. We apply this method to three ‘artificial traditions’ (i.e. texts copied under laboratory conditions by scholars to study the properties of text variation) and to four genuine medieval traditions whose transmission history is known or deduced to varying degrees. Although our findings are necessarily limited by the small number of texts at our disposal, we demonstrate some of the wide variety of calculations that can be made using our model. Certain of our results call sharply into question the utility of excluding ‘trivial’ variation, such as orthographic and spelling changes, from stemmatic analysis.
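One calculation such a variation model supports can be sketched in miniature (this is an illustration, not the authors' code): under a stemma hypothesis, a variant reading is genealogically unproblematic if the witnesses sharing it occupy a connected region of the stemma, since the reading could then have arisen exactly once.

```python
# Toy genealogical-consistency check of a variant reading against a stemma.
import networkx as nx

# Hypothetical stemma: archetype A with two branches (kept undirected for the
# connectivity test; extant and reconstructed witnesses are not distinguished).
stemma = nx.Graph([("A", "B"), ("A", "C"), ("B", "D"), ("B", "E"), ("C", "F")])

def reading_is_genealogical(stemma, witnesses):
    """True if the witnesses attesting one reading induce a connected
    subgraph of the stemma, i.e. a single origin suffices."""
    return nx.is_connected(stemma.subgraph(witnesses))

print(reading_is_genealogical(stemma, {"B", "D", "E"}))  # True: one branch
print(reading_is_genealogical(stemma, {"D", "F"}))       # False: implies
                                                         # polygenesis or contamination
```

A real analysis would also have to account for readings preserved only in hypothetical ancestors, which this toy connectivity test ignores.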
Abstract:
This paper presents a case study of analyzing a legacy PL/1 ecosystem that has grown over 40 years to support the business needs of a large banking company. To support the stakeholders in analyzing it, we developed St1-PL/1, a tool that parses the code for association data and computes structural metrics, which it then visualizes for top-down interactive exploration. Both before building the tool and after demonstrating it to stakeholders, we conducted several interviews to learn about requirements for legacy ecosystem analysis. We briefly introduce the tool and then present the results of analyzing the case study. We show that, although the vision for the future is an ecosystem architecture in which systems are as decoupled as possible, the current state of the ecosystem is still far removed from this. We also present some of the lessons learned during our discussions with stakeholders, including their interest in automatically assessing the quality of the legacy code.
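As a sketch of the kind of structural metric such a tool computes over association data (St1-PL/1's real metrics and input format are not public; the names below are assumptions), one can quantify how entangled the systems are by the fraction of call edges that cross system boundaries:

```python
# Hypothetical association data: (caller, callee) program pairs, plus a map
# from each program to the system that owns it.
calls = [("PGM1", "PGM2"), ("PGM1", "PGM7"), ("PGM3", "PGM7"), ("PGM7", "PGM8")]
system_of = {"PGM1": "Loans", "PGM2": "Loans", "PGM3": "Accounts",
             "PGM7": "Payments", "PGM8": "Payments"}

def cross_system_coupling(calls, system_of):
    """Fraction of call edges crossing system boundaries: 0.0 means fully
    decoupled systems; values near 1.0 mean the ecosystem is entangled."""
    cross = sum(1 for a, b in calls if system_of[a] != system_of[b])
    return cross / len(calls)

print(f"cross-system coupling: {cross_system_coupling(calls, system_of):.2f}")
# -> 0.50 on this toy data
```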