902 results for Generative philology
Abstract:
This chapter focuses on learning and assessment as social and cultural practices situated within national and international policy contexts of educational change. Classroom assessment was researched using a conceptualization of knowing in action, or the 'generative dance'. Fine-grained analyses were conducted of the interactivity between students, and between teacher and students, and of their patterns of participation in assessment and learning. The findings offer original insights into how learners draw on explicit and tacit forms of knowing in order to participate successfully in learning. Assessment is re-imagined as a dynamic space in which teachers learn about their students as they learn with their students, and where all students can be empowered to find success.
Abstract:
In this practice-led research project I work to show how a re-reading of, and a particular form of listening to, the sound-riddled nature of Gertrude Stein's work Two: Gertrude Stein and Her Brother presents us with a contemporary theory of sound in language. This theory, though in its infancy, is a particular enjambment of sounded language that presents itself as an event, engaged with meaning, with its own inherent voice. It displays a propensity, through engagement with the 'other', to erupt into love. In this thesis these qualities are reverberated further through Seth Kim-Cohen's notion of the non-cochlear, Simon Jarvis's notion of musical thinking, Jean-Jacques Lecercle's notion of délire or nonsense, Luce Irigaray's notion of jouissant love, and Bracha Ettinger's notion of the generative matrixial border space. This reading is simultaneously paired with my own work of scoring and creating a digital opera from Stein's work, thereby testing and performing Stein's theory. In this I show how a re-reading of and re-listening to Stein's work can be significant for feminist ethical language frames, contemporary philosophy, sonic art theory and digital language frames. A further significance of this study is that when the reverberation of Stein's engagements with language through sound can be listened to, a pattern emerges, one that encouragingly problematizes subjectivity and interweaves genres/methods and means, creating a new frame for sound in language, one with its own voice that I call soundage.
Abstract:
Background: Temporal analysis of gene expression data has been limited to identifying genes whose expression varies with time and/or correlations between genes that have similar temporal profiles. Often, these methods do not consider the underlying network constraints that connect the genes. It is becoming increasingly evident that interactions change substantially with time. Thus far, there is no systematic method to relate the temporal changes in gene expression to the dynamics of the interactions between genes. Information on interaction dynamics would open up possibilities for discovering new mechanisms of regulation, providing valuable insight into time-sensitive interactions and permitting studies of the effect of genetic perturbations. Results: We present NETGEM, a tractable model rooted in Markov dynamics, for analyzing the dynamics of the interactions between proteins based on the dynamics of the expression changes of the genes that encode them. The model treats the interaction strengths as random variables that are modulated by suitable priors. This approach is necessitated by the extremely small sample size of the datasets relative to the number of interactions. The model is amenable to a linear-time algorithm for efficient inference. Using temporal gene expression data, NETGEM was successful in identifying (i) temporal interactions and determining their strength, (ii) functional categories of the actively interacting partners, and (iii) dynamics of interactions in perturbed networks. Conclusions: NETGEM represents an optimal trade-off between model complexity and data requirement. It was able to deduce actively interacting genes and functional categories from temporal gene expression data. It permits inference by incorporating the information available in perturbed networks. Given that the inputs to NETGEM are only the network and the temporal variation of the nodes, this algorithm promises to have widespread applications beyond biological systems. The source code for NETGEM is available from https://github.com/vjethava/NETGEM.
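To make the modelling idea concrete, the following is a minimal Python sketch, not the NETGEM implementation (which is available from the GitHub repository above): a single edge's strength is treated as a Markov chain over a few discrete states, and the most probable state trajectory is recovered from the temporal expression changes of the two endpoint genes with one Viterbi pass, which is linear in the number of time points. The function names, the three-state alphabet and the Gaussian emission model are assumptions made purely for illustration.

import numpy as np

STATES = np.array([-1.0, 0.0, 1.0])   # repression, no interaction, activation

def edge_log_likelihood(dx_t, dy_t, w, sigma=1.0):
    # Toy emission model: if the edge strength is w, the expression change of
    # gene y at time t is expected to track w times the change of gene x.
    return -0.5 * ((dy_t - w * dx_t) / sigma) ** 2

def map_edge_trajectory(dx, dy, stay_prob=0.9):
    # Viterbi decoding of the most probable edge-state sequence; a single
    # pass over the T time points, so the cost is linear in T.
    T, K = len(dx), len(STATES)
    log_trans = np.log(np.full((K, K), (1.0 - stay_prob) / (K - 1)))
    np.fill_diagonal(log_trans, np.log(stay_prob))

    delta = np.zeros((T, K))            # best log-score ending in each state
    back = np.zeros((T, K), dtype=int)  # backpointers
    for k, w in enumerate(STATES):
        delta[0, k] = edge_log_likelihood(dx[0], dy[0], w)
    for t in range(1, T):
        for k, w in enumerate(STATES):
            scores = delta[t - 1] + log_trans[:, k]
            back[t, k] = int(np.argmax(scores))
            delta[t, k] = scores[back[t, k]] + edge_log_likelihood(dx[t], dy[t], w)

    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmax(delta[-1]))
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return STATES[path]

# Example: synthetic changes where gene y starts tracking gene x halfway
# through the series, i.e. an interaction that switches on over time.
dx = np.array([2.0, -1.5, 1.8, 2.0, 1.8, 2.2])
dy = np.array([0.1, 0.0, -0.1, 2.1, 1.7, 2.0])
print(map_edge_trajectory(dx, dy))

The sticky transition prior plays the role of the priors mentioned in the abstract: it discourages the inferred interaction strength from changing at every time point unless the data demand it.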
Abstract:
The maximum entropy approach to classification is very well studied in applied statistics and machine learning, and almost all the methods that exist in the literature are discriminative in nature. In this paper, we introduce a generative maximum entropy classification method with feature selection for high-dimensional data such as text datasets. To tackle the curse of dimensionality of large datasets, we employ a conditional independence assumption (Naive Bayes) and perform feature selection simultaneously, by enforcing 'maximum discrimination' between the estimated class-conditional densities. For two-class problems, the proposed method uses the Jeffreys (J) divergence to discriminate between the class-conditional densities. To extend the method to the multi-class case, we propose a completely new approach based on a multi-distribution divergence: we replace the Jeffreys divergence with the Jensen-Shannon (JS) divergence to discriminate the conditional densities of multiple classes. To reduce computational complexity, we employ a modified Jensen-Shannon divergence (JS(GM)) based on the AM-GM inequality, and we show that the resulting divergence is a natural generalization of the Jeffreys divergence to the case of multiple distributions. As theoretical justification, we show that when one intends to select the best features in a generative maximum entropy approach, maximum discrimination using the J-divergence emerges naturally in binary classification. The performance of the proposed algorithms is demonstrated, with comparative studies, on high-dimensional text and gene expression datasets, showing that our methods scale well to high-dimensional data.
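As a hedged illustration of the binary-class feature-selection idea only, the Python sketch below scores each feature by the Jeffreys (symmetric KL) divergence between its estimated class-conditional distributions under the Naive Bayes assumption and keeps the most discriminative features. It is not the authors' code: the function names are hypothetical, scikit-learn's multinomial Naive Bayes stands in for the paper's generative maximum entropy model, and the multi-class JS(GM) variant described in the abstract would replace the per-feature Jeffreys score with a Jensen-Shannon-type divergence over all class-conditional densities.

import numpy as np
from sklearn.naive_bayes import MultinomialNB

def jeffreys_feature_scores(X, y, alpha=1.0):
    # Per-feature contribution to the Jeffreys divergence between the two
    # estimated class-conditional feature distributions, with Laplace
    # smoothing alpha.  Each term (p0 - p1) * log(p0 / p1) is non-negative,
    # so larger scores indicate more discriminative features.
    X0, X1 = X[y == 0], X[y == 1]
    p0 = X0.sum(axis=0) + alpha
    p1 = X1.sum(axis=0) + alpha
    p0, p1 = p0 / p0.sum(), p1 / p1.sum()
    return (p0 - p1) * np.log(p0 / p1)

def select_features(X, y, k=2000):
    # Keep the k features with the highest Jeffreys scores.
    return np.argsort(jeffreys_feature_scores(X, y))[-k:]

# Hypothetical usage on a dense bag-of-words matrix X (documents x terms)
# with binary labels y:
# keep = select_features(X_train, y_train, k=2000)
# clf = MultinomialNB().fit(X_train[:, keep], y_train)
# print(clf.score(X_test[:, keep], y_test))

Scoring features independently in this way is what keeps the selection step cheap in high dimensions; the cost is one pass over the term counts per class.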